Developers spend the majority of their time either debugging existing code or writing new bugs into new code. Except for all the time we spend in meetings talking about code of course.
I'm half joking, but there is some truth to this. After 25+ years writing code professionally, I've given up looking for magic bullets that solve all the problems. Today's AI ain't gonna help you here folks.
Instead, I've focused on small tactical changes that seem to improve developer productivity.
One technique we've been using recently I call "devdata".
I'm going to cover the problem, then give a high-level explanation of why devdata works better, and finish off with some deeper details to help you succeed if you choose to try this technique.
One of the biggest challenges to debugging is setting up your data exactly like the scenario where the bug happened. It's the same with writing tests. We have to set the stage before asserting our code does the right thing.
One way to do this is to snapshot your production database(s), pull it local to your laptop, and start debugging. This can work and is often the first thing developers reach for. It sure is more manageable than hand-crafting the exact situation.
This technique isn't optimal if your data is sensitive. Now you have to spend time writing code to sanitize your data. Did you do it right and obfuscate it enough? Who knows. I don't, and you probably don't either, because maybe Susan added a new Social Security Number field since the last time anyone updated the sanitization code.
Can we trust this subcontractor with even the sanitized version? To be safe, maybe they should try to work without it?
And it falls apart when your production database is 2TB. Even if it will fit on your laptop, transferring it there means you get to start debugging in a couple of hours.
Another way to tackle the problem is to not use a copy of production and instead craft a specific test and fixture data that closely mimic the problem. Safer, but this specific fixture data likely isn't applicable to any other bug or feature.
A Better Way
Developers are the largest expense in software. Improving their productivity doesn't just help the company's bottom line; it improves morale, because they no longer have to wade through a bunch of crap to get started on the job at hand.
The suggestion I'm proposing is to invest some time in a realistic but fake data generation tool. We call it devdata, but the name isn't at all important.
This tool should:
- Dynamically generate common scenarios in your application in your local development environment
- Reset to known states quickly and easily
- Keep some names and credentials consistent
- Be easy to extend as your application changes over time
- Be used by the vast majority of your team
At this point I think it's easier to start talking in terms of how this works using a real application.
The Example App: Pizza
Everybody is familiar with pizza, so we're going to talk about a SaaS product that does pizzeria management.
In software development we often talk about User Stories and Personas. What I'm essentially suggesting here is that you automate bringing some of these story scenarios and personas to life.
Pizzerias come in all shapes and sizes, but I would start with a few specific scenarios.
Scenario 1: Lombardi's
Lombardi's is your typical small, single-location, local pizza place. There is one owner, Gennaro, who is also the only manager but has 7 employees. They only offer two crusts in three sizes, have regular toppings, and don't do delivery.
Scenario 2: Kansas Pizza Kitchen
The lesser-known cousin of the California variety. KPK has 6 locations in 4 cities. There are three owners, 8 managers, 12 supervisors, and 41 employees, and they do delivery in 4 of their 6 locations. Along with pizza they also have a few pasta dishes and need a fairly complicated intake form for their occasional catering gigs.
Scenario 3: ACME Pizza
ACME is a huge publicly traded pizza empire. Lots of locations across many states, several layers of management, and more employees than you think. Complicated in every way possible. Uses all the features of your app.
They're so large you're certain they're going to stop using your app and build their own in-house any day now.
So what does this do for us?
First, this gives us some named scenarios we can talk about.
"I see how this is useful for Lombardi's, but how is the UI going to work for KPK and ACME users?"
"So I think I found a weird sales tax bug that doesn't happen for Lombardi's, but does when you order a delivery with KPK."
Here at REVSYS we mostly work with Django, so our devdata tool is built as a Django management command. Because ACME is a big beast with lots of data, it takes a while to generate all the fake pizza orders, so we set up two initial options.
```shell
./manage.py devdata common
```
This will wipe away your existing local development database, leaving just your Django superusers in place so you can get to the Django admin without generating a new user/password each time. You're going to run this command at least a few times a day, if not dozens, so recreating a login every run would be annoying.
The common scenario then sets up Lombardi's and KPK. Here is where consistency of naming comes in handy. We should hard-code Gennaro Lombardi and firstname.lastname@example.org as the owner. The rest of the employees, orders, and customers should be random-ish and generated with something like Faker.
The hard-coded bits are anchors we can use to quickly hijack a user of a certain persona and poke around in the UI.
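A minimal sketch of that anchor-plus-random approach, using only the standard library (a real devdata tool would reach for Faker; the name lists and record shapes here are purely illustrative, not from our actual tool):

```python
import random

# Stand-in name pools; in practice Faker would generate these.
FIRST_NAMES = ["Maria", "Tony", "Sal", "Nina", "Carmen", "Luca", "Rosa"]
LAST_NAMES = ["Russo", "Esposito", "Bianchi", "Romano", "Greco", "Conti"]


def generate_lombardis_staff(seed=None):
    """Build Lombardi's staff: one hard-coded owner anchor plus
    7 random-ish employees, matching the scenario description."""
    rng = random.Random(seed)
    owner = {
        "name": "Gennaro Lombardi",                 # hard-coded anchor
        "email": "firstname.lastname@example.org",  # hard-coded anchor
        "role": "owner",
    }
    employees = [
        {
            "name": f"{rng.choice(FIRST_NAMES)} {rng.choice(LAST_NAMES)}",
            "role": "employee",
        }
        for _ in range(7)
    ]
    return [owner] + employees
```

Seeding the generator is optional, but it makes a run reproducible when you're chasing a bug that depends on the generated data.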
We can also set up `./manage.py devdata complex`, which runs the same common scenario as above but adds in the larger, more time-consuming ACME scenario when needed.
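One way to wire that up is to make the complex scenario call the common one and then layer ACME on top, so nothing is duplicated. A toy sketch with plain functions and a list standing in for the database (all function names here are hypothetical, not from our actual tool):

```python
# Each setup function records what it created; in real life these
# would create model instances.
def setup_lombardis(db):
    db.append("lombardis")


def setup_kpk(db):
    db.append("kpk")


def setup_acme(db):
    db.append("acme")  # slow in practice: lots of fake pizza orders


def run_common(db):
    setup_lombardis(db)
    setup_kpk(db)


def run_complex(db):
    run_common(db)   # everything the common scenario sets up...
    setup_acme(db)   # ...plus the big ACME scenario

# The management command can dispatch on the scenario argument.
SCENARIOS = {"common": run_common, "complex": run_complex}
```

Keeping scenarios as composable functions also makes it cheap to add a new one later without touching the existing ones.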
Hopefully you're seeing how this can apply to your own application, but some other things I would likely generate are:
- Customers for each pizzeria with varying levels of previous orders, reward points, etc.
- A few orders for each at various stages (new, cooking, out for delivery, etc.) with random-ish data: a few simple "large cheese" orders and a couple of more complicated orders.
- I'd randomly set each to run out of something. No mushrooms for you!
- Maybe we'd also set one of KPK's locations to have weird hours so they are closed during the day, but open midnight to 6am. Timezones are hard and this helps us test them.
- A common sale or promotion or two.
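That midnight-to-6am location is worth a concrete example, because open-hours checks that cross midnight are a classic bug source. A small sketch (the helper name and signature are assumptions, not from any real app):

```python
from datetime import time


def is_open(now: time, opens: time, closes: time) -> bool:
    """True if `now` falls within opening hours, handling ranges
    that cross midnight (e.g. open 22:00, close 02:00)."""
    if opens <= closes:
        # Normal same-day range, e.g. 00:00 -> 06:00.
        return opens <= now < closes
    # Wraps past midnight: open late evening OR early morning.
    return now >= opens or now < closes
```

A naive `opens <= now < closes` check silently reports the midnight-to-6am location as always closed when its hours wrap, which is exactly the kind of thing this scenario lets you catch by hand.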
The main benefit of all of this is that we can quickly jump into a variety of situations in these scenarios as different user personas.
Did that little logic change I just made to coupons break something in the UI? Customers reported that if you add pineapple to a thin crust pizza, it shows up on the checkout screen as extra cheese but shows the kitchen staff the right information.
Easy, just hijack a user with a persona nearest your problem and adjust things a bit to your situation.
And then, when you've found and fixed your bug, run `./manage.py devdata common` again and you're back to a known state.
- Done right, this generation code can often be re-used or built from the same code you use to generate test data for your automated tests. If you are consistent with this, both your ability to generate awesome test fixtures AND your ability to manually test are greatly improved
- Your frontend developers can get started more easily as they have realistic data to throw on the screen
- I've found several small UI bugs simply because much of the data was random and faked. Oh look, that wraps weirdly when the user's last name is longer than 12 characters.
Since we are typically using Django, our devdata commands are implemented using django-click, which makes it extremely easy to build a great Django management command with all of the argument parsing and power of Click.
We also use pytest for our tests, but you can't just call a pytest fixture directly. You can, however, wrap a function that generates something in a fixture so that you can share it between the two. For example:
```python
# File: toppings/tests/fixtures.py
import pytest

from devdata.generation import generate_available_toppings


@pytest.fixture
def available_toppings():
    return generate_available_toppings()
```
I would also encourage you to write an option into your command to clear out all of your generated test data completely. This is useful if you ever want to generate data (and then wipe it all away) safely in production.
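One way to make that safe wipe possible is to tag every generated record at creation time, so the clear option deletes only what the tool created. A toy sketch with dicts standing in for model rows (the tag name is hypothetical; in Django this might be a boolean model field filtered and deleted with the ORM):

```python
# Every record the generator creates gets this marker.
DEVDATA_TAG = "is_devdata"


def tag(record):
    """Mark a record as devdata-generated at creation time."""
    record[DEVDATA_TAG] = True
    return record


def clear_devdata(records):
    """Remove only records created by the generator,
    leaving real data untouched."""
    return [r for r in records if not r.get(DEVDATA_TAG)]
```

Tagging at creation time beats trying to guess later which rows are fake, which is exactly the situation you want to avoid when the clear option runs against production.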
Conclusion and Challenge
I encourage you to give this technique a try. Now that I've been using it in a few projects, I immediately miss it in the ones that don't have it. Automated testing, manual testing, and even just exploring a new UI feature are more of a pain without it and hence don't happen as frequently as they should.
I promise your team's velocity will improve far beyond the time investment.
P.S. If you need help convincing your boss, I'd be happy to help convince them of the benefits.