A friendly but frank discussion about the most overused excuse in software development
After 20+ years as a software developer, I can attest that "it works on my machine" is still one of my most used catchphrases, even when I have a slight hint as to its inaccuracy. It's become a safety blanket in a sea of uncertainty and poorly refactored code.
This five-word phrase has become the ultimate shield in code reviews, bug reports, and failed live demos. And for good reason: it actually works.
Most non-developers staring at a developer's workstation will see a combination of IDEs, console windows, database monitoring tools, and hidden YouTube videos slightly off screen. That chaos, mixed with "it works on my machine," gives off an aura of elevated complexity that most people would rather not be a part of.
So 9 out of 10 times, people will believe the excuse, assume that some cryptic ghost in the machine is the main cause of the issue, and leave us be. Our reward is that we save face and keep our hard-earned credibility. At least until the next error springs up.
However, let's be honest, most of the time, this statement is not actually true. And even when it is technically accurate, it's usually masking a deeper problem that we're too embarrassed or rushed to admit.
We sometimes suck at testing.
Let's start with the obvious. When you say "it works on my machine," what you often mean is "it worked on my machine that one time I tested the happy path with the expected variables." Did you test edge cases? Error conditions? Different input sizes? Network failures?
Probably not...
The truth is that most of us have a very generous definition of "works." We run our code a few times, see it produce the expected output for our carefully crafted test case, and declare victory. Meanwhile, there are seventeen different ways it could fail that we never even considered.
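To make that concrete, here is a minimal sketch (the function and its cases are hypothetical examples, not from any real codebase) of the gap between "works" and *works*: the happy path we actually run, next to the edge cases we rarely bother with.

```python
def safe_percentage(part, whole):
    """Return part/whole as a percentage, guarding the divide-by-zero edge case."""
    if whole == 0:
        return 0.0
    return (part / whole) * 100

# The happy path: the one case we usually test, once, before declaring victory.
assert safe_percentage(1, 4) == 25.0

# The cases we never even considered:
assert safe_percentage(0, 0) == 0.0      # empty input
assert safe_percentage(-1, 4) == -25.0   # negative values
assert safe_percentage(5, 4) == 125.0    # part larger than whole
```

Five minutes of asserts like these is cheap insurance; a naive version without the zero guard would crash on the very first "impossible" input from production.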
And yes, this is where testing libraries and QA teams are worth their weight in gold. But these things aren't guaranteed at every workplace. I've only worked at one company that had a dedicated QA team, and most codebases that I've had the pleasure of working on were devoid of any test-driven development.
More often than not, you are the sole tester. And that level of pressure is why "it works on my machine" is so popular. Thorough end-to-end testing takes a considerable amount of time, and it typically isn't scheduled into the weekly task list.
Most developers working on mature codebases have to contend with hundreds of thousands of lines of code, written by 5-10 people throughout the years, poorly written documentation and an agile scrum meeting in the next few hours where they have to showcase yet another new feature that may or may not see the light of day.
We're usually under pressure, juggling multiple tasks, and genuinely believe our changes are safe. At least safe enough for a deployment.
The biggest issue isn't that code sometimes doesn't work. Because that's expected in any development environment. The biggest issue is that "it works on my machine" has become a conversation stopper instead of a conversation starter.
When we say this phrase, we're essentially saying: "The problem is with your setup, not my code." It shifts responsibility away from us and onto the person reporting the issue. It's defensive rather than collaborative.
And truth be told, in a world of Docker containers, CI pipelines, and cloud-based everything, "my machine" doesn't matter all that much anymore. Unless your machine is production (and if it is, we have bigger problems), it's irrelevant. The real question should always be: does it work in the environment that actually matters?
At its core, this excuse exposes something deeper about our craft though, and that's how fragile modern software can be. How a slight version mismatch in a dependency or a config file not checked into source control can bring the whole thing down. We're building with Lego bricks on top of Jenga towers, and every deploy is a dice roll.
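One small way to shore up that fragility is to check installed dependency versions against your pins before a deploy. Here is a rough sketch using Python's standard `importlib.metadata`; the package names and version pins are made-up examples standing in for whatever your lockfile says.

```python
# Sketch: catch "slight version mismatch" drift before it becomes a deploy dice roll.
from importlib.metadata import version, PackageNotFoundError

# Hypothetical pins, e.g. parsed from a requirements.txt or lockfile.
pinned = {"requests": "2.31.0", "urllib3": "2.0.7"}

def check_pins(pins):
    """Return a list of (package, expected, actual) mismatches."""
    mismatches = []
    for name, expected in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            mismatches.append((name, expected, "not installed"))
            continue
        if installed != expected:
            mismatches.append((name, expected, installed))
    return mismatches

for name, expected, actual in check_pins(pinned):
    print(f"{name}: expected {expected}, got {actual}")
```

Running a check like this in CI (rather than on anyone's machine) is the point: the mismatch surfaces in the environment that actually matters.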
Still, maybe that’s the opportunity here. Instead of letting “it works on my machine” be the end of the conversation, we can treat it like a starting point. "Okay, it works locally. What’s different about staging? About production? Let’s dig." That turns a dead-end phrase into a debugging mindset.
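"Let's dig" can start with something as simple as diffing the two environments' settings. A toy sketch, with made-up keys and values, of turning the vague question into a concrete answer:

```python
# Sketch: list every setting that differs between two environments.
def env_diff(local, remote):
    """Return {key: (local_value, remote_value)} for every key that differs."""
    keys = set(local) | set(remote)
    return {k: (local.get(k), remote.get(k))
            for k in sorted(keys)
            if local.get(k) != remote.get(k)}

# Hypothetical example configs for "my machine" vs staging.
local = {"DB_HOST": "localhost", "FEATURE_X": "on", "TZ": "UTC"}
staging = {"DB_HOST": "db.staging.internal", "FEATURE_X": "off", "TZ": "UTC"}

print(env_diff(local, staging))
```

The differing keys answer "what's different about staging?" directly, which is a much better opening line than the excuse itself.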
Because let’s be honest: the more times we say "it works on my machine," the more we train ourselves not to look deeper and to get sloppier with our deployments. And when the stakes are high (production outages, customer impact, reputation hits), that little phrase can go from harmless excuse to costly crutch.
So maybe it's time we retire the phrase, or at least say it with an asterisk. “It works on my machine*, but let me actually help you figure out why it doesn’t on yours.”