Opinion · 9 min read · December 2025

I Spent 25 Years Adding Debug.Log. I'm Done.*

From printf to breakpoints to profilers — the tools changed, the workflow didn't. A veteran engine programmer on why we've been stuck.

Kara Lindström
Lead Engine Programmer, Ironveil Studios

We replaced printf with a prettier printf and called it progress.

1999

My first shipped game was a side-scroller for Windows 98. When something went wrong — and it always went wrong — I'd add a printf to stdout, rebuild, and run it again. The entire debugging workflow fit in one sentence: guess where the problem might be, print something there, see if you guessed right.

The game shipped with 340 print statements still in the code. I was 22 and thought this was normal.

2004

By the mid-2000s I was working on a PS2 title with a proper debugger. Breakpoints! Watch windows! I could pause execution and inspect state. This felt like the future. No more print statements. No more guessing.

Except you still had to guess. You set a breakpoint where you thought the bug might be. If you guessed wrong, you moved the breakpoint. If the bug was timing-dependent, the breakpoint changed the timing and the bug disappeared. We called these "heisenbugs" and laughed, but they cost us weeks.

I was 27 and thought this was normal.

2011

Profilers got good. Really good. We could see frame times, memory allocation patterns, cache miss rates. The data was beautiful. Flame graphs, heat maps, timeline views. For the first time, we could see what our games were actually doing at runtime — in aggregate.

But ask "why is the boss fight framerate dropping?" and the profiler would show you a spike in the physics system. Why was physics spiking? Because a trigger volume was overlapping with 47 corpse ragdolls that nobody cleaned up because the despawn timer was set wrong because a designer changed a value in a config file three weeks ago. The profiler shows you the symptom. Finding the cause is still archaeology.

I was 34 and starting to wonder if this was normal.

2017

The AI revolution hadn't quite hit game development yet, but automated testing had. We built elaborate test harnesses. Integration tests. Behavior tests. Smoke tests. Soak tests. Our CI pipeline ran for 4 hours and gave us confidence that the game worked under the conditions we thought to test.

The bugs that shipped were never the ones we tested for. They were emergent — strange interactions between systems that nobody predicted. A weather system affecting AI pathfinding because both read from the same noise generator. An inventory system leaking references because the UI cached items that were pooled by the combat system.
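Couplings like the shared noise generator are easy to reproduce in miniature. A hypothetical sketch (all names invented): two unrelated systems draw from one seeded generator, so enabling one shifts every value the other observes.

```python
import random

class NoiseSource:
    """One seeded generator shared by two unrelated systems."""
    def __init__(self, seed):
        self._rng = random.Random(seed)

    def sample(self):
        return self._rng.random()

def pathfinding_sample(noise, weather_enabled):
    # Weather consumes a value before pathfinding takes its turn.
    if weather_enabled:
        noise.sample()          # weather tick
    return noise.sample()       # pathfinding jitter

# Same seed, same code — toggling weather changes what pathfinding
# sees. Neither system's unit tests, run in isolation, would catch it.
without = pathfinding_sample(NoiseSource(7), weather_enabled=False)
with_weather = pathfinding_sample(NoiseSource(7), weather_enabled=True)
assert without != with_weather
```

Each system is correct on its own; the bug only exists in their shared state.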

The tests passed. The game was broken. I was 40 and I knew this wasn't normal.

2023

AI coding assistants arrived, and they were genuinely useful. They could read code, suggest fixes, explain patterns. I paired with them daily. But every time I hit a runtime bug — the kind where the game does something unexpected during play — the conversation went the same way:

"The player's damage seems too high."
"Looking at the code, the damage calculation is base * modifier. The modifier is capped at 1.5x."
"But in-game it's doing 3x damage."
"I can only see the source code. I'd need runtime data to diagnose this further."

The AI could read the blueprint but couldn't walk through the building. It was the smartest engineer I'd ever worked with, and I had to become its eyes and ears for the one thing that mattered most: what was actually happening.
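That gap between what the source says and what runtime does is easy to manufacture. A hypothetical sketch: the 1.5x cap is real and visible in the damage function, but a second system multiplies on top of it, out of view.

```python
MAX_MODIFIER = 1.5

def base_damage(base, modifier):
    # The cap an assistant sees when it reads the damage code.
    return base * min(modifier, MAX_MODIFIER)

def apply_hit(base, modifier, is_critical):
    dmg = base_damage(base, modifier)
    if is_critical:
        dmg *= 2.0   # lives in the combat system, far from the cap
    return dmg

# Static reading: "damage is capped at 1.5x".
# Runtime: a critical hit stacks on top — 1.5 * 2.0 = 3x.
print(apply_hit(100, 2.0, is_critical=True))   # 300.0
```

Reading `base_damage` alone, the cap holds; only observing a live hit reveals the stacking.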

I was 46 and I was tired.

The Pattern

Here's what hasn't changed in 25 years: the fundamental debugging workflow is guess-and-check.

1. Something goes wrong
2. You form a hypothesis about why
3. You instrument (print, breakpoint, profiler, test)
4. You check if your hypothesis was right
5. If not, go to step 2

The tools at step 3 got better. Dramatically better. But the workflow — the fact that you start from hypothesis rather than observation — hasn't changed since printf.

Think about that. In 25 years, we've made the "check" step prettier. We haven't changed the process at all.

Medicine figured this out centuries ago. You don't ask a patient to reproduce their symptoms in a lab. You examine the patient. You run diagnostics on the actual living system. You look at what's happening, and the diagnosis follows from observation.

What "Done" Means

I'm not done debugging. I'm done with the workflow where I have to predict what information I'll need before I know what's wrong.

I want to ask my running game a question and get an answer. Not add a log statement and reboot. Not set a breakpoint and hope the timing holds. Not read a profiler and reverse-engineer the cause from the symptom.

"Why is the boss fight too easy?"

That's the question. The answer exists — in the running game, right now, in memory. Every variable, every object, every relationship. The information is there. We just haven't had a way to ask for it.
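In miniature, the difference is between instrumenting ahead of time and querying live state after the fact. A toy sketch — the registry and query syntax here are invented for illustration, not any real tool:

```python
class Boss:
    def __init__(self):
        self.hp = 5000
        self.phase = 2
        self.damage_resist = 0.0   # oops: should be 0.4 in phase 2

# A registry of live objects — this state already exists in memory
# while the game runs; nothing extra had to be logged in advance.
live = {"boss": Boss()}

def query(path):
    """Answer 'boss.damage_resist' against the running object graph."""
    name, _, attr = path.partition(".")
    return getattr(live[name], attr)

# No log statement, no rebuild, no breakpoint: just ask.
print(query("boss.damage_resist"))   # 0.0 — there's the easy boss fight
```

The point isn't this ten-line registry; it's that the answer came from observation of the live system, not from a hypothesis instrumented in advance.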

I've spent 25 years adding Debug.Log to running software. The information was always there. We just never learned to ask.


Kara Lindström is Lead Engine Programmer at Ironveil Studios, currently shipping her eleventh title — her first where she hasn't added a single print statement. She doesn't miss them.

*This is a fictional case study created to illustrate the runtime visibility gap. Real stories coming soon.
