How Will New Engineers Learn to Code?
I’ve been hiring and mentoring engineers for over two decades. During that time, I’ve witnessed countless new engineers evolve into senior engineers and, in some cases, engineering leaders. They learned the same way I did: by breaking things, fixing them, and having more experienced engineers review their code. However, given how people seem to be using AI coding assistants lately, I keep coming back to a question: if new engineers are generating code they don’t understand, are they actually learning anything?
The Olden Days
In the olden days, when I had flowing locks and a passion for life, a new engineer would get assigned a small feature. They’d often struggle with it for hours, maybe even days. They’d write some truly terrible code and then, during code review, a more experienced engineer would patiently explain why their nested for-loops were O(n²) when they could be O(n). The new engineer would rewrite it, break something else, fix that, and gradually internalize the patterns. In more modern organizations, we’ve improved this with the introduction of onboarding buddies, pairing, and mentoring to reduce the time a new engineer spends being frustrated, stuck, and banging their head against the keyboard.
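To make that code-review lesson concrete, here’s a minimal sketch (an invented example, not from any real review) of the kind of rewrite a reviewer might walk a new engineer through: a duplicate check written with nested loops versus the same check done in one pass with a Set.

```typescript
// Nested loops: every item is compared against every later item,
// so the work grows roughly as n * n in the worst case, i.e. O(n²).
function hasDuplicateSlow(items: string[]): boolean {
  for (let i = 0; i < items.length; i++) {
    for (let j = i + 1; j < items.length; j++) {
      if (items[i] === items[j]) return true;
    }
  }
  return false;
}

// The same check with a Set: one pass, each lookup is O(1) on average,
// so the whole check is O(n).
function hasDuplicateFast(items: string[]): boolean {
  const seen = new Set<string>();
  for (const item of items) {
    if (seen.has(item)) return true;
    seen.add(item);
  }
  return false;
}
```

The value of the review isn’t the Set itself; it’s the moment the new engineer sees what the nested loops were actually costing.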
It is inefficient. It is sometimes painful. But (I think?) it works.
Every bug they create teaches them about edge cases. Every performance issue shows them the cost of poor design choices. Every outage they cause is a story they’ll be telling for years. The learning isn’t just about syntax; it is about understanding systems, thinking through consequences, and developing engineering intuition.
Oh Brave New World
Now I watch new engineers prompt ChatGPT or Cursor to generate entire functions. The code often works. It’s frequently better than what they would have written themselves. But I wonder: if I ask them to explain what the code does, or why it’s structured that way, will I get blank stares?
Last week, one of our engineers used AI to generate some tests for a React app. It produced 58 tests, hundreds of lines of code, all of which passed perfectly and looked real pretty running in CI. When I dug into the code during a code review, however, I discovered that the tests mocked away most of what needed to be tested. I am going to surmise that because the AI is trained to find the shortest path to a solution, and mocking components is distinctly easier than working out how to test them, it went down that route. This is not the engineer’s fault. They have only ever written a handful of tests, and, like many new engineers, they are still getting to grips with testing as a concept. They didn’t really understand that the impressive-looking test code was the Emperor’s new clothes.
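For illustration only, with every module and function name invented rather than lifted from that review, here is the shape of the problem in a Jest-style test: the thing the test claims to cover has been mocked away, so a green run proves almost nothing.

```typescript
import { fetchUsers } from "./api";
import { loadUserNames } from "./loadUserNames";

// The API layer, the very thing this test is nominally about, is mocked away.
jest.mock("./api", () => ({
  fetchUsers: jest.fn().mockResolvedValue([{ id: 1, name: "Ada" }]),
}));

test("loads user names", async () => {
  const names = await loadUserNames();

  // Passes, and looks real pretty in CI, but it only proves that the mock
  // returned what we told it to return a few lines earlier.
  expect(fetchUsers).toHaveBeenCalled();
  expect(names).toEqual(["Ada"]);
});
```

A more honest version would exercise loadUserNames against a fake HTTP layer or real fixtures instead of stubbing out the very behavior it is supposed to verify.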
A Counterargument
But here’s where I need to check my assumptions. Maybe I’m just that old engineer who complained when IDEs started auto-completing code (I didn’t, BTW. I am the dude who thinks linting and autocorrect are awesome because I am messy, but let’s imagine). “Back in my day, we memorized every method for every object and knew what order ln uses for arguments.”
Do new engineers really need to understand the intricacies of implementing a quicksort when they’ll never write one in production? Does anyone know how to manage memory manually in a world of garbage-collected languages? Do they need to learn how to set up the boilerplate for a React app? Is AI just eliminating boilerplate that nobody should have to write anyway? (Hello, I see you there, Java.)
There’s truth to this. I haven’t written a linked list implementation in production code in fifteen years. Much of what AI generates really is boilerplate: CRUD operations, basic API integrations, standard React components. Perhaps AI is simply lowering the barrier to entry, allowing new engineers to focus on higher-level problems rather than getting bogged down in syntax minutiae.
Where Now Innovation?
My real worry is that innovation in software doesn’t come from following patterns; it comes from understanding systems deeply enough to see new ways to solve problems. The engineers who built the frameworks we rely on didn’t get there by copying existing solutions. (Although every person who has forked an NPM module, Rubygem, or package because “it wasn’t quite right for me” did definitely rely on some prior art. Also, please stop.) They understood the problems at a fundamental level and imagined new approaches.
AI models generate code based on their training data, which consists of millions of examples of how things have been done before. If new engineers are primarily learning from AI-generated code, they’re learning to solve problems the way they’ve always been solved. Where does the next breakthrough come from? Who’s going to invent the next React, the next Kubernetes (in fairness, I am not sure I want the next Kubernetes, but needs must), or the next paradigm shift, if everyone’s learning from a model trained on yesterday’s solutions? Or, less dramatically, how do these engineers solve new problems for which there isn’t an AI-generated answer?
Think about the engineers who created the tools we now take for granted. They didn’t have AI assistants. They understood HTTP so deeply that they could envision REST. They fought with deployment pain until they imagined containers. They struggled with callback hell long enough to design promises and the async/await syntax. They lived with the problems until the solutions became obvious.
Finding Balance
So where does this leave us? The answer is not to ban AI tools; that ship has sailed, and we would be foolish to try it. But we also cannot simply hand new engineers AI assistants and expect them to develop into strong senior engineers without deliberate intervention.
Here are some ideas I’m toying with:
Code archaeology sessions: We take AI-generated code and dissect it. Why did the AI make these choices? What are the tradeoffs? What would happen if we changed this code?
No-AI zones: Some problems, maybe during onboarding, must be solved without AI assistance. Yes, it’s artificial, but so is removing training wheels from a bike.
Bug hunts in AI code: We deliberately use AI-generated code that contains subtle bugs, making finding and fixing them a learning exercise (see the sketch after this list).
First-principles thinking: Before reaching for AI, we require new engineers to write out their approach in plain English. We often already have user stories; why not expand on them with implementation and design notes? What are they trying to solve? What are the constraints? Only then do they use AI to help implement. Or the reverse: engineers use AI to generate specifications, to-do lists, and a plan, and then write code against that plan (AI TDD, as it were).
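To show what I mean by a bug hunt, here is the sort of exercise I have in mind (entirely hypothetical code written for this post, not something an AI actually produced for us): a helper that reads cleanly and passes a happy-path test, with one planted flaw to find.

```typescript
// Bug-hunt exercise: this paginator looks tidy, but it has one planted flaw.
export function paginate<T>(items: T[], page: number, pageSize: number): T[] {
  // Planted bug: the UI treats pages as 1-based, so page 1 should start at
  // index 0. As written, the first `pageSize` items are silently skipped
  // and the final page always comes back empty.
  const start = page * pageSize;
  return items.slice(start, start + pageSize);
}
```

The specific bug doesn’t matter; the habit of reading generated code with suspicion, rather than trusting the green checkmarks, is the point.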
I don’t think any of these are perfect, but they are a starting point.
The Path Forward
The hard truth is that we do not yet know what this will mean for our industry. We’re conducting a real-time experiment with the next generation of engineers. Will they be less capable because they didn’t struggle through the fundamentals? Or will they be more capable because they can focus on higher-level architecture and business problems?
What I do know is that good engineering is about more than writing code. It’s about talking to end users, turning needs into solutions, understanding tradeoffs, thinking systematically, debugging methodically, and knowing when to take a shortcut or break a rule. These skills are developed through experience and mentorship, not just by writing (or copying and pasting) code.
I want to use AI to eliminate the truly mundane parts of learning to code: the forgotten semicolons, the mismatched brackets, the boilerplate that teaches nothing. But we need to be intentional about preserving the struggles that actually teach. The bugs that make you understand how systems work. The performance problems that teach you about algorithms. The outages that make you careful in the right ways.
The new engineers joining our teams today will become the senior engineers and engineering leaders of tomorrow. The code they generate with AI might work, but will they understand it deeply enough to know when it shouldn’t be used? Will they be able to innovate beyond what the models have seen? Will they be able to debug problems that don’t have pat AI answers?
I don’t have all the answers, but we need to be asking these questions now, before we accidentally create a generation of engineers who can generate code but can’t actually engineer.