Living as a Developer in a World of AI
So this is my first blog post on the new website.
I wanted to talk a bit about the impact AI has had on my career and how I think we, as developers, can evolve and adapt in this world.
I built my first website when I was 12. I had no idea what coding was; I did not even know that what I was about to learn had a name.
It started with curiosity after a friend of mine had created a website for his online radio station. I remember opening it and just being impressed that something like that could exist because someone decided to make it.
At some point, I noticed a small label in the corner that said “made with Wix”, and curiosity got the better of me. I clicked it.
That click sparked a fire that is still burning to this day.
I was suddenly looking at an interface full of things I did not understand: buttons, menus, and other unfamiliar elements. It felt overwhelming, but I could drag things around, change text, and publish something to the internet.
After a while, I made my first website. It was messy. It was ugly, full of animations thrown together, and it ran on Flash. But it was mine.
That feeling never really left.
Fast forward to 2026. I have been through school, university, multiple jobs, and more than a few shifts in ambition.
For a long time, I was convinced I would end up in game development. That belief carried me into a game development course at university. At 18, it felt like the obvious path.
But life rarely follows the version of the future you sketch out in your teens.
While still studying, I took my first job as a web developer. It was meant to be temporary. It was not. Over time, I realised that what I actually loved was not just games, but systems. Watching something move from a vague idea to a working product. Solving problems that affected real users.
This was long before AI entered my workflow. Back then, large language models were still research projects, not coding assistants sitting in your editor.
My first role was in a tiny startup agency: three young developers building marketing tools for housing developers. No big engineering department. No safety net. Just features that had to ship.
That is where I properly learned PHP, Laravel, and Vue. More importantly, that is where I learned what a framework really is, how the web actually works, what happens beyond localhost, and how many moving parts sit behind a “simple” feature.
Coding itself was not new to me. I had done three years of Java at university and four years of C++ in high school. I knew loops. I knew classes. I knew how to make things compile.
What I did not know was how to build software.
There is a difference.
In school, the problem is well-defined. The inputs are clean. The output is known. You are graded on correctness.
In the real world, there is no answer sheet.
You are handed ambiguity.
You are handed half-formed ideas from non-technical stakeholders. You are handed deadlines that do not care how elegant your architecture is. You are handed legacy code written by someone who left six months ago and never documented anything.
And you figure it out.
That first job taught me Git in a team environment, not just solo projects. It taught me code reviews that actually challenge your thinking. It taught me that “it works on my machine” means nothing if production disagrees. It taught me that shipping something imperfect but usable often beats polishing something that never sees the light of day.
Over the years, through different roles and growing responsibilities, I moved into senior positions before AI ever became part of the daily toolkit.
So when AI finally arrived, it did not shape my foundations.
It met them.
By the time LLMs started creeping into IDEs and workflows, I was already making architectural decisions. I was already responsible for systems other people depended on. I was reviewing pull requests, mentoring juniors, thinking about long-term maintainability instead of just getting features across the line.
AI did not teach me how to think about software.
It stepped into an environment where that thinking was already formed.
And that changes the dynamic completely.
If you learned to build software the hard way, before AI assistants existed, you do not mistake it for magic.
You see its seams.
You see when it overcomplicates something simple. You see when it confidently suggests an abstraction that does not belong. You see when it misses the invisible constraints that exist in every real codebase.
Most of the time, I use it the way I used to use search engines and Stack Overflow.
As a fast lookup.
As a rubber duck that talks back.
As a way to generate a starting point when I already know roughly what the end should look like.
But I never let it drive.
Because complexity, combined with outsourced critical thinking, is where things fall apart.
In isolation, AI is impressive.
Ask it to write a function, and it probably will. Ask it to scaffold a small app, and it can. Ask it to explain a concept, and it usually does a decent job.
But real production systems are not isolated.
They are layered.
They have historical decisions baked into them. They have edge cases that exist because of one specific client from three years ago. They have performance constraints that only show up under real traffic. They have security considerations that are not obvious from the surface.
AI does not live inside that context.
It does not carry five years of product history in its head. It does not remember why a “bad” decision was actually the least bad option at the time. It does not sit in planning meetings where business trade-offs shape technical ones.
And that is why critical thinking is still the human domain.
If anything, AI has reinforced my belief that seniority is not about how much code you can write.
It is about judgment.
Knowing when not to abstract. Knowing when to say no. Knowing when a “quick win” will become technical debt. Knowing when a feature request is actually a product problem in disguise.
AI can generate options.
It cannot own consequences.
When something breaks in production, the model is not on call. You are.
And that responsibility sharpens your thinking in a way no autocomplete ever will.
What I have noticed, though, is this:
AI widens the gap.
For experienced developers, it can be a multiplier. It removes friction from the repetitive parts. It speeds up drafting. It can help explore ideas quickly.
For inexperienced developers, it can become a crutch.
If you outsource understanding too early, you build systems you cannot reason about. And when complexity inevitably arrives, you are stuck maintaining something you never truly understood.
That worries me more than AI replacing developers.
Not that it will remove the job.
But that it might dilute the learning process that made many of us competent in the first place.
So my relationship with AI is practical.
I use it.
I benefit from it.
But I do not rely on it to think for me.
The fundamentals still matter. Architecture still matters. Clear communication still matters. Understanding trade-offs still matters.
If anything, they matter more now.
Because when generating code becomes cheap, thinking becomes the differentiator.
And that, ironically, makes building software in 2026 feel less threatening than I first expected.
Not obsolete.
Just… different.
And for full transparency: this article was AI-assisted.
Not written by AI alone. Not generated from a single prompt and pasted without thought. But shaped with it. Refined with it. Challenged by it.
I wrote the ideas. I adjusted the tone. I rewrote sections that did not feel like me. I pushed back where it sounded too optimistic. I kept what resonated and removed what did not.
That, to me, is the realistic version of AI in 2026.
Not replacement.
Collaboration.