Ask HN: Has AI stolen the satisfaction from programming?

70 points by marxism 12 hours ago

I've been trying to articulate why coding feels less pleasant now.

The problem: You can't win anymore.

The old way: You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do. Then write the code. Understanding was mandatory. You solved it.

The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

So I feel pressure to always, always start by info-dumping the problem description to AI and gambling for a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try; if not, hope I can iterate until something works. And even when something does work, there's zero satisfaction, because I don't have the same depth of understanding of the solution. It's no longer my code, my idea. It's just some code I found online. `import solution from chatgpt`

If I think about the problem, I feel inefficient. "Why did you waste 2 hours on that? AI would've done it in 10 minutes."

If I use AI to help, the work doesn't feel like mine. When I show it to anyone, the implicit response is: "Yeah, I could've prompted for that too."

The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.

The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feel only a one-sentence prompt away, and thus a waste of time.

Am I alone in this?

Does anyone else feel this pressure to skip understanding? Where thinking feels like you're not using the tool correctly? In the old days, I understood every problem I worked on. Now I feel pressure to skip understanding and just ship. I hate it.

bobbyprograms a minute ago

Yes in some ways. But in other ways it makes new magic as it’s so cool to see the things I can build. I’m making a compiler with it and have learned so much that would have taken years otherwise. So it has taught me a lot and I’m better for it at least now.

conductr 11 hours ago

If programming is woodworking, using AI is Ikea assembly, except they packed most of the wrong parts in the box, so I have to go back and forth with customer service to get the right parts, and the hardware parts don’t always function as intended, leaving me to find my own.

It’s a different, less enjoyable, type of work in my opinion.

  • 000ooo000 6 hours ago

    This analogy really hammers home how ridiculous it is to put 'vibe coding' or 'prompt engineering' in the same camp as software development. Imagine telling fellow woodworkers about your latest work and how Ikea built it for you, with a straight face.

    • lcnPylGDnU4H9OF 5 hours ago

      > Imagine telling fellow woodworkers about your latest work and how Ikea built it for you, with a straight face.

      I mean, if someone goes to Ikea but can't get it to build a working cabinet (with functioning and practical locks...), then perhaps the ability to do that is something to brag about.

  • yomismoaqui 3 hours ago

    Better woodworking analogy:

    If manual coding is a saw, using agentic AI coding is a table saw.

    It can make some work faster but there are still things done better with a manual saw. And if you don't learn how to use it well you can lose some fingers.

  • D13Fd 10 hours ago

    Plus the item you build may not be exactly what you initially requested, and you'll have to decide whether it's something you are willing to accept.

  • butlike 11 hours ago

    > It’s a different, less enjoyable, type of work

    This is an elegant way of putting it. I like it

tpoacher 10 hours ago

Not necessarily an identical thought to OP, but, anecdotally (n=1), my experience teaching the exact same course on Advanced Java Programming for the last 4 years has been that the students seem to be getting more and more cynical, and seem to think of programming as an art or as a noteworthy endeavour in itself less and less. Very few people have actually vocalised the "why do I even need to learn this if I can write a prompt" sentiment out loud, but it has been voiced, and even from those who don't say it there's a very definite 'vibe' that is all but screaming it.

Whereas the vibe in the lecture theatre 4 years ago was far more nerdy and enthusiastic. It makes me feel very sorry for this new generation that they will never get to enjoy the same feeling of satisfaction from solving a hard problem with code you thought through and wrote from scratch.

Ironically, I've had to incorporate some AI stuff in my course as a result of needing to remain "current", which almost feels like it validates that cynical sentiment that this soulless way is the way to be doing things now.

  • raw_anon_1111 4 hours ago

    I have no idea where this romanticism of software development comes from. I’ve been in the field professionally for 30 years across 10 jobs - everything from startups, to boring old enterprise companies, to small to midsize “lifestyle companies” and BigTech. Most people see the job as a vocation to make money and not a passion.

    Scott Hanselman talked about “Dark Matter Developers” in 2012.

    https://www.hanselman.com/blog/dark-matter-developers-the-un...

  • nxor 10 hours ago

    Has the school changed?

    And can we assume that because AI has made it easy to solve some hard problems, other hard problems won't arise?

    Not that I don't agree

    And hasn't the internet generally added to this attitude?

    And if it makes you feel any better, as someone around that age, this environment seems to have also led some of us to go out of our way to not outsource all our thinking

  • andy99 10 hours ago

    I taught intro to programming ~15-20 years ago. Back then everyone just copied each other’s assignments. Plus ça change

  • abnercoimbre 10 hours ago

    The OP said coding now feels like:

    > import solution from chatgpt

    Which reminded me of all the students in classes (and online forums) mocking non-nerds who wanted easy answers to programming problems. It would seem the non-nerds are getting their way now.

themafia 11 hours ago

> That's the labor-saving promise.

Where are the labor saving _measurements_? You said it yourself:

> You'd think about the problem. Draw some diagrams. Understand what you're actually trying to do.

So why are we relying on "promises?"

> If I use AI to help, the work doesn't feel like mine.

And when you're experiencing an emergency and need to fix or patch it this comes back to haunt you.

> So all credit flows to the AI by default.

That's the point. Search for some of the code it "generates." You will almost certainly find large parts of it, verbatim, inside of a GitHub repository or on an author's webpage. AI takes the credit so you don't get blamed for copyright theft.

> Am I alone in this?

I find the thing to be an overhyped scam at this point. So, no, not at all.

  • LordDragonfang 11 hours ago

    > You will almost certainly find large parts of it, verbatim, inside of a GitHub repository or on an author's webpage. AI takes the credit so you don't get blamed for copyright theft.

    Only if you're doing something trivial or highly common, in which case it's boilerplate that shouldn't be copyrighted. We already had this argument when Oracle sued Google over Java. We already had the "just stochastic parrots" conversation too, and concluded it's a specious argument.

    • heavyset_go 10 hours ago

      > We already had this argument when Oracle sued Google over Java.

      "It's boilerplate therefore it isn't IP" isn't the argument that was made by Google, nor is it the argument that the case was decided upon.

      It was decided that Google's use of the API met the four determining factors used by courts to ascertain whether use of IP is fair use. The court found that even though it was Oracle's copyrighted IP, it was still fair use to use it in the way Google did.

      https://en.wikipedia.org/wiki/Google_LLC_v._Oracle_America,_...

    • themafia 10 hours ago

      > in which case it's boilerplate that shouldn't be copyrighted

      Let's say it's boilerplate code filled with comments that are designed to assist in understanding the API being written against. Are the comments somehow not covered because they were added to "boilerplate code?" Even if they're reproduced verbatim as well?

      > We already had the "just stochastic parrots" conversation too

      Oh, I was not part of those conversations, perhaps you can link me to them? The mere stated existence of them is somewhat underwhelming and entirely unconvincing. Particularly when it seems easy to ask an LLM to generate code and then to search for elements of that code on the Internet. With that methodology you wouldn't need to rely on conversations but on actual hard data. Do you happen to know if that is also available?

gngoo 4 hours ago

I really like building products, and with AI now I can just offload huge parts of the technical duties, and do the actual product building much faster. For me this is where the real satisfaction is. Yes of course, there was a lot of satisfaction with doing it myself, fixing a bug, problem or finally implementing something after a long grind.

But honestly I do not miss it at all. The further AI coding advances, the easier it becomes to build and iterate on small products (even if they just start out as MVPs). And the more I actually feel in my element. I understand why people dislike it, but it feels as if these tools were specifically made for me, and I am getting more and more excited as they keep getting better.

In a perfect world, I'd see no code at all and just tell the AI what I want, and a blackbox implementation with my product appears that I start to sculpt down to something I can work with and serve to users. That would be my ultimate satisfaction.

mattlondon 11 hours ago

> The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing

I'd disagree. For me, I direct the AI to implement my plan - it handles the trivia of syntax and boilerplate etc.

I now work kinda at the "unit level" rather than the "syntax level" of old. AI never designs the code for me, more fills in the gaps.

I find this quite satisfying still - I get stuff done but in half the time because it handles all the boring crap - the typing - while I still call the shots.

  • mpliax 10 hours ago

    Don't you have to go over whatever the chatbot spits out? Isn't that part more boring than writing the code yourself?

raw_anon_1111 8 hours ago

My satisfaction in programming since I started doing it professionally in 1996 was that companies with money let me exchange some of their money for my labor and I could take that money and then exchange it for goods and services to support my addiction to food and shelter.

AI just like IDEs before it makes it easier for me to complete my labor and have money appear in my account.

There are literally at least a dozen things I would rather do after getting off of work than spending more time at a computer.

  • saulpw 3 hours ago

    So how come you're here reading and commenting on Hacker News? Is this just down-time at work, or maybe you have some interest in computing technology beyond just the paycheck?

    • raw_anon_1111 2 hours ago

      I am not going to ask you to go through my comments. But if you look at my submissions..

      https://news.ycombinator.com/submitted?id=raw_anon_1111

      You’ll see that I’m much more interested in business and industry trends than the technology itself - technology as an enabler to make money

      But it’s more about giving my brain a break between exercising and my recent(ish) hobby of learning Spanish. My wife and I are planning to be in Costa Rica (or some other Spanish speaking country in us time zones) six-eight weeks every winter starting next year.

CuriouslyC 11 hours ago

To be honest the only time I got satisfaction out of programming in the past was when I programmed a really hard algorithm or created a really beautiful design, and the AI doesn't replace me there, it just automates the menial part of the work.

saulpw 11 hours ago

I agree completely, you are not alone! I've heard the argument "if you don't like AI just don't use it" but there is this nagging feeling just as you describe. Like the mere existence of AI as a coding tool has sucked all the dopamine out of my brain.

leakycap 11 hours ago

> There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

You're having imposter syndrome-type response to AI's ability to outcode a human.

We don't look at compilers and beat our fists because we can't write in assembly... so why expect your human brain to code as easily or quickly as AI?

The problem you are solving now becomes the higher-level problem. You should absolutely be driving the projects and outcomes, but using AI along the way for programming is part of the satisfaction of being able to do so much more as one person.

loveparade 7 hours ago

Reading HN I seem to be in the minority but AI has made programming a lot more fun for me. I've been an engineer for nearly 25 years and 95% of the work is rather mindless boilerplate. I know exactly what I need to do next, it just takes time and iteration.

The "you think about the problem and draw diagrams" part of you describe probably makes up less than 5% of a typical engineering workflow, depending on what you work on. I work in a scientific field where it's probably more than for someone working in web dev, but even here it's very little, and usually only at the beginning of a project. Afterwards it's all about iteration. And using AI doesn't change that part at all, you still need to design the high level solution for an LLM to produce anything remotely useful.

I never encountered the problem of not understanding details of the AI's implementation that people here seem to describe. I still review all the code and need to ask the LLM to make small adjustments if I'm not happy with it, especially around not-so-elegant abstractions.

Tasks that I actively avoided previously because they seemed like a hassle, like large refactorings, I no longer avoid, because I can ask an AI to do most of the work. I feel so much more productive, and work is more satisfying because I get to knock out all these chores that I had resistance to before.

Brainstorming with an AI about potential solutions to a hard problem is also more fun for me, and more productive, than doing research the old ways. So instead of drawing diagrams I now just have conversations.

I can't say for certain whether using LLMs has made me much more productive (overall it likely has but for certain tasks it hasn't), but it definitely has made work more fun for me.

Another side effect has been that I'm learning new things more frequently when using AI. When I brainstorm solutions with an AI or ask for an implementation, it sometimes uses libraries and abstractions I have not seen before, especially around very low level code that I'm not super familiar with. Previously I was much more likely to use or do things the one way I know.

mhaberl 11 hours ago

>The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing.

That’s the promise, but not the reality :) Try this: pick a random startup idea from the internet, something that would normally take 3–6 months to build without AI. Now go all in with AI. Don’t worry about enjoyment; just try to get it done.

You’ll notice pretty quickly that it doesn’t get you very far. Some things go faster, until you hit a wall (and you will hit it). Then you either have to redo parts or step back and actually understand what the AI built so far, so you can move forward where it can’t.

>I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

It was "stupid" then - better alternatives already existed, but you do it to learn.

> Am I alone in this?

Absolutely not. But understand that it is just a tool, not a replacement. Use it and you will soon find the joy again; it is there.

  • saxenaabhi an hour ago

    That's not my experience. Over Christmas I rewrote a restaurant POS application from Laravel to Vue/Wrangler using Bolt + ChatGPT. My exact steps:

    1) I took my db schema and got chatGPT to convert it to typescript types and stub data.

    2) Uploaded these types to Bolt and asked it one by one to create Vue components to display this data (Catalog screen, Payment Dialogs, Tables page, etc.) and to use fetcher functions that return stub data.

    3) Finally I asked it to replace the stub data with supabase.rpc calls, and to create Postgres functions to serve this data.
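    The stub-first flow in the steps above can be sketched roughly as follows. This is a hypothetical illustration, not the actual app's code: the `MenuItem` type and `fetchMenuItems` function are made up, and the commented-out supabase call only indicates the kind of swap step 3 describes.

```typescript
// Hypothetical sketch of the stub-first pattern in steps 2 and 3.
// The MenuItem type and fetchMenuItems function are illustrative only.
interface MenuItem {
  id: number;
  name: string;
  priceCents: number;
}

// Step 2: components call a fetcher that returns stub data, so the UI
// can be built and reviewed before any backend exists.
async function fetchMenuItems(): Promise<MenuItem[]> {
  return [
    { id: 1, name: "Espresso", priceCents: 250 },
    { id: 2, name: "Croissant", priceCents: 320 },
  ];
}

// Step 3: only the fetcher body changes once the backend is ready, e.g.
//   const { data, error } = await supabase.rpc("get_menu_items");
// Components stay untouched because the function signature is unchanged.
fetchMenuItems().then((items) => console.log(items.length)); // logs 2
```

    The design point is that the components only depend on the fetcher's signature, so swapping stubs for real RPC calls never touches UI code.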

    While I had most of the app done in a few days, testing, styling, and bug fixing took a month.

    Some minor stuff was done manually by me: receipt printer integration, because Bolt wasn't good at Epson XML or the related libraries at that time.

    Finally we released early feb and we received extremely good feedback from our customers.

    However, now I'm using Claude and an even higher percentage of the code is generated by it. Our feature velocity is also great. Since launch we have added the following features in 6 months:

    1) Split Table Payments
    2) Payment Terminal Integration
    3) Visual floor plan viewer
    4) Mobile POS for waiters without a tablet
    5) Reports, Dashboard, Import/Export
    6) Loyalty programs with many different types of programs
    7) Self-service webshop with realtime group ordering
    8) Improved tax handling
    9) Multicourse orders ("La'suite")
    10) Many other smaller features

    This would be very hard to achieve without AI for most one-person engineering teams. Although tbf not impossible.

    > The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

    I think the OP introduces a strawman here, since, as many people have pointed out, the labour saving happens in automating menial tasks, and no one sane should give up "understanding the details".

    > I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

    On the contrary. Reading ToyDB[1] source code helped me understand MVCC and isolation levels. That's knowledge that's valuable for a systems architect, since in the end LLMs are just fancy word generators.

    [1] https://github.com/erikgrinaker/toydb

baq 11 hours ago

It’s the opposite for me. I can get so much more of what I want done built quicker, and if I’m not familiar with a framework, it isn’t an issue anymore unless we’re talking about the bleeding edge.

nerdsniper 10 hours ago

AI makes me a lot more adventurous in terms of the projects I take on. I usually end up having to rewrite everything from scratch after POC proves that my vision is possible and actually works - including the whole old-school RTFM.

It’s a huge help for diving into new frameworks, troubleshooting esoteric issues (even if it can’t solve it its a great rubber duck and usually highlights potential areas of concern for me to study), and just generally helping me get in the groove of actually DOING something instead of just thinking about it. And, once I do know what I’m doing and can guide it method by method and validate/correct what it outputs, it’s pretty good at typing faster than I can.

sss123123 4 hours ago

I think AI hasn’t stolen the satisfaction — it just shifted where satisfaction comes from. Before, we felt proud of solving syntax or logic problems ourselves. Now, the joy is in asking better questions and evaluating better answers. It’s less about typing and more about thinking. Maybe it’s not coding that feels empty — it’s that our sense of mastery hasn’t caught up with the new workflow.

journal 11 hours ago

It's draining. I think I've read more synthetic text in the last three years than all text I've ever encountered in life.

andy99 11 hours ago

I still find it’s only useful for writing code you don’t need to understand. I would never vibe code something that I needed to know how it worked.

It’s just going to take time for “best practice” to come around with this. It’s like outsourcing, for a while it seems like a good idea and it might be for very fixed tasks that you don’t really care about, but nobody does it now for important work because of the lack of control and understanding which is exactly where AI will end up. I think for coding tasks you can almost interchangeably use AI and outsourcing and preserve the meaning.

crtified 11 hours ago

Not to suggest that analogies solve anything, but perhaps it adds large-scale context to mention that throughout history, various (and frequent!) events of technological disruption have had similar effects on particular fields of work.

I used to work in land surveying, entering that field around the turn of the millennium just as digitalisation was hitting the industry in a big way. A common feeling among existing journeymen was one of confusion. Fear and dislike of these threatening changes, which seemed to neutralise all the hard-won professional skills. Expertise with the old equipment. Understanding of how to do things closer to first-principles. Ability to draw plans by hand. To assemble the datasets in the complex and particular old ways. And of course, to mentor juniors in the same.

Suddenly, some juniors coming in were young computer whizzes. Speeding past their seniors in these new ways. But still only juniors, for all that - still green, no matter what the tech. With years and decades yet to earn their stripes, their professionalism in all its myriad aspects. And for the seniors, their human aptitudes (which got them there in the first place) didn't vanish. They absorbed the changes, stuck with their smart peers, and evolved to match the environment. Would they have rathered that everything in the world had stayed the same as before? Of course. But is that a valid choice, professionally speaking? Or in life itself? Not really.

littlecranky67 10 hours ago

AI is taking the joy out of programming the same way cameras took the joy out of painting, or the record player took the joy out of playing a musical instrument. If your financial income relies on it, there will be issues and you will have to go along with the technical innovation. If you enjoy programming for its own sake and only do it for fun, simply do not use AI. In your own free time, you are free to make that choice.

agentultra 11 hours ago

Can’t be disappointed if you don’t use it.

I’ve never met so many people that hate programming so much.

You get the same thing with artists. Some product manager executive thinks their ideas are what people value. Automating away the frustration of having to manage skilled workers is costly and annoying. Nobody cares how it was made. They only care about the end result. You’re not an artist if all you had to do was write a prompt.

Every AI-bro rant is about how capital-inefficient humans are. About how fallible we are. About how replaceable we are.

The whole aesthetic has a “good art vs. bad art” parallel to it. Where people who think for themselves and write code in service of their work and curiosity are portrayed as inferior and unfit. Anyone using AI workflows is proper and good. If you are not developing software using this methodology then you are a relic, unfit, unstable, and undesirable.

Of course it’s all predicated on being dependent on a big tech firm, paying subscription fees and tokens to take a gamble at the AI slot machine in hopes that you’ll get a program that works the way you want it to.

Just don’t play the game. Keep writing useless programs by hand. Implement a hash table in C or assembly if you want. Write a parser for a data format you use. Make a Doom clone. Keep learning and having fun. Satisfaction comes from mastery and understanding.

Understanding fundamental algorithms, data structures, and program composition never gets old. We still use algebra today. That stuff is hundreds of years old.

jackdoe 11 hours ago

i absolutely feel the same

wrote recently about it https://punkx.org/jackdoe/misery.html

now at night i just play my walkman (fiio cp13) and work on my OS. i managed to record some good cassettes with non-AI-generated free music from youtube :) and it's pretty chill

PS: use before:2022 to search

the__alchemist 11 hours ago

Here is what has changed for me: I spend less time on tedious or solved work, and focus on the interesting parts. If there's some sort of algo that I think the AI could solve cleanly, but I want to refresh my skills, maybe I do that part myself.

Note: I don't vibe-code, or use agents. Just standard JetBrains IDEs, and a GPT-5-thinking window open for C+P.

recursivedoubts 9 hours ago

Here is how I'm using it:

Do all the stuff you mention the old way. If I have a specific, crappy API that I have to deal with, I'll ask AI to generate the specific functionality I want with it (no more than a method or two). When it comes to testing, I'll write a few tests (some simple, some complicated) and then ask AI to generate a set of tests based on those examples. I then run and audit the tests to make sure they are sensible. I always end my prompts with "use the simplest, minimal code possible"

I am mostly keeping the joy of programming while still being more productive in areas I'm not great at (exhaustive testing, having patience with crappy APIs)

Not world changing, but it has increased my productivity I think.

diob 11 hours ago

Interesting, I have yet to feel like AI automates everything.

When I need something to work that hasn't been done before, I absolutely have to craft most of the solution myself, with some minor prompts for more boilerplate things.

I see it as a tool similar to a library. It solves things that are already well known, so I can focus on the interesting new bits.

thom 10 hours ago

No, this is the exact opposite of my experience. I have little interest in the AI's code, it can be very illustrative but I find it ugly, unmaintainable (for either humans or itself after enough iterations) and regularly wrong. But it's so much better than Google at teaching me new things, and helping with the boring bits like debugging stack traces and making random throwaway visualisations. I ask it dumb questions until I'm sure I understand things, in ways I would never burden a co-worker with, or that would be impossible when faced with a narrow blog post. And I'm left to just concentrate on my craft. It doesn't feel too slow to me because there's no point arriving at the destination if I didn't enjoy the journey and turn up with the AI having forgotten to pack any trousers.

WheelsAtLarge 8 hours ago

Not at all. Most programming jobs are more about getting things done than the satisfaction of creating some programming masterpiece. You are assigned a part of the cog and you are responsible for getting it done ASAP. It's a blue-collar job under the guise of a white-collar job. So it's nice to get a machine to do the grunt work while you decide how the code is supposed to function.

My experience with Vibe coding has been more frustrating than fruitful. Currently, I've only been fully successful with small code snippets and scripts but I can see where it's heading.

It works for me.

jvanderbot 10 hours ago

Who do you feel this pressure from? I realize I'm not answering your question, but is it possible that pressure is your own inner critic, not any real constraint?

Ok, you don't like a particular way of working or a particular tool. In any other era, we would just stop using that tool or method. Who is saying you cannot? Is it a real constraint or a perceived one?

Regardless, I understand the need to understand what you built. So you have a few options. You can study it (with the agent's help?), you can write your own tests / extensions for it to make sure you really get it, or you can write it yourself. I honestly think that most of those take about as long. It's only shorter when you don't want to understand it, so then we're back to the main question: Why not?

gooodvibes 11 hours ago

> The entire premise of AI coding tools is that they automate the thinking, not just the typing. You're supposed to be able to describe a problem and get a solution without understanding the details.

This isn't accurate.

> So I feel pressure to always, always, start by info dumping the problem description to AI and gamble for a one-shot. Voice transcription for 10 minutes, hit send, hope I get something first try, if not hope I can iterate until something works.

These things have planning modes - you can iterate on a plan all you want, make changes when ready, make changes one at a time etc. I don't know if the "pressure" is your own psychological block or you just haven't considered that you can use these tools differently.

Whether it feels satisfying or not - that's a personal thing, some people will like it, some won't. But what you're describing is just not using your tools correctly.

  • marxism 11 hours ago

    I think you're misunderstanding my point. I'm not saying I don't know how to use planning modes or iterate on solutions.

    Yes, you still decompose problems. But what's the decomposition for? To create sub-problems small enough that the AI can solve them in one shot. That's literally what planning mode does - helps you break things down into AI-solvable chunks.

    You might say "that's not real thinking, that's just implementation details." Look who came up the the plan in the first place << It's the AI! Plan mode is partial automation of the thinking there too (improving every month)

    When Claude Code debugs something, it's automating a chain of reasoning: "This error message means execution reached this file. That implies this variable has this value. I can test this theory by sending this HTTP request. The logs show X, so my theory was wrong. Let me try Y instead."

    • gooodvibes 9 hours ago

      > But what's the decomposition for?

      To get it done correctly, that's always what it's been about.

      I don't feel that code I write without assistance is mine, or some kind of achievement to be proud of, or something that inflates my own sense of how smart I am. So when some of the process is replaced by AI, there isn't anything in me that can be hurt by that, none of this is mine and it never was.

    • malux85 11 hours ago

      > When I stop the production line to say "wait, let me understand what's happening here," the implicit response is: "Why are you holding up progress? It mostly works. Just ship it. It's not your code anyway."

      This is not a technical problem or an AI problem, it’s a cultural problem where you work

      We have the opposite - I expect all of our devs to understand and be responsible for AI generated code

lordofgibbons 11 hours ago

I started (incorrectly) going down this same route of offloading the thinking and system design process to the LLM. It was a disaster.

You have to understand your problem and solution inside and out. This means thinking deeply about your solution, along with drawing the boxes and lines. Only then do you go to the LLM and have it implement your solution!

I heavily use LLMs daily, but if you don't truly understand the problem and solution, you're going to have a bad time.

gooodvibes 9 hours ago

> Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle.

Why didn't the fact that Redis already existed make the whole thing feel pointless before? You could just go to github and copy the thing. I don't get why AI is any different in this regard.

netdur 11 hours ago

To be honest, I still feel satisfied. But when it comes to making something useful: in the past I gave up programming because of the endless repetitive tasks. You want to build something cool, but wait, first you need to make an auth system, and by the time you finish that, the cool idea is already dead because of how boring and repetitive it all is. AI coding made it fun again.

abstractspoon 6 hours ago

It's the reason I won't use AI: the hard work, and sometimes grim satisfaction, of solving a problem is what makes it worth it.

  • raw_anon_1111 4 hours ago

    No, having money appear in my bank account and, formerly, stock appear in my brokerage account (RSUs) makes it worth it.

cadamsdotcom 10 hours ago

Unfortunately you are asking your question inside an echo chamber.

Most commenters comment because it makes them feel good inside. If a comment helps you... well, that's a rare side-effect.

To truly broaden your perspective - instead of just feeling good inside - you must do more than Ask HN.

jjice 11 hours ago

I had this mindset at first. Then I found that it's fantastic at doing all the grunt work I didn't care for. I'm quite happy now that I've mostly automated away the truly boring stuff, leaving me with more time for the interesting problems.

nxor 11 hours ago

I think it has, but not necessarily from programming; rather from other pursuits. I think the hype has gone too far. That said, it hasn't stolen the satisfaction from language learning, so I still think there are things it's suited for.

layer8 10 hours ago

Understanding remains imperative in software development. I would recommend finding employers/projects that value the thoroughness and diligence of what you call the “old way”. I don’t see those going away.

samuelknight 11 hours ago

No. I can prototype in 20 mins things that would have taken me a day before.

michelsedgh 11 hours ago

Stolen, as if people hate AI and don't love just chilling and having it code for them. You understand that people are choosing to use AI, and loving it, right? That's why it grew so much.

block_dagger 11 hours ago

I'm more thrilled building software now than I have been in my ~35 years programming. I think that means I am satisfied.

jgb1984 9 hours ago

I don't use AI. Problem solved. The software I make will be better off in the long term.

zkmon 11 hours ago

No, it didn't. The way you get your wow moment has changed. You get impressed by your skills in prompting, agentic stuff and your ability to squeeze out the best work from AI, fix its bugs, get it to review and fix your bugs and make the whole collaboration a grand success. That's not easy because now you are expected to deliver 10x output. It's the same hard work, or maybe more hard work.

gordonhart 10 hours ago

I largely agree. Thinking through the business requirements, hammering out the design, testing against the requirements, and reviewing the code were never the fun parts, but they were usually <50% of the job as an IC. Now that LLMs do the fun part (actually writing the code), those parts are all that’s left.

The job now feels quite different from the one I signed up for a decade-plus ago. The only options I see are to accept that with a sigh, or to reject automation of the fun part and either lose employability (worst case) or be nagged by anxiety that eventually that'll happen.

eimrine 11 hours ago

AI has stolen my satisfaction from philosophy. These are no longer times when I can be sure of my conclusions for ideological reasons. If I cannot persuade the LLM, my theory is nothing. If I can persuade the LLM, I end up using the LLM's theses instead of my own.

How can I choose my political views and preferences if I need to consult an LLM about them?

  • bigfishrunning 11 hours ago

    > If I can not persuade LLM my theory is nothing.

    It's important to remember, at times like these, that the LLM is not thinking. You can't persuade it of anything; you're looking at a convincing response based on patterns in language.

    • eimrine an hour ago

      Well well well. Suppose I have noticed that X is a pseudoscience, but most Xists actively promote it as a science. So if you ask an LLM "is X a science", it tells you "X is a science developed by... used by..." because it just repeats any nonsense.

      Then I add "is X a dogma considering..." and the miracle happens! And I never claimed it requires thinking, downvoters.

travisgriggs 10 hours ago

Web Programming has stolen satisfaction from programming. At least for me.

I've coded in win32, XWindows, GTK, UIKit, Logo, Smalltalk, QT, and others since '95. I had various (and sometimes serious) issues with each of these as I worked in them, but no other mechanism for helping humans interact with computation has been more frustrating and disappointing than the web. Pointing out how silly it all is (really, I have to use 3 separate languages with different computation models, plus countless frameworks, and that's just on the client side???) never makes me popular with people who have invested huge amounts of time and energy into mastering ethereal library idioms or modern "best practices" which will be different next month. And the documentation? Find someone who did a quick blog post on it, trying to get their name out there. Good luck.

The fact that an AI is an efficient but lossy compression of the big pile, helping me churn through it faster, is actually kind of refreshing for me. Any confidence that I was doing the Right Thing in this domain always made me wonder how "imagined" it was. The fact that I now have a stochastic parrot with sycophantic confidence to help me hallucinate through it all? That just takes it to 11.

I thought when James Mickens wrote "To Wash It All Away" (https://scholar.harvard.edu/files/mickens/files/towashitalla...), maybe someday things would get better. 10 years later, the furniture has moved and changed color some, but it's still the same shitty experience.

bediger4000 11 hours ago

Understanding (of various fields) is the only reason to do programming. I only use "AI" (really, LLMs) for code review, for this very reason.

LLM code is extremely "best practices" or even worse because of what it's trained on. If you're doing anything uncommon, you're going to get bad code.

gilbetron 8 hours ago

No, managers did that years ago.

chankstein38 11 hours ago

Honestly I'm not sure I ever really got satisfaction from the coding process itself. The output is what I care about. If it's a new and interesting output then it's still your idea. The code not being written by you doesn't detract from that.

Aside from regular arguments and slinging insults at chatgpt, I've been enjoying being able to be way more productive on my personal projects.

I've been using agentic AI to explore ESP32 in Arduino IDE. I'm learning a ton and I'm confident I could write some simpler firmware at this point and I regularly make modifications to the code myself.

But damn if it isn't amazing to have zero clue how to rewrite low level libraries for a little known sensor and within an hour have a working rewrite of the library that works perfectly with the sensor!

I'll say though, this is all hobby stuff. If my day job was professional chatgpt wrangler I think I'd be pretty over it pretty quickly. Though I'm burnt out to hell. So maybe it's best.

more_corn 10 hours ago

Don’t use AI in the fully automated, big picture dehumanizing way. (It will screw up if you do that and you won’t be able to catch and correct it)

Use it in the precise, augmenting, accelerating way.

Do your own design and architecture (it sucks at that anyway) and use AI to tab complete the work you already thought through and planned.

This can preserve your ability to reason about the project and troubleshoot, improve your productivity while not turning your brain off.

qq99 10 hours ago

I absolutely love it. I find it empowers me more than ever before, and my satisfaction is at all time highs. I'm even building projects now (videogames) that I probably wouldn't have started before.

Here's where I'm at:

- Your subjective taste will become more important than ever, be it graphic design, code architecture, visual art, music, and so on for each domain that AI becomes good at. People with better taste will produce better results. If you have bad taste, you can't steer _any_ tool (AI or otherwise) into producing good outputs. So refining your taste and expanding it will become more important. re: "Yeah, I could've prompted for that too.", I see a parallel to Stable Diffusion visual art. Sure, anyone _can_ make _anything_, but getting certain types of artistic outputs is still an exercise in skill and knowledge. Without the right skill expression, they won't have the same outputs.

- Delegating the things where "I don't have time to think about that right now" feels really good. As an analog, e.g., importing lodash and using one of their functions instead of writing your own. With AI, it's like getting magical bespoke algorithms tailored exactly to your needs (but unlike lodash, I actually see the underlying src!). Treat it like a black box until it stops working for you. I think "use AI vs not" is similar to "use a library or not": you kinda still have to understand what you need to do before picking up the tool. You don't have to understand any tool perfectly to make effective use out of it.

- AI is a tremendous help at getting you over blockers. Previous procrastination is eliminated when you can tell AI to just start building and making forward progress, or if you ask it for a high level overview on how something works to demystify something you previously perceived as insurmountable or tough.

> Nothing feels satisfying anymore

You still have to realize that were it not for you guiding the process, the thing in question would not exist. e.g., if you vibecode a videogame, you start to realize that there's no way (today) that a model is 1-shotting that. At least, it isn't 1-shotting it exactly to your vision. You and AI compile an artifact together that's greater than the sum of both of you. I find that satisfying and exciting. Eventually you will have to fix it (and so come to understand parts you neglected to earlier).

It's incredibly satisfying when AI writes the tedious test cases for things I write personally (including all edge cases) and I just review and verify they are correct.

I still find I regret in the long term cases where I vibe-accept the code it produces without much critical thought, because when I need to finesse those, I can see how it sometimes produces a fractal of bad designs/implementations.

In a real production app with stakes and consequences you still need to be reading and understanding everything it produces imo. If you don't, it's at your own peril.

I do worry about my longterm memory though. I don't think that purely reading and thinking is enough to drill something into your brain in a way that allows you to accurately produce it again later. Probably would screw me over in a job interview without AI access.

diamondfist25 11 hours ago

I never liked coding.

What I like is problem solving.

Coding is 90% syntax, 10% thinking.

AI is taking away the 90% garbage, so we can channel that 90% into problem solving.

AlienRobot 11 hours ago

Adding to this: when using AI as autocomplete, it feels like I'm being distracted every half second. Each character I type, I have to check whether the autocomplete matches what I was about to write, which is just very annoying.

So I'm more productive, but at what cost...

lovich 11 hours ago

It took the satisfaction out of it in the sense that I can no longer be laid to do it.

For side projects no, but I use it at the level that feels like it enhances my workflow and manually write the other bits since I don’t have productivity software tracking if I’m adopting AI hard enough

  • bryanlarsen 11 hours ago

    I assume you mean "paid" instead of "laid". Textbook Freudian slip?

    • lovich 6 hours ago

      Fat fingered on the mobile keyboard. Not sure there’s Freudian slip energy there with the two letters next to each other, but perhaps. I certainly feel like companies routinely fucked me.

wahnfrieden 11 hours ago

Hell no. I'm a full-time indie dev now, so maybe I would think differently if I were trading my time for a paycheck instead of working for results with 100% equity. But now I get to tackle features and ideas I've had for years but could never justify taking the time to investigate and attempt, in part because agents are slow enough that I must parallelize them which allows me to test ideas on the side while working on my primary objectives. I still review all code and provide close technical guidance.

I already learned to appreciate working with code from "others" by working in teams and leading teams in a past life. So I don't feel as personally attached to code that comes from my own fingertips anymore, or the need for the value of my work to be expressed that way.

incomingpain 11 hours ago

I have multiple open source projects. Maintaining them, and coding fixes for mostly edge-case bugs (as I hadn't really added new features), became tedious, and I didn't want to maintain them anymore.

AI coding fixed that. Pre-AI I loved using all of the features of an IDE with an intention of speeding up my coding. Now with AI, it's just that much faster.

>The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

I've had so much satisfaction since AI coding; greater satisfaction than before.

alganet 11 hours ago

Think of it this way: if your problem can be solved by an LLM with the same quality, then it's not a problem worthy of a human to tackle. It probably never was in the first place; we just didn't know.

The only exception here is learning (solving a solved problem so you can internalize it).

There are tons of problems that LLMs can't tackle. I chose two of those: polyglot programs (I already worked on them before AI) and bootstrapping from source (AI can't even understand what the problem is). The progress I can make in those areas is not improved by using LLMs, and it feels good. I am sure there are many more such problems out there.

  • marxism 11 hours ago

    I actually agree with everything you said, and I see I failed to communicate my idea; that's exactly why I'm so upset.

    You said "the only exception here is learning" - and that exception was my hobby. Programming simple things wasn't work for me. It was entertainment. It was what I did for fun on weekends.

    Reading a blog post about writing a toy database or a parser combinator library and then spending a Saturday afternoon implementing it myself: that was like going to an amusement park. It was a few hours of enjoyable, bounded exploration. I could follow my curiosity, learn something new, and have fun doing it.

    And you're right: if an LLM can solve it with the same quality, it's not a problem worthy of human effort. I agree with that logic. I've internalized it from years in the industry, from working with AI, from learning to make decisions about what to spend time on.

    But here's what's been lost: that logic has closed the amusement park. All those simple, fun learning projects now feel stupid. When I see those blog posts now, my gut reaction is "why would I waste time on that? That's one prompt away." The feeling that it's "not worthy" has completely drained the joy out of it.

    I can't turn off that instinct anymore. I know those 200 lines of code are trivial. I know AI can generate them. And so doing it myself feels like I'm deliberately choosing to be inefficient, like I'm LARPing at being a programmer instead of actually learning something valuable.

    The problem isn't that I disagree with you. The problem is that I agree with you so completely that I can no longer have fun. The only "worthy" problems left are the hard ones AI can't do. But those require months of serious investment, not a casual Saturday afternoon.

    • pakitan 10 hours ago

      > The feeling that it's "not worthy" has completely drained the joy out of it.

      It was never "worthy". With the proliferation of free, quality, open source software, what's now a prompt away, has been a github repo away for a long time. It's just that, before, you chose to ignore the existence of github repos and enjoy your hobby. Now you're choosing to not ignore the AI.

    • eCa 11 hours ago

      > And so doing it myself feels like I'm deliberately choosing to be inefficient

      People have plenty of hobbies that are not the most "efficient" way to solve a problem. There are planes, but some people ride bikes across continents. Some walk.

      LLMs exist, you can choose to what level you use them. Maybe you need to detox for a weekend or two.

    • alganet 10 hours ago

      I genuinely do not understand this. You can totally still do that for learning purposes.

      The only thing you cannot do anymore is show off such projects. The portfolio of mini-tutorials is definitely a bygone concept. I actually like that part of how the culture has changed.

      Another interesting challenge is to set yourself up to outperform the LLM. Golf with it. The LLM can do a parser? Okay, I'll make a faster one instead, in fewer lines of code. There are tons of learning opportunities in that.
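      As a concrete version of that golf challenge, here is a sketch of the classic exercise: a tiny recursive-descent parser for integer arithmetic in plain Python. It is illustrative only (not anyone's actual project), but small enough to race an LLM's version on size or speed:

```python
import re

def parse(src):
    """Evaluate an arithmetic expression like "2*(3+4)" via recursive descent."""
    tokens = re.findall(r"\d+|[()+\-*/]", src)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def factor():  # factor := NUMBER | '(' expr ')'
        if peek() == "(":
            take()           # consume '('
            value = expr()
            take()           # consume ')'
            return value
        return int(take())

    def term():  # term := factor (('*'|'/') factor)*
        value = factor()
        while peek() in ("*", "/"):
            op, rhs = take(), factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def expr():  # expr := term (('+'|'-') term)*
        value = term()
        while peek() in ("+", "-"):
            op, rhs = take(), term()
            value = value + rhs if op == "+" else value - rhs
        return value

    return expr()
```

      Precedence and left associativity fall out of the grammar structure in about 30 lines, so there is plenty of room to golf it down or speed it up.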

      > The only "worthy" problems left are the hard ones

      That's not true. There are also unexplored problems which the AI doesn't have enough training data to be useful.

heavyset_go 10 hours ago

> The new way: The entire premise of AI coding tools is to automate the thinking, not just the typing. You're supposed to describe a problem and get a solution without understanding the details. That's the labor-saving promise.

That's the "promise", but in practice it's exactly what you don't want to do.

Models can't think. Logic, accuracy, truth, etc are not things models understand, nor do they understand anything. It's just a happy accident that sometimes their output makes sense to humans based on the statistical correlations derived during training.

> The result: Nothing feels satisfying anymore. Every problem I solve by hand feels too slow. Every problem I solve with AI feels like it doesn't count. There's this constant background feeling that whatever I just did, someone else would've done it better and faster.

Am I the only one who is not totally impressed by the quality of code LLMs generate? I've used Claude, Copilot, Codex and local options, all with latest models, and I have not been impressed on the greenfield projects I work on.

Yes, they're good for rote work, especially writing tests, but if you're doing something novel or off the beaten path, then just lol.

> I was thinking of all the classic exploratory learning blog posts. Things that sounded fun. Writing a toy database to understand how they work, implementing a small Redis clone. Now that feels stupid. Like I'd be wasting time on details the AI is supposed to handle. It bothers me that my reaction to these blog posts has changed so much. 3 years ago I would be bookmarking a blog post to try it out for myself that weekend. Now those 200 lines of simple code feels only one sentence prompt away and thus waste of time.

If you don't understand these things yourself, how do you know the LLM is "correct" in what it outputs?

I'd venture to say the feeling that models can do it better than you comes from exactly that problem: you don't know enough to have educated opinions and insights into the problem you're addressing with LLMs, and thus can't accurately judge the quality of their solutions. Not that there's anything wrong with not knowing something, and this is not meant to be a swipe at you, your skills or knowledge, nor is my intention to make assumptions about you. It's just that when I use LLMs for non-trivial tasks that I'm intimately familiar with, I am not impressed. The more that I know about a domain, the more nits I can pick with whatever LLMs spew out, but when I don't know the domain, it seems like "magic", until I do some further research and find problems.

To address the bad feelings: I work with several AI companies, the ones that actually care about quality were very, very adamant about avoiding AI for development outside of doing augmented searches. They actively filtered out candidates that used AI for resumes and had AI slop code contributions, and do the same with their code base and development process. And it's not about worrying about their IP being siphoned off to LLM providers, but about the code quality in itself and the fact that there is deep value in the human beings working at a company understanding not only the code they write, but how the system works in the micro and macro levels. They're acutely aware of models' limitations and they don't want them touching their code capital.

--

I think these tools have value, I use them and reluctantly pay for them, but the idea that they're going to replace development with prompt writing is a pipe dream. You can only get so far with next-token generators.

unnouinceput 11 hours ago

No, it didn't. Or rather, it did for the run-of-the-mill coding-bootcamp wannabe programmer, which you sound like you are. For me it's the opposite. That's because I don't do run-of-the-mill web pages; my work is very specific, and the so-called "AI" (which is actually just googling with extra spice on top; I don't think I'll see true AI in my lifetime) is too stupid to do it. So I have to break it down into several sessions, giving only partial details (divide and conquer), otherwise it will confabulate stupid code.

Before this "AI" I had to do the mundane boilerplate tasks. Now I don't. That's a win for me. The grand thinking and the whole picture of the projects is still mine, and I keep trying to hand it to the "AI" from time to time, except each time it spits out BS. It also helps that, as a freelancer, my stuff gets used by my client directly in production (no manager above, who has a group leader, who has a CEO, who has the client's IT department, which finally has the client as the end user). That's another good feeling. Corporations with layers upon layers suck the joy out of programming. Freelancing allowed me to avoid that.

  • marxism 10 hours ago

    I'm curious: could you give me an example of code that AI can't help with?

    I ask because I've worked across different domains: V8 bytecode optimizations, HPC at Sandia (differential equations on 50k nodes, adaptive mesh refinement heuristics), resource allocation and admission control for CI systems, custom network UDP network stack for mobile apps https://neumob.com/. In every case in my memory, the AI coding tools of today would have been useful.

    You say your work is "very specific" and AI is "too stupid" for it. This just makes me very curious: what does that look like, concretely? What programming task exists that can't be decomposed into smaller problems?

    My experience as an engineer is that I'm already just applying known solutions that researchers figured out. That's the job. Every problem I've encountered in my professional life was solvable: you decompose it, you look up an algorithm (or an approximation), you implement it. Sometimes the textbook says the math is "graduate-level" but you just... read it, and it's tractable. You linearize, you approximate, you use penalty barrier methods. Not a theoretically optimal solution, but it gets the job done.

    I don't see a structural difference between "turning JSON into pretty HTML" and using OR-tools to schedule workers for a department store. Both are decomposable problems. Both are solvable. The latter just has more domain jargon.
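    To make the "decomposable" point concrete, here is a toy version of that scheduling problem in plain Python. Brute force stands in for what a real solver like OR-tools would do at department-store scale; the workers, shifts, and costs are all made up for illustration:

```python
from itertools import permutations

# Hypothetical data: cost[worker][shift], lower = happier.
workers = ["ana", "bo", "cy"]
shifts = ["open", "mid", "close"]
cost = {
    "ana": {"open": 1, "mid": 3, "close": 9},
    "bo":  {"open": 4, "mid": 2, "close": 2},
    "cy":  {"open": 7, "mid": 5, "close": 1},
}

def best_schedule():
    # Try every one-to-one assignment of workers to shifts and keep
    # the one with the lowest total cost. A real constraint solver
    # prunes this search instead of enumerating all of it.
    return min(
        (dict(zip(workers, perm)) for perm in permutations(shifts)),
        key=lambda sched: sum(cost[w][s] for w, s in sched.items()),
    )
```

    The decomposition (enumerate candidates, score them, take the minimum) is the same shape whether the objective is worker happiness or pretty HTML; only the domain jargon changes.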

    So I'm asking: what's the concrete example? What code would you write that's supposedly beyond this?

    I frequently see this kind of comment in AI threads: that there are more sophisticated, AI-proof kinds of programming out there.

    Let me try to clarify another way. Are you claiming that, say, 50% of total economic activity is beyond AI? Or is it some niche role that contributes only 3% to GDP? It makes a big difference whether this "difficult" work is everywhere or only in a few small pockets.

madaxe_again 11 hours ago

Nope. I have the same dividing line as I had when I was leading development teams:

If this seems interesting to me, and I have time, I will do it.

If it is uninteresting to me, or turns out to be uninteresting, or the schedule does not fit with mine, someone else can do it.

Exactly the same deal with how I use AI in general, not just in coding.

collegeman 5 hours ago

I think you've done a good job of articulating why AI-assisted coding doesn't feel as satisfying—I agree with this sentiment and have shared it. Several ideas and experiences are keeping me afloat and making steady progress; I share them with you in case any of them resonate with you and the other commenters. TL;DR: The big shift in my mindset is from solving every coding problem myself to solving problems through a scalable collaboration—especially valuable to my startup right now because growth has been slow and we can't afford more engineers. In effect, I'm micromanaging at a certain scale that gets more work done and probably wouldn't feel good if my collaborators were human beings (who likes being micromanaged?).

I want to say up front that as a CTO, I think "the pressure to skip understanding and just ship" is a cultural failure somewhere that should be addressed by you if not by the people you work for and with. As others have pointed out here, that sort of approach to software engineering is guaranteed to create technical debt. This idea is corroborated by several articles I've read recently about the "slop" that is flowing downstream to QA teams and customers. I think as software engineers and professional people, we owe it to our colleagues and our customers to not replace working understandable software with broken black boxes.

The problem with the agile manifesto was never its mission and values. The problem with agile is the glut of terrible practices that do not scale. Likewise, the problem with AI-assisted coding isn't that it automates a large and tedious amount of syntax creation; it does that well. The problem is that we're trying to use it to do things it shouldn't be doing. Almost none of the "good" work product I have seen come out of AI-assisted engineering has been a "one shot" solution: planning is still a huge part of my process, and with AI assistance, I can do more planning!

The current phase of my personal development is to move from using AI assistance in one codebase on one task at a time, to using it across multiple tasks in multiple codebases. As I am writing this response to you, I have Claude working on two different problems: a complex redesign of our asset processing pipeline with backward compatibility for the thousands of assets that have already been added to our system, and debugging a stubborn defect in authentication in our Unity codebase. My approach to this is to treat these tasks like two collaborations with two other developers—my role is to guide and review their implementation, not do their work for them.

On that note, I would love to create a cultural shift here soon and start using more test-driven development in our projects. I have always loved this approach to software engineering, but I have seldom had the opportunity to put it into practice. TDD is time consuming in a way that I have found difficult to justify at the beginning of projects. But the longer a team waits to start implementing good test code, the harder the task becomes. I want to stress that it should be professional malpractice to automate TDD script design without systematic code review.

I just got back from a well-earned vacation, and the jet lag is really painful. I am looking forward to feeling better. Then, the next phase of doing more through collaboration is to leverage git worktrees. I started doing this just before my break, and I have a couple of different trees ready to start building some much-needed features. The worktrees and some excellent features in Laravel make it fast and simple to have completely separate local dev environments, with no virtual containers needed (fast and scalable!). I am pleased with how quickly the workspaces can be created—I hope to be as happy with the work that can get done inside them.

Absolutely all of this effort is aimed squarely at creating more value for my customers and our company's founders with less time and cost. But once I have the process up and running, I intend to reap some personal gains from all of this newfound productivity. I want to create games and experiences that leverage generative AI to generate novelty and story. With the support of AI coding assistants, I'll be able to start exploring those personal goals soon. In this way, I can unlock new avenues for personal growth in entrepreneurship and product design without taking on an outsized risk or dramatically changing the course of my career.

It has taken months of experimentation and painful personal growth to get to this mental space. I hope sharing these experiences is useful to you and to others. If you ever want to talk more about this challenge, I'd be open to meeting remotely: https://calendar.aaroncollegeman.com

Cheers.

collegeman 5 hours ago

I think you've done a good job of articulating why AI-assisted coding doesn't feel as satisfying—I agree with this sentiment and have shared it. Several ideas and experiences are keeping me afloat and making steady progress; I share them with you in case any of them resonate with you and the other commenters. TL;DR: The big shift in my mindset is from solving every coding problem myself to solving problems through a scalable collaboration—especially valuable to my startup right now because growth has been slow and we can't afford more engineers. In effect, I'm micromanaging at a certain scale that gets more work done and probably wouldn't feel good if my collaborators were human beings (who likes being micromanaged?).

I want to say up front that as a CTO, I think "the pressure to skip understanding and just ship" is a cultural failure somewhere that should be addressed by you if not by the people you work for and with. As others have pointed out here, that sort of approach to software engineering is guaranteed to create technical debt. This idea is corroborated by several articles I've read recently about the "slop" that is flowing downstream to QA teams and customers. I think as software engineers and professional people, we owe it to our colleagues and our customers to not replace working understandable software with broken black boxes.

The problem with the agile manifesto was never its mission and values. The problem with agile is the glut of terrible practices that do not scale. The problem with AI-assisted coding isn't that it automates some large and tedious amount of syntax creation—it does that well. The problem with AI-assisted coding is that we're trying to use it to do things that it shouldn't be doing. Almost none of the "good" work product I have seen come out of AI-assisted engineering has been a "one shot" solution: planning is still a huge part of my process, and with AI assistance, I can do more planning!

The current phase of my personal development is to move from using AI assistance in one codebase on one task at a time, to using it across multiple tasks in multiple codebases. As I am writing this response to you, I have Claude working on two different problems: a complex redesign of our asset processing pipeline with backward compatibility for the thousands of assets that have already been added to our system, and debugging a stubborn defect in authentication in our Unity codebase. My approach to this is to treat these tasks like two collaborations with two other developers—my role is to guide and review their implementation, not do their work for them.

On that note, I would love to create a cultural shift here soon and start using more test-driven development in our projects. I have always loved this approach to software engineering, but I have seldom had the opportunity to put it into practice. TDD is time-consuming in a way that I have found difficult to justify at the beginning of projects. But the longer a team waits to start writing good test code, the harder the task becomes. I want to stress that it should be professional malpractice to automate TDD test design without systematic code review.

I just got back from a well-earned vacation, and the jet lag is really painful. I am looking forward to feeling better. Then, the next phase of doing more through collaboration is going to be leveraging git worktrees. I started doing this just before my break, and I have a couple of different trees ready to start building some much-needed features. The worktrees and some wonderful features in Laravel make having completely separate local dev environments fast and simple, with no virtual containers needed (fast and scalable!). I am very happy with how quickly the workspaces can be created—I hope to be as pleased with the work that gets done inside them.
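For readers who haven't used worktrees: the idea is one working directory per branch, all sharing a single object store, so each AI agent (or human) gets an isolated checkout without a second clone. A minimal sketch—the repo, branch names, and paths below are illustrative, not from the comment:

```shell
set -e

# Demo setup: a throwaway repo standing in for the real project.
git init -q app
cd app
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# Create sibling worktrees, each on its own new branch. They share
# the same .git object database, so creation is nearly instant.
git worktree add -b feature/assets ../app-assets
git worktree add -b fix/auth ../app-auth

# The main tree plus the two feature trees are now listed together.
git worktree list
```

Each directory can then run its own dev server and its own coding agent independently; `git worktree remove <path>` cleans one up when the branch is merged.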

Absolutely all of this effort is aimed squarely at creating more value for my customers and our company's founders with less time and cost. But once I have the process up and running, I intend to reap some personal gains from all of this newfound productivity. I want to create games and experiences that leverage generative AI to generate novelty and story. With the support of AI coding assistants, I'll be able to start exploring those personal goals soon. In this way, I can unlock new avenues for personal growth in entrepreneurship and product design without taking on an outsized risk or dramatically changing the course of my career.

It has taken months of experimentation and painful personal growth to get to this mental space. I hope sharing these experiences is useful to you and to others. If you ever want to talk more about this challenge, I'd be open to meeting remotely: https://calendar.aaroncollegeman.com

Cheers.