I find LLM generated code ends up pushing review/maintenance burden onto others. It "looks" right at first glance, and passes superficial tests, so it's easy to get merged. But then as you build on top of it, you realize the foundations are hastily put together, so a lot of it needs to be rewritten. Fine for throwaway or exploratory work, but heaven help you if you're working in a project where people use LLMs to "fix" bugs generated by previous LLM generated code.
So yes it does increase "velocity" for the person A who can get away with using it. But then the decrease in velocity for person B trying to build on top of that code is never properly tracked. It's like a game of hot potato, if you want to game the metrics you better be the one working on greenfield code (although I suppose maintenance work has never been looked at favorably in performance review; but now the cycle of code rot is accelerated)
I'm working on a website and created a custom menu. Nothing fancy. AI got it done after some tries and I was happy, as web development is not my area of expertise.
After some time I realized the menu results in scrolling when it shouldn't, and I wanted to make the parent container expand instead. This was impossible, as the AI had produced a rather unusual implementation even for such a limited use case.
Best part: my task now is impossible to solve with AI as it doesn’t really get its own code.
I resorted to actually just looking into CSS and the docs and realized there is a MUCH simpler way to solve all of my issues.
Turns out sometimes the next guy who has to do maintenance is oneself.
> Turns out sometimes the next guy who has to do maintenance is oneself.
Over the years I've been well-served by putting lots of comments into tickets like "here's the SQL query I used to check for X" or "an easy local repro of this bug is to disable Y", etc.
It may not always be useful to others... but Future Me tends to be glad of it when a similar issue pops up months later.
In the same boat: I learnt to leave breadcrumbs for the future quite a long time ago, and it's paid off many, many times.
Once it becomes second nature, it's really relaxing to know I've left all the context I could muster around: comments in tickets, comments in the code referencing a decision, well-written commit messages for anything a little non-trivial. I learnt that peppering all the "whys" around is just being a good citizen in the codebase, even if only for Future Me.
"Getting things" is a matter of performance, not about the underlying hardware. If I'm an idiot who knows nothing about programming, but every time I slam the keyboard we get good programs, then how useful is it to discuss whether I am in fact empty-headed?
So we might discuss their performance along a gradient and think on their year over year improvement. Current industry performance is of such magnitude that it has persuaded the world to adopt ChatGPT workflows as much as they have. Adjacent to code, one might look to Terry Tao and how he relates to ML workflows in math.
It's a tale worth repeating, because only a minuscule percentage of people know (or pretend to know) how it works. Our view might be a bit skewed here on Hacker News, but normal people believe LLMs are thinking machines.
I love this analogy of consultants parachuting into a project, doing the bare minimum, and passing the baton to the next person to maintain the mess.
Leadership is buying into the hype and busy turning everyone into overzealous slopmongers. Companies are pushing an “AI Fluency” framework where people are encouraged (read: forced) to use LLMs and agentic coding in every aspect of development.
In my experience, AI generated code is much higher quality than code written by external service companies. For example it will look at your code base and follow the style and conventions.
A while back someone made a post or comment about how they managed to vibe code a huge PR (1,000+ lines) to an open source project. They said they didn't have time to read through the code but instead used tests to ensure the code was doing the right thing. Then it came out that there was a very lengthy review period where the maintainers had gone through the PR and helped fix the (rather significant) issues with it. So while the author "didn't have time" to review their own work, the burden was shifted onto the maintainers.
This has been described a lot as “workslop”, work that superficially looks great but pushes the real burden on the receiver of the work rather than the producer.
That sounds more like an organizational problem. If you are an employee that doesn't care about maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible. Previously that took the form of copying cheap templates, copying and pasting code from StackOverflow as-is without adjustments, not caring about style, using tools to autogenerate bindings, and so on. I remember a long time ago I took over a web project that a freelancer had worked on, and when I opened it I saw one large file of mixed python and HTML. He literally just copied and pasted whole html pages into the render statements in the server code.
The same is true for many people submitting PRs to OSS. They don't care about making real contributions, they just want to put something on their resume.
AI is probably making it more common, but it really isn't a new issue, and is not directly related to LLMs.
>If you are an employee that doesn't care about maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible.
Yes, this is it. The idea that LLMs somehow write this deceptive code that magically looks right but isn't is just silly. Why would that be the case? If someone finds they are good at writing code (hard to define of course but take a "measure" like long term maintainability for example) but they fail to catch bad code in review it is just an issue with their skill. Reviewing code can be trained just as writing code can be. A good first step might be to ask oneself: "how would I have approached this".
> So yes it does increase "velocity" for the person A who can get away with using it. But then the decrease in velocity for person B trying to build on top of that code is never properly tracked.
Offhand anecdote, 1990s
That reminds me of when the German corporation my mother worked for moved more and more production to China at the end of the last century. All the defective units that the still-existing German factory had to handle and repair ended up in that factory's accounts. From the top bosses' point of view, just looking at the accounting data, the Chinese production looked clean.
Of course, unsurprisingly (with enough effort), they made it work over the years, fulfilling the prophecy. Good for China.
How you account for things shifts the narrative and then reality follows the investments made based on that.
One of the things about AI generally is it doesn't "save" work - it pushes work from the one who generates the work to the person who has to evaluate it.
> I find LLM generated code ends up pushing review/maintenance burden onto others. It "looks" right at first glance, and passes superficial tests, so it's easy to get merged. But then as you build on top of it, you realize the foundations are hastily put together, so a lot of it needs to be rewritten.
This describes most projects I've been on where there wasn't a thorough RFC process?
Where I'm seeing the sweet spot right now:
1. Have a detailed RFC
2. Ticket out the work
3. Feed the RFC and ticket to the LLM via MCP
4. Have your refactoring and design patterns textbooks close at hand so you can steer the LLM properly when things start to get messy. "DRY this out" or "implement this using X pattern" tend to be highly effective prompts
5. Use agents or other LLMs to review code for RFC compliance, test coverage, etc. (this isn't as effective as I'd like it to be right now, skill issue probably)
6. When there are bugs, force the LLM to do TDD - say "We're observing a bug in production, here are the reproduction steps, write a failing test that covers this code path." Obviously check that the test is a real test and not slop. Then, prompt the LLM to fix the issue (a minimal example of this step is sketched below).
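To make step 6 concrete, here is a minimal sketch, assuming a hypothetical pricing module with a parse_price helper and a reported thousands-separator bug (all names and the repro are illustrative, not from any real project):

    # test_parse_price.py -- hypothetical regression test, written before any fix.
    # Repro from the bug report: "$1,299.99" is parsed as 1.0.
    import pytest
    from pricing import parse_price  # hypothetical module under test

    def test_parse_price_handles_thousands_separator():
        # Must fail against the current buggy implementation first, which
        # proves the test actually exercises the reported code path.
        assert parse_price("$1,299.99") == pytest.approx(1299.99)

    def test_parse_price_rejects_garbage():
        # Guards against the LLM "fixing" the bug by silently returning 0.
        with pytest.raises(ValueError):
            parse_price("not a price")

Run it, watch it fail, then hand the LLM the failing output; the test doubles as the acceptance criterion for the fix.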
Have you tried... talking to them, instead of permanently hurting their chances of staying employed in a shit economy?
It's great for you principals - perfect job security, sitting up on your thrones casting judgement on entry-level staffers who are forced to use LLM code to make a fast impact. Maybe try teaching your juniors how to do it the right way, rather than passive-aggressively impacting someone's physical safety net. Shame on all of you assholes.
They kept repeatedly getting an NC-17 from the MPAA and kept on resubmitting it (6 times) until just before release when they just relented, gave it an R and released it as-is.
They didn’t just keep resubmitting it. The first four times, changes were made (mostly around language and Saddam Hussein/the Devil). The final time, they felt the changes were arbitrary (it was just more language censoring) and so an exec told the board to hurry up as they had a release around the corner and it was just silently rubber-stamped.
Aren't junior engineers the same way? Give them an assignment, and what they turn in looks good because they made sure it worked. But then it frequently has to be rewritten?
We can yell at junior engineers and make them fix the problems they've created. You can't yell at an AI, and it can't understand that it's created any problems.
I'd say it's a change of paradigm, and it might be even faster if you have test-driven development... Imagine writing your tests manually, getting LLM code that tries to pass the tests, done.
Of course, the golden rules are: 1. write the tests yourself, don't let the LLM write them for you, and 2. don't paste this code directly into the LLM prompt and let it generate code for you.
In the end it boils down to specification: the prompt captures the loosely defined specification of what you want, the LLM spouts something already very similar to what you want, you tweak it, test it, off you go.
With test-driven development this process can be made simpler, and changes to other parts of the code are also checked.
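As a minimal sketch of rule 1, assuming a hypothetical textutils.slugify function as the target: the tests below are written by hand and kept out of the prompt, and whatever the LLM produces is only accepted once they pass unmodified.

    # test_slugify.py -- hand-written spec; the LLM only ever sees the function
    # signature and a prose description, never these assertions.
    import pytest
    from textutils import slugify  # hypothetical module the LLM implements

    @pytest.mark.parametrize("raw, expected", [
        ("Hello, World!", "hello-world"),
        ("  spaces   everywhere  ", "spaces-everywhere"),
        ("", ""),
    ])
    def test_slugify_examples(raw, expected):
        assert slugify(raw) == expected

    def test_slugify_is_idempotent():
        # A property the prompt is likely to miss unless stated explicitly.
        assert slugify(slugify("Already a Slug")) == "already-a-slug"

If the generated code only passes after you weaken a test, that's a signal to rewrite the prompt, not the spec.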
The worst part of AI is the way it's aggressively pushed. Sometimes I have to turn off AI completions in the IDE just because it becomes extremely aggressive in showing me very wrong snippets of code in an incredibly distracting way. I hope when the hype dies down the way these tools are pushed on us in a UX sense is also dialed down a bit.
I feel you. I totally disabled AI completions as they actually were often sidelining me from my reasoning.
It is like having an obnoxious co-worker shoving me to the side every time I type a new line, completing a whole block of code, and asking me if it is good, without regard to how many times I rejected those changes.
I still use AI, but favor a copy-paste flow where I at least need to look at what I am copying and locate the code I am pasting into. That way I am at least aware of the method and function names and the general code organization.
I also ask for small copy-paste changes so that I keep it digestible. A bonus is that with ChatGPT in Firefox, when the context gets too big, the browser basically slows down and locks up, which works as a kind of extra sense that the context window is too big and the LLM is about to start spouting nonsense.
That said, AI is an amazing tool for prototyping and for help when I'm out of my domain of expertise.
So one really big thing that can make AI autocomplete super useful is to follow the old method from "Code Complete", the Pseudocode Programming Process (PPP).
Write a comment first on what you intend to do, and the AI generally does a good job auto-completing below it. You don't have to sketch everything out; the AI is already using the page as context, and the comment just disambiguates what you want to do, so it can autocomplete significant portions when you give it a nudge.
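Here's a minimal sketch of what that looks like, assuming a hypothetical CSV-cleanup helper: the comments are typed by hand first, and the lines underneath each one are roughly what the completion fills in.

    # Comment-first (PPP-style) prompting for inline autocomplete: each comment
    # below was typed by hand; the code under it is the kind of completion the
    # nudge tends to produce.
    import csv

    def load_active_users(path):
        # Read the CSV file, which has a header row.
        with open(path, newline="", encoding="utf-8") as fh:
            rows = list(csv.DictReader(fh))

        # Keep only rows whose "status" column is "active" (case-insensitive).
        active = [r for r in rows if r.get("status", "").strip().lower() == "active"]

        # Sort by signup date, newest first, and return the result.
        return sorted(active, key=lambda r: r.get("signup_date", ""), reverse=True)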
I've almost fully converted to agentic coding, but when I was using earlier tools, this was an extremely simple method to get completions to speed you up instead of slow you down.
Indeed. That’s my only interaction with AI coding.
Every time Visual Studio updates, it’ll turn back on the thing that shoves a ludicrously wrong, won’t even compile, not what I was in the middle of doing line of garbage code in front of my cursor, ready to autocomplete in and waste my time deleting if I touch the wrong key.
This is the thing that Microsoft thinks is important enough to be worth burning goodwill by re-enabling every few weeks, so I’m left to conclude that this is the state of the art.
Thus far I haven’t been impressed enough to make it five lines of typing before having to stop what I’m doing and google how to turn it off again.
Agents are great (in so far the models are able to complete the task). Autocomplete copilot just feels like bad UX. It's both, not super effective and also disruptive to my thinking.
I think it depends on the context. If I've been writing the same language and frameworks and code solidly for a few months, then autocomplete gets in the way. But that rarely happens, I like to keep trying and learning new things.
If I'm familiar with something (or have been) but haven't done it in a while, 1-2 line autocomplete saves so much time on little syntax and reference lookups. Same if I'm at that stage of learning a language or framework where I get the high-level concepts, principles, use cases and such, but I just haven't learned all the keywords and syntax structures fluently yet. In those situations, speedy 1-2 line AI autocomplete probably doubles the amount of code I output.
Agents are how you get the problems discussed in this thread: code that looks okay on the surface, but falls apart on deeper review, whereas 1-2 line autocomplete forces every other line or two to be intentional.
The most annoying is when I'm trying to think through a data structure. While I'm trying to deeply think through every member of a class, its type, relationships, etc., this zealous fellow acts like a toddler that knows no way to stay shut unless snoozed off.
"AI" autocomplete has become rather like mosquitos buzzing around my head that I have to constantly swat away. I'm likely to shut it all off soon, it's just become irritating.
JetBrains IDEs have an option to enable AI inline suggestions on demand via a keypress. I really like it. It saves some "boring" typing, while not being annoying.
I'm pretty sure Cursor also has something similar?
Yeah, it’s just horribly wrong in my experience and a complete distraction. Code completion for functions in the project is another story and that has been around for ages.
My little experience with AI coding, using copilot on Eclipse, was mixed...
Context: I work with an old Java codebase that uses Servlets and implements its own web framework. There is a lot of code without tests or comments.
The autocomplete I find useful, especially for menial, very automatic stuff like moving things around when I refactor long methods. Even the comment suggestions look useful. However, the frequency with which it jumps in is annoying; it needs to be dialed down somehow (I can only disable it). Plus, it eats the allowed autocomplete quota very quickly.
The "agent" chat is like tossing a coin. I find it very useful when I need to write tests for a class that doesn't have any; at least it lets me avoid writing the boilerplate, though I usually need to fix the mocking setup. Another case where it worked fine was when it helped me fix a warning I had on a few Vue 2 components. However, in other instances, I saw it fail miserably to write useful code or mess up the code very badly. Our source code is in ISO-8859-1 (I've asked many times to migrate it to UTF-8), and for some reason the Copilot agent sometimes messes up the encoding and I need to fix the whole mess manually.
So... the agent/chat mode, I think, could be useful if you know in which cases it will do OK. The autocomplete is very useful, but needs to be dialed down.
"If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people?"
To answer this question: To justify the investment.
No, for real: LLM solutions cost a shitload of money, and every investment needs to be justified at a management level. That's the reason they are enforcing it.
My bigger problem is that there are a whole lot of "developers" who do not read the generated code properly, which is why you end up in review sessions where the developer does not know what is happening or why the code acts in a particular way. And we have not yet discussed clean code principles throughout the whole solution...
I feel bad for my friends that are married with kids working at places like microsoft, telling me how their copilot usage is tracked and they fear that if they don't hit some arbitrary weekly metric they will fall victim to the next wave of layoffs.
> Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy
That is all you heard in the '80s and '90s: people over the pond showing off how many hours per week they worked. Like... how is that something to be proud of?
So wow, you spent 12+ hours per day working, had no free evenings and zero paid holidays. And that is supposed to impress who?
What also happened in the '80s is that politicians told us automation would bring a three-day work week, which never materialized. But now we have to trust the same people, moved by the same greed, that this time it'll be different.
It's the same breed of people, bred from the same system, not literally the same people but the people following on the footsteps. Greed is the constant.
Hell, it has been going on for much longer than 40 years: in "In Praise of Idleness", written 90 years ago, Bertrand Russell talked about how industrialisation, with its automation, should be helping workers to work less.
France and UK are in giant fiscal crises. The German economy is in the toilet with no hope in sight. All of them have seen large deterioration of the quality of health care in the last decade. The EU leaders care more about Ukraine and destroying all privacy than any economic reform.
No? Unless you don't count intra-company worker committees. Those fulfill much of the same role and are mandatory above a (relatively low) company size.
Which may be related to unions having been actively persecuted (to the extent of actual state-sanctioned torture, disappearances, and bona fide massacres against people involved with unions - and people living next door. With active support of US civilian and military leadership, that is).
Not when your economy is reduced to mass importing of third world labor to keep salaries down and your economy going (i.e. Italy, France, UK and Germany).
You do realize that a lot of people in the EU are working 4 days a week? Are you aware that a McDonald's employee can live on only one job?
The EU decided to distribute the productivity benefits instead of hoarding them as stock market gains like the US does.
Btw, you do realize that the US commodified investing in US stocks? The whole world can easily invest in the US stock market.
Basically, instead of taking care of its own citizens, the US economy is paying out gains to foreigners.
There are no real promotions; it's about employment duration. In Bavaria you have about 12 salary groups. For white-collar workers, 9 is entry level, 10 is for some experience, 11 for experienced, and 12 is the carrot to work harder for. Some companies downgrade roles to pay less; job ads offering group 8 for experienced folks started appearing recently. The bonus is up to 28% depending on performance. So basically you can slack all day and get a +5% bonus on the base salary while someone doing overnighters gets +15%. The higher bonuses are reserved for old-timers. This system is absolutely cringe. Btw, most of these unionized companies offer 35-hour contracts; 40 hours must be negotiated as a bonus... Anyway, the union will take care of regular base salary increases, and that's really nice. +6% for doing nothing is amazing!
This describes payment and promotion functions in one unionized job sector in Bavaria.
Many German companies are not, in fact, unionized, and tend to pay 'übertariflich / außertariflich' - instead of union protection, they just pay you significantly more than you'd get with a union job. Which is a good thing 9 out of 10 times.
In two decades I have heard only a single time about a small company paying significantly more. The guy was a specialist - there are a handful of them worldwide - and he was an absolutely perfect fit and a highly desired candidate. I never heard anything else about small companies with serious salaries. It was always -20% or -30% off the union tables, be it in Bavaria or Baden-Württemberg. A special case might be small branches of American companies with great salaries, but that's a different topic. And some statistics at the end: https://karriere.de/mein-geld/gehaltsstudie-diese-20-unterne...
So no, non-unionized smaller companies pay less, with possible exceptions for key employees.
The truth is probably somewhere between my extreme and your extreme... but in 20 years of working in the country I have not once encountered a situation in which an AT contract I saw was not significantly better than a union contract when it came to compensation - both at some of the largest, 100% pure German conglomerates and in the KMU sector.
Which - incidentally - is why such companies advertise paying 'außertariflich' in their job adverts.
Situation might be different for low- and non-skilled workers.
Why is it only falling apart now? Why was a system like that able to dominate the global manufacturing economy for half a century?
The answer probably isn't American work ethics inspired by American compensation schemes, but rather Chinese ingenuity and grit. But seriously, why can you build so much on 35 hours per week and a mid-five-figure salary?
Not that hard, but also why would you want to promote based on metrics? That will get you people gaming the system, and I can't imagine a single software dev metric that actually captures the full gamut of value a dev can provide. You will surely miss very valuable devs in your metrics.
I didn't realize "performance tracking" actually just meant "blind metric tracking". Are there lots of rules to differentiate the good kinds of metrics (like actual outcomes for customers and co-workers) from the bad kinds of metrics (like time spent using an LLM)? Sounds like this is all about treating a symptom (poor business leadership) rather than a root cause (noncompetitive markets).
As companies grow, they tend to move away from subjective performance reviews like that and toward more objective metrics. Otherwise, it's too easy for personal politics to contaminate the promotion process. Employees are incentivized to find whichever manager will give them 5 star reviews no matter what, and managers are incentivized to be that guy, because then they have access to the best employees. When a company is small, and everyone knows everyone, this is not an issue. But when 90% of the company is a stranger to you, you need more objective metrics to rely on.
Even married people with kids can switch companies. Sometimes that implies a pay cut, but not always.
And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.
>And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.
I'd say that there's some room for nuance there. Tech hiring has slowed significantly, such that even people in senior roles who get laid off may be looking for a long time.
If you work for Microsoft you're not getting top tier comp already (at least as compared with many other tech companies), and then on top of that you're required to work out of a V/HCOL city. Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck who weren't having that issue a couple of years ago.
Check the prices in Seattle, SF, LA, DC, and NYC metro areas for 2-4 bedroom rentals and how they've jumped the last few years. You're looking at 35%-45% of their take home pay just on rent even before utilities. I'm not sure the math works out all that well for people trying to support a family, even with both parents working.
> Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck
If you maxed out your lifestyle relative to your income then yes, that is the case. It will always be, no matter how much you make.
It's also the case for the guy stocking the shelves at your local Walmart if he maxes out his lifestyle. But if you compare both in absolute terms, there are huge differences.
Which lifestyle you have is your choice. How big of a house, what car, where to eat, hobbies, clothes, how many kids, etc. If you max that out, fine, enjoy it. But own that it was your choice and comes with consequences, i.e., if expenses rise more than income, then suddenly your personal economy is stretched. And that's on you.
Absolutely, programmers are paid exceptionally well compared to a lot of other jobs. If they live paycheck to paycheck they are doing things wrong, especially when having family.
The hedonic treadmill really gets away from some people. I've had coworkers on 7 figures talk about how they couldn't possibly retire because the costs of living in (HCOL city) are far too high for that.
When you dig down into it, there's usually some insane luxury that they're completely unwilling to give up on.
If you're a software engineer in the United States, or in London, you can almost certainly FIRE.
Yup, it's insane to me. I am a software developer in Germany making 30k (after taxes) and manage to save 600-700€ a month while still living really well (rural area, no car).
Absolutely not enough to retire early, but easily enough to not live paycheck to paycheck. Making six figures in the USA and not being able to afford life is baffling to me.
Add family and 100k after taxes in Munich will be no big deal. I could live alone in the car, but the kids might want their own rooms and their own beds.
100k is an unreachable dream for me unless I found a business myself and actually succeed. Munich is expensive; I've seen some prices there. I live near Denmark, though, so Munich would not be an option in the first place.
I could afford a house, considering my savings rate and current apartment rent. Not a big one, but it would be enough. I have no reason to buy one for myself, though.
You can do over 100k if you freelance, but being a freelancer in Germany is risky in a lot of ways. Salaries in Berlin and Munich are approaching or passing 100k for leadership roles. The problem is that the cost of living in both cities is high, and in Berlin you basically can't get a flat anymore even if you can pay the rent on it.
> I feel bad for my friends that are married with kids working at places like microsoft, telling me how their copilot usage is tracked and they fear that if they don't hit some arbitrary weekly metric they will fall victim to the next wave of layoffs.
It's not just Microsoft. Other smaller employers are aping those guys.
My employer has an utterly ridiculous PowerBI dashboard tracking how much every employee uses LLM-based tools. Make sure to enable the Premium models, because otherwise you won't get credit! There are naughty lists for people whose usage is too low. Luckily the usage goals (for now) aren't very high.
However, they're also getting anal about tracking tasks, and the powers that be have asserted control over all aspects of story creation and work process. There's speculation they're going to start tracking story completion rates and demanding measured productivity increases.
An easy way to game that would be to spam a couple of pages of unread documentation for every page of code you write. That gets you two-thirds Copilot usage, it's not critical, and documenting existing code is a use case much more likely to actually work for an LLM.
> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people? The results will be there in the outcome of the shipped product for all to see.
It’s a bit like returning to the office. If it’s such an obvious no-brainer performance booster with improved communication and collaboration, they wouldn’t have to force people to do it. Teams would chomp at the bit to do it to boost their own performance.
I don't want to wade into the actual effectiveness of RTO nor LLMs at boosting productivity, but if you buy into the claims made by advocates, it seems pretty obvious that the "in office boosts communication" claim is only true if your coworker (the other side of the conversation) is in office. Not everyone has the same priorities, so you'd have to mandate compliance to see the benefits.
Similarly, many people don't like learning new tools, and don't like changing their behavior. Especially if it's something they enjoy vs something good for the business. It's 2025 and people will have adamantly used vim for 25 years; some people aren't likely to change what they're comfortable with. Regardless of what is good for productivity (which vim may or may not be), developers are picky about their tools, and its hard to convince people to try new things.
I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment, and if "the business" must wait for them to explore and discover it on their own time, they risk forgoing profits associated with that employee's work.
> Not everyone has the same priorities, so you'd have to mandate compliance to see the benefits.
I think it could have been left to individual small teams or smaller units. After some time it would have been obvious that teams who went to the office delivered better results. If the benefits are really that obvious as they are usually touted, it shouldn't take long (a year or two).
> It's 2025 and people will have adamantly used vim for 25 years
But if they are productive and delivers results they can still use vim. I can see controlling the APIs used, the programming languages allowed etc. But if they are productive with vim, let them use vim.
> I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment,
That's fair. I guess it depends on the types of people. I had in mind motivated people who would be glad to be more productive and deliver results quicker. Yeah, if people are not like that and are trying to do the least amount of work and just coast then it's a bigger issue. Office or not office, AI or no AI probably won't shift the needle by much.
I don't see how using vim is in any way bad for business, what a terrible example. And I don't even use it myself.
Your argument also hinges on "business" knowing what is good for productivity, which they generally don't. Admittedly, neither do many programmers, else we'd have a lot less k8s.
Indeed, I detest vim but I think mentioning it detracted from the argument by showing why developers tend to not trust it when others try to dictate what is "good for the business" based on their own views rather than objective metrics.
You've got a point on RTO. Because it's a group behaviour, if you believe it will have positive effects, mandating it could be a way of jumpstarting the group dynamic.
With LLMs, I'm not so sure. Seems more like an individual activity to me. Are some people resistant to new tools, sure. But a good tool does tend to diffuse naturally. I think LLMs are diffusing naturally too, but maybe not as fast as the AI-boosters would like.
The mistake these managers are making is assuming it's a good tool for work that they're not qualified to assess.
> You've got a point on RTO. Because it's a group behaviour, if you believe it will have positive effects, mandating it could be a way of jumpstarting the group dynamic.
Fair point! It was just the first recent example of "it's obviously better but we'll force you to do it" I could think of.
In the case of RTO I think it should have been left to individual small teams. If the benefit is so clear, it would have become clear within a few years which teams worked better and which didn't, and how that depended on them being in the same office.
> Teams would chomp at the bit to do it to boost their own performance.
This assumes teams care more about performance than comfort and convenience. Many teams care about both. And which one wins out can vary. It can even change over time.
The question is how to incentivize what, and what methods are the most effective at doing that for a particular team at a given point in time.
> And which one wins out can vary. It can even change over time.
Well good reason to let each team pick. Teams that care about performance will eventually do a lot better. The claim with these mandate is that the benefits are "obvious". So if it's very obvious they should be visible fairly quickly.
If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping competing code that works. In that sense, the author is painting an incredibly bright picture of the future of the software industry: one where founders don't have to be particularly talented to hit the jackpot.
Saving misguided AI codebases is going to be quite lucrative for contract work I suspect.
A lot of non-technical people are going to get surprisingly far into their product without realising they are on a bad path.
It already happens now when a non-technical founder doesn't get a good technical hire.
The surprising thing for developers though, is how often a shit codebase makes millions of dollars before becoming an issue. As much as I love producing rock solid software, I too would take millions of dollars and a shit codebase over a salary and good code.
"...one where founders don't have to be particularly talented to hit the jackpot."
That's where we're at right now anyways.
"If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping--"
And that's how we got here.
The code rot issue will blow up a lot more over the next few years, so that we can finally complete the sentence and start "shipping competing code that works".
I worry that mopping up this catastrophe is going to be a task that people will again blindly set AI upon without the deep knowledge of what exactly to do, rather than "to do in general, over there, behind that hill".
Steelmanning the "we must force tool usage" position: it's possible that a tool does increase productivity, but there's either a steep learning curve (productivity only improves after sustained usage) or network effects (most people must use it for anyone to benefit).
No opinion on whether or not this applies to the current moment. But maybe someone should try forcing Dvorak layout on everyone or something like that for a competitive edge!
I once had a boss who saw me use Vim and was really impressed with how quickly I could jump around files and make precision edits. He tried getting the other devs (not many, < 5) to use Vim too but it didn't quite pan out.
I would guess that interest, passion, and motivation all play a role here. It's kind of like programming itself. If you sit people down and make them program for awhile, some will get good at it and some won't.
> I would guess that interest, passion, and motivation all play a role here.
And, to use less pointed language, people’s brains are wired differently. What works for one doesn’t necessarily work for another, even with similar interest, passion, and motivation.
I worked with a developer that copied and pasted A LOT and would keep his fingers on the old copy and paste buttons (Ctrl-Ins, etc.). I've even seen him copy and paste single letters. He's one of the most productive developers I've ever worked with.
I was using emacs for a while, but when I switched to vim, something about the different modes just really meshed with how I thought about what I was doing, and I enjoyed it way more and stuck to it for a couple of decades.
I see people that I'd say are more proficient with their emacs, VS Code, etc setups than I am with my vim setup, so I don't think there's anything special about vim other than "it works for me".
I've had plenty of interest, passion and motivation during my career. But never, ever, directed at learning something like vim, even if it would make me more productive.
I'd rather learn almost any of the myriad other topics related to software development than the quirks of an opinionated editor. I especially hate memorising shortcuts and commands.
Your old boss probably would have been a bit chastened if he knew said devs would then be spending their hours learning how to exit Vim instead of programming
There was a time where I'd change to a different terminal and do sudo killall -9 to get out vim.
And that time when I changed vim to a symlink to emacs on a shared login server and sat back and enjoyed the carnage. (I did change it back relatively quickly)
Coding agents seem to be in the fun paradox of "it's so easy to use, anyone can code!" and "using it productively is a deep skill, and we have to force people to use it so they learn"
Programming isn’t a government desk job. The interface between programmer and company should be the output only, they can’t force a programmer to use w/e bs they think is good at the time
The forcing argument has merit, it should not be forced, in fact they should say very little about how we do our work.
But the "rubber-stamp" framing is wrong, if it were true then you would not be needed at all. It's actually harder to use gen AI than to code manually. Gen AI has a rapid pace and overwhelming quantity of code you need to ensure is not broken in non-obvious ways. You need to layer constraints, tests, feedback systems for self repair and handle memories across contexts.
I recently vibe coded 100K LOC across dozens of apps, I feel the rush of power in coding agents but also the danger. At any moment they could hallucinate, misunderstand or use a different premise than you did. Going past 1000 LOC requires sustained focus, it will quickly unravel into a mess otherwise.
It is not harder if you don't care about, or even understand, what could go wrong. It is harder if you care and want to be as confident in this code as if it were your own hand-written code.
Feels like you are assuming everyone has your diligence and the diligence that exists in the industry isn't already rapidly decaying due to what's happening.
By some metrics (commit count, lines of code, latency to bug fix), I'm one of the most productive engineers in my org. My peers are constantly surprised that I still use zero LLMs in my development routine.
I'm holding on as much as I can. I love the thrill of programming and I'm definitely not holding the team back, although there have been idle threats about it being bad that seniors are not using AI enough.
My super power is that I've meticulously shaped the codebase I maintain to fit in my head at all times. It's only possible because I've been hands on with all of it. In my downtime I do idle refactors and often throw it away, just to explore more ideas to build even more implicit context.
I think we're going to have to have a generational shift in LLM context management before we get to something as capable. I certainly don't doubt we'll get there one day. But I will definitely feel depressed whenever that day comes. Solving architectural problems is my job, implementing the software is my reward - this is something that makes me happy to wake up every day. I accept that I'm a minority in the world of software development
Those metrics don't imply any level of productivity though. I could write 1000s of lines of code across hundreds of commits each day without delivering any value to anyone
> I am a programmer, not a rubber-stamp that approves Copilot generated code
That's just, like, your opinion?
Anyway, I don't see what the fuss is about. It's one of the two...
- Either the LLMs are very good and replace programmers. Yay, automation - we always cheer for that! Instead of working, humanity can, like, go fishing or something.
- Or the LLMs aren't that good and don't replace programmers. Yay, jobs!
Okay, but now what? Clearly, the industry is trending towards an entirely new style of doing programming. What are the long-term options going to be for those who don't enjoy this? Especially when there is a good chunk of people embracing it and adopting these tools faster than any other tools for this profession have been adopted in the past. How will this end?
The good and bad aspect of this approach to AI in tech is that it has revealed just how many developers out there are merely happy to get something working and out the door before clocking out, without actually understanding the inner workings of their code.
This is almost inevitable when something industrializes; people maximize profit by quickly shipping things that barely work. We need people who try to excel at technology, and AI just amplifies this need.
whenever people complain about someone being "merely happy with getting something to work and get it out the door before clocking out" i wonder to myself if i'm dealing with someone that has The Protestant Ethic and the Spirit of Capitalism on their nightstand, or has never read Economic and Philosophic Manuscripts of 1844, or simply does not understand the significance of these two essays.
like ... you expect people to actually be committed to "the value of a hard day's work" for its own sake? when owners aren't committed to value of a hard day's worker? and you think that your position is the respectable/wise one? lol
Where did they say anything about a "hard day's work"? Are you making up arguments to attribute to them, lol
And are you assuming the alternative involves not clocking out? Because "clock out, finish when there's more time" is a very good option in many situations.
No, it's not about capitalism and exploitation, hard-work propaganda, etc. You can work to the contract (e.g. strictly what's in your work contract and not "above and beyond") while still retaining the quality of the work. So reduce the quantity but not the quality. This is about the ton of bootcamp developers that were created in the last 10-ish years, for whom, unlike the rest of us, it is just a better-paid job.
Given the remainder of the comment is "and not understanding the inner workings" it's safe to assume that "getting something to work" does not imply that it worked correctly.
Back in the days of SVN, I'd have to deal with people who committed syntax errors, broken unit tests, and other things that either worked but were obviously broken, or just flat out didn't work.
Taking a bit of pride in your work is as much for your coworkers as it is for yourself. Not everything needs to be some silly proles vs bourge screed.
I myself am among the people I would trust least to approve any code. In general I am way too trusting that others either know better or have properly thought through their work.
In scenarios where especially the latter might not be true, this seems like an inevitable failure. And I am not even sure any fixes will be thought through either... which makes me rather sceptical of the whole thing.
You want some advice from a 16-years-in-industry person? Not so long, but long enough: software, like all industries, is driven by metrics.
Metrics we understand, but that managers sometimes fail to understand. You are a means of production. With the advent of AI, some very hyped people think and wish they could get rid of programmers.
You know what I am doing in the meantime? I built a business. I am just finishing the beta deployment test now. Can it go wrong? Yes.
But otherwise, you face becoming a number, a production-chain cog, in the future. Besides, by the time they can get rid of you, you are going to be in a bad position to move. Invest time now in an alternative strategy, if you can.
Of course, I know nothing about you so I might be totally wrong. If you already have financial safety for the rest of your life, this does not apply as hard.
I am trying to buy more freedom on my side. I already had some, but not enough. You will not be free with a manager to report to, even if you think you are doing a better job than he thinks. Or even if you are objectively doing so.
They will care about delivery in a rush, politics, self-interest (this is not different from any human, but you will depend on them), etc.
I totally agree: my employer requires me to take ownership of the code I push to the repository, so I should not be forced to use some tool if I think that the tool does things wrong.
In a larger scope, I tend to break many "rules" when I code, because my experience argues against them, and this is what makes me unique. Of course, nowadays I need to convince my team to approve it, but sometimes things that are written differently are free from certain flaws that I specifically want to avoid in that case.
-- EDIT --
I think this management trend comes from bad management principles. There's a joke that a bad manager is a person who, knowing that one woman delivers a baby in nine months, will assume that nine women can deliver a baby in one month. I'd say a similar principle applies here: they were sold by the marketing on how AI makes things faster, they have put the numbers into their spreadsheet, and now they expect the output they pay for to match the numbers on the sheet. And if the numbers do not fit, they start pushing.
I find that my usage varies on a cycle: 1) using the LLM like crazy to help refactor code and extract components for re-use, to speed things up in the long run; 2) the refactoring has sped up the work significantly, but now I need to pause and address the technical debt that the LLM introduced.
Roles are evolving, and don't be a luddite! We're not programmers anymore, we're someone to take the blame when the AI goes wrong. After all, you are hired to do the impossible job of reviewing all the code and catching all the mistakes. A perfect job for a machine, er, human! Embrace the change! Progress!
"Needless to say, they’d still want you to take the responsibility. If bugs or tickets get raised on the shipped code, it’s you who gets fired, not the copilot or chatgpt - though the larger narrative or news headlines next day would still be, 'AI is eating jobs'!"
I'm also reminded of that legendary old IBM quote from 1979:
"A computer can never be held accountable. Therefore a computer must never make a management decision."
I find them useful for a few tasks that I'd procrastinate on and delay anyway, but for others, it would honestly be faster for me to just apply my skills without getting to an LLM review mode. I've found that using an LLM poisons my flow in a way that is a bit draining, whereas if I just put the work in to maintain context in my head, it's faster and more enjoyable.
Things like low-risk documentation, updating and creating unit tests along with fixing them in agent mode, describing how some code works etc.. are all at least a little useful for me.
As a response to the AI negativity in the thread.
Remember that this thing is in its infancy.
Current models are the embryos of what is to come.
Code quality of the current models is not replacing skilled software engineers, network or ops engineers.
Tomorrow's models may well do that though.
Venting the frustrations of this is all very well but I sincerely hope those who wish to stay in the industry, learn to get ahead of AI and utilize and control it.
Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.
We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, for example, is not that far away.
All while today's tech talent spends energy bickering on HN about the loss of being the code review king.
Everyone hated the code review royalty anyway. No one mourns them. Move on.
Current LLMs are already trained on the entirety of the interwebs, including very likely stuff they really should not have had access to (private github repos and such).
GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse in other metrics).
Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down.
Not enough new data, new data that is LLM generated (causing a "recompressed JPEG" sort of problem), absurd compute requirements for training that are only getting more expensive. At some point you hit hard physical limits like electricity usage.
[1]: If this happens, one side effect is that local models will be more than good enough. Which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.
If managers are pushing a clearly not-working tool, it makes perfect sense for workers to complain about this and share their experiences. This has nothing to do with the future. No one knows for sure if the models will improve or not. But they are not as advertised today and this is what people are reacting to.
There's a lot to unpack here but to me your comment sort of contradicts itself. You're saying these things are in their infancy and therefore not able to produce code at the standard of a skilled software engineer. But you also seem to have an axe to grind against code review, which is fine but wouldn't that mean code review is even more important? At least right now? Which is kind of the point of the article.
> Some exec somewhere in the company decided everyone needs to be talking to AI, and they track how often you're talking with it. I ended up on a naughty list for the first time in my career, despite never having performance issues. I explain to my manager and his response is to just ask it meaningless questions.
That's not a career-switching issue, that's a company-switching issue. Most people will work for at least one company in their career where the people in charge are dickheads. If you can't work around them, go find a different company to work for. You don't have to throw away an entire career because of one asshole boss.
Also fwiw, resistance is more effective than you think. You'd be surprised how often a dickhead in charge is either A) easy to call the bluff of, or B) needs someone to show them they are wrong. If you feel like you're going to quit anyway, put your foot down and take a stand.
If people can carve out an easy / chill policing role in which all there is to do is watch dashboards many will take it. The key for orgs is to avoid making roles like this.
I don't think AI is anywhere near the point of replacing humans yet. The main issue here is whether the use of these tools is forced or voluntary. I've seen quite a few companies where the boss tries to fully adopt AI productivity tools but faces strong resistance during implementation. From the employees' perspective, the boss might be moving too aggressively without considering the practical realities. From the boss's perspective, it's frustrating to see the pushback. This tension seems to be a common challenge at the current stage of AI adoption.
Just to add, many people tend to overestimate the power of AI. At least for now, vibe coding doesn't play a significant role in building complex software.
I recently read a Stack Overflow research report showing that "Most respondents are not vibe coding (72%), and an additional 5% are emphatic it not being part of their development workflow." It also noted that in a future with advanced AI, the #1 reason developers would still ask another human for help is "When I don't trust AI's answers" (75%). This clearly shows that human developers remain the ultimate backstop.
It won't replace humans in the foreseeable future, as these models cannot reason about or react to changes they weren't trained on. Bosses are jumping on a hype train, making decisions in fields they barely have expertise in, which is the frustrating part. They listen to the false promises of other "founders". Bosses not listening to their employees has always been a key driver of frustration at work; these businesses have no right to succeed.
I'm a solopreneur. Yesterday, in 90 minutes, I developed an enhancement that would have taken a full day before. I did it by writing a detailed Markdown spec and feeding it to Copilot Agents running Sonnet 4.5. Copilot worked on it on a server somewhere while I ate lunch.
When I returned, I reviewed the PR. It wasn't perfect; one of the dozen or so tests that Copilot had generated didn't actually test what it purported to. So, I fixed that by hand. It wasn't a big deal. It was still quicker and took less cognitive effort than writing the entire PR myself.
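The broken test was roughly this shape - reconstructed here as a hypothetical sketch, not the actual code: it patched the very function it claimed to verify, so it could never fail.

    # A test that passes without testing anything: the function under test is
    # replaced by a mock, and the assertion only checks the mock's canned
    # return value.
    from unittest.mock import patch

    import billing  # hypothetical module containing apply_discount()

    def test_apply_discount_caps_at_50_percent():
        with patch("billing.apply_discount", return_value=50.0):
            # The real apply_discount() never runs here, so the 50% cap is
            # never actually verified.
            assert billing.apply_discount(price=100.0, discount=0.9) == 50.0

The hand fix was simply to drop the patch and assert on what the real function returns for a few boundary inputs.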
I'll confess that part of me is pleased to be dismissed with epithets like "AI-pilled," because properly using LLMs is an enormous competitive advantage. The more negative sentiment around them there is, the less likely the competition is to be inclined to learn and master them.
I am living this but the CEOs of my company are also "active" programmers.
Even though I already hear from them that "it helps them in languages they do not know" (which is also my experience), I get frowned upon if in meetings I do not say that I am "actively using AI to GENERATE whole files of code".
I use AI as a rubber duck, to generate repetitive code, or to support me when going into a new language or technology; but as soon as I understand it, I find that most of the code given for complete, non-hobby, enterprise-level projects contains either inefficient code or just plain mistakes which take me ages to fix in new technologies.
Also, as someone who had lived through the cringeworthy programmer -> 'coder' transition (with coder originally having had a 'code monkey'-vibe), I am happy to see we're returning to adult words.
I understand which category of people you are describing, but this is what a proper programmer actually does:
- Checks Stack Overflow only for very niche issues, never finds exactly what he needs but reaches a solution by reading multiple answers, and sometimes posts a better solution for his issue afterwards
- Has his own templates if he does repetitive and boring stuff (common); implements the complex logic, if any, first and gets the rest done as fast as possible while being mildly disgusted
- Imports libraries and often takes a look at the code, noticing stuff that could be improved. Has private forks of some popular open-source libraries that fix issues or improve performance by fixing silly errors upstream. Sometimes he is allowed/has time to send the fixes back upstream. When using those libraries he sometimes finds bugs, and the first thing he does is check the code and try to fix them directly - no tickets to the maintainers, often just a PR with the fix.
Working with AI is like working with an ADHD intern that says they understand the problem but then gets distracted by every possible thing and then tries to gaslight you into thinking their mistakes are your fault.
Funny, in that just moments ago I was describing AI coder to friends as a very drunk intern high on crack.
My experience is that using AI as a fancy code completion tool works very well and saves me a lot of time.
But, trying to let it define how to do things aka vibe coding, is a recipe for endless disaster.
An AI coder can do great things, but it needs someone to first define the architecture and forcefully guide it in the right direction at every step. If let loose, things go haywire.
I basically just use it every now and then to summarize api docs for me, every time I try to use it to solve a real problem it just flounders in context polluted by its previous failed attempts.
I generally find the whole process to be more frustrating and time consuming than just writing the code myself.
I am not interested in entire new architectural paradigms required to enable a mediocre code ad-lib bot.
This is starting to remind me of the switch from ASM -> C or more recently, the switch from pure HTML/Javascript to frameworks.
Technology advances and if you don't follow the advancements, you will suddenly find yourself laid off with no job prospects.
This happened to my cousin last year. He's in his early 50s and was the subject matter expert on decades-old technology, which nobody really uses anymore. He was laid off, and when he tried to get another software development job, many interviewers laughed at him for the outdated tools and techniques he used.
He ended up getting a job as a mechanic and will most likely not get back into IT.
GPT-5: Typesetting and paste-up, film prepress/stripping, CMYK color separations, halftone screening, darkroom compositing/masking, airbrush photo retouching, optical film compositing/titling, photochemical color timing, architectural hand drafting, cartographic scribing and map lettering, music engraving, comic book lettering, fashion pattern grading and marker making, embroidery digitizing and stitching, screen-print color separations
Just yesterday I made some notes about a program I'd like to write (hobby project, to be open sourced). After that, the thought of using an LLM to turn the notes into an implementation squished the joy right out of me.
The better the code generated by LLMs gets, the less incentive there is to say "no". Granted, we're not nearly there yet (even though media reports and zealous tech bros say otherwise).
But - and this is especially true for organizations that already had a big code quality problem before the LLMs showed up - if the interpreter / compiler accepts the code and it superficially looks like it does what it should, there is pressure to simply accept it.
Why say no when we could be done now and move on!? Rubber-stamp it and let's go! Sigh. Maybe I'm overly pessimistic, reading the raves about LLMs every day grinds me down.
They force it on people because the U.S. is great at inventing new religions and selling them to people. It does not matter if it works. It does not matter if a company has a profit.
If company A buys $10 million non-working services from company B and B buys $10 million non-working services from A, both have $10 million revenue on paper.
They also have $10 million expenses, but revenue and hype is enough to warrant a P/E of 100.
Every company that sells the hot air or whose CEO has friends who sell the hot air needs to force "AI" on its employees in order to keep up appearances. In two years, when everything crashes, they will create a new religion and sell it.
Interesting to see how programmers seems to be separating into people embracing those tools and people rejecting them. I wonder if it's linked to liking the act of coding itself vs liking the results.
Because for other workers the threat is much bigger. I'm not a painter, filmmaker or musician, but now I can make a picture, a short movie or a song. Yes it will be mediocre, but if I'm fine with mediocre I no longer need those professionals.
Programs on the other hand still need developers to make. Also, we've seen decades of tooling evolution that (1) made developers more productive (2) failed to replace developers.
> Why is there not any kind of narrative out there describing how fake and soulless is code written by any AI agent?
because soulless code does not matter.
For other fields the result is more subjective. I don't like movies with a desaturated color palette; a lot of people like them. Maybe LLMs can produce a new genre of movies, which people who appreciate classic films or music find soulless, and they find it sad that the peasants kind of like these films, seeing the whole thing as a risk to their careers, to the craft itself, and to the human effort that goes into making their art.
In code it's objective: either the result works or it doesn't. I guess you can stretch "it works" to a different meaning that includes maintainability, where it starts to get more subjective, but at the end of the day you can also reach a point where the whole thing collapses under its own weight.
I think this is the main difference in reaction to LLMs between fields: in fields that are subjective and sensitive to the receiver's taste you can notice a rage against it (and I think rage is an overstatement), while in fields where the result is objective the reaction from people is simply saying it does or doesn't work.
I can't think about a more just punishment than what God is giving to programmers right now with AI generated code.
It was you who for decades did everything in your power to make programming as difficult as possible for normal people, inventing the most petty rules and complications in every place imaginable in order to protect your guild from intruders.
Being able to program does not require intelligence, it requires being able to memorize an immense number of random gotchas and grammar rules which makes code not work if you forget to honour them.
Things such as:
- If you add 2+2 your answer will be 22 unless you do it in the "right" way.
- Your code not compiling if you have "mixed tabs and spaces", and of course all text editors also hide tabs and spaces.
- Your code failing if you have forgotten a ; or a " or a ) or a } etc etc
- Having to set up a complicated development environment instead of being able to program straight away on the machine.
So at least I'm laughing while I'm praising God for punishing programmers with AI code. It helps normal people to work through your Byzantine and evil rules and empowers them to make functional programs, while you are left to tear your hair and grind your teeth when reviewing our AI-generated code.
You realize that what you write makes zero sense? Those rules will all be there regardless of whether you generate your code by hand or with AI. So you should ask your god to give you higher cognitive skills instead.
> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people?
For people who are so confident (which, I'm not), it's an obvious step; developers who don't want to use it must either be luddites or afraid it'll take their jobs. Moving sales people to digital CRMs from paper files, moving accountants to accounting software from paper ledgers and journals, moving weavers to power looms, etc etc -- there would have been enthusiasts and holdouts at every step.
The PE-bro who's currently boasting to his friends that all code at a portfolio company has to be written first with Claude Code, and that developers are just there to catch the very rare error, would have been boasting to his friends about replacing his whole development team with a team that cost 1/10 the price in Noida.
Coding agents can't replace developers _right now_, and it's unclear whether scaling the current approach will allow them to at any point. But at some point (and maybe that's not until we get true AGI) they will be able to replace a substantial chunk of the developer workforce, and a significant chunk of developers will be highly resistant to it. The people you're complaining about are simply too early.
It tracks with the trend of computing being something you passively consume rather than something you do. Don't learn how anything works! Deskill yourself! Not that LLMs aren't a force multiplier.
> said usage is actually getting monitored and performance appraisals have now started depending on the AI usage instead of (or at least in addition to) traditional metrics like number of priority bugs raised, code reviews, Function Points Analysis, etc.
Really? This sounds absurd. "Instead of" means it doesn't matter how shit your work is as long as you're burning tokens? Or it doesn't matter how good your work is if you're not burning tokens? Name and shame
If you admin a google-workspace domain, you get metrics out of the box on agent usage.
I guess it's great for AI companies that they've managed to bait and switch "this will improve your productivity" to "this is how much time you're sinking into this, let's not care about if that was useful"
There are a bunch of companies out there that are tracking what percentage of their developers are using LLMs now.
I heard a rumor recently that AWS are doing this, and managers are evaluated based on what percentage of their direct reports used an LLM (an Amazon-approved model) at least once over a given time period.
What I don't like about this take is that it implies that it could be that way. It implies the LLM could do the job of writing, leaving the programmer to just approve it.
It sounds anti-LLM, but it actually helps support the illusion that LLMs can do more than they actually can.
I don't think an LLM can write serious software on its own. If it could, there would be some extraordinary evidence, but all there is are some people spreading rumours. If you ask them for simple evidence of comparable performance (like a video), they shy away or answer vaguely.
The thing is not there yet, and I understand the optimism of some, but I also must emphasize that it's not looking great for LLM coding enthusiasts right now. There's no amount of proselytism that can make up for the lack of substance in their claims. Maybe they can trick investors and some kids, but that's not going to cut it in the long run.
Therefore, this is not a problem. I don't need to worry about it. If (or when) some evidence appears, I can then worry about it. This hasn't happened yet.
>why not just keep it voluntary, why force it on people?
People hate learning new tools, even if they are more efficient. People would rather avoid doing things than learn a tool to do them efficiently.
Even in this thread you can see someone who is / was a Vim holdout. But the improvement from Vim to an IDE will be a fraction of the difference compared to AI-integrated IDEs.
Most companies I've worked with don't care if you use vim or an IDE.
I've worked with people using vim who wildly outproduce full teams using IDEs, and I have a strong suspicion that forcing the vim person to use an IDE would lower their productivity, and vice versa
>I've worked with people using vim who wildly outproduce full teams using IDEs
This is not due to the editor. Vim is not a 20x productivity enhancer.
>forcing the vim person to use an IDE would lower their productivity
Temporarily, sure. But their productivity should actually go up after they are used to it. This idea of wanting to avoid such a setback and avoiding change is what keeps people on such an outdated workflow.
Did people force React? Cloud infrastructure? Microservices? You get it.
I know there are people still using PHP 5 and deploying via FTP, but most people moved on to be better professionals and use better tools. Many people are doing this to AI, too, me included.
The problem is that some big companies and influential people treat AI as a silver bullet and convince investors and customers to think the same way. These people aren't thinking about how much AI can help people be productive. They are just thinking about how much revenue it can give until the bubble pops.
> Did people force React? Cloud infrastructure? Microservices? You get it.
Actually, yes; people forced React (instead of homegrown or different options) because it's easier to hire for than finding js/typescript gurus to build your own stuff.
People forced cloud infrastructure; even today, if your 10-customer startup isn't using cloud in some capacity and/or kubernetes, investors will frown on you; devops will look at you weird (what? Needing to understand the inner workings of software products to properly configure them?)
Microservices? Check. 5 years ago, you wouldn't even be hired if you skipped microservices; everyone thinks they're Google, and many startups need to burn those aws credits; that's how you get a dozen-machine cluster to run a solution a proper dev could code in a week and run on a laptop.
Forcing react, cloud infra and microservices makes a lot more sense than forcing certain development tools. One is the common system you work, the other is what you use to essentially edit text.
It's basically the same. It abstracts away a layer of complexity, so you focus on different stuff. The inherent disadvantage of using these shortcuts/abstractions is only obvious if you actually understand their inner workings and their shortcomings - be it cloud services or llm-generated code.
Today you have "frontend programmers" that couldn't implement a simple algorithm even if their life depended on it; that's not necessarily bad - it democratizes access to tech and lowers the entry bar. These devs up in arms against ai tools are just gatekeepers - they see how easy it is to produce slop and feel threatened by it. AI is a tool; in most cases it will improve the speed and quality of your work; in some cases, it won't. Just like everything else.
I see where you're coming from, but there's a small difference. Coding itself is mostly routine tasks, turning ideas into working code. Humans really stand out in the important parts: creative thinking, planning and architecting the system, deciding what it should do and how it should do it, finding problems, checking code quality, and making smart decisions that a tool can't. AI can help with the routine work, but the creative and thinking parts are still human. And this is exactly where developers should focus and evolve themselves.
> creative thinking, planning and architecting the system, deciding what it should do, how it should do, finding problems, checking code quality, and making smart decisions that a tool can’t.
Are you aware that there are people that think that even now AI can do everything you describe?
The reason crappy software has existed since...ever is because people are notoriously bad at thinking, planning and architecting systems.
When someone makes a "smart decision", it often translates to a nightmare for someone else 5 or 10 years down the line. Most people shouldn't be making "smart decisions"; they should be making boring decisions, as most software is actually a glorified crud. There are exceptions, obviously, but don't think you're special - your code also sucks and your design is crap :) the goal is often to be less sucky and less crappy than one would expect; in the end, it's all ones and zeros, and the fancy abstractions exist to dumb down the ones and zeros to concepts humans can grasp.
A machine can and will, obviously, produce better results and better reasoning than an average solution designer; it can consider a multitude of options a single person seldom can; it can point out from the get-go shortcomings and domain-specific pitfalls a human wouldn't even think of in most cases.
So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.
People often blame LLMs for bad code, but the real issue is usually poor input or unclear context. An LLM can produce weak code if you give weak instructions, but it can also write production-ready code if you guide it well, explain the approach clearly, and mention what security measures are needed. The same rule applies to developers too. I'm really surprised to see so much resistance from the developer community; instead, they should use AI to boost their productivity and efficiency. Personally I am dead against using CLI tools; instead, IDE-based tools will give you better visibility on the code produced and better control over the changes.
They said the same thing about the loom. "I'm an artist, no machine can replace me!" Now it's all done by machine, and none of us worry about it. We're in the early stages of the same process with AI; history rhymes.
That may be the case some day, but I don't think it's going to happen with LLMs. They get too many things wrong via hallucinations (likely unfixable) and often they can go deep into an (incorrect) rabbit hole burning a ton of tokens at the same time.
Useful tools, but I think the idea that they'll replace programmers is (wishful? eek) thinking.
Yup.. took the loom 200 years, and it won't be overnight for AI either. But it will eat away at the edges, and do the simple things first. It already is, for those who embrace it.
High quality hand made clothes still exist and people do want to pay for them. Mass produced clothing made in sweatshops is what the majority of people buy, because that is where the capitalist companies drove the production.
The loom dropped production costs immensely - even hand-made clothes are made with premade fabrics; they don't do it from scratch.
Mass produced clothing exists in many industrialized countries - typically the premium stuff; the sweatshop stuff is quite a bit cheaper, and customers are happy paying less; it's not capitalism, it's consumer greed. But nice story.
I find LLM generated code ends up pushing review/maintenance burden onto others. It "looks" right at first glance, and passes superficial tests, so it's easy to get merged. But then as you build on top of it, you realize the foundations are hastily put together, so a lot of it needs to be rewritten. Fine for throwaway or exploratory work, but heaven help you if you're working in a project where people use LLMs to "fix" bugs generated by previous LLM generated code.
So yes it does increase "velocity" for the person A who can get away with using it. But then the decrease in velocity for person B trying to build on top of that code is never properly tracked. It's like a game of hot potato, if you want to game the metrics you better be the one working on greenfield code (although I suppose maintenance work has never been looked at favorably in performance review; but now the cycle of code rot is accelerated)
Im working on some website and created some custom menu. Nothing fancy. AI got it done after some tries and I was happy as web development is not my area of expertise. After some time I realized the menu results to scrolling when it shouldn’t and wanted to make the parent container expand. This was impossible as the AI did a rather unusual implementation even for such a limited use case. Best part: my task now is impossible to solve with AI as it doesn’t really get its own code. I resulted to actually just looking into CSS and the docs and realized there is a MUCH simpler way to solve all of my issues.
Turns out sometimes the next guy who has to do maintenance is oneself.
> Turns out sometimes the next guy who has to do maintenance is oneself.
Over the years I've been well-served by putting lots of comments into tickets like "here's the SQL query I used to check for X" or "an easy local repro of this bug is to disable Y", etc.
It may not always be useful to others... but Future Me tends to be glad of it when a similar issue pops up months later.
On the same boat, I've learnt to leave breadcrumbs for the future quite a long time ago, and it's paid off many, many times.
After it becomes second-nature is really relaxing to know I have left all the context I could muster around, comments in tickets, comments in the code referencing a decision, well-written commit messages for anything a little non-trivial. I learnt that peppering all the "whys" around is just being a good citizen in the codebase, even if only for Future Me.
Agree completely, while "what" is completely redundant most of the time, a couple of "why"s can be of immense help later, to oneself and others.
> it doesn’t really get its own code
It doesn’t really get its own anything, as it is unable to "get". It's just a probabilistic machine spitting out the next token
"Getting things" is a matter of performance, not about the underlying hardware. If I'm an idiot who knows nothing about programming, but every time I slam the keyboard we get good programs, then how useful is it to discuss whether I am in fact empty-headed?
I guess in your arbitrary hypothetical it wouldn't be useful
But the discussion here is that it does not output good programs at all.
So we might discuss their performance along a gradient and think on their year over year improvement. Current industry performance is of such magnitude that it has persuaded the world to adopt ChatGPT workflows as much as they have. Adjacent to code, one might look to Terry Tao and how he relates to ML workflows in math.
Hey, I think everyone understands how they work by now and the pedantry isn't helpful.
actually I'm not sure everyone does
Its a tale worth repeating because a minuscule percentage of people know or pretend to know how it works. Our view might be a bit skewed here on hackernews but normal people believe llms are thinking machines.
Then if it can't really reason about its own creation, how do you expect it to be correct in what it does, if it's simply regurgitating code parsed online?
This is pretty much how permanent staff often have to work with consultants/contractors or job-hoppers in some sectors.
Shiny new stuff quickly produced, manager smiles and pays, contractor disappears, heaven help the poor staffers who have to maintain it.
It's not new, just in a new form.
I love this analogy of consultants parachuting into a project, doing the bare minimum, and passing the baton to the next person to maintain the mess.
Leadership is buying into the hype and busy turning everyone into overzealous slopmongers. Companies are pushing an “AI Fluency” framework where people are encouraged (read: forced) to use LLMs and agentic coding in every aspect of development.
Don't ignore the difference in scale though. Something happening some of the time isn't the same as happening most of the time.
Yeah, LLMs are easier to keep available ;)
This misalignment of incentives is why we have shitty software in everyday life
What's new though is that now you can do it to your future self!
In my experience, AI generated code is much higher quality than code written by external service companies. For example it will look at your code base and follow the style and conventions.
Style and conventions are very superficial properties of code. The more relevant property is how many bugs are lurking below the surface.
Style conventions have a real impact on how effectively bugs are found.
The actual design of the solution has a way bigger impact on the amount of bugs to be found in the first place
this just means the bugs it creates are better camouflaged
A while back someone made a post or comment about how they managed to vibe code a huge PR (1000+ of lines) to an open source project. They said they didn’t have time to read through the code but instead used tests to ensure the code was doing the right thing. Then it came out that there was a very lengthy review period where the maintainers had gone through the PR and helped fix the (rather significant) issues with it. So while the authors “didn’t have time” to review their own work the burden was shifted onto the maintainers.
This has been described a lot as “workslop”, work that superficially looks great but pushes the real burden on the receiver of the work rather than the producer.
That sounds more like an organizational problem. If you are an employee that doesn't care about maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible. Previously that took the form of copying cheap templates, copying and pasting code from StackOverflow as-is without adjustments, not caring about style, using tools to autogenerate bindings, and so on. I remember a long time ago I took over a web project that a freelancer had worked on, and when I opened it I saw one large file of mixed python and HTML. He literally just copied and pasted whole html pages into the render statements in the server code.
The same is true for many people submitting PRs to OSS. They don't care about making real contributions, they just want to put something on their resume.
AI is probably making it more common, but it really isn't a new issue, and is not directly related to LLMs.
>If you are an employee that doesn't care about maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible.
I never did this when I was a freelancer.
Yes, this is it. The idea that LLMs somehow write this deceptive code that magically looks right but isn't is just silly. Why would that be the case? If someone finds they are good at writing code (hard to define of course but take a "measure" like long term maintainability for example) but they fail to catch bad code in review it is just an issue with their skill. Reviewing code can be trained just as writing code can be. A good first step might be to ask oneself: "how would I have approached this".
> So yes it does increase "velocity" for the person A who can get away with using it. But then the decrease in velocity for person B trying to build on top of that code is never properly tracked.
Offhand anecdote, 1990s
That reminds me of when the German corporation my mother worked for moved more and more production to China at the end of last century. All the defective units that the still-existing German factory had to handle and repair ended up in its accounts. From the top bosses' point of view, just looking at the accounting data, the China production looked clean.
Of course, unsurprisingly (with enough effort), they made it work over the years, fulfilling the prophecy. Good for China.
How you account for things shifts the narrative and then reality follows the investments made based on that.
One of the things about AI generally is it doesn't "save" work - it pushes work from the one who generates the work to the person who has to evaluate it.
> I find LLM generated code ends up pushing review/maintenance burden onto others. It "looks" right at first glance, and passes superficial tests, so it's easy to get merged. But then as you build on top of it, you realize the foundations are hastily put together, so a lot of it needs to be rewritten.
This describes most projects I've been on where there wasn't a thorough RFC process?
Where I'm seeing the sweet spot right now:
1. Have a detailed RFC
2. Ticket out the work
3. Feed the RFC and ticket to the LLM via MCP
4. Have your refactoring and design patterns textbooks close at hand so you can steer the LLM properly when things start to get messy. "DRY this out" or "implement this using X pattern" tend to be highly effective prompts
5. Use agents or other LLMs to review code for RFC compliance, test coverage, etc. (this isn't as effective as I'd like it to be right now, skill issue probably)
6. When there are bugs, force the LLM to do TDD - say "We're observing a bug in production, here are the reproduction steps, write a failing test that covers this code path." Obviously check that the test is a real test and not slop. Then, prompt the LLM to fix the issue.
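A minimal sketch of what that failing-test-first step can look like, assuming a hypothetical parse_price helper and a made-up bug report; none of these names come from the thread above:

    # Illustrative regression test, written (or at least carefully reviewed) by a
    # human before the LLM is asked to produce a fix.
    import pytest

    from shop.pricing import parse_price  # hypothetical module under test


    def test_parse_price_handles_thousands_separator():
        # Reproduction steps from the (made-up) bug report: "1,299.50" is parsed
        # as 1.2995 in production. This test should fail until the fix lands,
        # then pass forever after.
        assert parse_price("1,299.50") == pytest.approx(1299.50)

Only once a test like this fails for the right reason does it make sense to let the LLM loose on the fix.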
I've decided to fight it the same way I fight tactical tornadoes - by leaving those people negative reviews at mid-year review.
(I also find the people who simply paste LLM output to you in chat are the much bigger evil)
have you tried... talking to them, instead of permanently hurting their chances of staying employed in a shit economy?
it's great for your principles - perfect job security, sitting up on your thrones casting judgement on entry level staffers that are forced to use LLM code to make a fast impact. maybe try teaching your juniors how to do it the right way, rather than passive aggressively impacting someone's physical safety net. shame on all of you assholes.
My guy, obviously I tell the person they're pulling a dick move first.
We're not talking some entry level staffers here, it's senior engineer FTEs who are employed at a tech company who are doing this.
I'm sort of reminded of the south park movie.
They kept repeatedly getting an NC-17 from the MPAA and kept on resubmitting it (6 times) until just before release when they just relented, gave it an R and released it as-is.
https://en.wikipedia.org/wiki/South_Park:_Bigger,_Longer_%26...
They didn’t just keep resubmitting it. The first four times, changes were made (mostly around language and Saddam Hussein/the Devil). The final time, they felt the changes were arbitrary (it was just more language censoring) and so an exec told the board to hurry up as they had a release around the corner and it was just silently rubber-stamped.
Aren’t junior engineers the same way? Give them an assignment and see what they turn in looks good because they made sure it worked. But then it has to be frequently rewritten?
We can yell at junior engineers and make them fix the problems they've created. You can't yell at an AI, and it can't understand that it's created any problems.
I'd say it's a change of paradigm, and it might be even faster if you have test-driven development... Imagine writing your tests manually, getting LLM code, trying to pass the tests, done.
Of course, the golden rules are 1. write the tests yourself, don't let the LLM write them for you, and 2. don't paste this code directly into the LLM prompt and just let it generate code for you.
In the end it boils down to specification: the prompt captures the loosely-defined specification of what you want, LLM spouts something already very similar to what you want, tweak it, test it, off you go.
With test driven development this process can be made simpler, and other changes in other parts of the code are also checked.
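As a rough illustration of golden rule 1, the hand-written tests become the specification and whatever the LLM produces simply has to satisfy them; slugify is an invented function used only for this example:

    # Hand-written spec-as-tests (illustrative only): the human decides the
    # behaviour up front, the LLM-generated implementation has to make these pass.
    from mylib.text import slugify  # hypothetical LLM-generated implementation


    def test_lowercases_and_replaces_spaces():
        assert slugify("Hello World") == "hello-world"


    def test_strips_punctuation_and_collapses_dashes():
        assert slugify("Rock & Roll!!") == "rock-roll"

The tests stay under human control, so the review question shrinks to "does the generated code pass them, and is it readable enough to keep".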
I don't care and I'm sick of these arguments.
Yes, you're 100% right, but A is always responsible for his output, and if the output's crap then he should either step up or refresh his CV.
If any of my colleagues (or I) tried to use such a card, the road to unemployment would be a quick one.
It's yet another example of "don't be the last one holding the bag."
The worst part of AI is the way it's aggressively pushed. Sometimes I have to turn off AI completions in the IDE just because it becomes extremely aggressive in showing me very wrong snippets of code in an incredibly distracting way. I hope when the hype dies down the way these tools are pushed on us in a UX sense is also dialed down a bit.
I feel you. I totally disabled AI completions, as they often derailed my reasoning.
It's like having an obnoxious co-worker who shoves me aside every time I type a new line, completes a whole block of code, and asks me if it's good, with no regard for how many times I've rejected those changes.
I still use AI, but favor a copy-paste flow where I at least need to look at what I am copying and locate the code I am pasting it into. At least I am aware of the method and function names and the general code organization.
I also ask for small copy-paste changes so that I keep it digestible. A bonus is that when the context gets too big, ChatGPT in Firefox basically slows down and locks up, which works as an extra sense that the context window is too big and the LLM is about to start saying nonsense.
That said, AI is an amazing tool for prototyping and for help when I'm out of my domain of expertise.
So one really big thing that can make the AI autocomplete super useful is to follow the old method from "Code Complete", Pseudocode Programming Process (PPP).
Write a comment first on what you intend to do, then the AI generally does a good job auto-completing below it. You don't have to sketch everything out; the AI is already using the page as context, and the comment just helps disambiguate what you want to do, so it can autocomplete significant portions when you give it a nudge.
I've almost fully converted to agentic coding, but when I was using earlier tools, this was an extremely simple method to get completions to speed you up instead of slow you down.
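A tiny sketch of the idea, with invented names; the intent comment goes in first, and the lines below it are the kind of completion the tool tends to offer:

    # Hypothetical PPP-style nudge for an autocomplete tool: the comment states
    # the intent, the body is what the completion typically fills in.

    # Group orders by customer id and return the total spent per customer,
    # skipping orders that were refunded.
    def total_spent_per_customer(orders):
        totals = {}
        for order in orders:
            if order.get("refunded"):
                continue
            customer = order["customer_id"]
            totals[customer] = totals.get(customer, 0) + order["amount"]
        return totals

A nice side effect is that the intent comment survives in the code afterwards, written down before the "how".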
Indeed. That’s my only interaction with AI coding.
Every time Visual Studio updates, it’ll turn back on the thing that shoves a ludicrously wrong, won’t even compile, not what I was in the middle of doing line of garbage code in front of my cursor, ready to autocomplete in and waste my time deleting if I touch the wrong key.
This is the thing that Microsoft thinks is important enough to be worth burning goodwill by re-enabling every few weeks, so I’m left to conclude that this is the state of the art.
Thus far I haven’t been impressed enough to make it five lines of typing before having to stop what I’m doing and google how to turn it off again.
Have you considered using another IDE?
The worst is when writing comments. I'm writing a comment such as "Doing X because..." and it never gets it right.
I'm writing the comment precisely because the reason isn't obvious from the code, and the AI will make up some generic and completely wrong reason.
That's how you know your comments are useful. If they could've been easily guessed, they'd be redundant
I really get irritated when AI is opt out. Opt out is not consent.
Does big tech understand consent?
[ ] Yes
[ ] Maybe later
[ ] Yes
[ ] ~~No~~ (requires Premium subscription)
[ ] Use recommended settings
I disabled the inline auto suggestions. It’s like the tech version of that annoying person who interrupts every sentence with the wrong ending.
Agents are great (in so far the models are able to complete the task). Autocomplete copilot just feels like bad UX. It's both, not super effective and also disruptive to my thinking.
I think it depends on the context. If I've been writing the same language and frameworks and code solidly for a few months, then autocomplete gets in the way. But that rarely happens, I like to keep trying and learning new things.
If I'm familiar with something (or have been) but haven't done it in a while, 1 - 2 line autocomplete saves so much time on little syntax and reference lookups. Same if I'm at that stage of learning a language or framework where I get the high level concepts, principles, use cases and such, but I just haven't learned all the keywords and syntax structures fluently yet. In those situations, speedy 1 - 2 line AI autocomplete probably doubles the amount of code I output.
Agents are how you get the problems discussed in this thread: code that looks okay on the surface, but falls apart on deeper review, whereas 1 - 2 line autocomplete forces every other line or two to be intentional.
On VS you can change that so it only comes up when you press a key shortcut.
For those on VS, this is how to hide it, if using 17.14 or later,
https://learn.microsoft.com/en-us/visualstudio/ide/copilot-n...
The most annoying is when I'm trying to think through a data structure. While I'm trying to deeply think through every member of a class, its type, relationships, etc., this zealous fellow acts like a toddler that knows no way to stay shut unless snoozed off.
"AI" autocomplete has become rather like mosquitos buzzing around my head that I have to constantly swat away. I'm likely to shut it all off soon, it's just become irritating.
That's why I don't use it in my editor and only through CLI coding agents.
JetBrains IDEs have an option to enable AI inline suggestions on demand via a keypress. I really like it. It saves some "boring" typing, while not being annoying.
I'm pretty sure Cursor also has something similar?
Yeah, it’s just horribly wrong in my experience and a complete distraction. Code completion for functions in the project is another story and that has been around for ages.
With emacs I love the github copilot auto suggestions. Light gray. Either accept it with Ctrl-Tab or ignore it.
My little experience with AI coding, using Copilot in Eclipse, was mixed... Context: I work with an old Java source code base that uses Servlets and implements its own web framework. There is a lot of code without tests or comments.
The autocomplete I find useful, especially for menial, very automatic stuff like moving things around when I refactor long methods. Even the suggested comments look useful. However, the frequency with which it jumps in is annoying. It needs to be dialed down somehow (I can only disable it). Plus, it eats the allowed autocomplete quota very quickly.
The "agent" chat is like tossing a coin. I find it very useful when I need to write tests for a class that doesn't have any; at least it lets me avoid writing the boilerplate, though I usually need to fix the mocking setup. Another case where it worked fine was when it helped me fix a warning I had on a few Vue 2 components. However, in other instances, I saw it fail miserably to write useful code or mess up the code very badly. Our source code is in ISO8859-1 (I've asked many times to migrate it to UTF-8), and for some reason Copilot agent sometimes messes up the encoding and I need to manually fix the whole mess.
So... The agent/chat mode, I think, could be useful if you know in which cases it will do OK. The autocomplete is very useful, but needs to be dialed down.
"If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people?" To answer this question: To justify the investment.
No, for real, LLM solutions costs a shitload of money, and every investment needs to be justified on a management level. That's the reason they are enforcing it.
My bigger problem is that there are a whole lot of "developers" who do not read the generated code properly, so you end up in review sessions where the developer does not know what is happening or why the code acts in a particular way. And we have not yet discussed clean code principles throughout the whole solution...
I feel bad for my friends that are married with kids working at places like microsoft, telling me how their copilot usage is tracked and they fear that if they don't hit some arbitrary weekly metric they will fall victim to the next wave of layoffs.
And that's why performance tracking is prohibited in countries where unions still have a bit of power.
And why those countries tend to have barely any growth in their economies (i.e. europe).
The economy is supposed to serve us, not the other way, there is no pride in being a slave, it's not the flex you think it is lol.
Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy
> Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy
that is all you heard in the 80s-90s, people over the pond showing off how many hours per week they worked. like... how is that something to be proud of? So wow, you spent 12hrs+ per day working, had no free evenings, zero paid holidays. And that is supposed to impress whom?
please.
What also happened in the 80s is that politicians told us automation would bring a 3-day work week, which never materialized. But now we have to trust the same people, moved by the same greed, that this time it'll be different
40 years on, how many of them truly are the same people?
They're cloning them in the same schools; that's why we have 30-year-old full blown boomers
It's the same breed of people, bred from the same system, not literally the same people but the people following on the footsteps. Greed is the constant.
Hell, it has been going on much longer than 40 years: in "In Praise of Idleness", written 90 years ago, Bertrand Russell talked about how industrialisation and its automation should be helping workers to work less.
It is ok, I earn enough to pay my bills, the ones from my family, a bit of travelling around and healthcare.
Usually over here we don't dream of making it big with big villas and a Ferrari in the garage; we work to live, not live to work.
France and UK are in giant fiscal crises. The German economy is in the toilet with no hope in sight. All of them have seen large deterioration of the quality of health care in the last decade. The EU leaders care more about Ukraine and destroying all privacy than any economic reform.
The irony of promoting performance tracking at employee level and criticizing the EU for destroying privacy
How are those tarifs working out?
I am european lol.
US GDP growth in the first half of 2025 was 0.1% when you take out NVIDIA and OpenAI sending the same $100 billion back and forth to each other
and as we all know, economic growth is the only good thing in life
Korea has strong worker protections and unions. Not on tracking, but in general.
Low union membership though.
No? Unless you don't count intra-company worker committees. Those fulfill much of the same role and are mandatory above a (relatively low) company size.
Which may be related to unions having been actively persecuted (to the extent of actual state-sanctioned torture, disappearances, and bona-fide massacres against people involved with unions - and people living next door. With active support of US civilian and military leadership, that is).
Google Gwangju.
Which seems to be a great thing for liveability and happiness metrics across the board.
Not when your economy is reduced to mass importing of third world labor to keep salaries down and your economy going (i.e. Italy, France, UK and Germany).
As if it is not happening in usa.
You do realize that a lot of people in EU are working 4 days a week? Are you aware that McDonalds employee can live by having only one job?
EU decided to distribute the productivity benefits instead of hoarding it into stock market gains like US does.
Btw, you do realize that US commodified investing in Us stocks? Whole world can easily invest in US stock market. Basically, instead of taking care of their own citizens, US economy is paying out gains to foreigners.
Yeesh. Prohibited? Then how do you decide who gets a promotion? At random?
There are no real promotions. It's about employment duration. In Bavaria you have something like 12 salary groups. For white collar workers, 9 is entry level, 10 is for some experience, 11 for experienced, and 12 is the carrot to work harder for. Some companies downgrade roles to pay less; job ads at group 8 for experienced folks started appearing recently. The bonus is up to 28% depending on performance. So basically you can slack all day and get +5% bonus on the base salary, while someone doing overnighters will get +15%. The higher bonuses are reserved for old-timers. This system is absolutely cringe. Btw, most of these unionized companies offer 35-hour contracts; 40 hours must be negotiated as a bonus... Anyway, the union will take care of regular base salary increases, and that's really nice. +6% for doing nothing is amazing!
This describes payment and promotion functions in one unionized job sector in Bavaria.
Many German companies are not, in fact, unionized, and tend to pay 'übertariflich / außertariflich' - instead of union protection, they just pay you significantly more than you'd get with a union job. Which is a good thing 9 out of 10 times.
In two decades I heard only a single time about a small company paying significantly more. The guy is a specialist; there are only a handful of them worldwide, he was an absolutely perfect fit and a super desired candidate. I never heard again about small companies with serious salaries. It was always -20% or -30% from the union tables, be it in Bavaria or Baden-Württemberg. A special case might be small establishments of American companies with great salaries, but that is a different topic. And some statistics at the end: https://karriere.de/mein-geld/gehaltsstudie-diese-20-unterne... So no, non-unionized smaller companies pay less, with possible exemptions for key employees.
Truth probably is somewhere between my extreme and your extreme ... but in 20 years of working in the country, I have not once encountered a situation in which an AT contract I saw was not significantly better than a union contract when it came to compensation - both at some of the largest - and 100% pure German - conglomerates, and in the KMU sector.
Which - incidentally - is why such companies advertise paying 'außertariflich' in their job adverts.
Situation might be different for low- and non-skilled workers.
Why is it only falling apart now? Why was a system like that able to dominate the global manufacturing economy for half a century?
The answer probably isn't American work ethics inspired by American compensation schemes, but rather Chinese ingenuity and grit. But seriously, why can you build so much on 35 hours per week and a mid-five-figure salary?
Not that hard, but also why would you want to promote based on metrics? That will get you people gaming the system, and I can't imagine a single software dev metric that actually captures the full gamut of value a dev can provide. You will surely miss very valuable devs in your metrics.
I didn't realize "performance tracking" actually just meant "blind metric tracking". Are there lots of rules to differentiate the good kinds of metrics (like actual outcomes for customers and co-workers) from the bad kinds of metrics (like time spent using an LLM)? Sounds like this is all about treating a symptom (poor business leadership) rather than a root cause (noncompetitive markets).
You have human managers discussing with their team (instead of human-decided metrics that cannot see the full picture)
As companies grow, they tend to move away from subjective performance reviews like that and toward more objective metrics. Otherwise, it's too easy for personal politics to contaminate the promotion process. Employees are incentivized to find whichever manager will give them 5 star reviews no matter what, and managers are incentivized to be that guy, because then they have access to the best employees. When a company is small, and everyone knows everyone, this is not an issue. But when 90% of the company is a stranger to you, you need more objective metrics to rely on.
Even married people with kids can switch companies. Sometimes that implies a pay cut, but not always.
And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.
>And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.
I'd say that there's some room for nuance there. Tech hiring has slowed significantly, such that even people in senior roles who get laid off may be looking for a long time.
If you work for Microsoft you're not getting top tier comp already (at least as compared with many other tech companies), and then on top of that you're required to work out of a V/HCOL city. Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck who weren't having that issue a couple of years ago.
Check the prices in Seattle, SF, LA, DC, and NYC metro areas for 2-4 bedroom rentals and how they've jumped the last few years. You're looking at 35%-45% of their take home pay just on rent even before utilities. I'm not sure the math works out all that well for people trying to support a family, even with both parents working.
> Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck
If you maxed out your lifestyle relative to your income then yes, that is the case. It will always be, no matter how much you make.
It's also the case for the guy stocking the shelves at your local Walmart if he maxes out his lifestyle. But if you compare both in absolute terms, there are huge differences.
Which lifestyle you have is your choice. How big of a house, what car, where to eat, hobbies, clothes, how many kids, etc. If you max that out, fine, enjoy it. But own that it was your choice and comes with consequences, i.e., if expenses rise more than income, then suddenly your personal economy is stretched. And that's on you.
it's interesting watching people work so hard to not feel empathy
For the H1Bs, I've heard that it's a nightmare.
Depends on the job market on their area.
Absolutely, programmers are paid exceptionally well compared to a lot of other jobs. If they live paycheck to paycheck they are doing things wrong, especially when having family.
The hedonic treadmill really gets away from some people. I've had coworkers on 7 figures talk about how they couldn't possibly retire because the costs of living in (HCOL city) are far too high for that.
When you dig down into it, there's usually some insane luxury that they're completely unwilling to give up on.
If you're a software engineer in the United States, or in London, you can almost certainly FIRE.
Yup it's insane to me. I am a software developer in Germany making 30k (after taxes) and I manage to save 600-700€ a month while still living really well (rural area, no car).
Absolutely not enough to retire early but easily enough to not live paycheck to paycheck. Making 6 figures in the USA and not being able to afford life is so cryptic to me.
Add family and 100k after taxes in Munich will be no big deal. I could live alone in the car, but the kids might want their own rooms and their own beds.
100k is an unreachable dream for me unless I start a business myself and actually succeed. Munich is expensive; I've seen some prices there. I live near Denmark though, so Munich would not be an option in the first place. I could afford a house considering my savings rate and current apartment rent. Not a big one, but it would be enough. I have no reason to buy one for myself though.
You can do over 100k if you freelance, but it's risky to be a freelancer in Germany in a lot of ways. Salaries in Berlin and Munich are approaching or exceeding 100k for leadership roles. The problem is that COL in both cities is high, and in Berlin you basically can't get a flat anymore even if you can pay the rent on it.
In my native language we have the saying, “Do good to the devil, and he’ll repay you with hell.” I don’t know if there is an English equivalent.
> I feel bad for my friends that are married with kids working at places like microsoft, telling me how their copilot usage is tracked and they fear that if they don't hit some arbitrary weekly metric they will fall victim to the next wave of layoffs.
It's not just Microsoft. Other smaller employers are aping those guys.
My employer has an utterly ridiculous PowerBI dashboard tracking how much every employee uses LLM-based tools. Make sure to enable the Premium models, because otherwise you won't get credit! There are naughty lists for people whose usage is too low. Luckily the usage goals (for now) aren't very high.
However, they're also getting anal about tracking tasks, and the powers that be have asserted control over all aspects of story creation and work process. There's speculation they're going to start tracking story completion rates and demanding measured productivity increases.
Easy way to game that would be to spam a couple of pages of unread documentation for every page of code you write. 2/3rds copilot usage, it's not critical, and documenting existing code is a much more likely to work use case for an LLM.
I mean, nobody reads documentation anyway
Why feel bad? They signed up for that. There is no reason to feel bad for people who enter into voluntary contracts willingly.
Personally I want my MSFT position to increase, so I’m cool with whatever the company does to increase the share price.
Feel bad because you have empathy?
Or perhaps that's the problem, lacking it.
> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people? The results will be there in the outcome of the shipped product for all to see.
It’s a bit like returning to the office. If it’s such an obvious no-brainer performance booster with improved communication and collaboration, they wouldn’t have to force people to do it. Teams would chomp at the bit to do it to boost their own performance.
I don't want to wade into the actual effectiveness of RTO nor LLMs at boosting productivity, but if you buy into the claims made by advocates, it seems pretty obvious that the "in office boosts communication" claim is only true if your coworker (the other side of the conversation) is in office. Not everyone has the same priorities, so you'd have to mandate compliance to see the benefits.
Similarly, many people don't like learning new tools, and don't like changing their behavior. Especially if it's something they enjoy vs something good for the business. It's 2025 and people will have adamantly used vim for 25 years; some people aren't likely to change what they're comfortable with. Regardless of what is good for productivity (which vim may or may not be), developers are picky about their tools, and its hard to convince people to try new things.
I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment, and if "the business" must wait for them to explore and discover it on their own time, they risk forgoing profits associated with that employee's work.
> Not everyone has the same priorities, so you'd have to mandate compliance to see the benefits.
I think it could have been left to individual small teams or smaller units. After some time it would have been obvious that teams who went to the office delivered better results. If the benefits are really as obvious as they are usually touted to be, it shouldn't take long (a year or two).
> It's 2025 and people will have adamantly used vim for 25 years
But if they are productive and deliver results they can still use vim. I can see controlling the APIs used, the programming languages allowed, etc. But if they are productive with vim, let them use vim.
> I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment,
That's fair. I guess it depends on the types of people. I had in mind motivated people who would be glad to be more productive and deliver results quicker. Yeah, if people are not like that and are trying to do the least amount of work and just coast then it's a bigger issue. Office or not office, AI or no AI probably won't shift the needle by much.
I don't see how using vim is in any way bad for business, what a terrible example. And I don't even use it myself.
Your argument also hinges on "business" knowing what is good for productivity, which they generally don't. Admittedly, neither do many programmers, else we'd have a lot less k8s.
Indeed, I detest vim but I think mentioning it detracted from the argument by showing why developers tend to not trust it when others try to dictate what is "good for the business" based on their own views rather than objective metrics.
You've got a point on RTO. Because it's a group behaviour, if you believe it will have positive effects, mandating it could be a way of jumpstarting the group dynamic.
With LLMs, I'm not so sure. Seems more like an individual activity to me. Are some people resistant to new tools, sure. But a good tool does tend to diffuse naturally. I think LLMs are diffusing naturally too, but maybe not as fast as the AI-boosters would like.
The mistake these managers are making is assuming it's a good tool for work that they're not qualified to assess.
> You've got a point on RTO. Because it's a group behaviour, if you believe it will have positive effects, mandating it could be a way of jumpstarting the group dynamic.
Fair point! It was just the first recent example of "it's obviously better but we'll force you to do it" I could think of.
In the case of RTO I think it should have been left to individual small teams. If one option is so clearly better, it would have become clear within a few years which teams worked better and which didn't, and how that depended on them being in the same office.
I’m lazy. I’d rather work from home even if the office is more productive because it’s easier for me to not have to go to the office.
If the AI tools actually worked how they are marketed I’d use them because that’s less work for me to have to do. But they don’t.
> Teams would chomp at the bit to do it to boost their own performance.
This assumes teams care more about performance than comfort and convenience. Many teams care about both. And which one wins out can vary. It can even change over time.
The question is how to incentivize what, and what methods are the most effective at doing that for a particular team at a given point in time.
> And which one wins out can vary. It can even change over time.
Well, that's a good reason to let each team pick. Teams that care about performance will eventually do a lot better. The claim with these mandates is that the benefits are "obvious". So if they're really that obvious, they should be visible fairly quickly.
There are psychological barriers to using a tool that diminishes the work you previously thought was complex.
That's assuming this is most people's objective when they are at work.
And even if it was, that's also assuming this benefit would be superior to the benefit of remote work for the individual.
If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping competing code that works. In that sense, the author is painting an incredibly bright picture of the future of the software industry: one where founders don't have to be particularly talented to hit the jackpot.
Saving misguided AI codebases is going to be quite lucrative for contract work I suspect.
A lot of non-technical people are going to get surprisingly far into their product without realising they are on a bad path.
It already happens now when a non-technical founder doesn't get a good technical hire.
The surprising thing for developers though, is how often a shit codebase makes millions of dollars before becoming an issue. As much as I love producing rock solid software, I too would take millions of dollars and a shit codebase over a salary and good code.
"...one where founders don't have to be particularly talented to hit the jackpot."
That's where we're at right now anyways.
"If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping--"
And that's how we got here.
The code rot issue will blow up a lot more over the next few years; then we can finally complete the sentence and start "shipping competing code that works".
I worry that mopping up this catastrophe is going to be a task that people will again blindly set AI upon without the deep knowledge of what exactly to do, rather than "to do in general, over there, behind that hill".
Yes this is only bad news if you are working for morons.
Unfortunately a lot of people are in that situation. You can basically forget about disruption. Meritocracy is dead, long live the Peter principle.
Unfortunately, companies will force their employees to use shitty software anyway. See Microsoft Teams.
Steelmanning the "we must force tool usage" position: it's possible that a tool does increase productivity, but there's either a steep learning curve (productivity only improves after sustained usage) or network effects (most people must use it for anyone to benefit).
No opinion on whether or not this applies to the current moment. But maybe someone should try forcing Dvorak layout on everyone or something like that for a competitive edge!
I once had a boss who saw me use Vim and was really impressed with how quickly I could jump around files and make precision edits. He tried getting the other devs (not many, < 5) to use Vim too but it didn't quite pan out.
I would guess that interest, passion, and motivation all play a role here. It's kind of like programming itself. If you sit people down and make them program for awhile, some will get good at it and some won't.
> I would guess that interest, passion, and motivation all play a role here.
And, to use less pointed language, people’s brains are wired differently. What works for one doesn’t necessarily work for another, even with similar interest, passion, and motivation.
I worked with a developer that copied and pasted A LOT and would keep his fingers on the old copy and paste buttons (Ctrl-Ins, etc.). I've even seen him copy and paste single letters. He's one of the most productive developers I've ever worked with.
>I've even seen him copy and paste single letters.
Hopefully not C and V..
I agree with this.
I was using emacs for a while, but when I switched to vim, something about the different modes just really meshed with how I thought about what I was doing, and I enjoyed it way more and stuck to it for a couple of decades.
I see people that I'd say are more proficient with their emacs, VS Code, etc setups than I am with my vim setup, so I don't think there's anything special about vim other than "it works for me".
I've had plenty of interest, passion and motivation during my career. But never, ever, directed at learning something like vim, even if it's going to make me more productive.
I'd rather learn almost any of the myriad other topics related to software development than the quirks of an opinionated editor. I especially hate memorising shortcuts and commands.
Your old boss probably would have been a bit chastened if he knew said devs would then be spending their hours learning how to exit Vim instead of programming
There was a time when I'd switch to a different terminal and do sudo killall -9 vim to get out of vim.
And that time when I changed vim to a symlink to emacs on a shared login server and sat back and enjoyed the carnage. (I did change it back relatively quickly)
If learning how to exit Vim takes hours then they aren't worth keeping as employees anyway.
Yep, there is a how-to displayed when you enter vim without opening any files.
Vim's learning curve is much steeper to be honest
Coding agents seem to be in the fun paradox of "it's so easy to use, anyone can code!" and "using it productively is a deep skill, and we have to force people to use it so they learn"
Programming isn’t a government desk job. The interface between programmer and company should be the output only; they can’t force a programmer to use whatever bs they think is good at the time.
The forcing argument has merit: it should not be forced; in fact, they should say very little about how we do our work.
But the "rubber-stamp" framing is wrong, if it were true then you would not be needed at all. It's actually harder to use gen AI than to code manually. Gen AI has a rapid pace and overwhelming quantity of code you need to ensure is not broken in non-obvious ways. You need to layer constraints, tests, feedback systems for self repair and handle memories across contexts.
I recently vibe coded 100K LOC across dozens of apps. I feel the rush of power in coding agents but also the danger. At any moment they could hallucinate, misunderstand, or use a different premise than you did. Going past 1000 LOC requires sustained focus; it will quickly unravel into a mess otherwise.
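To make "layer constraints, tests, feedback systems for self repair" concrete, here is a minimal sketch of one way to gate agent output behind a test suite and feed failures back; `ask_agent` is a hypothetical placeholder for whatever coding agent you use, and pytest merely stands in for the project's test runner.

    import subprocess

    def ask_agent(prompt: str) -> None:
        """Placeholder: send the prompt to your coding agent and let it edit files."""
        raise NotImplementedError("wire this up to the agent of your choice")

    def run_tests() -> tuple[bool, str]:
        # Any test runner works; pytest is only an example.
        proc = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
        return proc.returncode == 0, proc.stdout + proc.stderr

    def guarded_change(task: str, max_rounds: int = 3) -> bool:
        ask_agent(task)
        for _ in range(max_rounds):
            ok, output = run_tests()
            if ok:
                return True   # only now is the diff worth a human review
            # Constrain the next round to the observed failure instead of a rewrite.
            ask_agent(f"The tests failed:\n{output}\nFix only what is broken.")
        return False          # escalate to a human instead of looping forever

The point of the sketch is the shape of the loop, not the specific tools: the agent never gets to declare its own work done.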
It is not harder if you don't care about or even understand what could go wrong. It is harder if you care and want to be as confident in this code as in your own hand-written code.
Feels like you are assuming everyone has your diligence and the diligence that exists in the industry isn't already rapidly decaying due to what's happening.
By some metrics (commit count, lines of code, latency to bug fix), I'm one of the most productive engineers in my org. My peers are constantly surprised that I still use zero LLMs in my development routine.
I'm holding on as much as I can. I love the thrill of programming and I'm definitely not holding the team back, although there have been idle threats about it being bad that seniors are not using AI enough.
My super power is that I've meticulously shaped the codebase I maintain to fit in my head at all times. It's only possible because I've been hands on with all of it. In my downtime I do idle refactors and often throw it away, just to explore more ideas to build even more implicit context.
I think we're going to have to have a generational shift in LLM context management before we get to something as capable. I certainly don't doubt we'll get there one day. But I will definitely feel depressed whenever that day comes. Solving architectural problems is my job, implementing the software is my reward - this is something that makes me happy to wake up every day. I accept that I'm a minority in the world of software development
Those metrics don't imply any level of productivity though. I could write 1000s of lines of code across hundreds of commits each day without delivering any value to anyone
> I am a programmer, not a rubber-stamp that approves Copilot generated code
That's just, like, your opinion?
Anyway, I don't see what the fuss is about. It's one of the two...
- Either the LLMs are very good and replace programmers. Yay, automation - we always cheer for that! Instead of working, humanity can, like, go fishing or something.
- Or the LLMs aren't that good and don't replace programmers. Yay, jobs!
Okay, but now what? Clearly, the industry is trending towards an entirely new style of doing programming. What are the long-term options going to be for those who don't enjoy this? Especially when there is a good chunk of people embracing it and adopting these tools faster than any other tools for this profession have been adopted in the past. How will this end?
Ask people who thought compilers were stupid, generating wrong code and decided they preferred to keep writing assembly code...
For the great majority, the long-term option is going to be choosing another career.
The good and bad aspect of this approach to AI in tech is that it has really revealed how many developers out there are merely happy to get something working and out the door before clocking out, without actually understanding the inner workings of their code.
This is almost inevitable when something industrializes; people maximize profit by quickly shipping things that barely work. We need people who try to excel in technology, and AI just amplifies this need.
I find it to be actually a boon for small throwaway side projects that I don't care about and just want to have [1]
Actual code/projects? Detrimental
[1] E.g. I spent an evening on this: https://github.com/dmitriid/mop
> how many developers out there are merely happy with getting something to work and get it out the door
There's a very large number of cases where that's the right choice for the business.
Also for small cli tools and scripts that otherwise wouldn't get written.
whenever people complain about someone being "merely happy with getting something to work and get it out the door before clocking out" i wonder to myself if i'm dealing with someone that has The Protestant Ethic and the Spirit of Capitalism on their nightstand, or has never read Economic and Philosophic Manuscripts of 1844, or simply does not understand the significance of these two essays.
like ... you expect people to actually be committed to "the value of a hard day's work" for its own sake? when owners aren't committed to value of a hard day's worker? and you think that your position is the respectable/wise one? lol
in general it's safe to assume your conversation partner has not read every single essay you have and come away with the same exact thoughts
Where did they say anything about a "hard day's work"? Are you making up arguments to attribute to them, lol
And are you assuming the alternative involves not clocking out? Because "clock out, finish when there's more time" is a very good option in many situations.
No, it's not about capitalism and exploitation, hard-work propaganda, etc. You can work to the contract (i.e. strictly what's in your work contract and not "above and beyond") while still retaining the quality of the work. So reduce the quantity but not the quality. This is about the ton of bootcamp developers minted in the last 10-ish years, for whom, unlike the rest of us, it is just a better-paid job.
Given the remainder of the comment is "and not understanding the inner workings" it's safe to assume that "getting something to work" does not imply that it worked correctly.
Back in the days of SVN, I'd have to deal with people who committed syntax errors, broken unit tests, and other things that either worked but were obviously broken, or just flat out didn't work.
Taking a bit of pride in your work is as much for your coworkers as it is for yourself. Not everything needs to be some silly proles vs bourge screed.
I myself am among the people I would trust least to approve any code. In general I am way too trusting that others either know better or have properly thought through their work.
In scenarios where especially the latter might not be true, it seems like an inevitable failure. And I am not even sure any fixes will be thought through either... which makes me rather sceptical of the whole thing.
You want some advice from a 16-years-in-industry person? Not that long, but long enough: software, like all industries, is driven by metrics.
Metrics we understand, but that managers sometimes fail to understand. You are a means to produce. With the advent of AI, some very hyped people think and wish they could get rid of programmers.
You know what I am doing in the meantime? I built a business. I am just finishing the beta deployment test now. Can it go wrong? Yes.
But otherwise, you face being a number, a cog in a production chain, in the future. Besides that, when they can get rid of you, you are going to be in a bad position to move at that time. Invest time now in an alternative strategy, if you can.
Of course, I know nothing about you so I might be totally wrong. If you already have financial safety for the rest of your life, this does not apply as hard.
I am trying to buy more freedom on my side. I already had some, but not enough. You will not be free with a manager to report to, even if you think you are doing a better job than he thinks. Or even if you are objectively doing it.
They will care about delivery in a rush, politics, self-interest (this is not different from any human, but you will depend on them), etc.
Just choose freedom :D
I totally agree: my employer requires me to take ownership of the code I push to the repository. I should not be forced to use some tool if I think that the tool does the wrong thing.
In a larger scope, I tend to break many "rules" when I code, because my experience argues against them, and this is what makes me unique. Of course nowadays I need to convince my team to approve it, but sometimes things that are written differently are free from certain flaws that, in this very case, I want to avoid.
-- EDIT --
I think that this management trend comes from bad management principles. There's a joke that a bad manager is a person who, knowing that one woman delivers a baby in nine months, will assume that nine women can deliver a baby in one month. I'd say a similar principle applies here: they were sold by the marketing on how AI makes things faster, they put the numbers into their spreadsheet, and now they expect the numbers they pay for to match the numbers on the sheet. And if the numbers do not fit, they start pushing.
I’ve yet to find answers to these two questions:
- If AI is making everyone so productive, why don’t we see that reflected in any existing productivity metrics?
- Why do companies feel the need to introduce 996 when AI is supposedly making us obsolete?
Also, if their AI products are so great, why are they hiring like crazy?
"I am a computer, not someone that checks the calculations done by ENIAC"
-- Lost Letters from the 1950's, Vol 1.
I find that my usage varies based on a cycle of: 1) Using the LLM like crazy to help refactor code, extract components for re-use to speed up things in the long run 2) The refactoring has sped up the work significantly, but now I need to pause and address technical debt that the LLM introduced.
Roles are evolving, and don't be a luddite! We're not programmers anymore, we're the people who take the blame when the AI goes wrong. After all, you are hired to do the impossible job of reviewing all the code and catching all the mistakes. A perfect job for a machine, er, human! Embrace the change! Progress!
"Needless to say, they’d still want you to take the responsibility. If bugs or tickets get raised on the shipped code, it’s you who gets fired, not the copilot or chatgpt - though the larger narrative or news headlines next day would still be, 'AI is eating jobs'!"
I'm also reminded of that legendary old IBM quote from 1979: "A computer can never be held accountable. Therefore a computer must never make a management decision."
You want to pay me bank to rubber stamp bad code? Weird flex but alright!
I find them useful for a few tasks that I'd procrastinate on and delay anyway, but for others, it would honestly be faster for me to just apply my skills without getting to an LLM review mode. I've found that using an LLM poisons my flow in a way that is a bit draining, whereas if I just put the work in to maintain context in my head, it's faster and more enjoyable.
Things like low-risk documentation, updating and creating unit tests along with fixing them in agent mode, describing how some code works etc.. are all at least a little useful for me.
As a response to the AI negativity in this thread: remember that this thing is in its infancy.
Current models are the embryos of what is to come.
Code quality of the current models is not replacing skilled software engineers, network or ops engineers.
Tomorrows models may well do that though.
Venting frustrations about this is all very well, but I sincerely hope those who wish to stay in the industry learn to get ahead of AI and utilize and control it.
Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.
We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, for example, is not that far away.
All while today's tech talent spends energy bickering on HN about the loss of being the code review King.
Everyone hated the code review royalty anyway. No one mourns them. Move on.
Current LLMs are already trained on the entirety of the interwebs, including very likely stuff they really should not have had access to (private github repos and such).
GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse in other metrics).
Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down.
Not enough new data, new data that is LLM generated (causing a "recompressed JPEG" sort of problem), absurd compute requirements for training that are only getting more expensive. At some point you hit hard physical limits like electricity usage.
[1]: If this happens, one side effect is that local models will be more than good enough. Which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.
If managers are pushing a clearly not-working tool, it makes perfect sense for workers to complain about this and share their experiences. This has nothing to do with the future. No one knows for sure if the models will improve or not. But they are not as advertised today and this is what people are reacting to.
There's a lot to unpack here but to me your comment sort of contradicts itself. You're saying these things are in their infancy and therefore not able to produce code at the standard of a skilled software engineer. But you also seem to have an axe to grind against code review, which is fine but wouldn't that mean code review is even more important? At least right now? Which is kind of the point of the article.
In its infancy, but still forced on people like it's a mature product.
The marketing around AI as a feature complete tool ready for production is disingenuous at best, and outright fraud in many cases.
> Some exec somewhere in the company decided everyone needs to be talking to AI, and they track how often you're talking with it. I ended up on a naughty list for the first time in my career, despite never having performance issues. I explain to my manager and his response is to just ask it meaningless questions.
That's not a career-switching issue, that's a company-switching issue. Most people will work for at least one company in their career where the people in charge are dickheads. If you can't work around them, go find a different company to work for. You don't have to throw away an entire career because of one asshole boss.
Also fwiw, resistance is more effective than you think. You'd be surprised how often a dickhead in charge is either A) easy to call the bluff of, or B) needs someone to show them they are wrong. If you feel like you're going to quit anyway, put your foot down and take a stand.
If people can carve out an easy / chill policing role in which all there is to do is watch dashboards many will take it. The key for orgs is to avoid making roles like this.
I don’t think AI is anywhere near the point of replacing humans yet. The main issue here is whether the use of these tools is forced or voluntary. I’ve seen quite a few companies where the boss tries to fully adopt AI productivity tools, but faces strong resistance during implementation. From the employees’ perspective, the boss might be moving too aggressively without considering the practical realities. From the boss’s perspective, it’s frustrating to see the pushback. This tension seems to be a common challenge at the current stage of AI adoption.
Just to add, many people tend to overestimate the power of AI. At least for now, vibe coding doesn’t play a significant role in building complex software. I recently read a Stack Overflow research report showing that “Most respondents are not vibe coding (72%), and an additional 5% are emphatic it not being part of their development workflow.” It also noted that in a future with advanced AI, the #1 reason developers would still ask another human for help is “When I don’t trust AI’s answers” (75%). This clearly shows that human developers remain the ultimate authority.
It won't replace humans in the foreseeable future, as they cannot reason or react to changes they weren't trained on. Bosses are jumping on a hype train, making decisions in fields they barely have expertise in, which is the frustrating part. They listen to the false promises of other "founders". Bosses not listening to their employees has always been a key factor in frustration at work; these businesses have no right to succeed.
I have seen HN leaning negative on LLM code assistance. I like it personally, but it should be thoroughly reviewed.
I'm a solopreneur. Yesterday, in 90 minutes, I developed an enhancement that would have taken a full day before. I did it by writing a detailed Markdown spec and feeding it to Copilot Agents running Sonnet 4.5. Copilot worked on it on a server somewhere while I ate lunch.
When I returned, I reviewed the PR. It wasn't perfect; one of the dozen or so tests that Copilot had generated didn't actually test what it purported to. So, I fixed that by hand. It wasn't a big deal. It was still quicker and took less cognitive effort than writing the entire PR myself.
I'll confess that part of me is pleased to be dismissed with epithets like "AI-pilled," because properly using LLMs is an enormous competitive advantage. The more negative sentiment around them there is, the less likely the competition is to be inclined to learn and master them.
You're a programmer, not a rubber stamp? The unemployment line is over there then, you aren't wanted.
I am living this but the CEOs of my company are also "active" programmers.
Even though I already hear from them that "it helps them in languages they do not know" (which is also my experience), I get frowned upon if in meetings I do not say that I am "actively using AI to GENERATE whole files of code".
I use AI as a rubber duck, to generate repetitive code, or to support me when going into a new language or technology. But as soon as I understand it, most of the code given for complete, non-hobby, enterprise-level projects contains either inefficient code or just plain mistakes which take me ages to fix in new technologies.
Interesting to see the phrase 'programmer' coming back en masse - especially as someone who never really stopped using it.
I thought we were all 'full stack engineers' now, otherwise the resume got thrown into the circular file?
Great. I wait with anticipation for the slide back to 'Calculator'.
Also, as someone who had lived through the cringeworthy programmer -> 'coder' transition (with coder originally having had a 'code monkey'-vibe), I am happy to see we're returning to adult words.
I am a programmer, not a rubber stamp that copy pastes StackOverflow answers I don't understand
Or clones a template repo and only tweaks a few files
Or imports libraries with code I've never read
I understand which category of people you are describing, but this is what a proper programmer actually does:
- Checks stackoverflow only for very niche issues, never finds exactly what he needs but reaches a solution by reading multiple answers, and sometimes used to post a better solution for his issue
- Has his own templates if he does repetitive and boring stuff (common), implements the complex logic (if any) first, and gets the rest done as fast as possible while being mildly disgusted.
- Imports libraries and often takes a look at the code, noticing stuff that could be improved. Has private forks of some popular open-source libraries that fix issues or improve performance by fixing silly errors upstream. Sometimes he is allowed/has time to send the fixes back upstream. When using those libraries he sometimes finds bugs, and the first thing he does is check the code and try to fix them directly; no tickets to the maintainers, often he opens a PR with the fix directly.
This conversation isn't for you, you're not a programmer, you're a developer, a modern day script kiddy.
Programmers wrote the StackOverflow answer and wrote that library.
This conversation is for true Scotsmen.
Yeah, I used to be in the top 100 SO users, so I wrote a lot of SO answers, and if you used Red Hat Linux, you probably used my library.
But according to your definition, I'm a script kiddy.
Too bad parsing English wasn't your strong point.
Working with AI is like working with an ADHD intern that says they understand the problem but then gets distracted by every possible thing and then tries to gaslight you into thinking their mistakes are your fault.
It's exhausting, infuriating, and a waste of time.
Funny, in that just moments ago I was describing AI coder to friends as a very drunk intern high on crack.
My experience is that using AI as a fancy code completion tool works very well and saves me a lot of time.
But, trying to let it define how to do things aka vibe coding, is a recipe for endless disaster.
An AI coder can do great things, but it needs someone to first define the architecture and forcefully guide it in the right direction at every step. If let loose, things go haywire.
I basically just use it every now and then to summarize api docs for me, every time I try to use it to solve a real problem it just flounders in context polluted by its previous failed attempts.
I generally find the whole process to be more frustrating and time consuming than just writing the code myself.
I am not interested in entire new architectural paradigms required to enable a mediocre code ad-lib bot.
This is starting to remind me of the switch from ASM -> C or more recently, the switch from pure HTML/Javascript to frameworks.
Technology advances and if you don't follow the advancements, you will suddenly find yourself laid off with no job prospects.
This happened to my cousin last year. He's in his early 50s and was the subject matter expert on decades old technology, which nobody really uses anymore. He was laid off and when he tried to get another software development job, many interviewers laughed at him for his outdated tools and techniques he used.
He ended up getting a job as a mechanic and will most likely not get back into IT.
You should try Ops.
US: This is a horrifically bad idea.
Mgmt: We MUST have this. Do it, I don't care
US: implements horrifically bad idea
Mgmt: Why does this not work? Why is this taking months? Our IT group sucks.
User: list crafts that software has automated
GPT-5: Typesetting and paste-up, film prepress/stripping, CMYK color separations, halftone screening, darkroom compositing/masking, airbrush photo retouching, optical film compositing/titling, photochemical color timing, architectural hand drafting, cartographic scribing and map lettering, music engraving, comic book lettering, fashion pattern grading and marker making, embroidery digitizing and stitching, screen-print color separations
Just yesterday I made some notes about a program I'd like to write (hobby project, to be open sourced). After that, the thought of using an LLM to turn the notes into an implementation squished the joy right out of me.
The better the code generated by LLMs get, the less there is of an incentive to say "no". Granted, we're not nearly there yet (even though media reports and zealous tech bros say otherwise). But - and this is especially true for organizations that already had a big code quality problem before the LLMs showed up - if the interpreter / compiler accepts the code and it superficially looks like it does what it should, there is pressure to simply accept it.
Why say no when we could be done now and move on!? Rubber-stamp it and let's go! Sigh. Maybe I'm overly pessimistic, reading the raves about LLMs every day grinds me down.
They force it on people because the U.S. is great at inventing new religions and selling them to people. It does not matter if it works. It does not matter if a company has a profit.
If company A buys $10 million non-working services from company B and B buys $10 million non-working services from A, both have $10 million revenue on paper.
They also have $10 million expenses, but revenue and hype is enough to warrant a P/E of 100.
Every company that sells the hot air or whose CEO has friends who sell the hot air needs to force "AI" on its employees in order to keep up appearances. In two years, when everything crashes, they will create a new religion and sell it.
Interesting to see how programmers seem to be separating into people embracing those tools and people rejecting them. I wonder if it's linked to liking the act of coding itself vs liking the results.
What baffles me is how much more rage is coming from other creative workers (painters, filmmakers, musicians) than from programmers.
Why are programs - the result of the ingenuity of people working in software field - not protected against AI slop stuff.
Why is there not any kind of narrative out there describing how fake and soulless is code written by any AI agent?
Because for other workers the threat is much bigger. I'm not a painter, filmmaker or musician, but now I can make a picture, a short movie or a song. Yes it will be mediocre, but if I'm fine with mediocre I no longer need those professionals.
Programs on the other hand still need developers to make. Also, we've seen decades of tooling evolution that (1) made developers more productive (2) failed to replace developers.
> Why is there not any kind of narrative out there describing how fake and soulless is code written by any AI agent?
Because soulless code does not matter. In other fields the result is more subjective: I don't like movies with a desaturated color palette, but a lot of people like them. Maybe LLMs can produce a new genre of movies which people who appreciate classic film or music find soulless; they find it sad that the masses kind of like these films, and see the whole thing as a risk to their careers, their whole craft, and the human effort that goes into making art.
In code it's objective: the result either works or it doesn't. I guess you can stretch "it works" to have a broader meaning that includes maintainability, where it starts to get more subjective, but at the end of the day you can also reach a point where the whole thing collapses under its own weight.
I think this is the main difference in the reaction to LLMs between fields: in fields that are subjective and more sensitive to the receiver's taste you can notice rage (I think rage is an overstatement) against it, while in fields where the result is objective the reaction from people is simply to say whether it does or doesn't work.
Because there already was slop written by humans, and I think that in many cases the AI slop is better looking.
>not protected against AI slop stuff.
Programmers are by and large not assholes averse to sharing, which is why we have copyleft and Stack Overflow.
Coding is also a process, a process that you may need to go through many times. The creation and maintenance of expert systems
Artists tend to want to win it big once, never innovate, and use the government to force people to send them money.
I can't think about a more just punishment than what God is giving to programmers right now with AI generated code.
It was you who for decades did everything in your power to make programming as difficult as possible for normal people, inventing the most petty rules and complications in every place imaginable in order to protect your guild from intruders.
Being able to program does not require intelligence, it requires being able to memorize an immense number of random gotchas and grammar rules which makes code not work if you forget to honour them.
Things such as:
- If you add 2+2 your answer will be 22 unless you do it in the "right" way (see the small sketch after this list).
- Your code not compiling if you have "mixed tabs and spaces", and of course all text editors also hide tabs and spaces.
- Your code failing if you have forgotten a ; or a " or a ) or a } etc etc
- Having to set up a complicated development environment instead of being able to program straight away on the machine.
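For illustration only, a minimal sketch of that first gotcha in Python (assumed here purely as a stand-in, since the comment names no particular language): values read in as text concatenate instead of adding until you convert them.

    # The "2 + 2 gives 22" gotcha: user input arrives as strings,
    # so + concatenates instead of adding until you convert explicitly.
    a = input("first number: ")   # the user types 2, but a is the string "2"
    b = input("second number: ")  # likewise "2"

    print(a + b)                  # prints "22" -- string concatenation
    print(int(a) + int(b))        # prints 4    -- the "right" way: convert, then add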
So at least I'm laughing while I'm praising God for punishing programmers with AI code. It helps normal people to work through your Byzantine and evil rules and empowers them to make functional programs, while you are left to tear your hair and grind your teeth when reviewing our AI-generated code.
Now you know how we felt all these years!
You realize that what you write makes zero sense. Those rules will all be there regardless of whether you generate your code by hand or with AI. So you should ask your god to give you higher cognitive skills instead.
Memorizing random rules was a skill. Now the AI will memorize those rules, and it's just not a skill anymore.
The skill will instead be in architecting your program or system, which requires more cognitive skill and intelligence than rote memorization.
This feels like it would have been better off as a topic level reply to that Reddit thread, than as a short whiny blog post, or as a post to HN
> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people?
For people who are so confident (which, I'm not), it's an obvious step; developers who don't want to use it must either be luddites or afraid it'll take their jobs. Moving sales people to digital CRMs from paper files, moving accountants to accounting software from paper ledgers and journals, moving weavers to power looms, etc etc -- there would have been enthusiasts and holdouts at every step.
The PE-bro who's currently boasting to his friends that all code at a portfolio company has to be written first with Claude Code, and that developers are just there to catch the very rare error, would have been boasting to his friends about replacing his whole development team with a team that cost 1/10 the price in Noida.
Coding agents can't replace developers _right now_, and it's unclear whether scaling the current approach will allow them to at any point. But at some point (and maybe that's not until we get true AGI) they will be able to replace a substantial chunk of the developer workforce, and a significant chunk of developers will be highly resistant to it. The people you're complaining about are simply too early.
It tracks with the trend of computing being something you passively consume rather than something you do. Don't learn how anything works! Deskill yourself! Not that LLMs aren't a force multiplier.
> said usage is actually getting monitored and performance appraisals have now started depending on the AI usage instead of (or at least in addition to) traditional metrics like number of priority bugs raised, code reviews, Function Points Analysis, etc.
Really? This sounds absurd. "Instead of" means it doesn't matter how shit your work is as long as you're burning tokens? Or it doesn't matter how good your work is if you're not burning tokens? Name and shame
If you admin a google-workspace domain, you get metrics out of the box on agent usage.
I guess it's great for AI companies that they've managed to bait and switch "this will improve your productivity" to "this is how much time you're sinking into this, let's not care about if that was useful"
There are a bunch of companies out there that are tracking what percentage of their developers are using LLMs now.
I heard a rumor recently that AWS are doing this, and managers are evaluated based on what percentage of their direct reports used an LLM (an Amazon-approved model) at least once over a given time period.
Microsoft, Oracle, Amazon, to name a few
What I don't like about this take is that it implies that it could be that way. It implies the LLM could do the job of writing, leaving the programmer to just approve it.
It sounds anti-LLM, but it actually helps support the illusion that LLMs can do more than they actually can.
I don't think an LLM can write serious software on its own. If it could, there would be some extraordinary evidence, but all there is are some people spreading rumours. If you ask them for simple evidence of comparable performance (like a video), they shy away or answer vaguely.
The thing is not there yet, and I understand the optimism of some, but I also must emphasize that it's not looking great for LLM coding enthusiasts right now. There's no amount of proselytism that can make up for the lack of substance in their claims. Maybe they can trick investors and some kids, but that's not going to cut it in the long run.
Therefore, this is not a problem. I don't need to worry about it. If (or when) some evidence appears, I can then worry about it. This hasn't happened yet.
That's a bit like a 1950s-era pilot raging against autopilots because 'I am a pilot, not a rubber-stamp that does autopilot surveillance.'
Today, no commercial pilot would get the idea that they are there to fly straight for eight hours. They are there for when bad things happen.
I expect software development to go into a similar direction.
>why not just keep it voluntary, why force it on people?
People hate learning new tools, even if they are more efficient. People would rather avoid doing things than learn a tool to do them efficiently.
Even in this thread you can see someone who is / was a Vim holdout. But the improvement from Vim to an IDE will be a fraction of the difference compared to AI-integrated IDEs.
Most companies I've worked with don't care if you use vim or an IDE.
I've worked with people using vim who wildly outproduce full teams using IDEs, and I have a strong suspicion that forcing the vim person to use an IDE would lower their productivity, and vice versa
>I've worked with people using vim who wildly outproduce full teams using IDEs
This is not due to the editor. Vim is not a 20x productivity enhancer.
>forcing the vim person to use an IDE would lower their productivity
Temporarily, sure. But their productivity should actually go up after they are used to it. This idea of wanting to avoid such a setback and avoiding change is what keeps people on such an outdated workflow.
In practice their productivity would drop temporarily then permanently as they'd find a job that didn't micromanage them.
I tried Cursor but it felt impossible for me to create anything novel there. It only works on things which have been, more or less, in the training data.
Saying that the people are the problem instead of the tool is a lazy argument IMO. "It's not the company's fault, it's the customer's."
Did people force React? Cloud infrastructure? Microservices? You get it.
I know there are people still using PHP 5 and deploying via FTP, but most people moved on to be better professionals and use better tools. Many people are doing this to AI, too, me included.
The problem is that some big companies and influential people treat AI as a silver bullet and convince investors and customers to think the same way. These people aren't thinking about how much AI can help people be productive. They are just thinking about how much revenue it can give until the bubble pops.
> Did people force React? Cloud infrastructure? Microservices? You get it.
Actually, yes; people forced React (instead of homegrown or different options) because it's easier to hire for than finding js/typescript gurus to build your own stuff.
People forced cloud infrastructure; even today, if your 10-customer startup isn't using cloud in some capacity and/or kubernetes, investors will frown on you; devops will look at you weird (what? Needing to understand the inner workings of software products to properly configure them?)
Microservices? Check. 5 years ago, you wouldn't even be hired if you skipped microservices; everyone thinks they're Google, and many startups need to burn those aws credits; that's how you get a dozen-machine cluster to run a solution a proper dev would code in a week and could run on a laptop.
Forcing react, cloud infra and microservices makes a lot more sense than forcing certain development tools. One is the common system you work in; the other is what you use to essentially edit text.
It's basically the same. It abstracts away a layer of complexity, so you focus on different stuff. The inherent disadvantage of using these shortcuts/abstractions is only obvious if you actually understand their inner workings and their shortcomings - be it cloud services or llm-generated code.
Today you have "frontend programmers" that couldn't implement a simple algorithm even if their life depended on it; that's not necessarily bad - it democratizes access to tech and lowers the entry bar. These devs up in arms against ai tools are just gatekeepers - they see how easy it is to produce slop and feel threatened by it. AI is a tool; in most cases it will improve the speed and quality of your work; in some cases, it won't. Just like everything else.
Not really...
If one person writes code only in react and another only in vue, in the same product, you have a mess.
If one person writes their react code in vim and another writes it in an IDE, you don't have a mess.
Adapt or perish.
AI is coming for you John Connor.
I see where you’re coming from, but there’s a small difference. Coding itself is mostly a routine task, turning ideas into working code. Humans really stand out in the important parts: creative thinking, planning and architecting the system, deciding what it should do and how it should do it, finding problems, checking code quality, and making smart decisions that a tool can’t. AI can help with the routine work, but the creative and thinking parts are still human. And this is exactly where developers should focus and evolve.
> creative thinking, planning and architecting the system, deciding what it should do, how it should do, finding problems, checking code quality, and making smart decisions that a tool can’t.
Are you aware that there are people that think that even now AI can do everything you describe?
It can.
The reason crappy software has existed since...ever is because people are notoriously bad at thinking, planning and architecting systems.
When someone does a "smart decision", it often translates to the nightmare of someone else 5 or 10 years down the line. Most people shouldn't be making "smart decisions"; they should be making boring decisions, as most software is actually a glorified crud. There are exceptions, obviously, but don't think you're special - your code also sucks and your design is crap :) the goal is often to be less sucky and less crappier than one would expect; in the end, its all ones and zeros, and the fancy abstractions exist to dumb down the ones and zeros to concepts humans can grasp.
A machine can and will, obviously, produce better results and better reasoning than an average solution designer; it can consider a multitude of options a single person seldom can; it can point out from the get-go shortcomings and domain-specific pitfalls a human wouldnt even think of in most cases.
So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.
People often blame LLMs for bad code, but the real issue is usually poor input or unclear context. An LLM can produce weak code if you give weak instructions, but it can also write production-ready code if you guide it well, explain the approach clearly, and mention what security measures are needed. The same rule applies to developers too. I’m really surprised to see so much resistance from the developer community; instead, they should use AI to boost their productivity and efficiency. Personally I am dead against using CLI tools; instead, IDE-based tools will give you better visibility on the code produced and better control over the changes.
Then it's a problem.
They said the same thing about the loom. "I'm an artist, no machine can replace me!" Now it's all done by machine, and none of us worry about it. We're in the early stages of the same process with AI; history rhymes.
That may be the case some day, but I don't think it's going to happen with LLMs. They get too many things wrong via hallucinations (likely unfixable) and often they can go deep into an (incorrect) rabbit hole burning a ton of tokens at the same time.
Useful tools, but I think the idea that they'll replace programmers is (wishful? eek) thinking.
Yup.. it took the loom 200 years, and it won't be overnight for AI either. But it will eat away at the edges, and do the simple things first. It already is, for those who embrace it.
People 70+ still often don't trust the computers because the machines make too many mistakes and are unreliable. There's no arguing with them.
Are you trying to say I'm old? Machines are deterministic... LLMs are very much not.
High-quality hand-made clothes still exist and people do want to pay for them. Mass-produced clothing made in sweatshops is what the majority of people buy, because that is where the capitalist companies drove production.
They exist the same way the horse-and-buggy exist -- for a select few. They're the exception that proves the rule.
The loom dropped production costs immensely - even hand-made clothes are made with premade fabrics; they don't make them from scratch.
Mass-produced clothing exists in many industrialized countries - typically the premium stuff; the sweatshop stuff is much cheaper, and customers are happy paying less; it's not capitalism, it's consumer greed. But nice story.