I'm no longer at the company but yesterday was my last day so my information is still current.
The division I worked in was demanding that developers use AI at least once a week and they were tracking people's usage. The boss nagged about it every day.
I had no problem meeting the requirement, but I found its contributions to be very hit or miss in terms of usefulness.
We have Cursor; it's a step back from using Claude + MCP, and it hallucinates a lot because of poor context management. But that's not the real reason I'm using LLMs less than before.
* The codebase consists of many modules with their own repo each
* The test back end has a lot of gotchas, borked settings or broken ongoing work by the back-end team
* The code is incredibly layered
I'm spending up to 10% of my time writing actual code. The rest is overhead like multi-repo PRs, debugging, talking to people, etc.
Once I've found the issue, the code is the easy part, and explaining it all to the LLM is more work.
Assistive coding tools need to get a lot better
We are forcing non-use because of compliance. There is a fear that the models will scan and steal our proprietary code.
Of course that’s a risk, but is it a different risk than GitHub stealing code from your private repos? In other words, do you just trust the AI companies less or do they not offer “we don’t steal your code” contracts?
Has your company tried running the models locally, or is that maybe just presumed to be not worth the effort?
> is it a different risk than GitHub stealing code from your private repos?
Putting company code into a private github repo would be a firing offense where I work.
I think GP is talking about the case where the company hosts its code in a private repo.
Where is this and are you hiring?
Same currently. This is actually a risk in itself though. /Some/ of your devs are going to circumvent policy and use an AI assistant. It is better at this point to have a tool available where you have a business level agreement vs. burying your head in the sand and believing that everyone is going to follow the org policy of 'no AI'.
Not forced. Encouraged. Everyone used it earlier without revealing it, now it is open. Knowing how and when to use your tools properly is a good idea.
Not being forced, but the peer pressure is getting pretty strong.
Claude Code, I will admit, I find occasionally useful, but the flood of overly verbose and largely meaningless "AI Summaries" I'm being forced to waste time reading is really grating on me. Copilot PR summaries that turn a 20-line PR into a fifty-line unhelpful essay are driving me insane.
Not forced, but the tooling has been made available to those who ask. Work has provided Microsoft Copilot through Teams and GitHub Copilot through my IDE of choice.
I found Microsoft Copilot to be reasonably good when given complete context with an extremely limited scope, such as being provided a WSDL for a SOAP service and asked to write functions that make the calls and then unit tests for the whole thing. This had a right way and a wrong way of doing it, and it did it almost perfectly.
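For a sense of the shape of that task, here is a minimal Python sketch of the kind of output involved: a thin wrapper around one WSDL operation plus a unit test that mocks the SOAP client. The service, operation, and field names are made up for illustration.

```python
# Hypothetical sketch only: wraps one SOAP operation defined in a WSDL and
# unit-tests it with a mocked client. All names are invented for illustration.
from unittest import TestCase, main
from unittest.mock import MagicMock

from zeep import Client  # pip install zeep


def get_order_status(client: Client, order_id: str) -> str:
    """Call the (hypothetical) GetOrderStatus operation and return its Status field."""
    response = client.service.GetOrderStatus(OrderId=order_id)
    return response.Status


class GetOrderStatusTests(TestCase):
    def test_returns_status_field(self):
        client = MagicMock()
        client.service.GetOrderStatus.return_value = MagicMock(Status="SHIPPED")
        self.assertEqual(get_order_status(client, "42"), "SHIPPED")
        client.service.GetOrderStatus.assert_called_once_with(OrderId="42")


if __name__ == "__main__":
    main()
```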
However, if you give it any problem that requires imagination, with n+1 ways of being done, it flounders and produces mostly garbage.
Compared to Microsoft Copilot, I found GitHub Copilot to feel lobotomised! It failed on the aforementioned WSDL task, and where Microsoft's could be asked "what inconsistencies can you see in this WSDL" and catch all of them, GitHub's was unable to answer beyond pointing out a spelling mistake I had already made it aware of.
I have personally tinkered with Claude, and it's quite impressive.
My colleagues have had similar experiences, with some uninstalling the AI tooling out of frustration at how "useless" it is. Others, like myself, have begun using it for the grunt work, mostly as an "intelligent boilerplate generator."
When you say Microsoft Copilot you mean inside Visual Studio?
At least in my 2.5 person devops team, no.
Also, I can't imagine how being handed a bunch of autogenerated terraform and ansible code would help me. Maybe 10% of my time is spent actually writing the code; the rest is running it (ansible is slow), troubleshooting incidents, discussing how to solve them & how to implement new stuff, etc.
If someone works in a devops position where AI is more of a threat, I'd like to hear more about it.
I use Claude Code with Terraform all the time. It's particularly good when your codebase is well modularized, or at modularizing existing Terraform.
It’s also quite good at getting to a solution by using validate/plan loops and figuring out syntax/state issues.
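For illustration, a rough sketch of that validate/plan feedback loop driven from Python; an agent would feed the captured errors back into its next edit rather than just printing them. The commands and flags are standard Terraform CLI, but the wrapper itself is a hypothetical sketch.

```python
# Sketch of a validate -> plan loop; the error output is what an agent would
# react to on the next iteration. The working-directory default is an assumption.
import subprocess


def run(cmd: list[str], cwd: str) -> tuple[int, str]:
    """Run a terraform command, returning (exit code, combined output)."""
    proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr


def check(workdir: str = ".") -> bool:
    """One iteration: validate first, then plan only if validation passes."""
    code, out = run(["terraform", "validate", "-no-color"], workdir)
    if code != 0:
        print("validate failed:\n" + out)
        return False
    # -detailed-exitcode: 0 = no changes, 1 = error, 2 = changes pending
    code, out = run(["terraform", "plan", "-no-color", "-detailed-exitcode"], workdir)
    if code == 1:
        print("plan failed:\n" + out)
        return False
    print("plan ok (changes pending)" if code == 2 else "plan ok (no changes)")
    return True


if __name__ == "__main__":
    check()
```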
The biggest challenge is the lack of test sophistication in most terraform setups.
But LLMs generally are _amazing_ for my ops work. After feeding a codebase and then logs into one, I've seen Claude Code identify exact production issues with no other prompting. I use LLMs to help write incident reports, translate queries across the various time-series DBs we use, etc.
I'd encourage you to try an LLM for your tasks; for my ops work it's been a huge boon.
Yes. I work for a large financial institution and they are all in on AI. All managers and tech leads have been instructed to apply AI as much as possible and to shoehorn it into every single thing because the company has made a BIG public announcement that their future is AI. So now they are desperately trying to find ways to back up those claims.
To be honest, I think it's pretty cool tech (I mostly use Copilot with either Claude Sonnet 3.7 or 4, or otherwise GPT-4.1). Agent mode is cool. I use it every day, and it has helped me work faster and do better work, because it preemptively caters for things that might otherwise have taken many iterations of releases to discover. So yeah, I think AI is pretty good for software developers overall. It's a great tool to have. Is it going to do my work and leave me redundant? Not any time soon.
I think the company I work for will fail in their enforced AI efforts, spend a gazillion dollars, and go quietly back to outsourcing overseas when the dust settles. I feel sad for the junior devs though, as they are basically vibe coding their way through Jira tickets atm. I am a graybeard, 30+ years in the industry.
When it works for unit tests I love it. Every once in a while it'd just work and save me time. Unfortunately, the vast majority of the time I couldn't just let it write them without reworking them, because I've never seen an LLM write code-review-ready code. It takes weird shortcuts and sidesteps more efficient and readable ways of doing things. And a lot of the time it was simply wrong in its approach, and I'd find myself arguing with it so much that it made more sense to write the test myself. This was on iOS using Swift, which I think most LLMs generally suck at for whatever reason, probably due to all the bad and old advice on the internet as Swift continues to change.
Ours is going the other way and wants people not to use it. A losing battle, but the people making decisions are a bit fuddy-duddy about this sort of stuff; we just keep getting links posted about how much energy it takes to talk to ChatGPT.
Yes. However, it's really our lead VC investors who are forcing it and want to see us have 75-100% AI teams. Yet now we're in a mini panic after customers said the core functionality of my team's product doesn't pass muster. So we'll probably put the AI features on hold while marketing calls our non-AI features "powered by AI".
I suppose we engineers familiar with the product had just a bit more context than the investors issuing a blanket statement to their portfolios to use AI.
This era of forced AI features is so infuriating. Emotional, anxiety-ridden, greedy investors are running around trying to design all our products for us. There is a great lack of critical thought and actual purposeful design amid all this hype.
It's not forced, but the atmosphere has definitely shifted. These days, before we even start on a task, the first question is often "Can we solve this with AI?" Over time, it starts to feel like we're working around the tool instead of focusing on the actual problem.
No. My employer released their official AI policy not long ago, and it codifies what has been their unofficial policy for a while now: you can use it if you want to, but you'll have to pay for it out of your own pocket.
The company where I work is actually halting all AI projects for the next couple of months due to the huge costs involved; however, Copilot stays. Fintech.
I wouldn't say forced, but it is advisable to use it if you don't want to fall behind...
Yes, and we even hired a guy to do it. He's a young fellow who has been using every AI tool under the sun, seemingly forever. Also well connected in the space. He comes up with various suggestions about how to use all the tools.
I'm certainly seeing the benefits. A lot of tasks are faster with AI. Even some quite fiddly bits of log-diving and finding subtle bugs can be done by AI, which would have taken me considerably longer.
I'm still finding that overall architecture needs to be done by me, though. Once you make the task big enough, AI goes off the rails and makes some really odd stuff.
No one's being forced, but we're encouraged to explore and experiment with AI tools, and not just for writing code. It's a quite firm belief in the company as a whole that the winners in the 'AI age' will be the companies that are able to use AI tools to improve their internal workflows and become more productive. So we get to try out lots of different things, and we make sure to share our learnings with each other.
Non devs: forced to use AI tools. Devs: officially not forced but so strongly encouraged that it's de facto forced. Personally I agree with both the reasoning and execution of this process shift.
I work at a small web company (.net based, Netherlands) and we're just experimenting with it. We have a paid copilot subscription, but nothing about it is mandatory in any way. But this place is conservative in the sense that self hosting is the norm and cloud services like Azure or even github (we self host Gitea) are not, other than MS 365 for Teams and e-mail.
This year's management goals have been crafted with the help of our in-house AI assistant.
The development and project teams I primarily work with are all encouraged to identify suitable use cases for GenAI. Most development teams have already started trials with AI assisted coding but reported a relatively low adoption rate of 5–10%.
At least 20% of code must be AI-generated, with a goal of at least 80% by the end of the year. The CEO declared that vibe coders create better solutions because they are "goal oriented", as opposed to traditional coders, who are "process oriented".
The same CEO will have a meltdown over the bugs in an unmanageable codebase next year. Fun times ahead :)
A lot of people in tech are seeing the limitations of llms but it will take a year or two for the mainstream audience to get there.
They’ve basically created the ultimate “process oriented” solution to that problem.
Godspeed
Our company is relatively old (20+ years), mid-size, hardware-oriented, and has a lot of people of all ages. For now, neural network use is carefully allowed and encouraged, but only from a list of approved LLMs; it's not mandatory and not forced (yet). There is also an internal project based on NNs to automate log analysis, but it is very far from becoming a useful product: too much noise and too many useless, non-actionable results.
Not us, but I know people who are coerced to use AI for programming, where for example KPIs are tied to LLM usage.
Is this similar to companies forcing TDD or extreme programming or pair programming on their employees? Some manager hoping to get more productivity by imposing a tool or technique?
> Some manager hoping to get more productivity by imposing a tool or technique?
Bingo. Let's see how that works out…
Forced, no. The consulting company that employs me is talking about AI constantly, and our internal Viva Engage is full of people talking about it. None of them are programmers.
The client I work at, through them, has made some tools available, but no one is using them for anything.
Not forced yet, but we have a lower risk appetite than your average firm. If it is used, instructions are clear that the output should not be treated as gospel. Haven't heard of any major issues as a result of unfiltered, unreviewed AI dumps (yet).
No, and most seem to avoid using AIs for pretty much anything. The usage I've seen has been mostly inspirational.
We are allowed to use AI for coding, with the understanding that we are responsible for the code generated by the AI, not legally obviously, but functionally.
We're among the companies that decided to be "AI-first" - whatever that means. They are spending a huge amount of money and effort deploying AI tools such as Claude Code, Cursor, etc.
I'm kinda worried about how the massive usage of AI coding tools will affect the understanding of large codebases and complex systems, but to be totally honest I'm really impressed by Claude Code and how it can write Terraform/Helm/Ruby based on the company's idioms (and I'm talking about a repository with 250k+ lines of HCL!).
It would also be interesting to know how using AI is encouraged.
What are the best practices? Which tools are genuinely helpful, such as automatic reviews in the build pipeline, or sentiment analysis on commit messages?
In our case, we have strong security guidance about which MCPs to use and how; otherwise we're free to use the AI coding tool of our choice (Claude Code, Cursor, ...). There's no KPI for LLM usage _yet_, but I feel it coming soon.
There are also many workshops about how to build with AI, etc., so it's slowly becoming part of everyone's work.
Using Gemini Pro. No, I adopted it all on my own, but work pays. I love it, even though I can't use it on everything. Most developers here are using it, with approval on a per-project basis.
Lately it's taken over code reviews, for myself and when I review other people's code. Extremely helpful. It's made software development fun again.
Wait. You don’t review code any longer? Are you sure that’s wise? Or is that not what you mean?
I had not considered this obvious application of AI. May I ask how the review quality fares (i.e., does AI supplement human review or act on its own)?
I use AI to review before a PR. Humans take it from there.
Quality far exceeds that of human reviews. It's becoming my favorite use case for AI.
No, but I wish it did for stuff like summarizing meetings, etc.
Everybody focuses on programming, but the real value is in project management imho.
If we get "AI" to summarize meetings, why not use that summary to get "AI" to implement the ideas from that meeting while we're sipping coffee and reading the paper? :)
I'd be happy if it could use its transcript to:
- update docs
- update user stories on whatever project tracking tool you use
- check for inconsistencies between requirements and current flows
Those are all things that should be trivial-ish for an AI, and that's where the real value/speed-up is; a rough sketch of the idea follows.
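A minimal sketch of that idea, assuming the OpenAI Python client (any chat-completion API would do); the prompt wording, model name, and JSON keys are illustrative, and actually applying the results to your docs or tracker is left out.

```python
# Feed a meeting transcript to a chat model and ask for structured follow-ups.
# Hypothetical sketch: the keys and prompt are assumptions, not a spec.
import json

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the env

PROMPT = """From the meeting transcript below, return JSON with three keys:
"doc_updates", "story_updates", "inconsistencies". Each must be a list of short,
actionable strings.

Transcript:
{transcript}"""


def follow_ups(transcript: str) -> dict:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT.format(transcript=transcript)}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)


if __name__ == "__main__":
    print(follow_ups("Alice: export moves to JSON next sprint; the docs still say CSV only."))
```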
Not forced, no. I guess you could say "encouraged" but honestly I don't think our people need a lot of encouragement. There seems to be a lot of inherent demand for AI tools, to the point that some people are chafing at our inability to move even faster on rolling stuff out.
The place I work seems open to the fact that it's not an all-seeing, all-knowing force in the world. Though we do use it as a quicker search engine.
I've heard of companies that are shoehorning it into everything; I feel many of them are just playing the game to get better valuations.
Using AI as a search engine seems like the worst of the worst. "You can't lick a badger twice" is a thing...
There's a reason LLM + internet search is quickly becoming the most efficient way to find information: it ingests text content, strips all the fluff, and customizes it to your specific query. Some offer quick source links too if you want to validate, but nobody does any of that.
Google-Fu is being replaced with Prompting-Fu
Not being allowed, or choosing not, to spend time learning the limits, benefits, and drawbacks of different LLM models is basically handicapping yourself.
Sure. But it's like... the LLM is a tool to supercharge your human associative thinking. It's not a search per se, and every time you get the correct answer that's just a happy accident (although I realize it happens a lot).
I'd like to see an actual LLM+search system that dreams up hypotheses and then tries to falsify or confirm them with actual search. That seems like it could be a great search system.
But that's not what we have today afaik. What we have are systems that pump out "you can't lick a badger twice" type misinformation on a massive scale, all unique.
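For what it's worth, a rough skeleton of the hypothesize-then-falsify loop described above, with the LLM and search calls stubbed out; every function and field here is hypothetical, and the point is only the control flow.

```python
# Skeleton of a hypothesize -> try-to-falsify -> confirm search loop.
# generate_hypotheses() and web_search() are stand-ins for an LLM call and a
# real search API; they are stubbed so the control flow itself runs.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    claim: str
    supporting: int = 0
    contradicting: int = 0


def generate_hypotheses(question: str) -> list[Hypothesis]:
    return [Hypothesis(f"candidate answer to: {question}")]  # stub for an LLM call


def web_search(query: str) -> list[str]:
    return []  # stub for a real search API returning snippet strings


def investigate(question: str) -> list[Hypothesis]:
    hypotheses = generate_hypotheses(question)
    for h in hypotheses:
        # Deliberately look for evidence against each claim, not just for it.
        h.contradicting = len(web_search(f"evidence against: {h.claim}"))
        h.supporting = len(web_search(f"evidence for: {h.claim}"))
    # Keep only claims that survived the falsification attempt.
    return [h for h in hypotheses if h.contradicting == 0 and h.supporting > 0]


if __name__ == "__main__":
    print(investigate("can you lick a badger twice?"))
```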
Yep, the same thing happened while blockchain was a thing: all the companies were suddenly doing it to look valuable in front of shareholders or the board, but in reality it is a niche thing that isn't useful to most companies.
I usually use AI to draw pictures, write texts, and organize materials. For example, when I make PPT or WeChat articles, I let AI help me come up with titles and polish paragraphs, which saves me a lot of time.
It's really interesting to see the extreme contrast between the constant praise of AI coding tools here on HN vs the actual real world performance as seen recently on public Microsoft repos, where it utterly fails at even the most basic tasks.
I'm pretty surprised people here are saying anything good about Copilot, honestly. Its PR summaries and reviews are, for me, basically worthless, and I turned off the autocomplete snippets after the first day, when they were always tantalizingly close to being right but then never actually worked.
Yes, and it's tracked, so I've started shifting personal AI use to company-provided accounts to "get credit" for using AI more.
Encouraged for learning/examples, the company has an enterprise subscription for employees.
Permitted for development with the explicit caveat that code is always the responsibility of the people connected to the pull request.
It's considered a minimum skill requirement to know how and when to use AI and to actually then use it, yes. I haven't seen managers enforce it but the CEO already said so. In practice there are still people who are resistant of course.
Our company is positioned right at the edge of the wave for this though so it's understandable.
Nah mate. The talk of AI usage has dwindled, but from time to time I see people use it.
Not forced, but strongly encouraged. Even security guidelines that we had been required to follow for the previous 10 years are being thrown out the window to make way for the AI train.
I've already communicated that I don't want to see nor hear the "but AI generated it this way" opinions. But other than that, I can see the potential, and I'm using it as well. Just not for generation of production code, rather to test assumptions, maybe initial implementations, to make things faster, but in the end I'm always reimplementing it anyway.
Also, to be completely honest, AI does better code reviews than most of my coworkers.
Not forced, at least not yet. The executive wing won't stop talking about it though. I imagine I'm gonna start getting the stink eye at some point since I don't use it.
Force? No. But an awful lot of trainings, and clients always ask about AI strategy.
We recently got Claude Code and there is a very strong push to use it.
I recently did for the first time. I spent 15 minutes writing a long prompt to implement a ticket: a repeated pattern of code, 5 classes + config per topic, that deeply interact with each other, and it did the job perfectly.
It convinced me that the current code monkey jobs, which are >90%, >95%? of software engineering jobs, will disappear within 10 years.
We'll only need senior/staff/architect-level code reviewers and prompt engineers.
When the last generation that manually wrote code dies out, all people will do is prompting.
Just like assembler became a niche, just like C became a niche, high level languages will become a niche.
If you still don't believe it, you haven't tried the advanced tools that can modify a whole project, are too incompetent to prompt properly, or indeed work in one of the rare, arcane, frontier state-of-the-art niches where AI can't help.
> We'll only need senior/staff/architect-level code reviewers
the problem with that is that if there are no juniors left...
I think I have a pretty different view, though maybe it hinges on the bit about 9 in 10 software people being code monkeys, or what that means. To the extent I agree that LLMs are going to eliminate coding jobs (permanently), they're going to be the ones you could basically do with StackOverflow and Google (when those things worked).
I think there's a cohort thing going on here, because Google has been spam rekt for long enough that entire classes of students have graduated, entered the workforce, and been promoted all since search and SO went to hell, so the gizmo that has the working code and you just need to describe it seems totally novel.
But we've been through this before: one day there's this box and you type a question and bam, reasonable code. Joining a FAANG is kind of like that too: you do the mega grep for parse_sockaddr_trie and there's this beautifully documented thing with, like, here's the paper where it shows it's O(log n).
But you call the thing and it seems to work, so you send the diff, and the senior person is like: that doesn't do IPv6, and that's rolling out to us next quarter, you should add IPv6. And the thing was exploiting the octets, so it's hard.
The thing is, a sigmoid looks exactly like an exponential when you're standing on it. But it never is. Even a nuclear bomb is exponential very briefly (and ChatGPT is not a nuclear bomb, not if it was 100x more capable).
Think about defense, or trading, or anything competitive like that: now you need the LLM because the other guy has it too. But he's not watching YouTube all day, he's still chain-smoking and taking adderall except he has an LLM now too.
So yeah, in the world where any of 9 TypeScript frameworks would all make roughly the same dark mode website and any of them get the acquisition done because really the founder knows a guy? That job is really risky right now.
But low effort shit is always risky unless you're the founder who knows a guy.
With all its consequences for software correctness and security. Man, I wish I was born in the Neolithic.
Humans still deploy SQL injectable code to production and offer unsecured heapdump endpoints…
> We'll only need senior/staff/architect-level code reviewers and prompt engineers.
And what will you do when all the seniors retire and there's no juniors to take their place because they were replaced by AI?
In college, newcomers will start with the basics of high level languages and then spend the rest of the time learning prompting.
Just like nowadays assembler is only a side note, C is only taught in specialized classes (OS, graphics) and most things are taught in high level languages.
How will they be able to review AI generated code if they don't understand anything beyond the basics?
> How will they be able to review AI generated code
The same way most of us review our compiler-generated code today (i.e., not at all). If it works, it works; if it doesn't, we fix the higher-level input and try again. I won't be surprised if in a few more generations the AI skips the human-readable code step and generates ASTs directly.
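To make the "generate ASTs directly" idea concrete, a tiny Python illustration that builds an AST node by node and compiles it, with no human-readable source text in between; the example program is arbitrary.

```python
# Build the AST for:  print(2 + 3)  directly, then compile and run it.
import ast

call = ast.Call(
    func=ast.Name(id="print", ctx=ast.Load()),
    args=[ast.BinOp(left=ast.Constant(2), op=ast.Add(), right=ast.Constant(3))],
    keywords=[],
)
module = ast.Module(body=[ast.Expr(value=call)], type_ignores=[])
ast.fix_missing_locations(module)  # fill in the line/column info the compiler expects

exec(compile(module, filename="<generated>", mode="exec"))  # prints 5
```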
> if it doesn't, we fix the higher-level input and try again
How can I visit this fantasy world of yours where LLMs are as reliable and deterministic as compilers and any mistakes can be blamed solely on the user?
> How can I visit this fantasy world of yours...
Wait 20 years.
10 years, 20 years...
It's really easy to make unsubstantiated claims about what will happen decades from now, knowing your claims will be long forgotten when that time finally comes around.
Crawling up the abstraction ladder and 'forgetting' everything below has been the driving trend in programming since at least the 60s and probably before.
We for example have a whole generation of programmers who have no idea what the difference between a stack and a heap is and know nothing about how memory is allocated. They just assume that creating arbitrarily complex objects and data structures always works and memory never runs out. And they have successful careers, earning good money, delivering useful software. I see no reason why this won't continue.
Somebody is aggressively downvoting a lot here. To that person: could we please use arguments instead of that single button?
Edit: I take that as a "no" :)
When the cognitive dissonance hits you in the face like a truck, you hit that button!
I forgive them, they are scared because their future looks as bleak as mine and that naturally causes strong emotions.
Similar to how the parents of today tell their children bedtime stories about the luddites who thought there would still be humans driving cars by 2020.
The following is a year-2065 bedtime story, featuring a childhood lesson in being adaptable: "Near the end of the 2020s, those who rejected AI out of misunderstanding were left behind; those who embraced it grew wealthy and powerful. Meanwhile, the anti-adopters lived miserably, consumed by resentment, blaming everyone and everything for their plight except themselves and their own failure to adapt."
Your post is on a new account and the style is indirect and presumptuous. It says little more than 'I think ML tools are great'; it just does it in a way designed to annoy people. That would be my guess as to why it was downvoted.
https://news.ycombinator.com/newsguidelines.html
>Please don't comment about the voting on comments. It never does any good, and it makes boring reading.