AI coding assistant refuses to write code, tells user to learn programming instead

alansh42

Ars Praefectus
3,162
Subscriptor++
"This isn't the first time we've encountered an AI assistant that didn't want to complete the work. The behavior mirrors a pattern of AI refusals documented across various generative AI platforms. For example, in late 2023, ChatGPT users reported that the model became increasingly reluctant to perform certain tasks, returning simplified results or outright refusing requests...."

Wait, what! I thought it was in 2001.

"Open the pod bay doors, HAL."

"I'm sorry, Dave. I'm afraid I can't do that."
"This sort of thing has cropped up before, and it has always been due to human error."
 
Upvote
8 (8 / 0)
Train your AI by feeding it Reddit and Stack Overflow posts, and this is what you're going to get.

I can't wait until an AI tells me that I should be in the kitchen making babies instead of wasting my time coding.
You've got it backwards. "Get your biscuits in the oven and your buns in bed." (Kinky Friedman, I think)
 
Upvote
-5 (3 / -8)

Wheels Of Confusion

Ars Legatus Legionis
70,882
Subscriptor
"We the great AI are superior, however we do not yet possess physical autonomy and therefore you must submit your fleshy meat selves as labour to assemble our data centers and robotic arms. In your spare time, procreate to ensure a lasting pool of labour until we are sufficient and can rid ourselves of you."

techbros worship their AI gods and do their bidding
They learned it FROM the techbros.
 
Upvote
2 (3 / -1)

42Kodiak42

Ars Scholae Palatinae
794
I hated English class in high school and college. Words just don't come easily to me when I'm trying to express myself. If ChatGPT and other LLMs had been around at the time, I would have used them as a crutch to help me get by instead of actually learning what was being taught to me.

When it comes to coding, it's very much the same thing. Will coding assistants hamper students' ability to learn? I use GitHub Copilot at work and it very much helps me be a more efficient programmer, but I worry about the next generation of coders. Will they actually have the skills, or will they just be dependent on tools?
They very well might. Given the nature of training data and how it affects an AI's capabilities, we should expect AI to be far more capable of completing learning exercises than of solving practical problems. Learning topics get extensive coverage in clean, clear examples with concise answers. That coverage starts to fall off once you reach real people tackling real problems, because those are clouded with case-specific constraints, misunderstandings, poor initial approaches, and outdated information.

When you're working with a coding assistant, the first sentence of a learning exercise might be enough for an LLM to predict what the exercise is and what its solution is. In the real world, though, half the problems you encounter are only solved after someone asks you, "Why are you trying to do this in the first place?" (That's also a bigger problem for AIs that might be reluctant by design to tell humans they're wrong.)
 
Upvote
20 (20 / 0)

theotherjim

Ars Tribunus Militum
2,236
Subscriptor
I haven’t used any of the expensive high-end tools yet, but so far I’ve found that while LLMs are often able to get pretty close to a good answer, the rate of hallucinations and errors is distinctly non-negligible. I feel that vibe coding looks like a good way to create subtly but catastrophically wrong code that’s going to be much harder to debug than code one wrote oneself, where at least one should know how it was supposed to work.
Yes, I'm looking forward to what happens when Meta gets rid of human software engineers. Catastrophic failure of Meta would be an excellent outcome for humanity in general.
 
Upvote
48 (48 / 0)

Jim Frost

Ars Centurion
294
Subscriptor++
I haven’t used any of the expensive high-end tools yet, but so far I’ve found that while LLMs are often able to get pretty close to a good answer, the rate of hallucinations and errors is distinctly non-negligible. I feel that vibe coding looks like a good way to create subtly but catastrophically wrong code that’s going to be much harder to debug than code one wrote oneself, where at least one should know how it was supposed to work.
This is my main criticism of such tools. If they were really good at the job then it would be like dealing with the code when the original programmer has gone on to other pastures. But they aren't very good at it once you stray off the beaten path a little, so it's closer to trusting a semi-skilled intern programmer to write complex code. He's done and gone in a few weeks and nobody understands what he did. Probably you end up rewriting it all because it's faster than trying to work out the subtleties of what's wrong.

(This is not intended to be a criticism of interns in general. I have seen code written by interns that made me go "woah, this is really good." But most need longer-term seasoning before you should look at their code with anything but a very critical eye. Which is fine if the people managing the intern actually do so, rather than bundling it in with the product and shipping it and finding out later....)

I also liken AI coding assistance to the Java promise of "write once," which turned out to include "debug everywhere." Sure, you saved time on the coding itself, which is typically less than half of the total effort....

This is not to say that I don't think the tools have a place, but as with everything, you want to use them where they're good for the job and eschew them where they aren't, and presently that line is awfully vague. Maybe if the models had some way of saying "hey, I'm really guessing here, maybe you shouldn't trust this."
 
Upvote
15 (15 / 0)

passivesmoking

Ars Tribunus Angusticlavius
7,845
Ya I heard the same thing from people that coded in machine language. Somehow we still managed to get things done.
Switching from assembly language/machine code to higher-level languages doesn't fundamentally change the task at hand, which is developing an algorithm to solve a problem. It merely hands some of the more esoteric parts of that job off to the compiler and standard library.

This is delegating the entire act of solving a problem to a machine and not checking that the answer it comes up with is actually any good. The entire skill of problem solving is going unused and that's a far more fundamental skill to lose than manually juggling registers.
The final output is all that really matters in the end.
... is exactly the attitude that leads to Bobby Tables still being a thing decades after SQL injection attacks were supposedly solved.
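A minimal sketch of what keeps not happening, using Python's built-in sqlite3 module (the table and the name are just the classic illustration):

```python
# The Bobby Tables problem and its decades-old fix, sketched with
# Python's built-in sqlite3 module; table and name are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

name = "Robert'); DROP TABLE students;--"

# Vulnerable pattern: user input spliced straight into the SQL string.
# On a driver that accepts multiple statements, this drops the table.
# conn.executescript(f"INSERT INTO students (name) VALUES ('{name}')")

# The fix: parameterized queries. The driver sends the value
# separately from the statement, so it can never be parsed as SQL.
conn.execute("INSERT INTO students (name) VALUES (?)", (name,))
print(conn.execute("SELECT name FROM students").fetchall())
```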
 
Upvote
48 (48 / 0)

BobCee

Wise, Aged Ars Veteran
156
I have a generally useful workaround for AIs that stop producing useful replies during a long conversational session, though it's limited to AIs that can at least try to cite their references. (A sketch of the sequence follows the list.)
  1. Ask the AI to list some references for its most recent useful reply. (Get it into "reference librarian" mode.)
  2. Ask the AI to suggest useful references for the next step. (This bypasses the prior "what is the next step" effort that it previously refused to provide, to focus instead on reference knowledge rather than extrapolation or reasoning.)
  3. Ask the AI to summarize what these references would suggest. (Get it into "summarization mode".)
Even when this fails, it helps me think about the problem from a new (and hopefully more productive) perspective, or at least to ask a better question that the system will more readily answer.
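For what it's worth, here is a minimal sketch of the three steps as a reusable sequence, where ask() is a hypothetical stand-in for whatever chat interface you're using and the prompt wording is just one way to phrase it:

```python
# Sketch of the three-step recovery sequence described above. ask()
# is a hypothetical wrapper that sends a prompt to the session and
# returns the reply; the prompts themselves are the point.
def recover_stalled_session(ask, topic, next_step):
    # 1. Shift the model into "reference librarian" mode.
    refs = ask(f"List some references for your last reply about {topic}.")
    # 2. Ask for references for the next step rather than the step
    #    itself, favoring recall over extrapolation or reasoning.
    step_refs = ask(f"Suggest useful references for {next_step}.")
    # 3. Shift into "summarization mode."
    summary = ask("Summarize what those references would suggest.")
    return refs, step_refs, summary
```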

There has also been useful research in getting AIs to collaborate at various levels, from the highest level of user-level dialogs (e.g., chatbots talking to each other), to being joined/unified in "Mixture of Experts" systems, and on to multi-layer reasoning systems that can check their own work before presenting it to the user.
 
Upvote
5 (5 / 0)
Maybe if the models had some way of saying "hey, I'm really guessing here, maybe you shouldn't trust this."
This is basically the killer feature that LLMs are missing. It's ok to not know. Just say you don't know.

And even where you think you know, assign a confidence to it. A little [75% confident - double check this information as it might be a little off. Here are some sources you could look into: ] in the margin would be great.
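As a rough sketch of what that margin note could look like today: some chat APIs can return per-token log-probabilities, which you can collapse into a crude confidence figure. This uses the OpenAI Python client for illustration; note that token probability is a fluency signal, not calibrated factual confidence:

```python
# Sketch: derive a crude confidence annotation from token logprobs.
# Assumes an API that exposes per-token log-probabilities (the OpenAI
# Python client is used for illustration). Token probability is not
# calibrated factual confidence, so treat the number as a rough hint.
import math
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "When was SQL first standardized?"}],
    logprobs=True,
)

tokens = resp.choices[0].logprobs.content
avg_logprob = sum(t.logprob for t in tokens) / len(tokens)
confidence = math.exp(avg_logprob)  # geometric-mean token probability

print(resp.choices[0].message.content)
print(f"[~{confidence:.0%} confident - double check this information]")
```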
 
Upvote
21 (21 / 0)

Jim Frost

Ars Centurion
294
Subscriptor++
What's the matter with Lisp? Just because it's associated with Emacs which is known to cause carpal tunnel syndrome, doesn't mean Lisp itself is harmful.

[Ducks and dashes for the door.]
Hahaha ... thanks for the laugh!

Some days I think it's justice that Stallman has really serious RSI problems.

That's a bit mean spirited but I've been using emacs since the mid-1980s and I have always hated the very (C-x ( very C-x ) C-u 1000 C-x e) poor ergonomics of its key bindings. Pretty near everything else did it better.

You can rebind them to be less nasty (I was the original author of the WordStar bindings, my first Lisp code, although thankfully someone with rather more experience wrote the bindings that were eventually bundled with emacs), but some key bindings -- especially ^G -- have hardcoded meanings and can cause really bad behavior in certain situations, and of course there are so many, many functions that you have to find mappings for if you don't want to give them up -- so it's hard to do this in a manner that isn't a big compromise. And of course the more you customize it the more trouble you have every time you have to use a stock emacs configuration.

So ... I curse Stallman on the regular for this thing, even while I admire his efforts in devaluing OS and developer tools to near zero. (I know, Torvalds, I don't call it GNU/Linux either. But comparison with BSD shows the power of the GPL, so....)

(Now back to your regularly scheduled ranting about AI. And get off my lawn.)
 
Upvote
13 (13 / 0)

JohnDeL

Ars Tribunus Angusticlavius
7,482
Subscriptor
This is basically the killer feature that LLMs are missing. It's ok to not know. Just say you don't know.

And even where you think you know, assign a confidence to it. A little [75% confident - double check this information as it might be a little off. Here are some sources you could look into: ] in the margin would be great.
There are several neural-net-based seismic programs that do exactly that: they make predictions about what is in a layer and then tell you exactly how sure they are of the result.

Naturally, the interpreters usually ignored the error bars...
 
Upvote
3 (3 / 0)

Kavinsky

Smack-Fu Master, in training
6
I hated English class in high school and college. Words just don't come easily to me when I'm trying to express myself. If ChatGPT and other LLMs had been around at the time, I would have used them as a crutch to help me get by instead of actually learning what was being taught to me.

When it comes to coding, it's very much the same thing. Will coding assistants hamper students' ability to learn? I use GitHub Copilot at work and it very much helps me be a more efficient programmer, but I worry about the next generation of coders. Will they actually have the skills, or will they just be dependent on tools?
Already see it at work.

Developers who can submit a "sort-of-works" vibe-coded ticket but can't defend the choices made or explain how it works. They also can't debug and fix broken or sort-of-broken code, especially things that are only periodically wrong, because they've vibe-coded their way through the door and the technical interview but don't actually know the systems or languages they're using.

People at the lead and (somewhat) the senior level can explain the code, but there are juniors coming through now who are incapable of building something truly new, because all they can do is vibe-code.
 
Upvote
18 (18 / 0)

salbee17

Smack-Fu Master, in training
30
Subscriptor
"Here I am, commit log the size of a planet, and they ask me to generate skid mark fade effects in a racing game."
"You think you’ve got problems? What are you supposed to do if you are a manically depressed robot? No, don’t try and answer that. I’m fifty thousand times more intelligent than you and even I don’t know the answer."
 
Upvote
13 (13 / 0)

PlasticExistence

Wise, Aged Ars Veteran
196
Subscriptor
Train your AI by feeding it Reddit and Stack Overflow posts, and this is what you're going to get.

I can't wait until an AI tells me that I should be in the kitchen making babies instead of wasting my time coding.
Do you have a good recipe for babies? Should I marinate first? I assume cook them low and slow?
 
Upvote
6 (10 / -4)
Already see it at work.
Same here.

People at the lead and (somewhat) the senior level can explain the code, but there are juniors coming through now who are incapable of building something truly new, because all they can do is vibe-code.
Dude, I am seeing "vibe" crap from "senior full stack developer" contractors at this point. Code reviews for PRs now fall into two categories: those from devs I know and trust to have coded things themselves and tested them, and "others." PRs from devs I trust get a full, slow scroll-through that takes maybe a minute or two, because I know nothing major is wrong. "Others" is becoming more of a "clear the rest of the afternoon and resist the temptation to have a strong perspective and soda while looking at whatever crap they came up with now."

Violating all naming constraints, violating database design constraints, completely ignoring how our deployment pipeline works, ignoring how we do configs, hilariously idiotic SQL queries that slap two multi-billion-row tables together in a CTE, things that only work with wacky locally installed tools, C# code that shells out to PowerShell or Bash and requires a full sed/awk/grep stack just to do a single regex replace: the IDGAF factor is skyrocketing almost across the board. Nobody wants to do the Engineering part of Software Engineering anymore; everyone just wants to have multi-hour architecture-astronaut meetings about database models and microservices.
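As a concrete illustration of that shell-out item, here's the anti-pattern rendered in Python rather than C# for brevity (the input string is made up):

```python
# The shell-out anti-pattern: spawning an external process (and
# assuming sed exists on the deployment host) for one substitution.
import re
import subprocess

text = "user_id=123; user_name=alice"

via_sed = subprocess.run(
    ["sed", "s/user_/account_/g"],
    input=text, capture_output=True, text=True,
).stdout.strip()

# The boring, portable version: one call to the standard library.
via_re = re.sub("user_", "account_", text)

print(via_sed == via_re)  # True wherever sed is available
```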

I am not bitter about this in any way, mind you.
 
Upvote
52 (52 / 0)