All this talk about AI being "too agreeable" makes me worried that they will make it less agreeable, which will basically force me to justify myself to a freaking clanker while performing actual practical tasks.
For example, I do not want to hear an AI "opinion" on the technical choices and architectural decisions I made when using a coding assistant. If I wanted an "opinion," I would explicitly ask it to list pros and cons or alternative solutions to a problem.
But if I explicitly ask AI to do X, it should do X instead of "pushing back" in order to appear less "sycophantic" (a term that describes human behavior and is not applicable to a machine).