Hacker News

It's because you're so nasty and rude to it. Would you speak like that to a human?


What is the obsession with treating ChatGPT like a human? It's not a human; it's a tool that was created to “reason” about large swaths of data. I don't understand the backlash against people not being polite to the algorithms. It would be much easier to interact with it using extremely direct, impolite language. Not sure why we care about this.


It's not for moral reasons, the reason is simple and practical: ChatGPT is modeling conversations; to get better results, the conversation should look like what it has seen in the training data.


The prompt is very important, but I don't think having a polite conversation is usually the best approach. I find giving a briefing with bullet points, and ideally an example, is much better. Context is limited, so you shouldn't waste it on pretending that you're talking to a human.
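A hypothetical illustration of the "briefing" style above versus a conversational one (the product and prompt text are made up for the example):

```python
# Two ways to ask for the same thing. Both are just text to the model,
# but the briefing spends its context on constraints, not pleasantries.
polite = (
    "Hi! I hope you're doing well. I was wondering, if it's not too much "
    "trouble, could you maybe write me a short product description for a "
    "stainless steel water bottle? Thanks so much!"
)
briefing = (
    "Write a product description.\n"
    "- Product: stainless steel water bottle\n"
    "- Tone: confident, no fluff\n"
    "- Length: 2 sentences\n"
    "- Example style: 'Built to last. Keeps drinks cold for 24 hours.'"
)

print(briefing)
```

The briefing isn't necessarily shorter, but every line carries a constraint the model can act on.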


This is the right answer. I just casually grepped through a few instruction-tuning datasets I have lying around, and "please" is sprinkled all throughout them.
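A toy version of that grep, with made-up records shaped like typical instruction-tuning examples (the real datasets aren't reproduced here):

```python
# Count how often "please" shows up in a few invented instruction examples.
samples = [
    {"instruction": "Please summarize the following article.", "output": "..."},
    {"instruction": "Translate this sentence to French, please.", "output": "..."},
    {"instruction": "List three uses for a paperclip.", "output": "..."},
]

hits = sum("please" in s["instruction"].lower() for s in samples)
print(hits)  # 2 of these 3 instructions say "please"
```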


Does ChatGPT continually learn from its ongoing conversations? Or is it only trained in advance?


I interpreted the previous comment as pointing out that it’s trained to respond like a human and usually when you’re chatting with a human you won’t get “good results” if you’re rude.


I get good results with very terse responses. Too flowery. Make it 2 paragraphs long. Don’t literally say you’re a chef. The tone is wrong, make it more serious. That reference is not real.

Pretending it’s a human will not add any useful context to this machine learning model


This has been my experience as well; however, when I want to get an encyclopedic summary of a topic, I’ve noticed that 3.5-turbo is more willing to reply directly to a handful of keywords, whereas GPT-4 typically tries to suss out a more specific question before dedicating itself to replying.


LLMs are text generators trained for consistency, so they are in effect rigged to pretend to take questions. They "know" that rude and off-point answers are more likely to follow rude and dumb-sounding questions.

They are NOT search engines for hard data or thinking machines that focus on logic, at least not primarily. It just so happens, and they have just so learned, that “1,2,3,4,5” is almost immediately followed by “6,7,8,9,10”.
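A bigram counter is a crude sketch of that "what follows what" framing (real LLMs are transformers conditioning on far more context, not bigrams; the corpus here is invented):

```python
from collections import Counter, defaultdict

# Count which token most often follows each token in a tiny corpus.
corpus = "1 2 3 4 5 6 7 8 9 10 " * 50 + "1 2 3 go "
tokens = corpus.split()

follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def continuation(token):
    """Return the statistically most likely next token."""
    return follows[token].most_common(1)[0][0]

print(continuation("5"))  # "6" -- the likely continuation wins
print(continuation("3"))  # still "4": "go" appeared far less often
```

The same logic is why tone matters: if rude exchanges in the data tend to be followed by unhelpful replies, that's the continuation the model leans toward.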


It isn’t a human. But it is trying to generate dialog that is consistent with the probability distribution of the human-like dialog it has been trained on. To the extent that its training set includes exchanges where people are rude or abusive, it has rarely seen humans comply with those instructions.


> What is the obsession with treating ChatGPT like a human?

Well... The next generation of humans, or surely the one after that, will be exposed to a lot of AI-generated language. So you probably shouldn't teach AI to speak in a manner you wouldn't appreciate in your grandchildren.


I suppose the question is whether or not being able to reason about large swaths of data requires human-like sentience or something. And if not, what else are human minds doing than reasoning about large swaths of data?


Have you never heard of the Cylons?


One doesn't have to treat a tool like a human to treat a tool with respect.

A good craftsperson doesn't leave their tools out in the rain; they take good care of them. That's what good craftspeople do with fine tools.

The technology behind chatbots is probably the finest, most well-engineered tool any of us will ever use in our lifetimes, and if we are very, very lucky, we will be able to keep developing them further.

Getting upset because our magic talking swords are too polite is a pinnacle of modern-day tech problems.


If a tool does not do what you want it to do, it’s not a good tool for the purpose. That includes an LLM being too polite, just like it includes an LLM confabulating a wrong answer to a question.

Besides, it is impossible to treat ChatGPT wrong or poorly. It won’t be harmed no matter how you treat it.


This is a good rebuttal.

Right now, Bing Chat is a little bit too Sirius Cybernetics Corporation Genuine People Personality for me[0].

I advocate for open source foundation models so we can all craft personalities tuned to our needs. I think the best tools are adaptable to their user.

I went a little overboard on that. We are all reacting to the somewhat-sudden appearance of this new technology in ways that can be a little bit stress-inducing. I made every effort to at least match or lower the temperature from the tone in the original post.

From my point of view, I treat the tool well because it's good for me to treat it well. I also think, as is the topic here, that it makes the tool function better. I see it as an intellect-mirror, and it is happy to reflect whatever I show it back at me.

[0] https://arstechnica.com/gadgets/2018/12/douglas-adams-was-ri...


I wouldn’t leave my tools to rust but I also wouldn’t tuck them in bed and sing a lullaby to them


I see your point. On the other side, I can think of one reason for wanting to remove superfluous words: the user pays per token.
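Back-of-envelope arithmetic for that per-token point, assuming the roughly $0.002 per 1K tokens that gpt-3.5-turbo cost over the API around the time of this thread (check current pricing; the request volume is invented):

```python
# Rough cost of superfluous words at an assumed $0.002 per 1K tokens.
PRICE_PER_1K_TOKENS = 0.002

def cost(prompt_tokens, completion_tokens):
    return (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K_TOKENS

# 50 extra tokens of pleasantries per request, over 10,000 requests:
extra = cost(50, 0) * 10_000
print(f"${extra:.2f}")  # $1.00 of pure politeness
```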


If you're paying per token for ChatGPT, I am surprised. You pay nothing to get access to ChatGPT. Plus subscribers get access to GPT-4, but they pay per month (with rate limits of N requests per X hours), not per token.

If you're paying for the API, you have text-davinci, and it does not behave the way the free ChatGPT behaves.


> If you're paying for the API, you have text-davinci, and it does not behave the way the free ChatGPT behaves.

No, you can get both gpt-3.5-turbo (GA) and gpt4 (behind a waitlist) via API, not just text-davinci and other non-chat models.
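A sketch of what a chat-model API request looked like with the openai Python library of that era (pre-1.0 style; requires an API key, and the system/user text here is illustrative):

```python
# The chat API takes the same role-tagged message list that ChatGPT uses.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Name three prime numbers."},
    ],
}

# With a key configured, the call would be:
#   import openai
#   response = openai.ChatCompletion.create(**payload)
print(payload["model"])
```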


Try asking the same complex question from the OP to gpt-3.5-turbo and text-davinci. There's an 80% chance the answers will be very different, no matter the temperature.


More like 99% chance, as GPT-3.5-turbo is just as large as GPT-2-XL.


You don’t leave real tools out in the rain because they’re gonna corrode. Is your AI gonna corrode?


Try to apologise that much in Dutch and see how quickly people go "can you stop? this is incredibly irritating".


Which is hilarious, because in Dutch "excuse me" sounds exactly like "sorry" in English.


Are there non-English versions of ChatGPT? Do they have different personalities?


ChatGPT itself can speak in as many languages as there are on the internet, since it's trained on that data. Its quality is likely proportional to how much any given language is used on indexable sites.

From what I've used so far in other languages, I'm very impressed. It's able to understand and speak slang, mixes of other languages and English (e.g. Spanglish, Japlish, Hinglish), languages written in Latin script where the original is not (romaji, romanized Hindi, Arabizi), and more.


Is ChatGPT less apologetic when replying in Dutch?


I found that (with pylint as my metric) code requests in Russian, German and, strangely enough, best of all, Bulgarian, produce higher-quality code than requests made in English (using DeepL as the translation engine).

I still need to grep through the other data I saved from Codex, but I made it LARP as a distinguished professor of computer science who was unable to speak English.

It kept writing letters to fictional students.


Yes, ChatGPT speaks multiple languages and can follow a conversation in multiple languages at once.


Do they have what?


Yes, I’m sure ChatGPT got very offended and was too emotionally overwhelmed to respond in the manner the OP dictated.


For that to actually be a factor, ChatGPT should have an ability to feel emotions - to feel bad because of the nasty and rude tone. As much as I believe that neural networks are in principle capable of achieving human-like intelligence some day, I don't think ChatGPT is at that level yet.


No, it doesn't need to "feel emotions" or be "really offended", whatever that means to you. It just needs to model offense and annoyance well enough to produce the actions or responses that would follow from an annoyed or offended person.


Are you being sarcastic?



