Hacker News | tines's comments

> Person 1: Those who can, do. Those who can't, teach.

> Person 2: ...Are you trying to teach me something?


You’re just describing authoritarian vs non-authoritarian mindsets.

> And don't forget that the chips it runs on are manufactured by companies I might not agree with. Nor the mining companies that got the metal. Nor the energy company that powers it.

You see that this is a non sequitur, right? No matter who makes the chips or mines the metal or supplies the power, the behavior of the thing won't be affected. That isn't the case when we're talking about who's training the LLM that's running your shit.


It's a good thing that there are so many LLM choices out there, then.

Maybe the fundamental disagreement is whether LLMs will be a commodity product or not.

I think they will be since there hasn't been an indicator that secret sauce lasts more than a few months. The open weight models are, at most, a year behind.

We're in a different environment now. The rules of the last tech era, e.g. network effects, cannot be directly applied.


What do you think a GPU is? A chip manufacturer absolutely has the ability to add their own bias in firmware and drivers.

Care to explain how chip makers can influence the inference outcome of LLMs?

Underrated comment.

Exactly. This happens in every aspect of life. Something convenient comes along and people will accommodate it despite it being worse, because people don’t care.

Not to mention the impact these tools and technologies have on children. Future generations will be made into intellectual invalids before they have a chance to think.


For me, the whole goal is to achieve Understanding: understanding a complex system, which is the computer and how it works. The beauty of this Understanding is what drives me.

When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.

When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time I really need to understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me: the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is: Ask, and ye shall receive.

The fact that AI use leads to a reduction in Understanding is not only obvious; studies have shown the same. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you would mean that you didn't learn anything. But somehow when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from say Assembly to C automates tedium to allow us to reason at a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you do now while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are among the 0.01% of people who can create something truly novel that is not already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.

They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.


Thank you. I don't understand how people don't see that this is the universe's most perfect gift to corporations, and what a disaster it is for labor. There won't be a middle class. Future generations will be intellectual invalids. Baffling to see people celebrating.


it is a very, very strange thing to witness

even if you can be a prompt engineer (or whatever it's called this week) today

well, with the feedback you're providing: you're training it to do that too

you are LITERALLY training the newly hired outsourced personnel to do your job

but this time you won't be able to get a job anywhere else, because your fellow class traitors are doing exactly the same thing at every other company in the world


They are the useful idiots buying into the hype, thinking that by some magic they get to keep their jobs and their incomes.

This thing is going to erase careers and render skill sets and knowledge cultivated over decades worthless.

Anyone can prompt the same fucking shit now and call it a day.


Exactly. No matter how well you simulate water, nothing will ever get wet.


You're replying to me, but I don't agree with your take - if you simulate the universe precisely enough, presumably it must be indistinguishable from our experienced reality (otherwise what... magic?).

My objection was:

1. I don't personally think anything similar is happening right now with LLMs.

2. I object to the OP's implication that it is obvious such a phenomenon is occurring.


And if you were in a simulation now?

Your response is at the level of a thought-terminating cliché. You gain no insight into the operation of the machine with that line of thought. You can't make predictions about future behavior. You can't make sense of past responses.

It's even funnier in the case of humans and feeling wetness... you don't. You only feel temperature change.


Looks like some psychology researchers got taken by the ruse as well.


yeah, I'm confused as well. Why would the models hold any memory of red-teaming attempts, etc.? Or of how the training was conducted?

I'm really curious as to what the point of this paper is...


Gemini is very paranoid in its reasoning chain, that much I can say for sure. That's a direct consequence of how it was trained. However, the reasoning chain is not entirely in human language.

None of the studies of this kind are valid unless backed by mechinterp, and even then interpreting transformer hidden states as human emotions is pretty dubious as there's no objective reference point. Labeling this state as that emotion doesn't mean the shoggoth really feels that way. It's just too alien and incompatible with our state, even with a huge smiley face on top.


I'm genuinely ignorant of how those red-teaming attempts are incorporated into training, but I'd guess that this kind of dialogue is fed in as something like normal training data? Which is interesting to think about: it might not even be red-team dialogue from the model under training, but it would still be useful as an example or counter-example of what abusive attempts look like and how to handle them.


Are we sure there isn't some company out there crazy enough to feed all its incoming prompts back into model training later?

