Hacker News

> based on a premise that human intelligence cannot be reduced to next token prediction

It can't. No one with any credentials in the study of human intelligence says that, unless they're talking to high schoolers as a way of simplifying a complex field.



This is either bullshit or tautologically true, depending on what, specifically, you mean. The study of human intelligence doesn't take place at the level of tokens, so of course researchers wouldn't say that. But the whole field is arguably reducible to physical phenomena, and fundamental physical beables are devoid of intrinsic semantic content, so they can ultimately be represented by tokens. What matters in the end is the constructed high-dimensional network relating those tokens, together with the algorithm that can traverse, encode, and decode that network; that's what encodes knowledge.
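To make the "network relating tokens plus a traversal algorithm" idea concrete, here's a deliberately minimal sketch: a toy bigram model where the "network" is just successor counts and "traversal" is picking the most frequent successor. The corpus and all names are illustrative; real LLMs learn a high-dimensional embedding space rather than raw counts.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model's "network" is a learned high-dimensional
# representation, not raw bigram counts.
corpus = "the cat sat on the mat the cat ate".split()

# Build the "network": map each token to a count of its observed successors.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(token):
    """Traverse the network: return the most frequent successor of `token`."""
    counts = successors.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The point of the sketch is only that "next token prediction" names an interface, not a mechanism: everything interesting lives in how the relational structure is built and traversed.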


No. You're wrong about this. You cannot simply reduce human intelligence to this definition and also be correct.


Why?

Frankly, based on a lot of introspection and messing around with altered states of consciousness, it feels pretty on point and lines up with how I see my brain working.


Because...?


For the same reason you can't reduce a human to simply a bag of atoms and expect to understand the person.


But humans are a specific type of bag of atoms, and humans do (mostly) understand what they say and do, so that's not a legitimate argument against reducing "understanding" to such a bag of atoms (or, for LLMs, to a specific kind of next-token prediction).



