
Since LLMs are trained on human output, we should trust them (at best) about as much as we trust the average human coder. And in practice we should probably trust them less.

