This would be a more compelling argument if the conversations weren't so dull and derivative, with most of the articles written in LLMspeak. I see a lot of discussion and not a lot of substance; articles and discussions about AI are far less likely to be compelling than any other technical subject posted on HN.
The signal-to-noise ratio seems worse than in many other hype cycles, but that's how hypes generally go.
It's really hard to separate the wheat from the chaff at this point, but I've been pleasantly surprised by the relatively few articles that share more advanced workflows, lessons learned that help me avoid the traps, and emerging patterns that taught me something new (or at least validated approaches I'd tried on my own that worked). It gets tiresome to keep pace, so I try not to fall for FOMO, and I avoid experimenting too much, so I don't get lost, until I see a pattern emerging from different sources.