Hacker News

As a matter of fact, there are even more developers making a hard left into AI who have never touched crypto.

The interesting follow-up question is: what will they actually spend time on? Training new models? Copy-pasting front ends onto ChatGPT? Fine-tuning models?

I think many of them will be scared off by how much of a hard science ML is versus just spinning up old CRUD apps.



> Training new models? Copy-pasting front ends onto ChatGPT? Fine-tuning models?

The Stable Diffusion community is probably two years more mature than the GPT one; there we see GUI tools of a sort (in Colab notebooks) that abstract away the code, and then lots of fine-tuning.

On the professional side, Adobe has plugged these tools into its products: https://www.adobe.com/sensei/generative-ai/firefly.html


It's a lot easier to run Stable Diffusion locally. Meanwhile, only the dumbest LLMs work on ordinary consumer GPUs, and datacenter GPUs with 80 GB of VRAM are ridiculously expensive.
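The gap shows up in back-of-the-envelope arithmetic: the weights alone need roughly parameter-count × bytes-per-parameter of memory. A quick sketch (parameter counts and precisions below are illustrative assumptions, and this ignores activation and KV-cache overhead):

```python
def vram_gb(n_params: float, bytes_per_param: float) -> float:
    """Rough lower bound on VRAM needed just to hold the model weights."""
    return n_params * bytes_per_param / 1e9

# A Stable Diffusion v1-class UNet (~0.9B params) at fp16 fits any consumer GPU.
print(round(vram_gb(0.9e9, 2), 1))  # 1.8

# A 70B-parameter LLM at fp16 blows far past a 24 GB consumer card.
print(round(vram_gb(70e9, 2), 1))   # 140.0
```

Even aggressive 8-bit quantization only halves that second number, which is why the big LLMs stay on datacenter hardware while image models run on gaming GPUs.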


Have you used an LLM as an application developer? The hard tech for generative AI will be commoditized. The products built on top will not.


As an enthusiast delving in for the pure pleasure of it, I see a future where, as a programmer, I will have an array of options for using LLMs in hybrid systems. I'm not looking for an AGI, but rather a mixture of experts that I can remix as I see fit, each one tuned for a subset of 'intelligence' that can be wielded with relative precision as part of a larger system combining traditional programming with the new abilities offered by LLMs and their kin. It's certainly an interesting time to dip back in; there are aspects of the domain that mirror my experiences with the early web, and for those I am grateful.
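A minimal sketch of that "remixable experts" idea, where traditional code does the routing and each model only handles its narrow subset. Everything here is hypothetical: the expert names are made up, and in a real system each callable would wrap a fine-tuned model endpoint rather than a string formatter:

```python
from typing import Callable, Dict

# Hypothetical stand-in "experts": each would wrap a model tuned for one task.
def sql_expert(prompt: str) -> str:
    return f"[sql-tuned model answers: {prompt}]"

def summarize_expert(prompt: str) -> str:
    return f"[summarization model answers: {prompt}]"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "sql": sql_expert,
    "summarize": summarize_expert,
}

def route(task: str, prompt: str) -> str:
    """Plain old code picks the expert; the LLM never sees the routing logic."""
    expert = EXPERTS.get(task)
    if expert is None:
        raise ValueError(f"no expert for task {task!r}")
    return expert(prompt)

print(route("sql", "top customers by revenue"))
```

The appeal is that the deterministic parts (routing, validation, retries) stay ordinary software, with the models slotted in as interchangeable components.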


It does feel a bit like the internet era just before the Netscape IPO. I'm looking forward to the pets.com of AI.

Maybe OpenAI will go the way of Netscape and make it all FOSS eventually. That'd be nice.


> Training new models?

There is value in applying old techniques to new problems. Training a model to, I don't know, recognize snake species might help save snake-bite victims' lives.

(This is an example I came up with in 5 seconds, please don't take it seriously)

But there's also the whole "sell the shovel" aspect: it can be hard to train models, and it can be hard to interpret the quality of the results. How do I know version 2 of the model is better than version 1? How do I even get labeled photos of snakes and not-snakes?

I suspect solving some of those problems is where some of the real gold is buried.
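For the "is v2 better than v1" question, the standard move is a frozen, labeled held-out set that both versions are scored on with the same metric. A toy sketch with made-up labels and predictions (real evaluation would also want confidence intervals and per-class metrics):

```python
# Hypothetical frozen test set: 1 = snake, 0 = not-snake.
labels   = [1, 0, 1, 1, 0, 1, 0, 0]
v1_preds = [1, 0, 0, 1, 0, 1, 1, 0]  # version 1's predictions
v2_preds = [1, 0, 1, 1, 0, 1, 1, 0]  # version 2's predictions

def accuracy(preds, labels):
    """Fraction of examples where the prediction matches the label."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(accuracy(v1_preds, labels))  # 0.75
print(accuracy(v2_preds, labels))  # 0.875
```

The key discipline is that the test set stays fixed between versions; the moment you retrain or relabel against it, the comparison stops meaning anything.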


I'd imagine something like OpenBB / BB Terminal with consolidated API access for financial reporting, a platform for insider communities ("chat, forums, and an app!"), etc. Make it a club, and it'll sell itself.

Since investment has been demoed successfully with off-the-shelf models, I don't think we're waiting on big advancements to be able to build a product. The bar for something like this, short term, is 1) be cool and 2) lose less money than traditional investing, sometimes.




