The creator had an estimated net worth of $50 million to $200 million prior to OpenAI hiring him. If you listen to any interviews with him, he doesn't really seem like the type of person who's driven by money, and I get the impression that no matter what OpenAI is paying him, his life will remain pretty much unchanged (from a financial perspective, at least).
He also still talks very fondly about Claude Code and openly admits it's better at a lot of things, but he thinks Codex fits his development workflow better.
I really, really don't think there's a conspiracy around the Codex thing like you're implying. I know plenty of devs who don't work for OpenAI who prefer Codex ever since 5.2 was released, and if you read up a little on Peter Steinberger, he really doesn't seem like the type of person who would say things like that if he didn't believe them. Don't get me wrong, I'm not fanboying him. He seems like a really quirky dude and I disagree with a ton of his opinions, but I just really don't get the impression that he's driven by money, especially now that he already has more than he could spend in a lifetime.
I didn't say he didn't care about money, I just don't think that's his main driver, especially since he's already set for life. He spent 10 years building a company around a genuinely valuable product that just about everyone was using and, yeah, it made him rich.
I think "I'm going to keep the money I made from the company I spent 10 years building" and "I'm not going to lie about the coding tools to try and court a deal with OpenAI" aren't contradictory values. If anything, after hearing him talk for a while, I think it's way more believable that he switched from CC to Codex because Anthropic sent lawyers after him over the ClawdBot name than because of an OpenAI deal.
Having a few hundred thousand doesn't make you greedy, it makes you fortunate.
Having a hundred times that does make you greedy. You had more than enough long before getting to that point. You could have been content with less, so the only reason to try to extract more out of others is greed.
I've reached a similar conclusion, though not by targeting technology specifically. Rather, I got into the habit of asking myself, "Does X enhance my life in some way?"
It's interesting what this simple question can uncover.
How do you need to supervise this "less" than an LLM that you can feed input to and get output back from? What does it mean that it's "running continuously"? Isn't it just waiting for input from different sources and responding to it?
Like the person you're replying to, I just don't understand. All the descriptions are just random cool-sounding words/phrases strung together, but none of it actually provides any concrete detail about what it actually is.
I’m sure there are other ways of doing what I’m doing, but openclaw was the first “package it up and have it make sense” project that captured my imagination enough to begin playing with AI beyond simple copy/paste stuff from ChatGPT.
One example from last night:
I have openclaw running on a mostly sandboxed NUC on my lab/IoT network at home.
While at dinner someone mentioned I should change my holiday light WLED pattern to St Patrick’s day vs Valentine’s Day.
I just told openclaw (via a chat channel) the WLED controller hostname, and told it to propose some appropriate themes for the holiday, investigate the API, implement the chosen theme, and set it as the active sundown profile.
I came back home to my lights displaying a well chosen pattern I’d never have come up with outside hours of tinkering, and everything configured appropriately.
Went from a chore/task that would have taken me a couple hours of a weekend or evening to something that took 5 minutes or less.
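For the curious, the WLED part isn't magic: the controller exposes a JSON state API over HTTP. A minimal sketch of the kind of call the agent would have ended up making; the hostname, segment layout, effect id, and palette here are all my assumptions, not what it actually ran:

```python
import json
import urllib.request

WLED_HOST = "wled-holiday.local"  # hypothetical controller hostname


def build_state(colors, effect_id=0):
    """WLED JSON state payload: turn on, set segment 0's palette and effect."""
    return {"on": True, "seg": [{"id": 0, "fx": effect_id, "col": colors}]}


def set_theme(colors, effect_id=0):
    """POST the new state to the controller's /json/state endpoint."""
    req = urllib.request.Request(
        f"http://{WLED_HOST}/json/state",
        data=json.dumps(build_state(colors, effect_id)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# St Patrick's Day palette: two greens and a gold, as RGB triplets
# set_theme([[0, 128, 0], [50, 205, 50], [255, 215, 0]], effect_id=2)
```

The point is that "investigate the API and implement it" bottoms out in a handful of HTTP calls like this, which is exactly the fiddly-but-not-hard work being delegated.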
All it was doing was calling out to Codex for this, but having it act as a gateway/mediator/relay, for both the access-channel part and the tooling/skills/access, is the “killer app” part for me.
I also worked with it to come up with a Proxmox VE API skill, and it’s now repeatably able to spin up VMs with my normalized defaults, including brand new cloud-init images of Linux flavors I’ve never configured on that hypervisor before. It’s a chore I hate doing, so now I can iterate in my lab much faster. It’s also very helpful for spinning up dev environments of various software to mess with on those VMs after creation.
I haven’t really had it be very useful as a typical “personal assistant” both due to lack of time investment and running against its (lack of) security model for giving it access to comms - but as a “junior sysadmin” it’s becoming quite capable.
Great story. And it distills what the claw stuff is all about: in terms of utility, it's actually here. It's the multitude of "channels" you can enable out of the box that let you speak with the actual AI agent that has access to the configured environment.
Yeah, and if you give another human access to all your private information and accounts, they need lots of supervision, too; history is replete with examples demonstrating this.
But there's typically plenty at stake for the recipient. If my accountant tried to use my financial information in some improper way, he'd better have a good plan for what comes next.
I don't have one going but I do get the appeal. One example might be that it is prompted behind the scenes every time an email comes in and it sorts it, unsubscribes from spam, other tedious stuff you have to do now that is annoying but necessary. Well that is something running in the background, not necessarily continuously in the sense that it's going every second, but could be invoked at any point in time on an incoming email. That particular use case wouldn't sit well with me with today's LLMs, but if we got to a point where I could trust one to handle this task without screwing up then I'd be on board.
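The shape of that setup can be sketched in a few lines; note the `classify` function here is a keyword-matching placeholder standing in for the LLM call, which is exactly the part you'd need to trust before enabling any of this:

```python
def classify(message):
    """Placeholder triage; a real setup would ask the model instead."""
    if "unsubscribe" in message["body"].lower():
        return "newsletter"
    return "inbox"


def on_incoming_email(message, actions):
    """Fires once per inbound message, not on a constant loop."""
    label = classify(message)
    if label == "newsletter":
        actions.append(("unsubscribe", message["sender"]))
    actions.append(("file", label, message["id"]))
    return actions


actions = on_incoming_email(
    {"id": 1, "sender": "deals@example.com",
     "body": "Big sale! Unsubscribe here."},
    [],
)
# actions now holds the unsubscribe and filing steps to review or apply
```

"Running in the background" here just means the handler is registered and idle until an event arrives, which is the distinction being made above.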
Of course it can "model time". It has access to the system clock and knows its heartbeat rate. Can you "model time" when you are asleep? Whatever "model time" means, it sounds like projection, to be frank.
> Or feeling things for that matter.
The philosophical zombie thought experiment; the conclusion is that qualia don't matter, only I/O. If two systems have the same behavior, there is no meaningful difference.
Modeling time means there is some way of e.g. evolving state over time, or projecting change over a period of time. Or being able to count time passing (without being told by external sources).
The LLM has no model of time. It is being called at regular times. If the cronjob misses two days or even a whole year of calling the LLM, the LLM will not respond any differently from if the cronjob was on time.
You keep saying "LLM has no model of time" but that doesn't inherently mean anything.
If you give it a clock as input, it will observe the passage of time and can model it. The entire explosion of the AI industry around LLMs rests on the observation that their abilities generalize, so there's no reason to believe that LLMs can't "model time". They can, if you tell them to and give them the proper input.
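The disagreement really comes down to inputs: a bare completion call has no clock, but the caller can supply one. A sketch of stamping each turn so that gaps become observable (the surrounding agent loop is assumed, not shown):

```python
from datetime import datetime, timezone


def stamped(prompt, now=None):
    """Prefix the prompt with wall-clock time so elapsed gaps are visible."""
    now = now or datetime.now(timezone.utc)
    return f"[current time: {now.isoformat()}] {prompt}"


msg = stamped("run the daily check-in",
              now=datetime(2025, 3, 17, 8, 0, tzinfo=timezone.utc))
# msg == "[current time: 2025-03-17T08:00:00+00:00] run the daily check-in"
```

If the cron job skipped a whole year, the timestamp in the next prompt would show it; without the stamp, the model sees identical input either way, which is the scenario described above.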
What are you guys running constantly? No, seriously, I haven't run a single task in the world of LLMs for more than 5 minutes yet. What are you guys running 24x7? Mind elaborating?
The key idea is not running constantly, but being always on, and being able to react to external events, not just your chat input. So you can set a claw up to do something every time you get a call.
They're creating blog posts that try to character-assassinate OSS maintainers who refuse the AI-slop PRs in their repos. Next up I assume it'll be some form of mass scam, probably a crypto scam of some sort, y'know, the kind of stuff that's definitely useful for society.
You don’t understand the allure of having a computer actually do stuff for you instead of being a place where you receive email and get yelled at by a linter?
Perhaps people are just too jaded about the whole "I'll never have to work again" or "the computer can do all my work for me" miracle that has always been just around the corner for decades.
This is about getting the computer to do the stuff we had been promised computing would make easier, stuff that was never capital-H Hard but just annoying. Most of the real claw skills are people connecting stuff that has always been connectable but it has been so fiddly as to make it a full time side project to maintain, or you need to opt into a narrow walled garden that someone can monetize to really get connectivity.
Now you can just get an LLM to learn Apple’s special calendar format so you can connect it to a note-taking app in a way that only you might want. You don’t need to make it a second job to learn whatever glue is needed to make that happen.
Reading some documentation to figure out a format is something you do once and takes you a few minutes.
Are you a developer? Then this is something you probably do a couple times a day. Prompting the correct version will take longer and will leave you with much less understanding of the system you just implemented. So once it fails you don't know how to fix it.
I love that the posture is "I have a problem I need you to fix," haha.
I don't need you to fix my problems. I'm reporting that the LLM-based solution beats the dogshit out of the old "become a journeyman on one of 11 billion bullshit formats or processes" practice.
I'm not trying to help you, I'm just wondering how the LLM actually helps you.
You don't need to become a journeyman at understanding a format, you just need to see a schema, or find an open source utility. I just can't comprehend the actual helplessness that a developer would have to experience in order to have to ask an LLM to do something like this.
If I were that daunted by parsing a standardized file format for a workflow, I would have to be experiencing major burnout. How could I ever assume I could do any actual technical work if I'm overwhelmed by a parsing problem that has out-of-the-box solutions available?
I’ll give you a real concrete example. I had to build an app on the Mac, which needed to be signed. I did not want to learn Apple signing procedures in order to do this. It turns out I did not have to, because I got the robot to learn it. So then I was able to finish doing what it was I intended to do without having to spend an afternoon or a day misunderstanding the Apple signing procedures.
Could I have learned these and become a more virtuous person by knowing Apple’s signing rules? Maybe. What’s much more likely is that I might’ve just stopped doing this rather than deal with that particular difficulty. Instead, I was able to work on other problems that arose in the building of this application.
What I am suggesting to you is that I don’t have to fucking feel bad for being daunted anymore. And neither does anyone else. Folks that want to do that on their own time are free to, but I’m never going back.
There’s a lot of projects for people where this is gonna start to be the operative situation. Folks who might have gotten stuck on an early stumbling block are now just moving ahead and are learning about different and frankly more interesting problems to solve. I’m still beating my head on things, but they are not “did I get this format just right?” things.
This shift is analogous to how we took having to do computer arithmetic out of the hands of programmers in the 80s. There used to be a substantial part of programming that was just computer arithmetic. Now, almost nobody does that. Nobody in this thread could build a full adder if their life depended on it, or produce an accurate sin function. It used to be that that would’ve stopped you cold when trying to answer an engineering problem on a computer. Now it doesn’t. We do not run around telling people that they’re not engineers or that they’re not learning because we have made this affordance.
A full adder is literally one of the easier theoretical computer science concepts, and a sine approximation is a simple Maclaurin series. And yes, if you can't do a simple series expansion, you are not an engineer. You may be a developer, but not an engineer.
These are both first or second year bachelors topics. Just because you're unable to work through simple math problems doesn't mean any semi-competent computer professional would be.
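For what it's worth, the full adder in question fits in a few lines of Python. This is a sketch with bits as 0/1 ints, listed LSB first; any real hardware description would of course look different:

```python
def full_adder(a, b, cin):
    """One-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout


def ripple_add(xs, ys):
    """Chain full adders to add two equal-length bit lists, LSB first."""
    carry, out = 0, []
    for a, b in zip(xs, ys):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry


# 6 (0110) + 3 (0011) = 9 (1001), with bits listed LSB first:
# ripple_add([0, 1, 1, 0], [1, 1, 0, 0]) gives ([1, 0, 0, 1], 0)
```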
Was it a good thing for anyone writing software which included those things to need to not only work out how they are on a blackboard but how they are on the real machine in question? And how they are on the next machine over?
Do you yearn to return to that world? I suspect most people don't. It's not just knowing your own machine, but any machine the code could run on. It's also not just reaching for some 2nd year bachelor topics when the matter at hand is much more complicated. Where does your sine approximation fail? How do you know? Can you prove that? Does the compiler or the hardware decide to do things behind your back which vitiate any of those claims?
Knowing the answer to all of that every time you need a sine is not something 99.99% of engineers need to worry about. IT USED TO BE. But now it's not. No one is going back to that.
I don't know what world you live in, but I still definitely need to know the approximation error of the methods I use.
sin(x) has one of the simplest Maclaurin series:
sin(x) = x - x^3/3! + x^5/5! - x^7/7! ...
For any partial sum of that series, the error is always strictly less than the absolute value of the next term in the series. The fact that this was your example of a "difficult" engineering problem is, uh, embarrassing.
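A quick numeric check of that alternating-series bound; `sin_maclaurin` is just the partial sum, and the choice of x = 1 with four terms is arbitrary:

```python
import math


def sin_maclaurin(x, terms):
    """Partial sum of x - x^3/3! + x^5/5! - x^7/7! + ..."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))


x, terms = 1.0, 4                    # stop after the x^7/7! term
approx = sin_maclaurin(x, terms)
next_term = x ** (2 * terms + 1) / math.factorial(2 * terms + 1)  # x^9/9!

# Truncation error is strictly below the first omitted term
assert abs(approx - math.sin(x)) < next_term
```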
For good measure, I would of course fuzz any component involving numerical methods to ensure it stays within bounds. _As any competent engineer would_.
And I absolutely work things out on pen and paper or a white board before implementing them. How else would I verify designs? I'm sure you're aware that fixing bugs is cheapest in the design phase.
Are you living in an alternate reality where software quality does not matter? I'm still living in the world where engineers need to know what the fuck they're doing.
Oh, IEEE 754 double precision floating point accuracy? Rule of thumb is 17 significant digits. You will probably get issues related to catastrophic cancellation around x = 0. As I said earlier, the easiest solution is just to measure in this case. You don't really need to fuzz a sine approximation; you can scan over one period and compare against exactly calculated tables. I would probably add a cutoff around zero and move to a linear model if there are cancellation issues.
And if the measurement shows the approximation has too much floating point error, you can always move to Kahan sums or quad precision. This comes up fairly often.
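Kahan (compensated) summation, as mentioned, is short enough to sketch here; the 0.1-summing demo below is my own illustration of the drift it fixes:

```python
def kahan_sum(values):
    """Compensated summation: recovers low-order bits naive addition drops."""
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # subtract the error carried from the last step
        t = total + y        # low-order digits of y may be lost here...
        c = (t - total) - y  # ...but are recovered into c
        total = t
    return total


# Summing a million copies of 0.1: the naive sum drifts visibly,
# the compensated sum stays within a few ulps of 100000.
vals = [0.1] * 1_000_000
naive = sum(vals)
compensated = kahan_sum(vals)
```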
If I really had to _prove_ formally an exact error bound, that would take me some time. This is not something you would be likely to have to do unless you're building software for airplanes, or some other safety critical domain. And an LLM would absolutely not be helpful in that case. You would use formal verification methods.
"Oh, IEEE 754 double precision floating point accuracy?"
Ok, so we do agree! You DON'T want to go back to a system where everyone had to do their own arithmetic just to make a program! That's fabulous. I'm glad that we're in agreement.
Isn't it SO MUCH NICER to just have the vagaries of one arithmetic we've already agreed upon to deal with, instead of needing to become an expert in numerical analysis just to get along with things?
Ok. Based on your answer, you don't understand very much about computers. Maybe it makes sense that you're leaning on LLMs this early in your career. But it will bite you eventually.
Every x86 computer uses IEEE 754 floats; that's what you, the programmer, need to be able to reason about.
You still need to understand floating point errors and catastrophic cancellation. And simple techniques to deal with that, like summing from small to big, or using Kahan sums, or limiting the range where your approximation is used. You can use a library for some of these, but then you need to know what the library is doing, and how to access these functions.
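Both effects named above can be shown with plain floats; the specific constants here are just convenient demo values:

```python
# Catastrophic cancellation: subtracting nearly equal values destroys
# the low-order information; the true answer here is 1e-16, not 0.
cancel = (1.0 + 1e-16) - 1.0

# Ordering: above 2**53 the spacing between doubles exceeds 1, so adding
# ones to a big accumulator loses them, while summing them first doesn't.
BIG = float(2 ** 53)
ones = [1.0] * 10
big_first = sum([BIG] + ones) - BIG    # each +1.0 is absorbed, leaving 0.0
small_first = sum(ones + [BIG]) - BIG  # the ones combine first, giving 10.0
```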
But the problem seems to be that you have a skill issue, and the LLM will only make your skill issues worse. Stop leaning on it or you'll never be able to stand on your own.
I said this situation is reminiscent of how we took computer arithmetic out of the hands of programmers in the 80s, and you gave me a big lecture about how easy it is to make your own sine function, which concluded with you explaining that every computer (mostly) uses IEEE floats.
No shit.
What do you think we did in the 1980s to take computer arithmetic away from working programmers? We standardized computer arithmetic so instead of needing a numerical analyst on hand you just need to read that Goldberg article you’ll run off to Google now.
You live in the land of milk and honey and you dare lecture someone about effort. You have absolutely no clue what world we left behind, but you’re happy to talk about who is and isn’t learning.
Standardization is a good thing. I never said it wasn't. You're just arguing with a strawman. Your two last posts aren't even related to the discussion at hand.
> This shift is analogous to how we took having to do computer arithmetic out of the hands of programmers in the 80s. There used to be a substantial part of programming that was just computer arithmetic. Now, almost nobody does that. Nobody in this thread could build a full adder if their life depended on it or produce an accurate sin function.
It is truly not my fault that you proceeded to lecture me for multiple posts just to reach the conclusion that I SET OUT FOR YOU: standardization of computer arithmetic is good and makes it so that someone doing math on a computer doesn’t need to become an expert on how the computer does math.
As I said when you first insinuated yourself: I don’t need your help to be an engineer or a developer, thank you. You persisted anyway and embarrassed yourself.
Standardization means you only need to become an expert in the standard. You still need to know the standard.
And to your point in the quoted part: I absolutely could, as could any of the people who I studied with (in this century).
When you add abstraction layers, you do still need to understand how the underlying layers work in order to manage the upper layers.
Look, I accept that I've posted more than I should about this. But it's only because you keep saying "nuh-uh". And when you start arguing in bad faith about what I've said, that should be called out.
Saying you disagree is fine, but becoming so flustered you respond dishonestly is not.
I have been saying that the shift with LLMs is similar to the 1980s, when we standardized computer arithmetic.
Prior to standardization, you had to become an expert on how the computer did arithmetic in order to do anything that required arithmetic. This did not mean simply knowing an approximation for a function which you could program in a language; that is not enough, and as you point out, that is 200-level stuff. If you wanted it to actually work on an actual machine, you would need to understand how the machine itself was actually going to undertake those operations. You had to have a numerical analyst around, or at least someone who had taken a couple of those courses.
Today you can tell me how simple it is to write a sine function, because when I press you for details, you can say things like “well, it’ll just need to be to the standard” or “I’ll use a library.”
In the 1970s that was not the case. Nothing about computer arithmetic was simple or unified; it required an inordinate amount of attention paid to something that was not the object of interest. Lots of organizations that needed to get things done on computers had to hire and train people to be experts in the arithmetic in a way that we do not have to anymore. Most people programming do not have to think about computer arithmetic in any significant fashion. If you compare this to the 1940s, 50s, 60s, or 70s, the picture is very different. If you became a programmer in the 1960s, about half of what you were learning was how to make the machine do arithmetic. Need to do a square root? Well, you’d better write that function from scratch. Does it also need to be performant? Well, then you’re in trouble.
The amount of intellectual effort devoted to training programmers of all stripes in computer arithmetic is much less than it was 50 years ago. The fact that it is possible at all for you to boast that you could write that sine approximation, know its bounds, and trust those bounds is due to the standardization effort.
I am saying, and I have been saying that we are entering into a similar era, where there are whole categories of concerns, which are local to the machine that most users are not going to have to deal with. Some of these things will have been very central to some people’s identities, like being able to brag about sine approximations. Training is going to change; capabilities are going to change; what it means to be an engineer is going to change.
What does it "do for me"? I want to do things. I don't want a probabilistic machine I can't trust to do things.
The things that annoy me in life - tax reports, doctor appointments, sending invoices - are exactly the ones where there's no way in hell I am letting an LLM loose! Everything else in life I enjoy.