Just give me a simple CPU and some I/O ports (jgc.org)
87 points by jgrahamc on Aug 5, 2009 | hide | past | favorite | 54 comments


I've been having that exact thought for quite some time now.

It seems that the art of programming - or at least, the practice of it, equivalent to "proficiency" or "competency" - has shifted from fundamentals of the machine and the language's intrinsic semantics, which allowed you to actually create something anew, to spending 90-95% of the time trying to figure out someone else's API. In other words, the real skill now is reading reference material, and trying to figure out how to fashion a bunch of opaque, prebuilt Lego blocks together.

It's just no fun that way, and it relies on very different mental faculties than the ones present in fundamentals-based programming. Writing low-level backend processes in C may have taken a lot more work for boilerplate and nontrivial data primitive/data structure support, but it was real creation - using the actual power of the language to get something done. Now it's all about figuring out how the SynchronousIOGoatSoapBubbleVectorManagerFactory interacts with the SynchronousIOGoatSoapBubbleVectorReflectorParserTransformer.


I know where you're coming from, but I wonder if this observation is just a classic language rant waiting to be born. In other words: I wonder if the problem is not that all higher-level APIs necessarily suck, but that the ones you have suck, and your language platform is too impoverished to profitably work around that.

API design is hard. It's like composing poetry, or trying to design the game of chess. ("I know, the queen should be able to jump like a knight if she's on a white square!") It is bound to take thousands of little trials and errors in order to get things right. And my suspicion is that certain languages and platforms make it especially difficult to tinker with existing APIs, or wrap them while minimizing the leaks, or replace one part of them and leave the others alone, or build them in a way that makes it easy to compose complex higher-order objects and behaviors out of little pieces. When working with those systems, the most efficient way to solve any given problem is to just slog through, trying to make lemonade out of slightly-spoiled lemons by hand-sorting and hand-squeezing each lemon. And that can be effective -- at least it's more efficient than trying to use genetic engineering to construct a lemon out of raw amino acids -- but it's not always fun.

There are high-level APIs that are a joy to use. I love jQuery, for example, which is a layer of Javascript atop more Javascript atop an absolutely scary C program atop Unix or Windows and yet somehow manages to feel lightweight and elegant and composeable. But for every elegant API there are dozens of clunky ones.

The lovely thing about low-level C or assembly is that the building blocks tend to be simple, understandable, and composeable -- and, if they aren't, it's not so much work to rewrite them. (Though, as jgc points out, in modern times even assembly is hardly immune to complexity. The idiosyncrasies of modern processor architectures are so baroque that it takes years for optimizing compilers to be refined to take advantage of them.)


jQuery is a good example to use. I've only been into it for a couple of days now (after asking on HN what would be a good js library), and I already like what it does:

It hides the mess!

Most software is so absolutely messy. Just the other day there was an argument that it is perfectly ok to see the user interface that results from interpreting a spec'd document as an approximation. Computers are supposed to be deterministic, you're supposed to get exactly the results you want and not something 'good enough for government work'.

Elegance breeds excellence, bloat breeds badness. Cellphones that crash (who would have ever accepted that), computers that need to be periodically reinstalled and APIs that have manuals 10 times the size of the operating system components they interface to.

Complexity is a given, so we should strive to make the complex simple, instead of more complex.


I'm not saying that there aren't real, valid reasons for the state of things today to have evolved in the manner I described. And I completely agree that designing APIs well, and doing object-oriented design hierarchy well in the manner in which it was intended is a formidable problem domain. Never did I mean to imply that it wasn't. And certainly, there is a qualitative, artistic dimension to the way in which one goes about stringing various APIs together, too.

Nevertheless, the fact remains that today when I - and many others - program, we're spending 90% of our time putting together the pieces of some reference. Back when I was growing up writing code in C, there wasn't so much of that, aside from the sparse library functions and, of course, system calls. I actually felt like I was writing a lot of code; maybe unnecessary code from a contemporary point of view (lots of linked list containers, hash table implementations, etc.), but definitely code, and I was writing it. There's something in that productive bliss that's gone today, when it's basically a question of figuring out what the javadocs say.


There will always be a gap between what's technologically interesting and what's economically interesting. Web applications stopped being technologically interesting a while back. I've seen everything from in-browser HD-video to AJAX-powered e-mail clients before 2001. After the major technical challenges had been solved the industry kicked in and started mass-producing.

Today the mobile phone platform is sort of like the Web of 1999: a technological wild west. Fart Apps can still make you rich, while useful bluetooth apps hardly exist. If that's not adventurous enough for you, there's plenty more.

I spend most of my time on vehicle robotics for automated roads (C) and divide the rest over Arduino/AVR programming (C) and VMM hacking (C/IA32). You can be sure you won't find any "rich APIs" there. Also there's no chance it will make you rich; you need to be in the proven areas for that. (unless...)


There is still plenty of work to be done on embedded systems for sensors and other simple devices. Some of these systems are powered by a watch battery for years; they might have more or less processing power than an Apple II but far less RAM, which tends to limit code bloat.

EX: A new 8 bit CPU from 2008: The triple clock enables the selection of the most suitable clock frequency from 32.768 kHz, 500 kHz, and 4 MHz according to the processing type, thus minimizing the operating current. These power-management functions help prolong battery life and reduce product size. ... RAM 2 KB (http://www.okisemi.com/en/866/869/000576.html) And yes some people are really working on an 8 bit cpu at less than 40khz due to power limitations.


Don't confuse powerful with power-hungry


It's just physics. For a given transistor, more transistors @ higher frequency = more power. At the ultra low end people are using processors that run using energy collected from stray radio waves. A better design and instruction set help, but unlike desktop computing, the standard there is already lean.


Still, accounting for the vast differences in process, I think a full 45nm Apple II could run on stray radio waves


I don't know if anyone makes a 45 nm 8 bit cpu. Intel makes 8 bit chips, and has 45 nm and smaller fabs, but I don't think the market is there yet.

Still, the original Apple II was 1 MHz with 4 KB of RAM, which is significantly faster and more powerful than many microcontrollers in use today. For many solutions the power savings of sub-40 kHz speeds are still worth it, so I don't think we are at the point where stray radio waves could power a system that fast.


Much of my blog is devoted to precisely this subject. (See for instance: http://www.loper-os.org/?p=16)

I believe that the only solution to the complexity plague is a from-scratch reboot of all of computing.


I believe the complexity comes inevitably with code reuse and abstraction. Once you've got your ethernet driver working, you discover that your routable point-to-point packet exchange is useful in many places, so you extract an IP component, and before long, you're writing web applications in a high-level framework in a dynamic programming language again.


I share your belief. At the same time I think it is going to be like trying to replace the automobile piston engine with something better.

The amount of money and time invested in the way we do things today is going to be a very large stumbling block to overcome in order to create a clean slate.


rotary engines are pretty cool. http://www.youtube.com/watch?v=6BCgl2uumlI

each time each piston reverses direction you're losing energy in a standard piston engine.


For some reason my brain showed me the picture of a radial ('star') piston engine when you said 'rotary', my bad :)

We call those 'Wankel' engines here.

Pretty cool stuff.

Beautiful animation by the way!


My opinion may be in the minority here but I think the crux of this article is misguided. It's like a farmer saying, "I long for the days before the combustion engine because I love planting 3 acres by hand".

I also think the two basic premises of the article are just plain wrong. There are plenty of examples where people have written their own operating system from scratch for the x86; there have been articles posted on HN describing as much. So to say that today's processors are "too complex to understand" is just wrong. On the assertion that programming has devolved into "learning another man's APIs": that's just a fact of engineering. I suspect the Z80 had a thick manual describing all of its interfaces and inner workings. The AVR microcontrollers I've worked with, the same ones on the Arduino the author says is still "fun", have a manual that is over 320 pages long describing, in effect, the processor's API.


I disagree. That would be the case if I were to say: "Please take away this Mac Pro and all the wonderful software because what I really want is a Z-80".

I'm not saying that today's processors are too complex to understand, I'm saying that the software running in the machine in front of me is too complex. I wouldn't mind writing assembly code for a modern processor at all.

But if we are talking about what's likely to make me happy, it's probably a relatively simple CPU, some I/O ports and a soldering iron.

That's probably like the farmer wishing he had a kitchen garden and was growing enough food for himself.


That's precisely the feeling that drove me to free software - I wanted to regain some of the understanding of how the computer worked before Windows took it away from me.

While I like to mention my much-beloved Apple II, even an 8088 PC running DOS is somewhat understandable.


My father-in-law loves to work on cars, but he can only fix simple things on a newer model car because of all the complexity. That's why he still drives his old 1960's pickup truck.

This article isn't talking about efficiency, it's talking about joy -- the basis of all hobbies.


It's quite probable that x86 processors are more complex than they need to be-- at least, the instruction set is more complicated than is necessary. CISC architectures are meant to make life easier for assembly language programmers, letting you do things like load, multiply and store in a single command. In practice that hasn't been all that valuable, because programmers just use higher level languages like C when they need that much expressive power. From a compiler perspective, it's easier (or at worst about the same) to generate code for RISC than CISC, and RISC has much more flexibility with regards to automatic optimization, pipelining, etc.


"quite probable" here is, perhaps, the biggest understatement ever presented on HN. Within every Core i7 is a Pentium M, within which is a Pentium, a 486, a 386, a 286, a 8086 and most of a 8085. The PC architecture is no better. I bet that, buried deep in the Centrino Duo notebook I use to work, there is a vestigial ISA bus that has to be primed before the text screen can be written to and the scan rate matched to an imaginary CGA monitor.


> RISC has much more flexibility with regards to automatic optimization, pipelining, etc.

Folks designing processors haven't believed that since before the Pentium was introduced. (I went to dinner with some of the MIPS principals right after the first Pentium tech talk. Their conclusion was that everyone would finally figure out that the RISC/CISC wars were over and they'd lost.)

There are lots of things that go into designing a high performance processor. Instruction decoding and its consequences have little effect/cost compared to everything else.


RISC won. Intel continues to ship with CISC style artifacts, but that is only because Intel has almost never removed a feature from its microprocessors. However, the actual processor implementation is designed with CPU microcode, which is a RISC architecture. Even when you think you are doing something "CISCy" in an Intel proc, it's translated to RISC behind the scenes.


> RISC won.

Yup, MIPS and SUN are thriving companies and Intel shut down.

> However, the actual processor implementation is designed with CPU microcode, which is a RISC architecture. Even when you think you are doing something "CISCy" in an Intel proc, it's translated to RISC behind the scene.

Do you really think that RISC machines don't have microcode? (They also have multi-cycle instructions and the like.)

The claim was that RISC ISAs had inherent advantages that would cause CISC ISAs to be uncompetitive. That claim was wrong.


When the Pentium was introduced? Yeah, it looked like RISC had lost. But that was not because it wasn't a better design, but because Intel had a lot invested in the x86, and that's the processor that was used in the PC.

Now that mobile devices are becoming as important or more important than PCs, the field has changed. Also, it's possible that RISC's parallel processing advantages will matter given the current trend toward multiple processors and multiple cores.


> RISC's parallel processing advantages will matter given the current trend toward multiple processors and multiple cores.

What are these "parallel processing advantages"?


Tell that to the people at ARM who ship 10 RISC cpus for every Intel desktop chip out there.


If we're counting CPUs shipped, ARM is in the noise compared to 6502 and the like, let alone the 8051s.

The claim was that RISC had certain advantages that would have significant performance or cost consequences. That hasn't happened. ARM wins where it wins for reasons that have nothing to do with RISC/CISC.


Thank you for that, that echoed my own thoughts better than I could have ever put them into words.

My frustration with the amazing amounts of bloat that you have to deal with in order to do the simple stuff knows no bounds.

A PIC chip has more power than the machines that made Apollo 11 possible. I wonder if with today's technology we'd be tempted to go for some 'high tech' solution and mess it up because of that.

In the mid 80's I worked for a Dutch artist on a project called 'SonoMatrix', a room full of speakers with a bunch of computer-controlled tape recorders, amplifiers and channel switches attached to it.

The whole thing ran off a Beeb, via the user & printer ports. We designed the hardware, wrote the software (both the controller in 6502 assembler and the user interface) and built the whole thing.

If I had to do that today I wouldn't even know where to begin...


You'd get an old PC from your closet, install Linux (text only) and gcc and get the whole thing working in less time and for less cost, probably.

...at least that's what I'd do :-)


CPU built from 74 series TTL chips running web server: http://www.homebrewcpu.com/


you ought to post that individually, that's a really neat hack.



I think a big factor in that is when something gets posted and whether or not it gets traction immediately. If that doesn't happen it will be gone before someone notices.

I completely missed it yesterday. The 'new' page gets filled up so fast sometimes it's not even funny. And that's when ignoring the spam. One thing that would help here is a minimum delay before the same user can post another link.

Your comment here got more 'points' than the original posting.



The fact that a single person can understand a smaller percentage of the whole of a computer system indicates progress.

There was a time when any given physicist probably had a good grasp on most of their field. I doubt that's still the case. I do sympathize with the romantic notion of knowing it all, but let's not confuse this notion with a call to action.


I think the completely comprehensible system has always been an illusion. There's always some cutoff level below which people don't understand (or don't care to understand) the system they're working in.

The old Apple 2 hackers had a great understanding of assembly, but that's because it was the top level of the system to them, the stuff they worked with daily. I doubt most of them understood the PLA that decoded the instructions, or the behavior of the dynamic NMOS logic inside the 6502.

The 6502 instruction set was essentially the API of the processor, and it wasn't above reproach any more than current software APIs are. Many people wished for different addressing modes or additional instructions to simplify common coding tasks.


However, the ratio between how much there is to know (increasing) and how much fits in a head (constant) is growing, which is sad, and whether it has ever been 1 seems rather unimportant to me.


The fact that the ratio continues to grow seems like a fundamental property of technology -- personally I don't find it sad. In exchange for more powerful tools, you inevitably need to accept more levels of abstraction and more underlying complexity.


The ratio of things we know to things we don't know is essentially zero. Thankfully that doesn't preclude us from dreaming big and accomplishing great things.


"I can no longer understand the computer. I am forced to spend my days in the lonely struggle against an implacable and yet deterministic foe: another man's APIs."

Having worked as an electrical engineer creating hardware, this seems strange. Obviously the computer designer had to create an API of some sort, even if they did it in transistors. Computers aren't given to us from above -- there are people creating them too.


There is a fundamental difference between a hardware "API" and a software one. See http://www.loper-os.org/?p=37


It's not a fundamental difference, it's a consequence of the fact that hardware people are at the bottom of a very big stack and have a massive financial incentive to be as solid and predictable as possible. Higher up the stack everyone prefers to use relatively cheap programmers and build stuff quickly.

The problem is not having to deal with software APIs, the problem is the sheer size of the stack and the sheer number of accumulated assumptions that are built into it. Moving more pieces into hardware might improve the stack's overall integrity and reduce bugs, but it won't do much to reduce the size.

The real issue, IMHO, is that no one wants to admit that the general reuse problem is hideously, horrifyingly difficult. The biggest problems it causes are diffuse and long term, and in the short term everyone can do things faster by hacking together their old code with someone else's 95% solution library, so that's what everyone does. Putting enough thought into each new application to really do it right tends to be hard to justify on a business level, and most programmers have neither the inclination nor the skill to do it anyway. It's so ingrained that even people who are frustrated with the way things are think that a different operating system or language would solve the problem. It wouldn't - it would only start the process again, with at best a percentage reduction in stack size and corresponding percentage increase in time to frustration. I think it boils down to the fact that code reuse is basically a 2^n problem, and the bigger and more opaque the stack gets the harder it is to cheat that 2^n.

The only potential solution I've seen is what Chuck Moore is doing with Forth chips. He's now at the point where he can design and manufacture chips that are cheap and simple in design but are very good at running Forth very quickly. Of course the tradeoffs are (perhaps necessarily) as horrifying as the reuse problem in that it demands a lot more from programmers in general, and in particular requires them to learn a radically different way of doing things than they are used to while at the same time strongly discouraging reuse at the binary and source levels. In other words, he's spent decades designing a small core language and hardware to run it, and that's really all you should be reusing (along with data transfer protocols). Needless to say, no desktop or web or server programmer (or said programmer's boss or company) is ever going to go for this unless problems with reuse become far worse than they are now. (Even then the supply of cheap programmers might grow fast enough to keep masking the problem for a long time.) Most programmers are not very good, managers like it that way, and most of the smarter programmers are nibbling around the edges or looking for a silver bullet.

In short, there are no easy solutions. If you don't like the direction software is going, think about becoming an embedded systems programmer.


I've felt exactly the same way in the last few years.

A complete understanding of the whole system in every detail is not so necessary for me. I just want to focus on the things I'm trying to accomplish without the need to constantly lookup API documentations and writing glue code.

That's why (for my own projects) I always end up making my own tools and coding almost all the stuff I need by myself.

But our culture is going in the opposite direction (http://www.wisdomandwonder.com/link/2110/why-mit-switched-fr...).


"Don was responsible for the LM P60's (Lunar Descent), while I was responsible for the LM P40's (which were) all other LM powered flight". Two men were able to write all that code and understand its operation.

Is this perhaps why we don't use more advanced technology for space flight today; it's too complex? I've always wondered just how much more we could accomplish if we used modern computing technology in space shuttles, but if safety is of the utmost importance, maybe the complexity is a bad thing?


I like to draw a distinction between advanced technology and complicated technology. A 45nm AGC would be more advanced. An AGC running WindowsCE would just be more complicated.


Love the article and, like many others here, enjoy the simple. I still have the pleasure of coding in C at work, and my fun work is currently writing an app for the Nintendo DS. A truly simple system.

A while back I was interested in designing/constructing a simple 4-bit CPU; here is one that was actually completed: http://www.vttoth.com/vicproc.htm


Off-topic, but someone mentioned Forth chips in a comment here, and I have read about Lisp machines, and C is often referred to as high-level assembly language - has anyone tried making a computer that actually runs C or a strong subset of it as its actual assembler?


There's always the language Forth.


That was my first thought too when I read the title. Chuck Moore's site is a good place to start - http://colorforth.org/


I can definitely relate. I'm building a monome(.org) clone with an Arduino and having a blast.


I've felt this way for several years -- we've gone from engineering to something more like witch doctoring.

The problem is, of course, that all abstractions are leaky, and we've piled abstraction after abstraction on top of the hardware. So it's not unusual to have four or five levels of stuff between you and the machine. Adding to that is the problem of multi-cores: it's not just one machine operating anymore. Each piece is deterministic, sure, but as all the pieces interoperate in real-time that determinism can be so hidden as to be effectively non-existent.

Languages are going to be able to help to some degree, but at the end of the day machines are just going to keep getting more and more complex and our understanding of them shallower and shallower.

It'll be interesting to see if there is a major hardware refactoring that takes place anytime soon. I imagine for AI to work we're going to need it.


beagleboard?


One thing that I have found that helps with exactly this situation, and I have suffered it as well over the years, is to move your focus from one end of the spectrum to the other. That is, if you have to spend your daylight hours grokking another man's API, then spend a few hours in the evening, or during off-time, hacking on your own embedded project with the Arduino. I've got tons of projects around, all of them slowly making progress admittedly, with the purpose of getting me out of my funk. I don't actually code my hobby projects for any other reason than to make my professional work (embedded systems for safety critical applications) a little more enjoyable.

It sure is fun to dive into Android, for example, and get some context, and then another week spend some time with the AVR compiler.. then the Beagleboard, etc.



