Plan 9: The way the future was (catb.org)
197 points by niyazpk on Feb 1, 2012 | 82 comments


Unsurprisingly given the common pedigree, a number of ideas from Plan 9 have made their way into Go. E.g., compare http://golang.org/pkg/net/#Dial and http://plan9.bell-labs.com/magic/man2html/2/dial.

Go's approach to concurrency is familiar in the Plan 9 world from Alef (http://en.wikipedia.org/wiki/Alef_(programming_language), http://swtch.com/~rsc/thread/alef.pdf) and Limbo (http://www.vitanuova.com/inferno/papers/limbo.html), the language used to implement Inferno, a Plan 9 spinoff.


Cool. I wasn't sure if it was common knowledge that Ken Thompson and Rob Pike created Go. The Go compiler toolchain is from Plan 9, too: http://golang.org/cmd/.


Am I the only one who doesn't get excited about the idea of absolutely everything being a file?

People seem to like the idea of being able to say things like "cat file.wav > /dev/dsp". But a sound card is a complex thing that can be configured in umpteen different ways; what if the file has a different sample rate than the card is currently running at? Even if you jury-rigged a kernel driver that could handle this case (by, for example, noticing when the file had been open()'d and expecting the next bytes to be a WAV header, which it uses to reconfigure the card), you're left with something that doesn't have even the most basic functionality that you'd expect from a sound player; for example, "pause."

So what has representing /dev/dsp as a file really bought you? It doesn't help you know what devices are available, because there's tons of crap in /dev that doesn't necessarily have a kernel driver loaded for it. It's not that useful as an end-user interface for selecting a sound card, because users don't want to choose between /dev/dsp and /dev/dsp2, they want to see nice names like "Internal Speaker" or "Apogee Duet."

Files give you a simple API for lots of common operations, which is nice, but the fact that all files are not created equal means that operations sometimes don't work like you expect. For example, how would you guess that "tail -f" is implemented? If you're like me, you would have expected that it works by calling poll/select/etc. and waiting for the file to become read-ready. But this approach doesn't actually work for monitoring growing files; the select() will just return immediately with read-ready even if you're at EOF. So what "tail -f" actually does is a loop that alternates between sleep() and fstat(). I'm serious! http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...
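A minimal sketch of that sleep/fstat strategy in Python (the helper names here are invented for illustration; the real GNU implementation is linked above):

```python
import os
import time

def read_new_bytes(path, offset):
    """Return (data, new_offset): any bytes appended past offset.

    stat() first, so we never issue a read that would just hit EOF --
    mirroring how tail -f avoids relying on select() for growing files.
    """
    size = os.stat(path).st_size
    if size <= offset:
        return b"", offset
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(size - offset)
    return data, offset + len(data)

def follow(path, interval=1.0):
    """The tail -f loop: sleep, stat, read -- no select() involved."""
    offset = os.stat(path).st_size  # start at current EOF, like tail -f
    while True:
        data, offset = read_new_bytes(path, offset)
        if data:
            yield data
        else:
            time.sleep(interval)
```

The point is that select() would report read-ready at EOF, so polling the file size is the only portable option.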


Some files are different. So? Do you have the same qualms with stdin/stdout not being seekable because they are often connected to a terminal?

The "everything is a file" abstraction is mostly helpful and rarely gets in the way in practice; I don't know how plan9 exposes sound, but it is probably a directory with multiple files - one you write to set the sampling rate, another that takes the data, and a third for stop/go/cancel.

The most useful feature, though, is network transparency that really works. You don't need VNC/RDP; just mount the remote display in your file system. You don't need special IP forwarding - just mount the remote network in your local file system. You don't need a network-aware server for anything - it all already is.

That's the reason to get excited.


You could configure it by doing something like

  echo "22050" > /dev/dsp/config/sample_rate_hz
  echo "lanczos" > /dev/dsp/config/resample_algorithm
You could then search for configs by using find, back them up by using tar, be able to easily replicate settings between computers using rsync etc.

The idea is not to have one unconfigurable binary receiver, but rather to expose the interface as a hierarchy of rw files instead of a number of C functions, because the C functions have no easy shell representation, and most command-line tools fail when using them.

It's very reminiscent of REST.


> The idea is not to have one unconfigurable binary receiver,

It _is_ configurable, why do you think it is not?

This idea that somehow one can fit any interaction with hardware into a filesystem ends when hardware becomes complex enough to simply NOT fit (like modern GPU).

How do you guarantee atomicity when saving configuration with tar(1)?

Linux recently deprecated the textual connection-tracking data interface in favour of shipping binary data over a netlink socket. Reason: read() speed in real-life high-load scenarios.

What could Plan9 do here?


I actually tried to point out that configurability _is_ very possible with such an API.

Your complexity argument does not hold. If I can control something with a C API, why not with a file API?

Take atomicity as an example. C has no inherent concept of atomicity. Yet you can achieve atomicity by using C. So why not with a REST or file-based interface?

You also mention speed. Interesting, since complexity and performance worries are the two roots of premature optimization if you ask me. Since this solution is still at a concept level, I am of the opinion that one should avoid such discussions at this stage. But I see no _inherent_ speed limitations in a file-based API. Send binary data if you want. Text is only for readability.


> Take atomicity as an example. C has no inherent concept of atomicity. Yet you can achieve atomicity by using C. So why not with a REST or file-based interface?

You achieve atomicity because inside the kernel handler of the relevant imaginary ioctl(2) that dumps all hardware configuration, there is a synchronization primitive providing the necessary guarantees, so this imaginary ioctl(HW_GET_STATE) writes a coherent state to userspace.

If you merely expose individual hw config into flat files, POSIX directory semantics do not guarantee you atomicity. So you have to invent something instead, like an additional file which kinda sorta provides locking. But once you've invented it, you have implicitly mandated that ALL users must be nice and go through the locking API/file, whatever. At that point, nothing prevents rogue (and, more importantly, buggy) userspace from messing with well-behaving users.

At this point, you've failed to provide something another developer is asking you to provide, namely, a guarantee that hw state will be coherent.


Regarding speed and the Knuth argument.

At this stage, real-world examples show that once the scale of the state becomes sufficiently large (large numbers of conntracks on big firewalls), text interfaces always lose.

So, play with toy flat text file interface if you want, but you might as well do it right from the very beginning.

Right now, /proc/stat slowness is being discussed on linux-kernel. Part of the problem: the heuristic determining how many pages need to be allocated for the buffer containing the text sucks, because integers written in decimal are variable-sized. The kernel first allocates 1 page, only to fill it and see that 1 page is not enough for big enough machines (NR_CPUS * NR_INTERRUPTS).

Two patches have been proposed: the first allocates 2 pages from the beginning; the second is (no kidding) to print decimals faster (printf "%u" can be made faster since it's known that an "unsigned int" is going to be dumped, and C has pathetic support for compile-time evaluation).

But the answer that is correct from every angle (except for existing /proc/stat users) is obvious: dump BINARY info into a userspace buffer, which can be appropriately sized because it's easy to multiply 3 integers. Simply ship the information to whoever is asking, without the print bullshit.

Real programming languages and environments should easily eat the result (Python's struct.unpack springs to mind).

Excuses for programming languages will not.

Can you transform /proc/bus/pci/00/00.0 content into a one-dimensional array of bytes with your favourite PL? And if you can't, whose problem is it?

And yet another example, Linux USB bus sniffer kernel module (don't remember exact name/config option) gained binary interface deprecating text one.

As for _inherent_ speed limitations, read(2) does a memcpy(), so you have to provide mmap(2) for your file (most virtual Linux filesystems don't do it for the majority of files).
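To illustrate what the binary route looks like on the consuming side, here is a sketch with Python's struct module; the record layout is invented purely for illustration, not any real kernel format:

```python
import struct

# Hypothetical fixed-width record: three u64 counters per CPU,
# little-endian. The whole point: the buffer size is a trivial
# multiplication (n_cpus * RECORD.size), not a guess about how many
# decimal digits each counter happens to need.
RECORD = struct.Struct("<QQQ")

def pack_stats(rows):
    """Serialize [(user, system, idle), ...] into one binary buffer."""
    return b"".join(RECORD.pack(*row) for row in rows)

def unpack_stats(buf):
    """Recover the rows; no text parsing, no sizing heuristics."""
    return [RECORD.unpack_from(buf, i)
            for i in range(0, len(buf), RECORD.size)]
```

Each record is exactly 24 bytes, so a reader can allocate the buffer up front from a simple multiplication.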


> This idea that somehow one can fit any interaction with hardware into a filesystem ends when hardware becomes complex enough to simply NOT fit (like modern GPU).

The idea that one can fit any interaction with hardware into the mere toggling of registers and stuffing of byte buffers...

I don't know what FS you're using, but any good modern filesystem (say, ZFS or something similar) should prove more than adequate for handling that many/that size nodes.

Everything else can probably be fixed at the kernel level--for example, duplicating the hardware directory prior to tar'ing it, and having the kernel/FS note what actions to take to guarantee atomicity. I mean, we do the same thing with sharing /dev/dsp anyway (I believe daemons can be set up to handle this), so what's the deal?


We're discussing anything that can do open(), read, write(), provides files and directories. Do you understand that virtual filesystems do not have problems with inode counts?

Duping a directory means either a) hw state is saved on open(), in which case you're holding it unnecessarily because a read() may simply never happen, or b) hw state is saved on first read(). OK, what to do with parallel dumpers? How do you determine when a transaction starts and ends? What do you do if the dumper process is sigkilled?

ioctl(HW_GET_STATE) doesn't have these problems, kernel knows how to kill process inside itself.

Try to save /sys/class/net/lo/* coherently and write down all assumptions which you did.


That API seems so clunky. What if I have two different programs, with different sample rates, which I want to play back at the same time?


Each program sees its own copy of /dev/dsp. Behind the scenes, that copy is backed by some magic on the OS's part that handles state tracking, mixing, etc.

The program only ever sees a dumb bit bucket with some flags on how to handle it--that's why we have OSs, after all: to avoid having to write hardware driver code for all our programs (see the bad old days of DOS game programming).

The program still needs to manage sound mixing in userland to fill that buffer. That's nothing new, however, and you can use PortAudio, FMOD, XACT, whatever to do that mixing for you, or roll your own.

This abstraction would greatly simplify that process.


Thanks to you and others who pointed out that plan9 exposes private copies to each program. I did not know that, and it would solve the issue I wondered about.

But that doesn't solve the underlying issue. If my program uses two libraries, both of which play sounds, then how does each library get its own configuration filesystem? Is there a way to create a new view, so that the two libraries can be disconnected from what the main program needs?


I'm not an expert in plan9 stuff but I think each process sees its own version of some parts of the file system (IIRC mounts are also process specific), so, maybe those commands would only apply to the current process.


Each process gets its own namespace in plan9.


The same things that happens now with any regular sound API.

There are calls (similar to the above example) to just play a sound, and there are different calls to schedule multiple sounds to play simultaneously.


The advantage of "everything-is-a-file" is that every process gains thread-safe access to some API without having to pull in the headers, link, and manage a foreign memory management regimen.

But the real boon is that each process can conceivably be linked with any 'file'. I liken this practice to the STL's iterator pattern: algorithms are the processes, and files are the iterators (although I bet the STL was inspired by Unix). You can redirect your tty through an ssh tunnel, you can scp a regular file, or you can ssh tunnel your soundcard or mouse. And of course 'ssh tunnel' is only one such kind of glue. There are pipes, devices, fifos, etc. In the end it works out that there are O(n*m) possibilities for only O(n+m) implementation cost.

But remember that with STL, not all algorithms will work with all iterators, and vice versa. And likewise, not all processes can work with all files, and vice versa. The benefit comes from the significant overlapping portion.


You can do cat file.au >/dev/audio on a lot of Unix-y OS's. For wav, the problems you mention are solved with "sox" which converts the wav file to raw data with suitable options.

All of the problems you mention are problems of the interfaces exposed by the files, not problems with the "everything as a file" idea in itself.

Look at /proc on linux. /proc/[pid] represents a running process, but it's a directory, not a file, all the information about that process are inside.

A hypothetical filesystem API for audio designed to be friendly might expose a /proc/sound/[device]/name file containing a user friendly description and a /proc/sound/[device]/output file that a converter gets mounted at that you can just copy files to, and a /proc/sound/[device]/control that accepts commands like "pause" or "play".
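A toy model of that hypothetical layout, with an ordinary directory standing in for the device tree (every path and file name here is made up for illustration):

```python
from pathlib import Path

def device_name(dev_root):
    """Read the user-friendly description from <device>/name."""
    return (Path(dev_root) / "name").read_text().strip()

def send_command(dev_root, command):
    """Write a command like 'pause' or 'play' to <device>/control."""
    (Path(dev_root) / "control").write_text(command + "\n")

def list_devices(sound_root):
    """Enumerate devices by listing directories, the way ls would."""
    return sorted(p.name for p in Path(sound_root).iterdir() if p.is_dir())
```

Because the whole interface is just files, every existing tool (ls, cat, find, tar) works on it for free.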

You also seem to be assuming that these interfaces are meant for ordinary end-users. While some of the functionality does make it easier for end-users to interact directly with the lower levels of the system, nothing precludes friendlier interfaces being put on top of it.

What it gives you is an interface with a bunch of standard tools you can use to operate on it, either because it's simple enough to use directly or because it makes building the user-friendly interfaces easier, and which is at the same time reasonably explorable if it's well designed. /proc is a good example of this - you can poke around in it and learn a lot about your system with just a standard shell, without having the faintest idea of how it's structured initially. Meanwhile, the non-file-related system calls require you to write code to test them out - they're far more opaque.

> But this approach doesn't actually work for monitoring growing files; the select() will just return immediately with read-ready even if you're at EOF.

This really shouldn't be surprising. select() specifically tells you whether the file descriptor is ready for reading. A file descriptor is ready to read from if there is more data or you're at the end of the stream. The behavior is entirely consistent. Perhaps a way of requesting notification only if there is more data to read, rather than if there's more data or the stream has ended, would be useful, but that is not what select() provides. It illustrates some of the flaws in the Unix system calls, not an inherent problem with a filesystem interface.


>Am I the only one who doesn't get excited about the idea of absolutely everything being a file?

Don't think of it as everything being a file. Think of it as accessing everything including file data through an RPC system.


It's been a while since I looked at 9P. Is it really that much like an RPC system?


It lets you name resources and open communications channels to them, yes. It even preserves message boundaries, as I recall.



I'm not sure that really contradicts his point; that's a terribly limited API. Stereo only; no way to specify channel layout; 16-bit only; no way to change the device buffer sizes; no way to tell if the device is in use; no way to tell the device latency; seemingly no way to examine what sample rates or bit depths the device supports; seemingly no way to subscribe to any kind of notification when the device has actually changed sample rate (or other properties) or when playback has stopped; no ability to lock the device to prevent other processes changing the sample rate or other properties; no ability to get device time or determine whether the device is synchronised with system time.

Some of these missing things can be added fairly easily to the existing API of course, but I don't know enough about plan9 to suggest how, for example, notifications for device property changes could be added.

As it stands this interface is almost useless even for casual home users who just want to play back audio: multiple channel support is a basic requirement these days. Serious audio work is not even possible at all.


That calls for an obscure ioctl with an impossible name. Simply use SNDGTDVCLTNC to query the device's latency. Like the actual TIOCSLCKTRMIOS which can be used to lock the termios: http://www.kernel.org/doc/man-pages/online/pages/man4/tty_io...


Were the sound APIs from other operating systems of that era any less limited? Presumably you could reproduce a modern API just as easily with a filesystem, using the same general schema.


The thing to keep in mind is that this was written for a SoundBlaster back in the early 90s. I remember being really impressed that Commander Keen 4 could use the SoundBlaster 16 in my 486 to make decent music at all.

As for notifications of device property changes, you would probably get them by reading from the various control and status files. Although I use Plan 9 at home and at work, I have never really done anything with audio, I'm afraid.


> that's a terribly limited API.

Plan 9's users consider that a feature. Standing boast: "GNU's compiler manual is bigger than our whole system!"

> I don't know enough about plan9 to suggest how, for example, notifications for device property changes could be added.

You'd add a virtual file named "ctl". Processes could open it and call read, which would block until something changed, then return with the notification.

Those processes will need threads that work. Plan 9's do.
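A sketch of that reader side in Python, with an os.pipe standing in for the ctl file (a read on a regular file wouldn't block, so the pipe models the Plan 9 blocking-read semantics):

```python
import os
import threading

def watch_ctl(fd, events):
    """Reader side: block in read() until the 'driver' posts a change."""
    while True:
        msg = os.read(fd, 4096)   # blocks until a notification arrives
        if not msg:               # writer closed: no more events
            break
        events.append(msg)

# The "driver" side would write lines like b"rate 44100\n" to the
# other end of the channel whenever a device property changes; each
# watching process just needs a thread parked in read().
```

This is why the comment notes that the watching processes need threads that work: one thread sits blocked in read() while the rest of the program carries on.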


> As it stands this interface is almost useless even for casual home users who just want to play back audio: multiple channel support is a basic requirement these days.

Really? I must therefore be somehow less than a casual music listener - I am perfectly happy with my stereo.


I wasn't suggesting that all users require it, merely that a non-trivial number do; more for watching video with multichannel soundtracks than listening to music.


The HUGE advantage of "everything is a file" is that if you make a tool that works on one "thing" and then want to use it on another "thing", it is quite possible that it will just work. For instance, on OSX, one of the things I sorely miss is the sysfs from Linux. Sure, OSX has the ioreg, but I cannot use the tools I am used to (find, grep, cat, echo, sed, awk, ...).

To put it in terms the web programmers can understand, it is like REST, you have a GET (read), and a POST (write), and I suppose a DELETE (unlink), and with that you can implement all you need. Everything fits into a very few syscalls (open, read, write, close).


> But a sound card is a complex thing that can be configured in umpteen different ways; what if the file has a different sample rate than the card is currently running at?

That's what ioctl does: http://www.manpagez.com/man/2/ioctl/


Sure, you can use an ioctl for this. You could use some combination of ioctl()s to tunnel absolutely any API, just like you can wrap any text string in pointy brackets and call it XML.

The question is whether this is useful. In particular, once your interface requires ioctl(), you lose the benefit of easy interoperability with UNIX command-line tools.


Well, just below the Plan 9 sections comes this: http://catb.org/~esr/writings/taoup/html/ch20s03.html#id3016... And he's right, ioctl's can be quite tricky to operate sometimes.


The impression I got was that the guys behind Unix were annoyed when their operating-systems research was stalled in the name of "API compatibility", so they started Plan 9 and actively resisted commercialisation for as long as possible (for example, they gave it the least marketable name they could think of).

There's so many interesting ideas in Plan 9, I'm occasionally tempted to install it and try it out... but then I read things like the list of keyboard shortcuts in the standard editor acme¹, and I get cold feet.

¹: http://acme.cat-v.org/keyboard - in particular, note that there are no keyboard shortcuts for 'move the cursor to the next/previous line'. Think about that.


The problem wasn't that the Plan 9 guys opposed commercialization, quite the contrary, the problem was that AT&T never let the system become open source. For some time it wasn't even available outside universities. That killed it.

Yes, acme doesn't have keyboard shortcuts, I use it every day.


>the problem was that AT&T never let the system become open source.

The whole thing except the fonts was put under a very liberal open-source licence 10 or 12 or so years ago. (By then it was owned by Bellcore or Lucent, not AT&T.) Before then there were severe restrictions; is your opinion that by time the severe restrictions were lifted, it was too late because Linux had already become entrenched?


Yes, it was made open source in 2000, eight years after the first release in 1992 - far too late.


Regarding acme, everyone comes in and goes "Yuck, you mean I can't type 'C-c M-x undoify'?" However, once you get used to it, it's actually rather nice. I have something like 30-40 files open in acme right now, and I find it far, far easier to manage open buffers than in Emacs. Arranging and re-arranging buffers (to use the emacs term, since it's familiar) is very convenient and helps me keep things organized.

It also has "mouse shortcuts" (chords) that tend to reward using the mouse more--meaning you don't need to switch from mouse to keyboard as often, because you can do cut, copy, paste, search, command execution, etc. all with a click of the mouse (and no hunting through menus, either).

It's hard to explain... you really have to use Acme, or even better, watch someone use it. http://www.youtube.com/watch?v=dopu3ZtdCsg seems to be a decent intro, although I haven't had time to watch the whole thing yet.


C-c M-x undoify ?

In vim it's just: u

Much simpler and quicker than stretching your arm to the mouse, futzing around with the cursor to find god knows what menu option, clicking on it, and moving your hand back to the keyboard.

Having 30-40 files open in an editor is not impressive either. You can easily have hundreds of files open in vim, and navigate between them without a problem.

The mouse chords might make the mouse more functional, but it's certainly a far cry from having over 100 keys and thousands of key combinations at your fingertips.

Not that using the mouse excludes the possibility of using the keyboard, but as a vim user I almost never have need to use the mouse -- except when I've switched to vim from another application that requires mouse use and want to paste something. After that, my hand goes right back to the keyboard, because using the keyboard is just far more efficient.

Acme's forcing mouse use on the user is one reason I am really not very interested in using it. However, I am interested in seeing if there is anything it can do that doesn't absolutely require mouse use that could be integrated into vim.


If AT&T could successfully market an OS with a name that sounds like "eunuchs", they weren't going to be dissuaded by something innocuous like Plan 9.

And indeed, Plan 9 is conceptually cool as balls, but Acme and Sam make me want to chuck expensive electronic equipment out the window.


First, one should note that this is a chapter (or section) of "The Art of Unix Programming", by esr; it is a fantastic book (I'm going through it now, on chapter 9 of 20, and the wisdom has been near-palpable).

As for the specifics of the article. Plan 9 did indeed do many things right; in fact, many things are done in a very Unixy way. For example, the networking stack is built precisely where it ought to be: as a detail below a powerful network protocol (9P) which can be made as transparent or concrete as necessary (one can easily write individual TCP or UDP packets using the file system; just as easily, you can network-export any file system).

Similarly, Plan 9's handling of security, in particular the elimination of the superuser, was fantastic. And while I in particular disagree with the approach Plan 9 takes to the GUI, it is certainly a consistent, powerful, and inspired model. And a variety of its other ideas, such as UTF-8 and the /proc and /sys file systems, have worked their way into modern Unixes such as Linux.

Now, Plan 9's failure has been aptly explained elsewhere, in particular in the article linked. But, being somewhat old (2004, I believe; though when that particular section was written I do not know), it makes the prediction that Unixes will eventually come to absorb all of the Plan 9 features. Instead, it looks like the Linux kernel, at least, has mostly given up on ever implementing union mounts (one of the most inspired of Plan 9's innovations, at least in my view). And while the /net file system is incredibly useful, its power is much diminished when you can't assume that the world speaks 9P, which will seemingly never be the case. And a variety of Plan 9's choices are impossible without ignoring POSIX compatibility, which causes problems of its own.

Unix rode the growth of servers in the seventies and eighties, and is by now rather entrenched (see the History chapter in Art of Unix Programming); much like the WIMP GUI metaphor rode the personal computing wave. I think both of these areas are by now too old and entrenched for us to simply replace the current monopolies there. And I've come to terms with the fact that Plan 9 will never, ever become the standard OS --- it is simply not designed for any battlegrounds except those it has already lost in. But I strongly urge anyone designing an OS (or, really, any sort of all-encompassing computer system) to look over the innovations and ideas of Plan 9. Luckily, I don't think anything like Windows or Unix will win in a totally new environment, because Windows and Unix aren't made for any environments but their own. So while Plan 9 never won, I think its ideas have a good chance of living on. And I await an exciting time of innovation in Operating Systems in the coming century.


    > And I've come to terms with the fact that Plan 9 will
    > never, ever become the standard OS --- it is simply
    > not designed for any battlegrounds except those it
    > has already lost in
Why is 9p cool? Are there any gaps - for example - lack of good support from scripting languages?

Have there been any serious attempts at creating a plan 9 rackspace or cloud service? Maybe you could get a hobbyist community grown around plan 9 + 9p.

I haven't looked at plan9 for ages, but remember there was some complexity about setting it up for ssh access.

Update: fgudin pointed me to this for hosting http://sdf.org/?join


9P is cool, but 9P is not Plan 9. 9P is the Plan 9 filesystem protocol.

Plan 9 lacks emacs, bash, C++, and a full-featured web browser; this alone is enough to make a lot of people post once on the mailing list bitching about it, then never come back. It does have vim now, although its use is not encouraged because we have other editors.

If you come at Plan 9 as though it's "just like Unix", you're going to have a bad time. Expecting to access it via ssh is one part of that--yes, we have an old ssh server, and yes we now have ssh v2 client support, but to access a Plan 9 system you want to use something like drawterm on Linux/Windows/Mac or cpu from another Plan 9 system.

I'm on sdf but have not played with my Plan 9 instance much. If you want to start off experimenting, I'd recommend just firing up VMware or Virtualbox instead. If you decide you want a physical Plan 9 system, well, I've found that it runs pretty well on every Thinkpad I've ever tried.

If you want more info, you can ask here or email me (check my profile).


> Plan 9 lacks emacs

That's a feature: http://plan9.bell-labs.com/magic/man2html/1/emacs


Are there reasons why it would be hard to port Unix software to it? Could something like Cygwin be developed to help make it easier to run Unix-ish software on it?


It comes with a library called APE (see [1]), which provides most POSIX-y things implemented in terms of Plan 9 native APIs. (For example, BSD socket operations are just functions wrapping the normal file reads and writes that you would do in native Plan 9 code.)

Some things do not work, though (permissions models are a bit different, and chroot is entirely unimplemented, for instance).

1: http://doc.cat-v.org/plan_9/4th_edition/papers/ape


That shouldn't prevent things like scripting languages or web browsers from being ported.


The difficult thing is getting it ported correctly. As several people have shown, it's easy to get Go running. However, getting Go running properly in such a way that you can continue to get updates from upstream is much more difficult. So you end up with ancient versions of gmake, gcc, python, mercurial, Go, etc., all mostly-functional but no longer getting bugfixes, because it's much more difficult to make a nice clean port that can be accepted into the upstream.

Oh, and given that the major web browsers all seem to be written in C++, and we don't have a C++ compiler... that's problematic.


Right. Even X11 got ported at some point, so there is not really a reason why it wouldn't work other than lack of interest.


I'm still rocking Plan9.


Where Plan 9 (and Inferno) live on most visibly is in the non-OS space:

- FUSE for file systems that are generated programmatically or exportable over a network. [http://code.google.com/p/macfuse/]

- The approach to concurrency and network programming in the Go language. [http://golang.org/]

Both of which show that problems that we used to leave to the OS can be elegantly solved in userland, which is a good thing.


The real shame is that Inferno never became Android. It was around early enough. Had the right feature set. I guess it was not in Google's backyard (America) and had lost its backbone a little too early.

I guess when Google was shopping for mobile OS they were also looking for a talent grab. With Inferno by 2005 most had already left.


True, but they already had all the talent from Plan 9.


By 2005 most of the team that created Plan 9 and Inferno at Bell Labs were already working at Google.

But they were working on other stuff, like Go.


Plan 9 was and is too good to be ignored. Indeed many of its features are now a must in Linux systems (for example good support for private namespaces) and the p9fs, designed to be virtualizable and distributed, is getting serious traction in the virtualization world (see 9p virtio).

And Plan 9 also has the best mascot around!


It helps that Eric Van Hensbergen, the guy behind 9p virtio, was a Plan 9 developer.


s/was/is/

He was one of the guys that ported Plan 9 to the Blue Gene supercomputer. He has many active projects in the Plan 9 community.


I've often thought that Plan 9's everything-is-a-file system with a consistent API for access is the missing link between Unix's everything-is-a-file and REST's consistent interface constraint. I think it's certainly arguable that Plan 9 shares many features with RESTful architectures.


This needs updating. "The Room", a film that came after the article was written, is obviously worse than either "Manos" or "Plan 9", as anyone who saw even its first minutes will readily attest.

edit: I'm sorry. I often forget my colleagues are humourless at this time in the morning.


I think "Birdemic" would be more appropriate, since the first half is about someone pitching a tech startup (and the second half is about being vomited on by animated GIFs of birds).


I can't disagree with facts. According to IMDB, http://www.imdb.com/title/tt1316037/ is worse than http://www.imdb.com/title/tt0368226/


Plan 9 wasn't a failure and it wasn't the future, directly.

It was a research OS that was never even close to being positioned to market, be it consumers, servers, etc -- although technology coming out of Plan 9 did make it into some commercial applications.

Indirectly, Plan 9 technologies (or inspired successors) made their way into real products. Linux's procfs is one notable example.


It seems that (almost?) everything that was planned out for Plan 9 is finding its way into operating systems.

The ideas were good, but somehow they had to take the long way around. It was way ahead of its time.


I remember an interview with one of the ex-Plan9 people now at Google (Rob Pike maybe?) talking about how he's switched to Linux, since that's what Google runs on, and finds it a strange experience, as if a bunch of bugs you thought you'd fixed 20 years ago have resurfaced in the main branch. But Linux does seem to be picking up a steady number of the features.


It was Rob Pike from the slashdot interview. http://interviews.slashdot.org/story/04/10/18/1153211/rob-pi...


Thanks for the link. (irrelevant: I like how he used wOOd to represent OO programming)


I always find it pretty sad that while everybody speaks about the integrated environments of Smalltalk and Plan 9, Niklaus Wirth's Oberon is almost left forgotten…


One of the Plan 9 papers actually refers to Oberon as an example of a similarly integrated system. I believe it's the one titled 'The Use of Name Spaces in Plan 9,' but I'm not certain.

Unfortunately, the server is down right now so I'm unable to check but, if you're curious, the various papers can be found here: http://plan9.bell-labs.com/sys/doc/


Probably the acme paper. Oberon definitely had the mouse chording; not sure if there's prior art in that regard.


For over a decade, dozens of actual people used Plan 9 as their main environment. (Many but not all of these people worked at Bell Labs.) I tend to believe that Smalltalk has also seen serious numbers of actual users. In contrast, who has ever used Oberon as their main software environment except perhaps students required to do so to pass some class?


If we're talking about popularity contests here, all the "contestants" aren't doing very well…


Smalltalk is a stealth weapon in the banking industry. I know Wells Fargo has a codebase based on it. I know this because roundabout 2008 I was pestered by recruiters looking for Smalltalk devs willing to move to cold, cold Minnesota. (Boston winters are more than enough for me so I bade them a courteous no-thank-you.)

Also, most of the currently trending methodology buzzwords (Extreme Programming, TDD, refactoring, etc.) emerged out of a community of Smalltalk programmers on real professional software projects.

What Smalltalk lacks is marketing hype.


    > There is a lesson here for ambitious system architects: the 
    > most dangerous enemy of a better solution is an existing
    > codebase that is just good enough.
Isn't this concept applicable to every system once it incorporates programmability?


Ugly now beats beautiful later.

An elegant solution requires a deep understanding of a problem, and that understanding takes a lot of data (awareness of cases), and then a lot of time and effort to create a simple theory of it and to create a simple solution embodying that theory.


The trick is to build ugly systems that are able to grow into beautiful ones.


That is quite a trick. The problem is that pretty soon, everyone depends on the behavior of the ugly warts in your original system, so now you're stuck with them lest you break the holy Compatibility.


I recently tried Plan 9 for a bit. Though I understand all the cool things it does and agree with a lot of it, one thing really disappointed me and struck me as totally brain-dead: there is no such thing as a terminal emulator or, it seems, anything like a text-only mode. In general it seems to me that achieving keyboard-only control (that is comfortable to use) is not really possible, at least for now.


A good introduction to the Plan 9 UI that I found, for anyone who is interested: http://vimeo.com/7748726


"Those who don't understand UNIX are condemned to reinvent it. Poorly." – Henry Spencer


Except that Plan 9 was designed by the people who built UNIX. Are you implying that the people who built UNIX don't understand it?


No, no! Quite the reverse.



