Spur – RISC IV: The Lisp Multiprocessor Workstation (thechipletter.substack.com)
135 points by oumua_don17 on June 18, 2024 | hide | past | favorite | 91 comments


"From the perspective of 2024 though, I think that the most appropriate reaction is to marvel at the ambition of the UC Berkeley team, commercially successful or not, and to be equally impressed by how relevant (with the possible exception of LISP) the ideas in SPUR would become decades later."

I thought it was pretty widely accepted in the programming language community that Lisp has had a massive influence on the development of programming languages in general. I know it's not the only game in town, as it were, and that there's been lots of other interesting developments, but still. To imply that it hasn't been "relevant" seems like an uninformed comment to me.


The influence of Lisp is widely known among PL researchers and practitioners (such as compiler writers), but many people in computing are unaware of PL research and have had little exposure to Lisp beyond Emacs configuration and maybe a few exercises in Scheme during college. Thus, the contributions of Lisp are constantly rediscovered by those who first come across it. Over the decades mainstream programming languages have incorporated features from Lisp, and there are still Lisp features that haven't made their way into mainstream programming languages.

Lisp is by no means the final word on programming languages, but its flexibility from its S-expression syntax to macros to its metaobject protocol makes it easier to bend the language to fit the problem rather than the usual approach of making the problem fit the implementation language, and this flexibility remains an enduring trait that continues to attract people.


Dynamic typing and garbage collection didn't really catch on


Gosh. Emblematic event: the journal Lisp and Functional Programming changed its name to drop Lisp, around early 90s iirc. So much of Lisp is super mainstream since before some programmers were born, now, that those aspects are no longer linked with it. And dynamic typing and GC are fucking ubiquitous compared to their status in the 80s.


I was annoyed at first but I think JonChesterfield was joking


It's an obvious joke from the perspective of javascript or literally true for rust enthusiasts. It's fascinating that various different branches of the language tree agree that lisp did some things right and completely disagree about which things those were.


Oh, thanks! Very thick of me.


I guess nobody is using Python or Java.


Yeah, and REPLs. Totally not a thing :-)

Remember when it was highly controversial for Java (and then C++) to get lambda expressions and many treated us as egg-headed academic nerds for wanting those things?

I sure do.


In Java, lambdas are just syntactic sugar for anonymous inner classes, with the same resulting limitations, so one can either argue that Java always had lambdas, but with a weirdly verbose syntax, or that Java does not really have lambdas (in which case C++ and Python also do not have lambdas, as the semantics are similar).


They are definitely not; they are implemented via invokedynamic.

"Implementing Lambda Expressions in Java with Brian Goetz"

https://www.youtube.com/watch?v=Uns1dm3Laq4

C++ lambdas follow the functor-class model, and they also capture the environment; the only difference is that we get to say what we want captured.


The point of a language feature is to have support in the syntax and in the compiler for it. Going down that lane, you could say that C has always had object-oriented polymorphism, because you can manually insert pointers to tables of functions in your data structures, and call it just verbose syntax for the virtual methods featured in C++.
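The dispatch-table argument can be made concrete. Here is a hand-rolled "vtable" sketched in Python rather than C (purely illustrative; all names are invented), showing what the verbose, manual version of polymorphic dispatch looks like:

```python
# A hand-rolled dispatch table: each "object" carries a table of functions,
# mimicking how C code can emulate virtual methods with function pointers.
# (Illustrative sketch; all names here are invented.)
def dog_speak(obj):
    return "woof"

def cat_speak(obj):
    return "meow"

dog = {"vtable": {"speak": dog_speak}}
cat = {"vtable": {"speak": cat_speak}}

def send(obj, selector):
    # the "verbose syntax" for what a method call does under the hood
    return obj["vtable"][selector](obj)

print(send(dog, "speak"))  # woof
print(send(cat, "speak"))  # meow
```

It works, but every call site has to spell out the dispatch machinery that a language feature would supply in syntax and check in the compiler.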


Or syntax actually matters. Which ironically is where the lisp people went wrong.


Syntax does matter. That's one area where the Lisp people went right.

Lisp syntax is nice to work with irrespective of everything else, which was quite a discovery, and one that came as a surprise. The Lisp project itself didn't expect it; Lisp was supposed to be programmed in M-expressions. Furthermore, there was a second-generation project, Lisp 2, that provided an Algol-like syntax on top of the Lisp internals.

Because syntax matters, M-expressions and Algol syntax for Lisp fell by the wayside. Other subsequent attempts also met with very limited success.


> Lisp syntax is nice to work with irrespective of everything else

Disagree. Some people find it tolerable for the sake of lisp advantages (mainly macros). Very few find it outright preferable. The existence and popularity of reader macros is proof of this.

> Because syntax matters, M-expressions and Algol syntax for Lisp fell by the wayside.

Because they were bad syntax. And because syntax isn't the only thing that matters. A good syntax that didn't compromise the ease of writing macros would win out, if such a thing were possible.


In the computing industry as such, few people find it preferable, because few people know and work with Lisp.

In Lisp circles, I would say that the majority of people find Lisp syntax preferable. Opinions like "I'm only tolerating this to get to the macros" are hardly ever heard. The opinion "I wish this non-Lisp language I have to work with were written in S-expressions" is often heard.

Lisp syntax is uniform, consistent, easily formatted in different levels of line breaking, easily manipulated by text editors.

There is no ambiguity due to associativity or precedence. You never wonder which expressions belong to which operator.

I got hooked on Lisp before Lisp macros became a meme; I liked working with it before learning about macros.

Lisp-syntax front ends for non-Lisp languages prove that there are communities of people who prefer that syntax. E.g. Hy or Hissp for Python, Fennel for Lua and such.
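As a rough illustration of how little machinery the uniform syntax needs, here is a toy S-expression reader (a minimal sketch, not any particular Lisp's reader; real readers handle strings, quotes, numbers, and much more):

```python
# A toy S-expression reader: the whole grammar is "atom or parenthesized
# list", which is why editors and tooling find Lisp source so easy to
# manipulate. (Minimal sketch; not a real Lisp reader.)
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    return tok

print(read(tokenize("(if cond (do-a) (do-b))")))
# ['if', 'cond', ['do-a'], ['do-b']]
```

About fifteen lines to go from text to a tree you can walk, which is the structural editing story in a nutshell.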

Lisps have a considerable amount of notation in addition to the parentheses. Not everything in the written source code is denoted by an open parenthesis and symbol. That's a strawman view of Lisp syntax. However, the notations are token notations that play along with the rest of the syntax.

Not all languages in the Lisp family or Lisp-likes have reader macros. Scheme doesn't have them, except the descendant Racket dialect which has the #lang thing. The Lisp-like functional language Clojure doesn't have reader macros, yet is quite popular.

In TXR Lisp, I intentionally didn't provide reader macros.

Reader macros are not heavily used in Common Lisp. Not all uses of reader macros in Common Lisp programs and libraries are for the purpose of deviating from the concepts of Lisp syntax.

Reader macros have disadvantages:

- the Lisp printer doesn't know about the syntax and doesn't use it.

- external code tooling doesn't know about reader macros: everything from syntax coloring to identifier cross-referencing and whatnot. That makes reader macros disruptive.

- reader macros can clash. (In the Common Lisp FOSS landscape, there is now a "named readtables" module for disciplined use of multiple custom syntaxes).


From

    print (x);
to

    (print x)
From

    if (cond) {
         do_a();
    } else {
        do_b();
    }
to

    (if cond
        (do_a)
        (do_b))
Really it is more a mindset than anything else.


What would those M-expressions have looked like?


I think Dylan is maybe the closest the PL world has come to a relatively "mainstream" Common-Lisp-Adjacent language, but with a "normal" infix syntax.


Mathematica.

Mathematica is the M-expression language. It's actually very expressive and has nice tricks like multimedia literals and the ability to do some fancy almost-TeX rendering in expressions, but deep down it's all sexps and lists and symbolic manipulation thereof (and an FFI).

(I think they tried to rebrand the language a couple of years back as "Wolfram", lol.)


Technically what Mathematica calls "lists" are one-dimensional arrays (aka vectors in Lisp), but not Lisp-like singly-linked lists.

This is an example of an M-Expression in the original definition of Lisp:

    [eq[third[(A B C (D . E))];C]→cons[D;cdr[((A 1 2 3) B C)]]; T→car[x]]
which is roughly equivalent to the Lisp S-Expression

    (COND ((EQ (THIRD (QUOTE (A B C (D . E)))) (QUOTE C))
           (CONS (QUOTE D) (CDR (QUOTE ((A 1 2 3) B C)))))
          (T (CAR X)))
Which evaluates to

    (D B C)


Is the underlying implementation a necessary condition? Mathematica is a heap of shit, but it was deliberately based on lisp by sensible people and has astonishingly similar semantics. Afaik the language spec doesn't define anything incompatible with OG lisp and you can treat it like one without inconsistencies.


It was more influenced by computer algebra systems written in Lisp, like Macsyma (-> Maxima), REDUCE, and others.

> astonishingly similar semantics

The "Wolfram language" has at its core a rewrite-rule system. Expressions are rewritten by repeatedly applying transformations to them. Lisp does not use anything like that. Lisp has an evaluator mechanism, based on fixed evaluation rules (plus macro transformations, which are again Lisp functions).
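That rewrite-to-fixpoint style of evaluation can be sketched with a toy rewriter over nested tuples (purely illustrative; the rules below are invented and are not Wolfram semantics):

```python
# A toy term rewriter: expressions are nested tuples, and rules are tried
# repeatedly until nothing changes -- evaluation as rewriting, rather than
# a fixed eval/apply rule. (Illustrative sketch; rules are invented.)
def rewrite(expr, rules):
    # rewrite subexpressions first, then try the rules at this node
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e, rules) for e in expr)
    for rule in rules:
        out = rule(expr)
        if out is not None:
            return rewrite(out, rules)  # a rule fired: rewrite the result
    return expr

def plus_zero(e):   # Plus[x, 0] -> x
    if isinstance(e, tuple) and len(e) == 3 and e[0] == "Plus" and e[2] == 0:
        return e[1]

def fold_plus(e):   # Plus[m, n] -> m + n for literal integers
    if isinstance(e, tuple) and len(e) == 3 and e[0] == "Plus" \
            and isinstance(e[1], int) and isinstance(e[2], int):
        return e[1] + e[2]

print(rewrite(("Plus", ("Plus", 1, 2), 0), [plus_zero, fold_plus]))  # 3
```

The open-ended "any rule may fire anywhere, at any time" nature of this model is part of why such languages are hard to compile fully ahead of time.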

As a result, code in the Wolfram language is difficult to (fully) compile. Good and extensive Lisp compilers have existed since the early 60s. Current examples of complete compilers are SBCL for Common Lisp and Chez Scheme for Scheme.

The Wolfram language also has no formal spec (compare something like Scheme), and neither the language nor its main implementation is open source. It's basically defined by that implementation, while its proprietary documentation reads as if it were written in such a way as to discourage independent implementations of the language.


I guess I consider term-rewriting to be just a fancy lambda calculus. See, e.g. https://cstheory.stackexchange.com/questions/36090/how-is-la...

I can't argue with any of your points, but I'd like to mention that Mathematica's internal compiler is pretty capable and if you do something like Plot[f, xs] it will automatically try to compile f before evaluating it at all the points.


Lisp is not an implementation of "lambda calculus".

> internal compiler is pretty capable

It depends; reading the documentation, one gets the impression that their compiler is very limited.


> Lisp is not an implementation of "lambda calculus".

Maybe you could explain that one for the benefit of myself and the other mortals.


I'm not who you replied to, however:

- The original Lisp was based on mutable singly linked lists. Lambda calculus has no lists, except for Church-encoded ones (just like lambda calculus only has Church encoded booleans and numbers). It also doesn't have mutability.

- Later, Common Lisp (which was a unification of the Lisp variants that had descended from Lisp 1.5) also grew an object system, and it was implemented using dynamic scoping and dynamic typing. That stuff definitely has nothing to do with lambda calculus.

If you want an implementation of the lambda calculus, you could try Haskell 98 or Standard ML. Those are based on System F [0], a kind of typed lambda calculus.

[0]: https://en.wikipedia.org/wiki/System_F


Though Lisp was originally implemented with lists that supported the RPLACA and RPLACD functions for practical reasons, the interpretation of Lisp semantics in Lisp was specified (before Lisp was implemented) without using these functions.


> The original Lisp was based on mutable singly linked lists. Lambda calculus has no lists, except for Church-encoded ones (just like lambda calculus only has Church encoded booleans and numbers).

While Lambda calculus only has 1 argument functions, you can use those to encode lists [1] and numbers in many ways, including unary, binary, and ternary [2].

[1] https://en.wikipedia.org/wiki/Church_encoding#List_encodings

[2] https://bruijn.marvinborner.de/std/Number_Ternary.bruijn.htm...
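The encodings are easy to demonstrate with ordinary closures. Here is a sketch of Church numerals and Church pairs using only one-argument functions, as in the lambda calculus (the decoder `to_int` is just for inspection and is not part of the encoding):

```python
# Church encodings with one-argument closures only, as in the lambda
# calculus. A numeral n means "apply f n times"; a pair is a function
# waiting for a selector. (Illustrative sketch.)
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    # decode a Church numeral for inspection (not part of the encoding)
    return n(lambda k: k + 1)(0)

pair = lambda a: lambda b: lambda sel: sel(a)(b)
first = lambda p: p(lambda a: lambda b: a)
second = lambda p: p(lambda a: lambda b: b)

two = succ(succ(zero))
print(to_int(two))                          # 2
p = pair(two)(succ(two))
print(to_int(first(p)), to_int(second(p)))  # 2 3
```

Note how multi-argument functions are simulated by currying (`pair` takes its two components one at a time), which is the same trick lambda calculus uses throughout.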


Encodings built up within lambda calculus are not known to lambda calculus itself, and are ambiguous. There is no way to tell whether some encoding isn't just supposed to denote itself (the lambda calculus functional term that it is) rather than a number that it represents by convention.


> Encodings built up within lambda calculus are not known to lambda calculus itself, and are ambiguous.

Which is why it's strange for trealira to single out Church encoded booleans and numbers.


I wasn't trying to single them out; I was saying that lambda calculus has only lambda terms and applications, whereas Lisp always natively supported mutable linked lists from the beginning.

Coming back to that argument after a day, though, it admittedly seems like a weak argument; after all, Standard ML supports mutability and linked lists natively as well, and I gave that as an example of typed lambda calculus. Maybe a better argument is that Lisp is dynamically typed, whereas I don't think there are dynamically typed formulations of the lambda calculus.


The original lambda calculus had no type system at all; it was just a term rewriting system of variables and lambda abstractions with two rules:

    (λa.M)N → M[N/a]    (beta)
    λa.(M a) → M        (eta, when a is not free in M)

In this sense we can say it is "dynamically typed" in that it has a single type, the type of lambda terms, although this seems to sell short the memory-safety guarantees associated with types in modern dynamically typed languages.

This is the system that comes to mind for me when I think of "lambda calculus" because it is the one that was most important in the history of computability and logic, it can express the same computable functions as Turing machines. System F is not Turing complete.


You and the other mortals picked up the idea somewhere that Lisp is an implementation of lambda calculus. It was almost totally incorrect.

1. Lambda calculus has only function terms. Lisp has many types: symbols, strings, conses, vectors, characters, integers, floating-point numbers, ...

2. Lambda calculus has no list processing.

3. Lambda calculus has no quote operator to operate on pieces of its own syntax as data. There is no straightforward way to write a meta-circular interpreter for lambda calculus in lambda calculus. (There are papers about it if you want to see how hard this is.) Lisp evaluation was defined in Lisp before Lisp was even implemented, in a small number of definitions.

4. Lambda calculus has no functions with optional arguments, or variadic functions. Only functions of one argument, which is required (using currying to simulate more arguments).

5. Lambda calculus has no dynamic control transfers: throw/catch, restarts; no object system; no interactivity.

6. Lambda calculus has no symbols and no named entities: no global function environment. No dynamic/global variables. No mutable variables. No "goto" analogous to Common Lisp tagbody/go.


Here is a nice article about lambda calculus self-interpreters: http://anthonylorenhart.com/2021-09-04-Extremely-Simple-Self...

tromp is the local expert around here and may have something more enlightening to say.


See the following excerpt of a longer paper. Generally, the lambda calculus is a specific mathematical formalism; LISP is quite a bit more: an actual programming language (it has more than just functions), and implementations don't follow the rules of lambda calculus for evaluation. We also need to distinguish between formal models for some of Lisp and actual implementations of a "real" Lisp with numbers, etc. We don't need to model numbers with functions, as with Church numerals.

Here follows an excerpt from the paper "Some History of Functional Programming Languages" by D. A. Turner. It talks about LISP, as invented/discovered by John McCarthy:

https://www.cs.kent.ac.uk/people/staff/dat/tfp12/tfp12.pdf

    -----
Some Myths about LISP

Something called “Pure LISP” never existed — McCarthy (1978) records that LISP had assignment and goto before it had conditional expressions and recursion — it started as a version of FORTRAN I to which these latter were added. LISP 1.5 programmers made frequent use of setq which updates a variable and rplaca, rplacd which update the fields of a CONS cell.

LISP was not based on the lambda calculus, despite using the word “LAMBDA” to denote functions. At the time he invented LISP, McCarthy was aware of (Church 1941) but had not studied it. The theoretical model behind LISP was Kleene’s theory of first order recursive functions.

The M-language was first order, as already noted, but you could pass a function as a parameter by quotation, i.e. as the S-expression which encodes it. Unfortunately, this gives the wrong binding rules for free variables (dynamic instead of lexicographic).

If a function has a free variable, e.g y in

    f = λx.x + y
y should be bound to the value in scope for y where f is defined, not where f is called.

McCarthy (1978) reports that this problem (wrong binding for free variables) showed up very early in a program of James Slagle. At first McCarthy assumed it was a bug and expected it to be fixed, but it actually springs from something fundamental — that meta-programming is not the same as higher order programming. Various devices were invented to get round this FUNARG problem, as it became known.

Not until SCHEME (Sussman 1975) did versions of LISP with default static binding appear. Today all versions of LISP are lambda calculus based.

    -----
A remark from me:

"Today all versions of LISP are lambda calculus based.", except where they are not, like evaluation rules, dynamic binding, data types, etc.

What we have now in most Lisps since the mid-80s is lexical binding and closures, but not exclusively. Scheme earlier called dynamically bound variables "fluids". CL has them, for example by default for global variables.
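The lexical rule from the excerpt, that a free variable is bound where the function is defined rather than where it is called, is exactly what closures give you. A minimal Python sketch (names are illustrative):

```python
# Lexical (static) binding: the free variable y is captured where f is
# defined, not looked up where f is called.
def make_f():
    y = 2                  # the y in scope at f's definition site
    return lambda x: x + y

f = make_f()
y = 100                    # a different y at the call site
print(f(1))  # 3 -- dynamic binding would have seen y = 100 and printed 101
```

Under the dynamic binding of early LISP, the call would instead have picked up whatever `y` was bound to at the call site, which is the FUNARG problem Slagle ran into.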


There are some examples taken from the Lisp 1.5 manual on Wikipedia: https://en.wikipedia.org/wiki/M-expression


Yeah, I moved on from Java before lambdas were standardized, so can't comment much but what was painful was watching just how bloody long it took to get them into the language.


I miss those days of deep granular CS projects that strove to create the most efficient minimalist systems possible - seems the opposite of today's prolific jungles of libraries and linkers. Then again I consider Jonathan Blow a prophet in the desert of the real...


The emperor has no clothes.

Numpy 2.0 came out two days ago and it's chaos in the whole AI ecosystem. I'm not sure how much money we're wasting on that, but I wouldn't be surprised if it's on the order of a billion dollars - suppose there are 50,000 people getting paid on the order of $1,000 per day, each spending two weeks dealing with fires over the next year: $500,000,000


Did something major break on 2.0?


They made breaking changes in the ABI... not like there weren't warnings. "Breaking changes to the NumPy ABI. As a result, binaries of packages that use the NumPy C API and were built against a NumPy 1.xx release will not work with NumPy 2.0. On import, such packages will see an ImportError with a message about binary incompatibility.

It is possible to build binaries against NumPy 2.0 that will work at runtime with both NumPy 2.0 and 1.x. See NumPy 2.0-specific advice for more details."

https://numpy.org/devdocs/dev/depending_on_numpy.html#numpy-...


> They made breaking changes in the ABI...

They bumped the major number. That's fair play. There has always been a lot of slouching wrt versions in python. That's not numpy's fault. Too bad they're getting the black eye for it. They could have avoided it by making a new dependency name ("numpy2"), but that sets a shameful precedent, so I give them credit for not copping out.


It's a tough situation. Once you have an installed base, your hands get tied. I miss the early days when I could rework the UI and UX of the product regularly. Now there is a lot of legwork to make sure we're not changing existing workflows (documented or undocumented). That said, I like that we're making money, so I can't complain too much.

I viscerally understand how companies can get stuck and unable to change their products significantly because their installed base will revolt. It's a tough situation.


It's a self-inflicted situation for dependees. Tools in widespread use, JupyterLab for instance, have users tossing dependencies onto the pile while offering next to no guidance on dependency version control, as if expecting users to ponder such issues were unreasonable.

Well, if that's how you're going to behave, foregoing entirely obvious and solved engineering problems, you get to glue all the pieces back together when reality asserts itself. I don't imagine for one minute numpy folks didn't know what this would look like, and they did it anyway. Good on them. Breaking changes are necessary. That's unqualified. Necessary. If you can't afford a big blowup in your work then manage your work.


One of the reasons I made pip-chill was to have a tool that makes it easy for me to keep a minimal set of non-versioned requirements for canary builds on the latest-and-greatest versions. When the canary breaks, you realise you have work to do.


This is a symptom of cruft. Clean breaks are needed to keep software fresh.

The Tree of Elegance needs to be refreshed with the blood of both users and programmers. Or something like that.


Cruft is also known as stability.

We'd not get very far if the Linux kernel made breaking ABI changes every year.


They made sure you’d get an error rather than subtle and confusing bugs. They deserve plenty of kudos for how they decided to move forward.


It broke our builds. Because Numpy doesn't use semver there's no way to specify compatible versions constraints. One of our dependencies had a `numpy>=1.something` constraint but wasn't actually compatible with Numpy 2.

We really need to use a lock file, but `pip` doesn't support that - you need to use a better package installer like `uv` to get this standard feature.
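One mitigation is to cap the major version yourself rather than trust a dependency's loose `>=` constraint. A sketch (the bounds here are illustrative; pin to whatever you have actually tested against):

```shell
# constraints.txt -- hold NumPy below the breaking major release until
# all dependencies are known to work with 2.x (illustrative bounds)
numpy>=1.24,<2

# apply the constraint on top of your normal requirements:
#   pip install -r requirements.txt -c constraints.txt
```

A proper lock file (e.g. one produced by a resolver that pins every transitive dependency) gives the same protection more systematically.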


Is there any modern CHERI-type approach to this? I don't know if the idea is a dead end or not, but I'd be very interested to see a modern processor that is made with something more symbolic and lispy than current x86/aarch64 designs which still feel to me like they're made for C.


Back in the 80's Intel tried to make a CPU designed for high level languages.

https://en.wikipedia.org/wiki/Intel_iAPX_432

It was a commercial failure.

The iAPX 432 programming model is a stack machine with no visible general-purpose registers. It supports object-oriented programming, garbage collection and multitasking as well as more conventional memory management directly in hardware and microcode. Direct support for various data structures is also intended to allow modern operating systems to be implemented using far less program code than for ordinary processors.


SPARC felt like it was made for C. x86 feels it was made for punishing developers.


Actually, SPARC has four instructions directly meant for efficient implementation of Lisp/Smalltalk that came from SOAR and SPUR. Using them on top of some kind of Unix is, shall we say, problematic (in the same way that x86 BOUND is mostly useless), together with a few other "fast conditional trap" instructions in the SPARC ISA, but it is there.


> Actually SPARC have four instructions that are directly meant for efficient implementation of Lisp/Smalltalk that came from SOAR and SPUR.

You are talking about the tagged add and subtract instructions, TADDcc/TSUBcc, and their trapping versions TADDccTV/TSUBccTV.

> Using that on top of some kind of Unix is shall we say problematic (in same way that x86 BOUNDS is mostly useless), together with few other “fast conditional trap” instructions in SPARC ISA, but it is there.

I've never tried using it, but why is it "problematic" on Unix? From what I understand, on both Solaris SPARC and Linux SPARC, the kernel translates the tag-overflow exception into a SIGEMT signal with si_code=EMT_TAGOVF, so you can catch the tag-overflow exception by installing a SIGEMT handler. On Linux SPARC, I think SIGEMT is only used for tag-overflow, whereas on Solaris it is also triggered by CPU performance counter overflow (EMT_CPCOVF).

I think TADDccTV/TSUBccTV are problematic in the sense that they are officially deprecated, and only supported for 32-bit overflow, not 64-bit overflow. The docs say to use BPVS instead (so branch on overflow flag instead of trapping an overflow exception)

All that said, this all has very fading relevance now, given how moribund SPARC is. Oracle has no plans to introduce any further SPARC CPUs, the SPARC CPUs they currently sell were released 7 years ago, and I expect they'll stop selling them sooner or later. Fujitsu has announced they'll end SPARC server sales in 2029, which is only 5 years away now, and although they were at one point talking about one last CPU after the current M12 generation, I doubt that's still happening.


The inefficiency comes from going through the kernel, which then dispatches a signal, and the signal handler has to analyze what exactly happened; that is not merely a slow path, but a ridiculously slow path.

OTOH, my view is somewhat Lisp-centric, and just implementing + by passing the arguments to TADDccTV would be problematic; in the Smalltalk world, implementing SmallInteger>>#+ like that makes sense.


And C++ was made for punishing silicon. A perfect match.


Software engineers had their revenge. For now.


I fell through the rabbit hole on this one, and found this post with a delightful video from the The Computer Chronicles about RISC, circa 1986:

https://www.youtube.com/watch?v=DIccm7H3OA0

I'm so glad we have these sorts of things archived!



FWIW, that[0] links via abstracts to tech report pdfs[1][2][3].

Hmm, looks like https://www.softwarepreservation.org/projects hasn't been submitted to HN in some years.

[0] Parallel Lisps / SPUR Lisp https://www.softwarepreservation.org/projects/LISP/parallel#...

[1] SPUR Lisp: Design and Implementation https://www2.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-373...

[2] Features for Multiprocessing in SPUR Lisp http://www2.eecs.berkeley.edu/Pubs/TechRpts/1988/CSD-88-406....

[3] Implementation of Multiprocessing SPUR Lisp http://www2.eecs.berkeley.edu/Pubs/TechRpts/1988/CSD-88-459....


Register Allocation in the SPUR Lisp Compiler: https://dl.acm.org/doi/pdf/10.1145/13310.13337

Design Decisions in SPUR: https://pages.cs.wisc.edu/~markhill/papers/computer86_spur.p...

SPUR: A VLSI Multiprocessor Workstation: https://www2.eecs.berkeley.edu/Pubs/TechRpts/1986/CSD-86-273...

Multiprocessing extensions in Spur Lisp: https://ieeexplore.ieee.org/document/31651


Opening up the topic: is there any work, nowadays, on making a CPU custom-made for one language, implementing in hardware some of its mechanisms? Or at least FPGA implementations?

Apart from CHERI extensions, and a few research papers on hardware-accelerated garbage collection (which I find super cool, and wonder why it isn't getting into actual production, given e.g. how stable Java GC is and how many huge companies use Java. Or maybe offloading it to an FPGA? The same way we have GPUs and TPUs for certain classes of computation?).


Greenarrays comes to mind, it's a Forth-oriented chip: https://news.ycombinator.com/item?id=23142322

The article doesn't really mention transputers, which existed at the time and were remarkably similar in vision, with parallel multiprocessing, hardware network links, the Occam language, a ground-up Helios OS, and a custom graphics card (Blossom, which would lead to the VGA standard).

Having a 3-element hardware stack somewhat restricts the use of languages on it - I imagine that Occam is similar to Forth in operation?


Occam looks like a concurrent Pascal, not like a Forth. The transputer had a 3-element hardware stack, but it was used more like a cache for the workspace (a kind of memory stack). You can still find around the manual "Transputer Instruction Set - a compiler writer's guide", which explains how you would do that (section 5.3, Expression Evaluation).

Apart from Occam, there were C, C++ and Fortran compilers. Targeting the transputer is no more difficult than any other stack machine (like the JVM, the .NET CLR, CPython, or Pascal p-code).

The weird/interesting thing about the transputer is that it is also an operating system: two task queues (high/low priority), preemptive scheduling and communication through channels (that can be one of the 4 serial ports or memory based).


Oh, thanks, I had forgotten about those crazy forthers (forthists? forthians?)!


Also,

Forthsider, from the Firth of Forth?


Goth -> Gothic

Therefore

Forth -> Forthic


North -> Norsemen

Forth -> Forsemen

(The less gendered "Fordlander" is also acceptable)


They're not mainstream, but IBM's mainframes since Z14 have something called the Guarded Storage Facility which provides hardware assisted Java Garbage Collection[0], and Oracle's M8 has some hardware acceleration for Java "Streams"[1].

[0] https://www.ibm.com/support/pages/pause-less-garbage-collect...

[1] https://www.oracle.com/a/ocom/docs/sparc-t8-m8-server-archit...


Azul was actually selling servers with hardware-accelerated GC and other interesting features for Java: https://www.azul.com/newsroom/azul-systems-to-unveil-industr...


Then they found a way to (ab)use the amd64 MMU to do more or less the same thing that they had a custom CPU architecture for (i.e., GC barriers in hardware).


A real shame. When they figured that out, their innovative hardware was doomed.


I heard (but don't know any details) that Apple M series processors have special instructions which are useful for objc_msgsend and other functions called frequently by Objective-C.


I remember someone mentioning an attempt to make all message passing in Smalltalk asynchronous running on separate threads, but I don't think anyone built a massively multithreaded CPU for that (besides, not that many messages are in flight at any given time, at least not on a desktop).


Could have meant RoarVM. https://github.com/smarr/RoarVM


There are projects for this, but right now Moore's law is still shuffling along well enough that by the time you'd get to market, general processors will still be better - remembering that you need on the order of $1b to design a modern CPU.

Once it's truly dead, pushing more features into silicon will be the only way we can get speedups, and this will become a major research area again.


We could argue the same about highly parallel computations, yet discrete GPU cards exist. I am also old enough to remember the same thing attempted for physics in games, PhysX, which had its own cards (PPU) built by ASUS and a couple others.

There is also hardware acceleration for many audio and video codecs. "GC as a codec" doesn't strike me as something crazy: both are upgraded from time to time, but both are stable enough that hardware implementations are relevant over several years. Android phones would certainly benefit from it!


I think "do this matrix math and draw these polygons Really Really Fast" is specialized enough that GPUs are on the whole compelling as specialized hardware. That and this is a massive huge market which can benefit from economies of scale and Moore's law itself.

Accelerating a particular language runtime is a different story.

TBH people don't seem to have a problem generally with "throwing cycles away" and adopting languages like Python which have a pretty slow execution story. There's clearly not much of an economic advantage in optimization for processor throughput, otherwise more stuff would be getting [re]written in systems languages like C/C++/Rust/Zig etc, which would be a lot cheaper than developing custom hardware.

And frankly most of the stuff that gets funding and people seem to get excited about in our industry right now... doesn't need much CPU. It's mostly just glue languages waiting on I/O. Apart from the ML/LLM hype right now, which relies heavily on... GPUs.

I have a book here somewhere on the Linn Rekursiv hardware. Custom OO system in hardware in the 80s. Up there with Lisp machines as exotic and interesting "paths not taken".


Thanks, Rekursiv is very interesting, and the Wikipedia article gave me the keywords to dig further on the topic of "high-level language computer architecture".


This approach got a bad reputation by the late 1980s, thanks to some spectacular failures like the iAPX 432. The death of the Lisp Machine market wasn't spectacular in the same way, but Lisp Machines weren't a commercial success either, and contributed to the sentiment that tailoring the hardware for specific languages wasn't the way forward.

You'll still find people making the case that we have continued to specialize computer hardware for a specific language, that language being C. They kind of have a point. I see it as more symbiotic than that: C caught on in large part because its abstract machine was a good fit for real hardware†, and that real hardware is the way it is because it's a Pareto-optimal way to do computation.

Fact is that most languages don't have a semantics which could be accelerated much in hardware. Take Java as an example: it's possible to implement the JVM as a chip, but then you have a stack machine, and you can't JIT it onto a register architecture.
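The stack-vs-register point can be made concrete with a toy sketch. The Python below is purely illustrative (the names `run_stack`, `stack_code`, etc. are invented, and this is nothing like a real JVM interpreter or JIT): it evaluates a*b + c both as JVM-style stack bytecode and in the straight-line register form a JIT would want to emit.

```python
def run_stack(code, env):
    """Interpret a tiny stack bytecode: every operand flows through the stack."""
    stack = []
    for op, *args in code:
        if op == "load":
            stack.append(env[args[0]])
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

# JVM-flavored stack code for a*b + c (compare javap's iload/imul/iadd).
stack_code = [("load", "a"), ("load", "b"), ("mul",),
              ("load", "c"), ("add",)]

def run_registers(env):
    """The register form a JIT wants: no stack traffic at all."""
    t0 = env["a"] * env["b"]
    return t0 + env["c"]

env = {"a": 2, "b": 3, "c": 4}
print(run_stack(stack_code, env), run_registers(env))  # both print 10
```

The stack encoding is compact (a JVM design goal), but a hardware stack machine serializes everything through the top of the stack, whereas the register form exposes independent operations the hardware can schedule freely.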

What we do now is make the chip as fast as we can at doing the basic things a computer needs to do, and only that (this is the essence of RISC). That offloads making programs fast to compilers, which can do a better job of it if the instructions they're working with are very basic, and have a (reasonably) predictable duration and behavior. Itanium was the last serious attempt to disprove that thesis, and also failed rather spectacularly. The Mill is the latest contender, and well, I wish them luck.

That may not be the final word though, people should keep trying the "language on a chip" approach, and some still are. I have a hunch that Erlang semantics might be a good target for hardware-specific acceleration, there should be some degrees of freedom available from knowing that data is only shared between processes via a strict ABI. And just because implementing garbage collectors in hardware didn't really pay off in the 1980s doesn't mean that it's physically impossible to have a win with that approach. I'm just sketching out why you don't see that kind of thing much these days.
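For what it's worth, the share-nothing property being invoked can be sketched in a few lines of Python (not Erlang, and `Proc` with its mailbox is an invented stand-in): messages are copied on send, so each process's heap stays private and could in principle be collected, or managed by hardware, without coordinating with any other process.

```python
import copy
import queue

class Proc:
    """Toy stand-in for an Erlang process: a mailbox plus private data."""
    def __init__(self):
        self.mailbox = queue.Queue()

    def send(self, msg):
        # Messages are copied on send, so no mutable state is ever shared
        # between sender and receiver -- the "strict ABI" in question.
        self.mailbox.put(copy.deepcopy(msg))

    def receive(self):
        return self.mailbox.get()

p = Proc()
original = {"payload": [1, 2, 3]}
p.send(original)
received = p.receive()
original["payload"].append(4)   # mutating the sender's data...
print(received["payload"])      # ...leaves the receiver's copy at [1, 2, 3]
```

Because no cons cell (so to speak) is ever reachable from two heaps at once, a per-process GC never needs a global pause, which is exactly the kind of invariant hardware could exploit.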

† C has been described as a "portable assembly language" and that has become steadily less true. That would be a stronger reading of my statement than I intended.


If you told an alien visitor from Vega—or a computer architect with a time machine from 1980—that an Nvidia GPU was built to be a language on a chip, I wonder what they’d infer about the nature of that language?

I don’t think they’d describe CUDA. Something more APL-like, I’d imagine.

EDIT: Maybe … https://dl.acm.org/doi/pdf/10.1145/319838.319870


> "tailoring the hardware for specific languages"

"hardware" in quotes. A bunch of the machines had no/little language specific hardware. For example an Xerox Interlisp-D machine was the same hardware like the Smalltalk or Mesa system. The microcode was different. The microcode provided the instruction set and then the machine would boot either into the corresponding operating systems for Interlisp, Smalltalk or Mesa. https://en.wikipedia.org/wiki/Xerox_Star

Similar for the MIT CADR and some others, it was also a microprogrammed 32bit CPU.

Symbolics' CPU were also microprogrammed, but they added hardware features to it.

The SPUR (which is a RISC chip) mentioned is a more generic design, but with support for language features for Lisp. There were other chips in the making at that time, like the Symbolics Sunstone CPU, which was also a RISC design for Lisp, but which also did not reach the market.

> Take Java for an example: it's possible to implement the JVM as a chip, but then you have a stack machine, and you can't JIT it onto a register architecture.

One could do that, but it would be a more complex chip.

For Lisp CPUs, being "fast" on benchmarks was also not that much of a goal. Goals often were "fast" execution of a Lisp operating system (written in a dynamically typed and garbage-collected language), support for large address spaces, support for Lisp data types & data representations, generic operations (like generic arithmetic operations) and compact machine code.


There is JOP [1] for Java.

[1] https://www.jopdesign.com/


I don't think the 'cdr' instruction would have created a new list.


Correct.

  (let* ((x '(a b c))
         (y (cdr x))
         (z (copy-list (cdr x))))
    (format t "y=~A, z=~A~%" y z)
    (setf (nth 1 x) 'w)
    (format t "y=~A, z=~A~%" y z))
  =>
  y=(B C), z=(B C)
  y=(W C), z=(B C)


Just so you know, modifying quoted lists in Lisp is not dissimilar to setting string literals in C: it's undefined behavior, i.e., it's not guaranteed to work, or be portable if it does work, and it may or may not make your program behave strangely if it does work.

To make a mutable list, you could write (list 'a 'b 'c).


Yes, updating literals is asking for trouble, or at least can lead to some confusion.

If I replace '(a b c) in my example with (list 'a 'b 'c) I get the same results.


> If I replace '(a b c) in my example with (list 'a 'b 'c) I get the same results.

Yeah, your core point was right, just thought to let you know about that bit of undefined behavior.


I think you made a worthwhile point.

I recall seeing either on IRC or reddit someone asking about why their code wasn't working as expected, and it led back to needing to understand that the string literal in their code was getting updated - so likely similar to the UB in C that you mentioned. Until seeing that post, I wasn't really aware of that potential issue with literals, so I think it's good to point these things out. Recently, I saw it was mentioned in "Successful Lisp"; it might be mentioned in other books I've read, but maybe I just didn't pick up on it.


FTA: “SPUR was ahead of its time in building a multiprocessor system in the mid-1980s. IBM’s POWER4 processor from 2001 was the first multicore microprocessor, with Intel and AMD each following four years later.”

That’s comparing apples with oranges. The POWER4 had two cores on a single die (https://en.wikipedia.org/wiki/POWER4) while in this system (FTA) “a processor would consist of three custom VLSI designs and around two hundred other chips”

Multiprocessing systems are much older, for example C.mmp (https://dl.acm.org/doi/10.1145/1480083.1480098, https://en.wikipedia.org/wiki/C.mmp) from 1971 (possibly also a bit of apples and oranges, but if so, IMO less so than in this article)



