
Or syntax actually matters. Which ironically is where the lisp people went wrong.


Syntax does matter. That's one area where the Lisp people went right.

Lisp syntax is nice to work with irrespective of everything else, which was quite a discovery, and came as a surprise. The Lisp project itself didn't expect it; Lisp was supposed to be programmed in M-expressions. Furthermore, there was a second-generation project, Lisp 2, that provided an Algol-like syntax on top of the Lisp internals.

Because syntax matters, M-expressions and Algol syntax for Lisp fell by the wayside. Subsequent attempts have also met with very limited success.


> Lisp syntax is nice to work with irrespective of everything else

Disagree. Some people find it tolerable for the sake of Lisp's advantages (mainly macros). Very few find it outright preferable. The existence and popularity of reader macros are proof of this.

> Because syntax matters, M-expressions and Algol syntax for Lisp fell by the wayside.

Because they were bad syntax. And because syntax isn't the only thing that matters. A good syntax that didn't compromise the ease of writing macros would win out, if such a thing were possible.


In the computing industry as such, few people find it preferable, because few people know and work with Lisp.

In Lisp circles, I would say that the majority of people find Lisp syntax preferable. Opinions similar to "I'm only tolerating this to get to the macros" are hardly ever heard. The opinion "I wish this non-Lisp language I have to work with were written in S-expressions" is often heard.

Lisp syntax is uniform and consistent, easily formatted at different levels of line breaking, and easily manipulated by text editors.

There is no ambiguity due to associativity or precedence. You never wonder which subexpressions belong to which operator.
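To make the precedence point concrete: infix notation needs external conventions to resolve grouping, while a fully parenthesized prefix form carries its grouping in the text. The tuple encoding and `ev` helper below are a hypothetical illustration of the idea in Python, not anyone's actual Lisp reader:

```python
# Infix syntax needs precedence conventions: is 2 + 3 * 4
# equal to 14 or 20?
print(2 + 3 * 4)    # 14: by convention, * binds tighter than +
print((2 + 3) * 4)  # 20: parentheses override the convention

# A fully parenthesized prefix form, as in Lisp, leaves nothing
# to convention. A tiny evaluator over nested tuples (a made-up
# encoding, purely for illustration):
import operator

OPS = {'+': operator.add, '*': operator.mul}

def ev(expr):
    if isinstance(expr, tuple):
        op, *args = expr
        return OPS[op](*map(ev, args))
    return expr

print(ev(('+', 2, ('*', 3, 4))))  # 14: the grouping is explicit
```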

I got hooked on Lisp before Lisp macros became a meme; I liked working with it before learning about macros.

Lisp-syntax front ends for non-Lisp languages prove that there are communities of people who prefer that syntax. E.g. Hy or Hissp for Python, Fennel for Lua and such.

Lisps have a considerable amount of notation in addition to the parentheses. Not everything in the written source code is denoted by an open parenthesis and symbol. That's a strawman view of Lisp syntax. However, the notations are token notations that play along with the rest of the syntax.

Not all languages in the Lisp family or Lisp-likes have reader macros. Scheme doesn't have them, though the descendant Racket dialect has the #lang mechanism. The Lisp-like functional language Clojure doesn't have reader macros, yet is quite popular.

In TXR Lisp, I intentionally didn't provide reader macros.

Reader macros are not heavily used in Common Lisp, and not all uses of reader macros in Common Lisp programs and libraries are for the purpose of deviating from the concepts of Lisp syntax.

Reader macros have disadvantages:

- the Lisp printer doesn't know about the syntax and doesn't use it.

- external code tooling doesn't know about reader macros: everything from syntax coloring to identifier cross-referencing and whatnot. That makes reader macros disruptive.

- reader macros can clash. (In the Common Lisp FOSS landscape, there is now a "named readtables" library for disciplined use of multiple custom syntaxes.)


From

    print (x);
to

    (print x)
From

    if (cond) {
         do_a();
    } else {
        do_b();
    }
to

    (if cond
        (do_a)
        (do_b))
Really it is more a mindset than anything else.


What would those M-expressions have looked like?


I think Dylan is maybe the closest the PL world has come to a relatively "mainstream" Common-Lisp-Adjacent language, but with a "normal" infix syntax.


Mathematica.

Mathematica is the M-expression language. It's actually very expressive and has nice tricks like multimedia literals and the ability to do some fancy almost-TeX rendering in expressions, but deep down it's all sexps and lists and symbolic manipulation thereof (and an FFI).

(I think they tried to rebrand the language a couple of years back as "Wolfram", lol.)


Technically what Mathematica calls "lists" are one-dimensional arrays (aka vectors in Lisp), but not Lisp-like singly-linked lists.

This is an example of an M-Expression in the original definition of Lisp:

    [eq[third[(A B C (D . E))];C]→cons[D;cdr[((A 1 2 3) B C)]];T→car[x]]
which is roughly equivalent to the Lisp S-Expression

    (COND ((EQ (THIRD (QUOTE (A B C (D . E)))) (QUOTE C))
           (CONS (QUOTE D) (CDR (QUOTE ((A 1 2 3) B C)))))
          (T (CAR X)))
Which evaluates to

    (D B C)


Is the underlying implementation a necessary condition? Mathematica is a heap of shit, but it was deliberately based on lisp by sensible people and has astonishingly similar semantics. Afaik the language spec doesn't define anything incompatible with OG lisp and you can treat it like one without inconsistencies.


It was more influenced by computer algebra systems written in Lisp, like Macsyma (-> Maxima), REDUCE, and others.

> astonishingly similar semantics

The "Wolfram language" has at its core a rewrite rule system: expressions are rewritten by applying transformations to them. Lisp does not use anything like that. Lisp has an evaluator mechanism based on fixed evaluation rules (plus macro transformations, which are again Lisp functions).

As a result, code in the Wolfram language is difficult to (fully) compile. Good and extensive Lisp compilers have existed since the early 1960s. Current examples of complete compilers are SBCL for Common Lisp and Chez Scheme for Scheme.

The Wolfram language also has no formal spec (compare to something like Scheme), and neither the language nor its main implementation is open source. It's basically defined by its main implementation, while its proprietary language documentation reads as if written in such a way as to discourage independent implementations of the language.


I guess I consider term-rewriting to be just a fancy lambda calculus. See, e.g. https://cstheory.stackexchange.com/questions/36090/how-is-la...

I can't argue with any of your points, but I'd like to mention that Mathematica's internal compiler is pretty capable and if you do something like Plot[f, xs] it will automatically try to compile f before evaluating it at all the points.


Lisp is not an implementation of "lambda calculus".

> internal compiler is pretty capable

It depends; reading the documentation, one gets the impression that their compiler is very limited.


> Lisp is not an implementation of "lambda calculus".

Maybe you could explain that one for the benefit of myself and the other mortals.


I'm not who you replied to, however:

- The original Lisp was based on mutable singly linked lists. Lambda calculus has no lists, except for Church-encoded ones (just like lambda calculus only has Church encoded booleans and numbers). It also doesn't have mutability.

- Later, Common Lisp (which was a unification of the Lisp variants that had descended from Lisp 1.5) also grew an object system, and it was implemented using dynamic scoping and dynamic typing. That stuff definitely has nothing to do with lambda calculus.

If you want an implementation of the lambda calculus, you could try Haskell 98 or Standard ML. Those are based on System F [0], a kind of typed lambda calculus.

[0]: https://en.wikipedia.org/wiki/System_F


Though Lisp was originally implemented with lists that supported the RPLACA and RPLACD functions for practical reasons, the interpretation of Lisp semantics in Lisp was specified (before Lisp was implemented) without using these functions.


> The original Lisp was based on mutable singly linked lists. Lambda calculus has no lists, except for Church-encoded ones (just like lambda calculus only has Church encoded booleans and numbers).

While Lambda calculus only has 1 argument functions, you can use those to encode lists [1] and numbers in many ways, including unary, binary, and ternary [2].

[1] https://en.wikipedia.org/wiki/Church_encoding#List_encodings

[2] https://bruijn.marvinborner.de/std/Number_Ternary.bruijn.htm...
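The encodings referenced in [1] and [2] can be sketched with Python lambdas. The names below (zero, succ, to_int, pair, head, tail, true) are my own labels for the standard Church constructions, not taken from either reference:

```python
# A Church numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)   # decode by counting applications

two = succ(succ(zero))
print(to_int(two))  # 2

# Pairs (and hence lists) can be encoded the same way: a pair
# holds a and b and hands them to a selector function.
pair = lambda a: lambda b: lambda sel: sel(a)(b)
head = lambda p: p(lambda a: lambda b: a)
tail = lambda p: p(lambda a: lambda b: b)

p = pair(1)(2)
print(head(p), tail(p))  # 1 2

# Note the ambiguity: the selector used by head is the very same
# term as Church-encoded "true"; nothing in the calculus says
# which meaning is intended.
true = lambda a: lambda b: a
print(true(1)(2))  # 1
```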


Encodings built up within lambda calculus are not known to lambda calculus itself, and are ambiguous. There is no way to tell whether some encoding isn't just supposed to denote itself (the lambda calculus functional term that it is) rather than a number that it represents by convention.


> Encodings built up within lambda calculus are not known to lambda calculus itself, and are ambiguous.

Which is why it's strange for trealira to single out Church encoded booleans and numbers.


I wasn't trying to single them out; I was saying that lambda calculus has only lambda terms and applications, whereas Lisp always natively supported mutable linked lists from the beginning.

Coming back to that argument after a day, though, it admittedly seems like a weak argument; after all, Standard ML supports mutability and linked lists natively as well, and I gave that as an example of typed lambda calculus. Maybe a better argument is that it's dynamically typed, whereas I don't think there are dynamically typed formulations of lambda calculus.


The original lambda calculus had no type system at all; it was just a term rewriting system of variables and lambda abstractions with two rules:

    (λa.M)N → M[N/a]     (beta)
    λa.(M a) → M         (eta, where a is not free in M)

In this sense we can say it is "dynamically typed" in that it has a single type, the type of lambda terms, although this seems to sell short the memory safety guarantees associated with types in modern dynamically typed languages.

This is the system that comes to mind for me when I think of "lambda calculus", because it is the one that was most important in the history of computability and logic; it can express the same computable functions as Turing machines. System F is not Turing complete.


You and the other mortals picked up the idea somewhere that Lisp is an implementation of lambda calculus. That idea is almost totally incorrect.

1. Lambda calculus has only function terms. Lisp has many types: symbols, strings, conses, vectors, characters, integers, floating-point numbers, ...

2. Lambda calculus has no list processing.

3. Lambda calculus has no quote operator to operate on pieces of its own syntax as data. There is no straightforward way to write a meta-circular interpreter for lambda calculus in lambda calculus. (There are papers about it if you want to see how hard this is.) Lisp evaluation was defined in Lisp, before Lisp was even implemented, in a small number of definitions.

4. Lambda calculus has no functions with optional arguments, or variadic functions. Only functions of one argument, which is required (using currying to simulate more arguments).

5. Lambda calculus has no dynamic control transfers: throw/catch, restarts; no object system; no interactivity.

6. Lambda calculus has no symbols and no named entities: no global function environment. No dynamic/global variables. No mutable variables. No "goto" analogous to Common Lisp tagbody/go.


Here is a nice article about lambda calculus self-interpreters: http://anthonylorenhart.com/2021-09-04-Extremely-Simple-Self...

tromp is the local expert around here and may have something more enlightening to say.


See the following excerpt of a longer paper. Generally, the lambda calculus is a specific mathematical formalism; LISP is quite a bit more, and an actual programming language (it has more than just functions), and implementations don't follow the rules of lambda calculus for evaluation. We also need to distinguish between formal models of some parts of Lisp and actual implementations of a "real" Lisp with numbers, etc. We don't need to model numbers with functions, as with Church numerals.

Here follows an excerpt from the paper "Some History of Functional Programming Languages" by D. A. Turner. It talks about LISP, as invented/discovered by John McCarthy:

https://www.cs.kent.ac.uk/people/staff/dat/tfp12/tfp12.pdf

    -----
Some Myths about LISP

Something called “Pure LISP” never existed — McCarthy (1978) records that LISP had assignment and goto before it had conditional expressions and recursion — it started as a version of FORTRAN I to which these latter were added. LISP 1.5 programmers made frequent use of setq which updates a variable and rplaca, rplacd which update the fields of a CONS cell.

LISP was not based on the lambda calculus, despite using the word “LAMBDA” to denote functions. At the time he invented LISP, McCarthy was aware of (Church 1941) but had not studied it. The theoretical model behind LISP was Kleene’s theory of first order recursive functions.

The M-language was first order, as already noted, but you could pass a function as a parameter by quotation, i.e. as the S-expression which encodes it. Unfortunately, this gives the wrong binding rules for free variables (dynamic instead of lexicographic).

If a function has a free variable, e.g y in

    f = λx.x + y
y should be bound to the value in scope for y where f is defined, not where f is called.

McCarthy (1978) reports that this problem (wrong binding for free variables) showed up very early in a program of James Slagle. At first McCarthy assumed it was a bug and expected it to be fixed, but it actually springs from something fundamental — that meta-programming is not the same as higher order programming. Various devices were invented to get round this FUNARG problem, as it became known.

Not until SCHEME (Sussman 1975) did versions of LISP with default static binding appear. Today all versions of LISP are lambda calculus based.

    -----
A remark from me:

"Today all versions of LISP are lambda calculus based.", except where they are not, like evaluation rules, dynamic binding, data types, etc.

What we have now in most Lisps since the mid-80s is lexical binding and closures, but not exclusively. Scheme earlier called dynamically bound variables "fluids". CL still has dynamic binding, for example by default for global variables.
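The binding distinction Turner describes can be sketched in Python, which (like post-Scheme Lisps) is lexically scoped; the dynamic-binding half is simulated with an explicit environment stack, since Python has no real dynamic variables. All names below are illustrative:

```python
# Lexical (static) binding: a free variable in f refers to the
# binding in scope where f is *defined*.
def make_f():
    y = 1                      # definition-site binding of y
    return lambda x: x + y

f = make_f()
y = 100                        # a different, global y
print(f(0))  # 1: the closure captured the definition-site y

# Dynamic binding, simulated with an explicit environment stack:
# the free variable is looked up in whatever environment is in
# force where the function is *called* (the FUNARG behavior).
env = [{'y': 100}]
def f_dyn(x):
    return x + env[-1]['y']

env.append({'y': 1})           # caller rebinds y for this call
print(f_dyn(0))  # 1: the call-site binding wins
env.pop()
print(f_dyn(0))  # 100: back to the outer binding
```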


There are some examples taken from the Lisp 1.5 manual on Wikipedia: https://en.wikipedia.org/wiki/M-expression



