It's not specifically stated, but the assumption seems to be that we should sometimes make our code look closer to how we display math in papers or on a whiteboard.
I'm legitimately on the fence about this.
I recently re-watched Guy Steele's 2017 PPoPP talk on "Computer Science Metanotation", and aside from wanting to make CSM an unambiguous formal system, he specifically says at one point that he wants tools to support CSM as it appears (i.e. with stacked overbars, Gentzen-style inference rules, etc.) because "anything else is a translation".
And partly I get that. There is real cognitive work if you have to constantly translate back and forth between two representations of the "same" thing.
But should we favor readability or ease of interaction / modification? Keyboards give you a way to insert a sequence of characters. Notations that are not graphically linear (e.g. a symbol that has both a subscript and a superscript) create an ambiguity about how you input them. "Modes" where we display something different than what is typed can create ambiguity about how to edit them.
And if a tool only covers 90% of the notational convention you care about, it quickly gets frustrating as you repeatedly bump up against that boundary. I experience this in emacs org mode with "symbol" support.
You input characters using the same notation as LaTeX (e.g. \mu or \hbar) and then tab-autocomplete to Unicode. Even Jupyter notebooks support that convention. Most people in Julia's target audience already know LaTeX, so there is zero learning curve.
It did hugely simplify my scientific code, where I already had variable names like hnu, omega_squared, and k_prime_prime, which become arduous to check against long equations.
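The gain the parent describes isn't Julia-specific: Python also accepts many Unicode letters in identifiers. A minimal sketch (the quantities and values below are illustrative, not from the original comment):

```python
import math

# Unicode identifiers keep the code close to the source equations.
# Constant values here are illustrative.
ħ = 1.0545718e-34      # reduced Planck constant, J·s
ν = 5.0e14             # frequency, Hz
ω = 2 * math.pi * ν    # angular frequency

E = ħ * ω              # photon energy; reads like the formula E = ħω
# versus the ASCII spelling: E = hbar * 2 * math.pi * nu
```

Note that Python stops short of Julia here: superscripts such as ω² are not valid Python identifier characters, so `omega_squared` (or `ω_sq`) is still needed there.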
> Most people in Julia's target audience know latex already
Is Julia's target audience really that small? Either that or you'd be surprised how many people don't use LaTeX but do write scientific code and papers.
Even if you don't use LaTeX, the equation editors in e.g. Word use more or less the same symbol names Julia does (I remember typing symbols in by name in MS Word 2007 long before I started using LaTeX), even if they don't use any more of LaTeX than that.
And if people don't use enough symbols in their papers to memorise the typeable names, they probably don't want to use them in their code in the first place.
I don't know LaTeX and find it difficult to learn, but I've found it almost trivial to remember the LaTeX names for these symbols, especially since it's often only a small subset that you use, depending on your field. They've been quick to get into my muscle memory.
Just like with natural languages unrelated to yours, it's the grammar that really does your head in; needing just the vocabulary is Easy mode.
For physics, I think that's true. For other fields in general, it probably varies a lot (I'd love to see some numbers). We live in bubbles but we should try to remember that our bubbles don't represent the whole world.
Writing in the non-Julia world, my standard keyboard input is set to Unicode character interpretation so I can enter math symbols (mostly predicate logic and Z notation). I can believe it. Just enough to translate the algorithm of interest is enough for it to make sense.
IMO the people whose scientific papers are equation-heavy will have someone else do the editing, or the publisher will have a template. Latex could use a disruption, but who will do it...?
k'' is better than k_prime_prime, sure. But there's still lots of notation that isn't just a sequence of characters, and LaTeX-style representations are restricted to sequences of characters. A big formula in LaTeX source is hard to read compared to the rendered version. LyX would be a nicer representation.
No, the comment is that the included REPL has it, and that most editor plugins intended for Julia (they exist for Vim, Emacs, Sublime, Atom, and VS Code) add the same Unicode shortcuts, as well as the usual syntax highlighting, keyword completion, etc.
If you don't want to use them, but do want to enter Unicode, then you can do it the normal way.
But I feel like wanting to enter Unicode while not wanting to install a plugin that makes it easier is pretty weird.
I guess it could come up when editing someone else's code.
As a general rule, most libraries (including the standard library) make very limited use of Unicode in their APIs, so you don't have to use Unicode to work with them.
I would go so far as to say that if a library requires you to use Unicode, you should open an issue and get them to add an ASCII alias for that function. (The Julia standard library had just such an issue opened a few versions ago about the compose operator ∘, and now we have a `compose` function to match.)
I would also say that internally most packages make very limited use of Unicode.
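The ASCII-alias pattern the parent describes (a Unicode operator plus a typeable name for the same operation) is easy to sketch in any language. A hypothetical Python version, since Python identifiers can't be operator symbols like ∘:

```python
def compose(f, g):
    """ASCII-named function composition: compose(f, g)(x) == f(g(x)).
    This is the operation Julia writes infix as f ∘ g."""
    return lambda *args, **kwargs: f(g(*args, **kwargs))

inc = lambda x: x + 1
double = lambda x: 2 * x
h = compose(double, inc)   # h(x) == 2 * (x + 1)
```

Callers who like the symbolic form and those who don't both get a spelling they can type.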
Even with the plugins it is still a bit annoying to type.
Plus often it would be a less meaningful variable name.
E.g. why write θ when you could write `departing_ascent_angle` or something else that conveys context-specific meaning?
It's nice to have the option for when it is clear, but pleasingly, people only use it in moderation.
Emacs is actually why this is attractive to me. The latex input mode makes it easy to input a large set of Unicode that I care about, but my font not supporting them means that the fallback is used, which throws everything out of alignment, frequently messing up the line-height as well.
On the other hand, if a tool only covers 10% of the notation you care about because of a restriction like plain-text, it’s very difficult to document what you’re doing as you bump up against this boundary.
Humans have been using nonlinear notation in a wide variety of fields, but programming languages seem especially stuck on plain text everywhere, despite the utility of such notation. I would rather faff around for a minute or two figuring out how to change my integration limits in a comment than leave some horrendous documentation comment like "computes the integral from a to b of the blah blah blah blah", which, because you can't see the familiar notation, you first have to interpret before checking the function. (Several PhD students I know working in applied mathematics and physics will "translate" these comments from the code onto a scrap piece of paper in front of them before attempting to dig into the function.)
Personally I think it would be great if I could embed more rich text (styling, equations, and images) in comments to better communicate my intent. This stuff otherwise just winds up being left out (because it’s too hard), or in some other separate documentation which is difficult to keep in sync during development.
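Short of true rich text, one common halfway point is LaTeX markup in docstrings that a documentation tool (e.g. Sphinx, whose `:math:` role is used below) renders later. A sketch, with a made-up function:

```python
def trapezoid(f, a, b, n=1000):
    r"""Approximate :math:`\int_a^b f(x)\,dx` with the trapezoid rule.

    With a docstring renderer the reader sees the familiar integral
    instead of "computes the integral from a to b of ...".
    """
    h = (b - a) / n
    # endpoints weighted 1/2, interior points weighted 1
    return h * ((f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n)))
```

This keeps the notation next to the code, so it is at least harder for the two to drift out of sync than with separate documentation.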
I am sincerely of the opposite opinion. Code that I write on a whiteboard or in a paper or in a textbook is much easier to read than ASCII code.
The overbar example you give is bad notation independently of whether it is code or not. But Unicode indices and superscripts are amazingly useful, and .dot and .kron are simply terrible notation compared to the standard Unicode operators for the same operations.
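For concreteness, this is the operation ⊗ (the Kronecker product) denotes, as a pure-Python sketch over nested lists; in a language with Unicode operators the last line could be written infix as `A ⊗ B`:

```python
def kron(A, B):
    """Kronecker product (what ⊗ denotes) for matrices as nested lists:
    result[i*p + k][j*q + l] == A[i][j] * B[k][l], for B with p rows, q cols."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

I2 = [[1, 0], [0, 1]]     # 2x2 identity
X  = [[0, 1], [1, 0]]     # example matrix (Pauli X)
C  = kron(I2, X)          # block-diagonal: one copy of X per diagonal block
```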
This opinion most likely depends on your field and day job.
When I was at university, most of the software written there was single-purpose, rarely more than a couple thousand lines; you have the time to actually think about every step of the few algorithms that will be in there.
My last freelance gig was about digging into and fixing performance bugs, in a few days' time, in a multi-million-line codebase that I'd never seen before. There is absolutely no time to stop and think about ⊗ vs. ⊛ in that case.
Note that this is the main use case of Julia code: relatively short academic code, where more thought goes into coming up with the equations than into the code itself. For me it is not uncommon to spend a week doing the full derivation of a formula that is basically a one-liner in code.
Scala made this available and it was a mess, so to my knowledge it has been largely ignored by the community since. Early Scala projects were rife with this sort of impossible-to-read-or-maintain code.
It's just a very nice coding font.
Tons of OpenType features, like stylistic alternates.
But it doesn't go to excess with ligatures.
It has huge unicode coverage.
The letter shapes are nice.
It's just good.
The asterisk is far too heavy for anyone working in a language that makes heavy use of it.
I normally use Inconsolata at "11 point" (quotes because this is a nominal value, not actual) size; JuliaMono is larger when used at the same nominal size, but going down to 10pt results in a level of graininess that I don't find acceptable.
Great effort, but I'll stick with Inconsolata for now.
This font is beautiful and exactly what I was looking for.
I’ve used Courier as my coding and console font for a long time, as it had the right character design and stroke width choices to make reading really easy, even at small font sizes.
A lot of modern coding fonts are taller and thinner. I’ve tried Fira Code, Source Code Pro, Consolas, and others each for a week or so, and found myself struggling to read identifiers and punctuation as well as before.
This font however hits all the same points and is instantly readable for me. Thanks for sharing!
Courier was the default typeface of IBM typewriters for decades, monospaced mechanically just like everyone else's.
No difficulty producing columnar or tabular data on the first attempt was foreseen.
When FORTRAN programming came along, you wanted the characters to line up in columns properly too, as expected with the simple dot-matrix fonts.
Daisywheel printers all had mono Courier, with some starting to also autodetect mechanical PS printwheels when inserted.
Dot-matrix printers got better fonts than DOS itself but what you see on the screen was not very much of what you got on the printout the first time. PS could increase woe. Courier was available both ways. But tons of people stuck with the simple characters like you get on the cheap store receipts.
DOS text files using a monospaced font can be reselected to a different monospaced font without all the formatting difficulty you can get otherwise.
When Windows got popular, loads of people found out their old dot-matrix printer had been capable of beautiful PS output the whole time, it had just been out of reach without a WYSIWYG drafting approach.
You can still set your email window for a monospaced font, and draft a message into a Notepad window set for mono, intentionally with all the sentences of each paragraph on a single line when Word Wrap is disabled (disable Word Wrap before copying & pasting), and with a single line space between these _paragraphs_.
Then just copy & paste the whole text linewise from Notepad to different email windows and their mono spacing can be a good way to make the different local & remote word wraps work with less headaches.
Courier's remaining universality still makes it a top choice for this type of thing, but a new monospace font that can serve as a functional superset is good to have.
It looks fairly clean but the lowercase "r" character stuck out at me as I read the webpage. Mainly in the smaller paragraph size, not so much the titles. Something about it just looks "off" to me, sticks out like a sore thumb as I'm reading.
It's got a left-pointing serif like the i and j. It should just be a shortened version of the n. He provides alternate versions of some characters like g and a, but I don't see an alternate for r.
I would point to Cousine[1] as a nice mono font with a similar set of goals. It is not clear to me whether Cousine has equal Unicode coverage, but it claims a "pan-European" character set.
I worked on a Haskell code base that had Unicode enabled. It rendered GitLab’s search useless, and vim/emacs had different key combos to enter them. We eventually worked them out of our code due to the hassle.
The Bulgarian example in the language section tells a story of a security guard in a warehouse who is pretending to be on post but is in fact secretly eating meatballs behind some crates.
I’ve been learning Thai and Lao and it’s a bummer that they have virtually no monospace support. Abugida support is certainly going to be a bit of a challenge; the number of characters is definitely finite, but how does every character take up one ‘space’ when อี้ is three characters? It’s very easy for a human to visually stack them into one ‘space’, but trickier for a machine.
Well, I don't know about your nationality, but as a Korean I've found that the lack of CJK characters in a coding font is usually a non-problem, mostly because there are almost no coding fonts that have them. (Apart from some fonts made by local companies, Noto CJK Mono is almost the only one, and I don't like it.)
Usually you just set a fallback CJK font and don't use your native language while coding; I just got better at English while writing comments in it.
This font could also be a good fallback to your preferred monospace font. My OS falls back to a non-monospace font, which sometimes gets really hairy to work with for some characters.
There is no LaTeX in the code. You _may_ use familiar LaTeX commands to produce certain symbols, which then _become_ the code, but that's just a convenience Julia provides. If you prefer to copy and paste from your character-map application, or to memorize meta-key shortcuts, that's entirely up to you.
I both agree and disagree at the same time, and as you can see from the other comments here, it's not a clear-cut issue. For me _some_ math functionality would be welcome, and some would not. But equally, if a feature is present, I fully expect it to be abused and make things worse rather than better, as with any feature. So, e.g., I'd much rather have code like Γ(α, β) than Gamma(alpha, beta), and N(μ, σ²) over normpdf(mu, sigmasquared), but I would not like it if people started choosing obscure single-letter names instead of self-documenting code that "reads like English". But arguably that is a question of promoting good style and programming standards, not one of "limiting the freedom of users by design because some may not follow good style".
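Both styles can coexist via ASCII aliases, even outside Julia. A hedged Python sketch of the Γ/Gamma and N/normpdf contrast (the names are illustrative; note that superscripts like σ² are not legal Python identifiers, so σ is used instead):

```python
import math

def Γ(α):
    """Gamma function under its usual mathematical name."""
    return math.gamma(α)

Gamma = Γ  # ASCII alias: nobody is forced to type Unicode

def N(x, μ, σ):
    """Normal pdf, written close to the textbook formula."""
    return math.exp(-((x - μ) ** 2) / (2 * σ ** 2)) / (σ * math.sqrt(2 * math.pi))

normpdf = N  # ASCII alias
```

Exporting both names makes the Unicode spelling a stylistic choice rather than a requirement, which is exactly the "good style over hard limits" position above.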
It just permits a larger set of glyphs (Unicode) for identifiers and operators than some other languages, which only permit ASCII. It makes it easier to read the code for everyone and it is a great design decision.
It's not "just" permitting more characters, it overlays one set of syntax (Julia) with another (mathematical notation). And mathematical notation has no unified standard.
> It makes it easier to read the code for everyone
It doesn't make it easier for some people including myself so I can certainly say it doesn't make it easier for everyone.