I remember the buzz in Germany's math graduate community when word of Scholze's perfectoid spaces began to go around. Reading groups sprang up everywhere trying to get a grasp on the technicalities:
Work in number theory very often deals only with one of two fundamentally different settings.
* Either objects where multiplication by any natural number can be inverted ('characteristic 0'; examples of such objects are the rational or complex numbers, or "something in between the two"),
* or objects where a certain prime number p has a special role; namely, multiplication by p is the zero map. This sounds horrible, but it actually has a great implication: (a+b)^p = a^p + b^p, because the middle binomial coefficients are multiples of p. This makes x -> x^p both a multiplicative and an additive (!) map, the FROBENIUS.
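The "freshman's dream" identity above is easy to check numerically. A minimal sketch (illustrative only; p = 7 is an arbitrary choice of prime):

```python
# Check that (a+b)^p == a^p + b^p in characteristic p, i.e. that the
# Frobenius x -> x^p is additive as well as multiplicative mod p.
from math import comb

p = 7  # any prime works here

# The middle binomial coefficients C(p, k) for 0 < k < p are multiples of p...
assert all(comb(p, k) % p == 0 for k in range(1, p))

# ...so the cross terms of (a+b)^p vanish mod p:
for a in range(p):
    for b in range(p):
        assert pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p

print("Frobenius is additive mod", p)
```

(Try a non-prime like p = 6 and both assertions fail, which is exactly the point: this only works in prime characteristic.)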
Scholze introduced a way to pull the Frobenius map over to characteristic 0. He could do this 'tilting' in towers and in this way compared the theory of towers in characteristic 0 and p. For details, see his famed answer here [1].
Very soon it became clear that this tool had remarkable applications, and his thesis explored only one of them: a proof of new cases of the weight-monodromy conjecture in characteristic 0, by tilting results from characteristic p.
This result alone made the neck hair of characteristic-0 people stand up :-)
Are Scholze's lectures online? They are described as intuitive and accessible to undergraduates.
While I'm a little skeptical[0] of the tone of this article, I hold out hope this just is the Feynman (is able to communicate complex ideas well) of Math.
Sometimes I'd like time with someone with advanced knowledge of such things, to be able to shoot my naive questions at them. I'm sometimes shocked by how subjects like this are not just non-intuitive, but outright obscured by curious omissions[1] by experts in their explanations.
For example, look on HN/MathOverflow and ask for an intuitive explanation of something, and you'll often get "there is no such thing, you just have to work through it - math is as simple as it can be". Then someone eventually provides an explanation that is more intuitive than the norm, proving that it was a failure to communicate in simpler terms in the first place.
this article:
"Unlike many mathematicians, he often starts not with a particular problem he wants to solve, but with some elusive concept that he wants to understand for its own sake"
"[Scholze] would never lose himself in the jungle, because he’s never trying to fight the jungle. He’s always looking for the overview, for some kind of clear concept."
I can't find them now, but I'm sure I've read sentiments from mathematicians such as "don't try to get an 'intuition', there is no such thing in advanced math where there are few mappings to real life things, and such metaphors will only create misunderstandings" and "we'd all like a 'map' of mathematics, but math is not neatly ordered like a landscape, it's not possible to even visualise all the links between math fields".
So this guy's method is exactly what other mathematicians warn against? Trying to understand?
[0] - He is depicted as so far ahead that mathematicians look forward to him entering their field? Even if true, I'm skeptical most would admit something like that - also, anything that appears in Wired will have a certain minimum of hot air.
[1] - e.g. the normal distribution is the limit of the binomial distribution (which can, in turn, be constructed), which is why it turns up so often whenever a large number of random variables is involved. I've yet to be taught stats where this is explained; until I realised it myself, it was as if the normal distribution were just something that appeared by magic.
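The convergence in question (the de Moivre-Laplace theorem) is easy to see numerically. A quick stdlib-only sketch, with n = 200 and p = 1/2 as arbitrary illustrative choices:

```python
# Compare the Binomial(n, 1/2) pmf against the normal density with the
# same mean and standard deviation; for large n they nearly coincide.
from math import comb, exp, pi, sqrt

n, prob = 200, 0.5
mu = n * prob
sigma = sqrt(n * prob * (1 - prob))

def binom_pmf(k):
    # Exact binomial probability of k successes in n trials.
    return comb(n, k) * prob**k * (1 - prob)**(n - k)

def normal_pdf(x):
    # Normal density with matching mean and standard deviation.
    return exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))

# Largest pointwise gap between the two across all outcomes:
gap = max(abs(binom_pmf(k) - normal_pdf(k)) for k in range(n + 1))
print(f"max gap for n={n}: {gap:.2e}")
assert gap < 1e-3  # already tiny at n=200; it shrinks as n grows
```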
> So this guy's method is exactly what other mathematicians warn against? Trying to understand?
No, I don't think so.
The "there's no intuition" advice is really good advice for undergraduates or lay people first encountering abstract concepts in pure mathematics. In fact there is an intuition, but the intuition is usually sort of on its own terms -- not so much grounded in physical life experiences. In other words, you will get an intuitive grasp on the ideas but that intuition will probably not be an analogy to your previous experiences in everyday life. When mathematicians say "there's no intuition" they usually mean "there's no physical experience that corresponds to the underlying intuition, and it's not really like anything you've seen before. It's a new experience and you need to experience it on its own terms. Stop trying to ground yourself in classical mechanics or whatever."
But that's quite the mouthful, so better to just say "there's no intuition; just focus on working with the objects and you'll eventually get comfortable".
This quote from the article captures it well, I think:
> "Now I find real numbers much, much more confusing than p-adic numbers. I’ve gotten so used to them that now real numbers feel very strange."
If you're talking about Scholze's lectures about his cutting-edge work, there is not a chance they are accessible to undergraduates; when I attended one as a fourth-year PhD student specializing in algebraic number theory, I would say I only kind of got the gist of it.
Re: your skepticism... The guy is a once-in-a-generation talent; his constructions were able to vastly simplify multiple very long, very complicated proofs that groups of the top people in this field were working on. This is in a field (algebraic number theory) which is considered one of the more saturated and technically difficult within all of mathematics (admittedly, I am likely biased on this point). That being said, all of his work so far has been in the ballpark of Langlands/p-adic/arithmetic geometry, so I would be surprised if he achieved significant results that strayed too far from this stuff.
I'm not sure what you mean about Feynman; Peter's genius is not so much an ability to communicate complex ideas in a simple way, but rather that he was able to come up with constructions (or, if you like, abstractions) which compartmentalize the complex ideas in the right way so that they are easier to deal with. To make an analogy with computing, think of the concept of a "thread". Without the concept of a thread, you'd have to do so much manual maintenance that you could never dream of building, say, Google. Scholze's perfectoid spaces are analogous; their definition would have been understood by mathematicians 50 years ago, but no one really got that this was the right thing to consider.
Yep, it does. If you are ready to give one month of your life to understanding "perfectoid spaces", that might be a wise choice - but you can only really tell once you've done it, because there doesn't seem to be an upfront way of knowing what you are going to get out of it. On the other hand, I am currently investing two weeks into learning "geometric algebra", because I found a book that tells me in its introduction why it's worth my while to learn it.
Exactly - if you want to make choices based off what's economically the best (studying CS), that's fine. But there's so much beauty in the world that is "impractical" to reach for (like studying music, art, seeing nature, etc)
> “To this day, that’s to a large extent how I learn,” he said. “I never really learned the basic things like linear algebra, actually — I only assimilated it through learning some other stuff.”
This was interesting to me. My math profs always admonished us to ensure foundations are completely watertight before advancing to the next thing in tiny increments. I've absorbed this to the point where I perhaps get stuck filling in inconsequential gaps at roughly the same level, kitting out base camp as fully as possible but postponing the ascent.
I have no dreams of becoming a professional mathematician, but maybe I'd have quicker insights and more creative ideas if I tried this approach too, tackling something impossible and working backwards.
The thing about this approach--where you learn by assimilation rather than structured study--is that you need to have amazing intuition for it to work. One of the benefits of structured study is gradually building intuition for the definitions and theorems. For someone of Scholze's caliber, the intuition is already there before any study. Structured study of linear algebra probably wouldn't have done much for him other than assigning names to theorems and definitions he already intuitively understood.
I might make an analogy to studying music: for the majority of people, it takes a lot of structured study to develop a good ear (i.e. being able to write down melodies and harmonies after hearing them). For example, you'll study intervals, chords, and inversions, and get extensive practice identifying them by ear--just as you learn theorems and definitions in math class and do problem sets to practice applying them. But some people innately have a very good ear (e.g. perfect pitch) and don't need a course to teach them to identify intervals and chords. Even though they might not yet know the names of chords and intervals, they already "understand" them.
I think the trick is figuring out when underlying details are just details, and when you're missing an important piece of the underlying foundations that might spawn misunderstandings. I guess a good lecturer is one who can tell you - insisting on learning everything is inefficient, although it's less work for your lecturer.
> My math profs always admonished us to ensure foundations are completely watertight before advancing to the next thing in tiny increments.
Of course someone who gets paid to teach would highly stress learning tidbits slowly and excruciatingly. That's their economic incentive. Schools also stress learning detritus for "learning's sake", even if the very people who teach it can't properly explain what it's actually for.
I also have learned compsci with similar methods of finding interesting areas and digging in. I know my programming knowledge has holes, and I fill them in as I come to them. I like to know how things fit together, even if they are cross-domain and seemingly disparate. I come in and go, "see, these two areas are pretty similar; let me show you what can be done". And then I look like a miracle worker, because I see the generalities.
Frankly, professors would be more useful to me, if I could purchase their time by the hour over issues I don't understand. I can teach myself most things. Sometimes, a professional helps with the jump-start to get a good grasp.
> Of course someone who gets paid to teach would highly stress learning tidbits slowly and excruciatingly. That's their economic incentive...
Well, I can see that's a factor, but not necessarily an overriding one. As someone who's taught at uni myself (not pure maths) we are not usually that cynical or fond of serving up "detritus". It's not as if we lack for valuable and interesting stuff to teach if we go through foundations too quickly, and we aren't paid just for teaching. Anyway, I think most do benefit from a quite painstakingly incrementalist approach to maths; me taking that too far and sometimes getting stuck is a personal failing.
(I recall a quote by a colleague of the group theorist Simon Norton, who famously suffered a career collapse/hiatus after a series of brilliant results, something along the lines of him having opened a doorway into a wondrous realm of new mathematics, but ending up stuck there, at the doorway, obsessed by the details of the doorframe.)
If I were teaching a linear algebra course, I wouldn't say "by the way, you can skip this subject entirely because it will just fall out of your working backwards through Wiles' proof of FLT". For those of us without a once-in-a-generation mind, I think the traditional approach is the right one. I was only trying to say that I personally sometimes get stuck, and it would be interesting to try the opposite approach.
> Frankly, professors would be more useful to me, if I could purchase their time by the hour
If you go to a good uni, at least by postgrad level you do have that kind of access, and, if you get along, you retain it for free after you leave.
> Well, I can see that's a factor, but not necessarily an overriding one. As someone who's taught at uni myself (not pure maths) we are not usually that cynical or fond of serving up "detritus". It's not as if we lack for valuable and interesting stuff to teach if we go through foundations too quickly, and we aren't paid just for teaching. Anyway, I think most do benefit from a quite painstakingly incrementalist approach to maths; me taking that too far and sometimes getting stuck is a personal failing.
Possibly so, but I never went into any of the grad programs. Most of the lower classes are taught by AIs and contract-based "instructors" paid by the uni on a per-credit-hour basis. And much of the time, the department shovels the syllabus and required areas onto them for the students.
And unfortunately, this avenue of teaching very much shows. You have instructors who have some semblance of caring, but not terribly much. They teach weeder classes with the intent of failing much of the class. Whoever is teaching isn't always able to explain what's going on in an area - they can do the process, but can't explain why their actions work.
Perhaps it is a jaded viewpoint. But after spending way too much money in "Higher Ed", along with working at an institution, I know the game. And I'm sure it's better if you're a post-doc with prestige or on that track. But the rest of us are spoon-fed bland crumbs these days, and pumped-and-dumped for excessive scholastic loans to get a job to pay the loans back with.
> If you go to a good uni, at least by postgrad level you do have that kind of access, and, if you get along, you retain it for free after you leave.
Yep, and if you're not on that track, the access isn't there. I'd be willing to pay for it directly. Google had a program quite a while back, of paying experts for direct guidance in specific fields. Too bad they cancelled it.
I did have bad experiences with bad lecturers as an undergrad (hello, here is a handout, now I will project the handout on a screen, now I will read what's on the screen without any elaboration, goodbye), but they were the exception. Obviously this depends hugely on the exact institution in question. And yes, many are now increasingly functioning as blandly corporate battery student farms...
> Yep, and if you're not on that track, the access isn't there.
Actually I'd also be interested in such a scheme, now that I'm exploring ideas far away from my original research area. Although if you have a bona fide interest to discuss something technical and specific with an academic who has the relevant expertise, I've found they can be pretty approachable, even if you email them out of the blue to ask for a chat... but I do have the right sort of background to do that I suppose.
How do you identify the "holes" when you reach them? How do you even know whether you've reached one?
"Aha! This is clearly a situation in which a monad would be the best approach. Time to go learn about monads!" just doesn't seem like a reasonable method to me. Some things you just have to learn well before you can even recognize when you need to remember them later.
That's not a valid criticism if you have intent and will to learn.
As for your monads example, getting into functional programming via things like Common Lisp, Erlang, Haskell, and the like will expose you to lambda calculus pretty darn quick. And learning how monads work is near the beginning of that path.
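For what it's worth, the shape of the monad idea being batted around here fits in a few lines. A hypothetical Maybe-style sketch in Python (illustrative only, not any particular library's API):

```python
# A minimal Maybe-monad sketch: 'bind' chains computations that may fail,
# short-circuiting as soon as any step returns None.
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
U = TypeVar("U")

def bind(value: Optional[T], f: Callable[[T], Optional[U]]) -> Optional[U]:
    """Apply f if a value is present; otherwise propagate the failure."""
    return None if value is None else f(value)

def safe_div(x: float, y: float) -> Optional[float]:
    # A failable step: division that signals failure with None.
    return None if y == 0 else x / y

# Chain two failable steps: (100 / 5) then (result / 2).
assert bind(safe_div(100, 5), lambda r: safe_div(r, 2)) == 10.0

# A failure anywhere short-circuits the whole chain:
assert bind(safe_div(100, 0), lambda r: safe_div(r, 2)) is None
```

The point of the pattern is that the error-plumbing lives in `bind` once, instead of being repeated at every call site.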
And just being inquisitive leads to a whole lot of areas that give indications of what to learn. For example, doing computer vision forces you to learn how linear algebra works. Machine learning teaches a great deal about how statistics works. Finite state machines have their own really interesting niches to work with. Working a crummy operator job teaches you how to do automation (on the sly!).
It really depends on how you approach learning. If you're just slowly grinding away because you have to, going through a 4 year BS degree is probably better.
If one wants to learn anything in the sciences, we have things like MIT OpenCourseWare, arXiv, Libgen, Sci-Hub, and "#icanhazpdf" on Twitter. Yes, some of these are primarily pirate options - so what?
I can publicly see the course projections for any arbitrary degree, along with class titles. And many have book lists linked, so I can hunt for the books online using less legal methods. The only difficulty with some STEM learning paths is that they require laboratories - those are hard/impossible to do at home and thus necessitate academic environments. Computing, on the other hand, is easy to learn even at a Starbucks with a laptop and a phone.
What's stopping people from learning what they wish is primarily time and the will to do it (and the fact that school does a great job of beating the will to learn out of people).
> I also have learned compsci with his similar methods of finding interesting areas and digging in
I don't think those things are similar. His method -- working backward from Wiles' proof of FLT to linear algebra -- is not really analogous to teaching yourself some undergrad CS. It'd be more analogous to deciding you want to understand Mulmuley's latest results from nothing and discovering while loops along the way.
There's a difference in kind. Maybe this is splitting hairs, and at some level they're both "self teaching", but the huge chasm in relative difficulty/impressiveness still irks me :)
> Frankly, professors would be more useful to me, if I could purchase their time by the hour over issues I don't understand
Find a decent university and go to office hours / ask for independent studies.
Paying tuition is quite literally purchasing their time. Only a small amount of official instructional time is spent in lecture halls. And most professors spend more time on teaching than they're technically required to. At decent universities that prioritize teaching, maybe 80%+ of teaching time is spent in one-on-one or small group interactions.
IME most people who dislike formal higher education never learned how to use it properly in the first place. Or attended undergraduate at colleges primarily known for the attached research institutes, not their undergraduate program.
> Of course someone who gets paid to teach would highly stress learning tidbits slowly and excruciatingly. That's their economic incentive.
Believe me: most TAs and professors would prefer to teach much more advanced stuff at a much faster pace - but for (good?) reasons they are not allowed to.
I think most mathematicians feel the same. (Although as the article points out, Scholze did write a 'for Dummies' version of Harris & Taylor's local Langlands paper.)
For perfectoid spaces, the most accessible thing I know of is Jared Weinstein's article in the Bulletin of the AMS.
“You have only some kind of limited capacity in your head, so you can’t do too complicated things.”
A Peter Scholze quote from the article that captures for me one of the common aspects of programming and mathematics: we make progress by building better and better conceptual machines (the intellectual equivalents of levers, pulleys, inclined planes, etc.) to multiply our limited capacity and solve more and more difficult problems.
Edit/PS: You can see how highly mathematicians respect simplification from the story in the article about how Peter Scholze first became internationally famous in the mathematical community: by discovering a new solution to an already solved problem. His solution was much shorter (and apparently easier to follow), and that was taken as a sign of exceptional ability, just as it is in programming.
They do, gradually. Good review articles are respected and people put a lot of effort into them; eventually stuff percolates down to textbooks and so on. The kind of thing that an advanced grad student would struggle with a few decades ago becomes an undergrad exercise today. But meanwhile the frontier keeps advancing.
[1] https://mathoverflow.net/questions/65729/what-are-perfectoid...