iq didn't invent the technique, but he popularized it. His website has a lot of tutorials on making demoscene effects, especially for 4k intros. He is also one of the makers of shadertoy.
I was surprised to learn he was behind the Oculus Quill (VR art and animation software). He started posting very well produced video lessons on SDFs. I enjoyed this recent one: https://www.youtube.com/watch?v=PMltMdi1Wzg
You might also be surprised to find out that he was a co-author of Shadertoy, and that he was previously at Pixar, where he did much of the procedural work behind The Good Dinosaur.
There's a disclaimer missing from this article (and many others like it): writing a frag shader to generate 3D geometry is generally not the ideal way to actually render stuff unless you're doing demos (with some exceptions).
The other thing that gets me is that these articles say "here's how you write a shader," but shaders are much more than a single fragment shader run over the whole screen (which is what Shadertoy provides).
So, specifically, he is ray-marching a signed distance field, which has some interesting properties.
One of the major downsides is that it's often more difficult to author the boundary surface; there are fewer editors and similar tools. More common representations are polygon meshes, subdivision surfaces, and NURBS.
So, one common technique, as you mentioned, is ray tracing. The key difference is that ray marching samples the world at intervals along the ray (often regularly spaced, though with SDFs you can search for the boundary adaptively, since at any location you know the distance to the closest surface), whereas ray tracing intersects each primitive directly, and normally builds data structures (a BVH, for example) to minimize the number of individual intersection tests you need to do.
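To make the adaptive search concrete, here's a minimal sphere-tracing sketch in Python (real implementations run per-pixel in a shader; all names and numbers here are my own):

```python
import math

def sd_sphere(p, center=(0.0, 0.0, 3.0), radius=1.0):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_t=100.0):
    # March along the ray, stepping by the SDF value: the field guarantees
    # no surface lies closer than that, so each step is safe.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:
            return t      # hit: converged onto the surface
        t += d            # adaptive step -- the SDF's key advantage
        if t > max_t:
            break
    return None           # miss

hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sd_sphere)
```

Because each step jumps the full distance reported by the field, empty space is skipped in a handful of iterations instead of hundreds of fixed-size samples.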
Another common technique, used almost exclusively in real-time cases (games, CAD tools, etc.), is rasterization, where you project the triangles into screen space and intersect them with the pixels. The main difference is that your outer loop is now "for each triangle" rather than "for each pixel".
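A toy software rasterizer shows the loop-order difference (this is a naive sketch of my own, with no clipping, depth testing, or fill-rule tie-breaking):

```python
def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of (a, b, p); the sign says which side of edge ab p is on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    # Outer loop over triangles (here just one), inner loop over candidate pixels:
    # the inverse of the per-pixel loop a Shadertoy-style fragment shader runs.
    (ax, ay), (bx, by), (cx, cy) = tri
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # sample at the pixel center
            w0 = edge(bx, by, cx, cy, px, py)
            w1 = edge(cx, cy, ax, ay, px, py)
            w2 = edge(ax, ay, bx, by, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                covered.add((x, y))
    return covered

pixels = rasterize(((0, 0), (8, 0), (0, 8)), 8, 8)
```

A real GPU vectorizes this massively and only visits pixels inside each triangle's bounding box, but the control flow is the same shape.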
The differences might seem pretty subtle, and in a sense they are, but they have enormous ramifications for the design of a renderer, how it handles content, and its performance characteristics.
I think that GP's point is that signed distance functions are not the ideal way of representing geometry. If you are clever, and that's the whole point of the demoscene, you can design a good-looking scene in a very small amount of space. However, when it comes to modeling real-world objects or an artistic vision, they are not as effective as triangle meshes.
An alternate method of rendering could use the marching cubes[1] algorithm to convert the signed distance field into a triangle mesh and then render it using more conventional techniques (rasterization).
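As a sketch of the idea, here is just the cell-classification step of marching cubes in Python (a full implementation also needs the edge-interpolation and triangle lookup tables, or a library such as scikit-image's `measure.marching_cubes`; grid sizes and names here are my own):

```python
import math
from itertools import product

def sd_sphere(x, y, z, r=1.0):
    return math.sqrt(x*x + y*y + z*z) - r

# Sample the SDF on a small voxel grid centered on the origin.
n, extent = 16, 1.5
step = 2 * extent / n

def sample(i, j, k):
    return sd_sphere(-extent + i*step, -extent + j*step, -extent + k*step)

# A cell straddles the zero level set iff its 8 corners don't all share one
# sign; those are exactly the cells marching cubes emits triangles for.
boundary_cells = 0
for i, j, k in product(range(n), repeat=3):
    corners = [sample(i+di, j+dj, k+dk) for di, dj, dk in product((0, 1), repeat=3)]
    if min(corners) < 0.0 < max(corners):
        boundary_cells += 1
```

Once you have the triangle mesh, it goes through the normal vertex/fragment pipeline like any other geometry.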
Typically this is called "ray marching" and it is expensive.
With a more standard forward rendering pipeline, you hand the geometry to the GPU as triangles. Your vertex shader tells the GPU how to move your geometry around, the GPU then determines which parts of which triangles are visible, and your fragment shader puts your textures on the triangles and handles lighting them.
While I don't disagree, SDFs specifically have the nice property that you can converge on the surface much faster than with fixed-step ray marching, which has a very significant impact on performance. That's why, for many scenes, SDF ray marching can be much less expensive than the alternatives.
One of my coworkers did a basic implementation of SDF (signed distance fields) based on the Valve white paper [1]. He was implementing smoothing for fonts but the work applied to most vector shapes. IMO, it gave exceptional results and he claimed it was a reasonably simple implementation.
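The per-pixel math at the heart of that technique is tiny: sample the stored distance and push it through a smooth threshold. A hedged Python sketch (in the shader this runs per fragment; the edge value and ramp width are my own illustrative numbers):

```python
def smoothstep(e0, e1, x):
    # GLSL-style smoothstep: 0 below e0, 1 above e1, smooth in between.
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def glyph_alpha(d, edge=0.5, width=0.05):
    # d is the value sampled from the distance texture (0.5 = the glyph edge).
    # The ramp width controls how soft the antialiased edge looks.
    return smoothstep(edge - width, edge + width, d)

inside  = glyph_alpha(0.7)   # well inside the glyph
outside = glyph_alpha(0.3)   # well outside
```

Because bilinear filtering of a distance field stays meaningful, a low-resolution texture still yields crisp edges at large magnification, which is the paper's main trick.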
I did something sort of similar for the basic menu controls in a little game I made, visible here [0].
It's a fun puzzle to work out how to draw things like a question mark or on-off power icon in a fragment shader. Definitely not the easiest/quickest way to construct a UI though, especially with weak math-fu like mine.
Here I use 2D SDFs along with random mutation to generate images rendered from 1k of data.
It's the Mona Lisa evolving thing again only using SDF shaders.
So signed distance functions turn out to be quite hard to design for more complicated shapes. You can cheat by letting the distance far away from the surface be not quite accurate, but there are limits to how much you can cheat before you get unwanted rendering artifacts such as blobs or holes.
"Boolean" operations on these signed distance fields pretty much always cheat, except for Union, which can be done with a max(a,b) operation.
I think signed distance fields may turn out to be significantly better than old school raytracing if we can have super-optimized distance field functions!
The union operator also produces incorrect distances for the interior of the union volume, which is a problem for things like shadow cone tracing; see http://iquilezles.org/www/articles/interiordistance/interior... I've actually been struggling with this in my SDF implementation.
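A quick numerical illustration of that interior-distance error (my own construction: two overlapping unit spheres). Inside the overlap, `min` reports the distance to the nearer sphere's surface, but that nearest surface point is swallowed by the other sphere and isn't on the union's boundary at all:

```python
import math

A, B = (0.0, 0.0, 0.0), (1.5, 0.0, 0.0)   # centers of overlapping unit spheres

def sd(p, c):
    return math.dist(p, c) - 1.0

p = (0.75, 0.0, 0.0)             # a point inside both spheres
naive = min(sd(p, A), sd(p, B))  # -0.25: claims the surface is 0.25 away

# The union's real boundary near p is the circle where the two sphere
# surfaces meet: the plane x = 0.75, radius sqrt(1 - 0.75^2).
true_dist = math.sqrt(1.0 - 0.75**2)   # about 0.661, much farther than 0.25
```

So interior queries (soft shadows, subsurface effects, anything cone-traced from inside) see a surface that's much closer than it really is.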
I'm still a big fan of the union operation since GPU rasterizers can trivially hardware accelerate it, which means you can render a bunch of objects in one go and know that it will be performed at near-optimal efficiency by the GPU. I've been combining that with a packing technique to render 3 depth slices at once in RGB (plus an extra duplicate slice in A) so that I can accurately sample any point in space with a single bilinear texture read.
So, it turns out you can actually lose quite a lot of accuracy at a distance and not end up with artifacts, so long as your estimate is guaranteed to underestimate the distance to the surface. It will take a bit more time to converge on the surface, but it shouldn't impact the image itself. (Assuming you let the march converge all the way to the limit surface, of course, and don't cap the maximum number of steps.)
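To illustrate: scale an exact distance down (a conservative underestimate) and the march still lands on the same surface, just in more steps. A small sketch of my own, marching along a single ray:

```python
def march(sdf, eps=1e-6, max_steps=10_000):
    # Sphere-trace along one ray parameterized by t; returns (hit_t, step_count).
    t, steps = 0.0, 0
    while steps < max_steps:
        d = sdf(t)
        if d < eps:
            return t, steps
        t += d
        steps += 1
    return None, steps

def sd(t):
    # Exact distance along the ray (the z axis) to a sphere at z=3, radius 1.
    return abs(t - 3.0) - 1.0

exact_t, exact_steps = march(sd)
# Halving every estimate still never overshoots the surface, so the hit point
# is the same; the march just takes more, smaller steps to get there.
under_t, under_steps = march(lambda t: 0.5 * sd(t))
```

An overestimate has no such guarantee: one oversized step can jump clean through a thin surface and punch a hole in it.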
https://www.iquilezles.org/www/articles/raymarchingdf/raymar...