I have mixed feelings about this. I get the point about touch: the button should have a fixed size in proportion to your finger. However, in other scenarios I am not sure size should be constant. In most cases, you have to factor in the distance to the viewer. That is, "projection angle on the retina" is probably a more useful invariant.
For example, on a big TV that is far from you, you want things to be bigger than on a monitor, which is closer. And on a laptop, you probably want them even smaller.
"Design by pixel" takes this into account somewhat indirectly, because most displays are designed to have their pixel density in inverse proportion to the size of the display, which is also inversely correlated with the expected usage distance. Sometimes technology changes our expectations of definition and things become small, until designs catch up with the new expected pixel density. We have worked around this problem with pixel density factors on retina/QHD.
I agree that all this is an evolutionary "hack". But I am not sure that the proposed solution, even though it sounds more rational, actually makes things better.
An inch should be an inch. A pixel should be a pixel. A point should be a point which is defined as 1/72 inch.
Those are all physical measurements, meant for the surface of the screen. If you design with them, it means you're designing with the screen size in mind. It's often wrong to design with those measurements.
There are relative sizes too.
%, em (the current font size), and measurements relative to the viewport. That's what designers use very often.
So you are saying you would want one physical square inch on a phone to be the exact same size as one square inch on a projector screen?
I have been designing and building interfaces for a couple decades and I have been down the "relative size" path and learned not to do that the hard way. There lies nothing but tears. Sizing to pixels, for some reason, results in the most predictable and consistent appearances.
Right, the argument is that if you want some arbitrary sizes you should get a new unit for it, not inconsistently use an existing unit to mean something that it doesn't.
The current system of pts or inches varying by context would be like if an oven manufacturer said "Well, 60°F is pretty warm, and 90°F is downright hot, so we'll just recontextualize those numbers for our oven gauge. Bake cookies for 10 minutes at 80 degrees."
Unless you're using some other oven that scaled temperatures differently, in which case bake them at some other temperature instead ¯\_(ツ)_/¯
One could argue that you really are using inches, it's just that everything ever displayed on a screen has an implied "not to scale" note with it, but I don't think that's a particularly useful argument.
Heck, if you're editing something in MS Office it'll show you an 8.5x11 page with a scale meter. Set it to 100% and surely that's to scale.
Nope, 100% of some other totally arbitrary scale. That's what you wanted, yeah?
You should check out OvenBakes. It's a free service that tries your recipe in dozens of different ovens and then ships you the result so you can check for consistency.
Microsoft Word sizes everything for the printer. What you see on screen is merely an approximation. This actually makes sense, since printers will be more consistent than screens.
If I'm editing a Word document, I'm designing something to be printed. If I set zoom to 100% I expect the screen to display the exact size a paper would.
But I'm designing when I'm in Word, it's not necessary or convenient to do all of my work at 1:1 scaling, that's what the ruler is for.
This requires that Windows knows about the exact size of the display, but Windows cuts a few corners there so it's unlikely to be as precise as an 8.5x11 piece of paper.
Compounding the problem is that the operating system relies on the monitor reporting its physical size correctly via EDID and for a lot of vendors (most notoriously Dell) this isn't guaranteed either.
Yup. Best way to test it: take a sheet of paper from your printer and put it against your screen; MS Word at 100% zoom should have the document area match the physical paper size.
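Put another way, the zoom level that makes the on-screen page true to scale is just the panel's real density divided by the density the OS assumes for layout (typically 96 dpi). A sketch with illustrative numbers:

```python
# Sketch: zoom factor at which an on-screen page matches real paper,
# assuming the OS lays out at an assumed density (commonly 96 dpi)
# while the panel's true density differs. Numbers are illustrative.

def true_scale_zoom(actual_ppi: float, assumed_ppi: float = 96.0) -> float:
    """Zoom at which 1 logical inch covers 1 physical inch on this panel."""
    return actual_ppi / assumed_ppi

# A ~109 ppi panel (27" at 2560x1440) needs ~114% zoom to be true to scale.
print(round(true_scale_zoom(109) * 100))  # -> 114
```

This is why "100%" in a word processor rarely matches a sheet of paper held against the screen: the OS's assumed density and the panel's real one differ.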
> Right, the argument is that if you want some arbitrary sizes you should get a new unit for it
They already exist (vw and vh are percents of the viewport width and height respectively). Renderers "relativised" supposedly absolute units because 1. they already were not correctly handled (as output devices often don't correctly report their densities, and renderers would then just fall back to 72dpi or whatever) and 2. they were ubiquitous.
Something like that. The article may be technically correct, but it's putting the cart way in front of the horse. We shouldn't be using "points" to specify typeface size in the first place, because obviously we don't want type to be the same physical size on all display devices.
Sizing to pixels works great when the device manufacturers all use approximately the same ratio of resolution to viewing distance. The problem with that approach is that it doesn't allow for nicer, high-resolution displays.
Sizing to pixels is vital when using low-resolution displays, since any scaling will be unacceptably blurry.
Apple's approach with their Retina displays was brilliant - make the high-res screens an integer multiple of the low-res ones. That made the scaling issues much easier to deal with.
The tragedy is with Windows. Microsoft has long emphasized font hinting, which results in sharper display fonts. But it means that the font metrics don't scale linearly with the resolution. Apps that weren't tested at anything but the default resolution would invariably have rendering issues, so nobody used oddball resolutions. Because nobody used high resolutions, the app makers didn't feel the need to test them. Chicken/egg problem. It has taken far too long to overcome this.
>So the CSS px unit is a bit of a hack, but ultimately works well.
No, it doesn't. That "innovation" breaks programmatic graphics big time. Imagine you are drawing something on the screen, and you have no way to prevent things from aliasing at those 3x3 CSS pixels.
> Apple's approach with their Retina displays was brilliant
Brilliant? Rather "lazy". They jeopardized the entire screen resolution spectrum by introducing non-standard resolutions just to fit their vision, producing a new set of problems of its own.
Unless I’m missing something, if your programs are designed properly, non-standard resolutions shouldn’t be a problem; your app layout should flow to fill the space.
Pixels are being scaled all over the place these days, up to 300-400%.
The only reason pixels appear to "work" is that everyone uses pixels, so the software has to be written to lie about pixels. And so they continue to seem to work and people continue to use them to size things.
They might as well just call them "screen fractions" or something, because they sure aren't pixels.
Even points are a lie now. I open a PDF on my computer, and the PDF viewer might scale it to fit the window, and the OS might scale it based on my preferences, and maybe my monitor is a projector which means the OS has no idea how big the viewing area actually is.
They're not necessarily true pixels anymore, but the effect of using px as the unit of measurement hasn't really changed. You're controlling more physical pixels, but a similar size of the screen.
On HiDPI mobile devices, pixels aren't real pixels. They're actually treated like points -- a pixel in CSS ends up being something like 3-ish real device pixels, plus or minus, depending on the device.
Windows: pixels are pixels. Some systems there (e.g. WPF) also use dip units (Device Independent Pixel, 1/96in == 1px in twisted CSS terms). And that's how it should have been done in CSS from the very beginning.
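Since both units are fixed fractions of an inch (1 dip/px = 1/96 in, 1 pt = 1/72 in), converting between them is a constant ratio. A tiny sketch:

```python
# Unit relationships as defined: 1 px (CSS px / dip) = 1/96 in, 1 pt = 1/72 in.

def pt_to_px(pt: float) -> float:
    """Points to CSS pixels / dips; both are fixed fractions of an inch."""
    return pt * 96.0 / 72.0

def px_to_pt(px: float) -> float:
    return px * 72.0 / 96.0

print(pt_to_px(12))  # -> 16.0  (the classic 12pt == 16px identity)
```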
MacOS/iOS uses screen units. Each such unit is N physical pixels, where N is the number returned by [NSScreen backingScaleFactor] for the particular screen (monitor).
I think the article falls into a pattern of web developers demanding too much control.
Fine grained control belongs to the users. The browser is a User Agent, after all.
Users may have their computer configured to display text bigger or smaller than normal. They may have blocked ads. They may have disabled autoplaying video. They might be using Flux or Reader Mode. They might be sitting on a couch, projecting a 6ft image onto a wall.
Just size your stuff in px. How many millimeters that works out to is up to the user.
Most high-DPI web browsers/software run into the main problem that the world runs on <= 96dpi screens. So for the sake of sanity they all throw their arms up and say, `We assume the device is 96dpi and we scale a pixel to that`.
Consider it this way. When you have an A4/A3 sheet of paper, it is natural to talk in points and inches, because your A4/A3 sheet of paper doesn't suddenly change to A5/A6, so you're always designing for a fixed size.
> Most high DPI web-browser/software run into the main problem that the world runs on <= 96dpi screens.
I'm typing this on a 27″ iMac 5K with 218 dpi resolution. My smartphone (an iPhone 6) has 325 dpi; my 9.7″ iPad has 264 dpi. And you don't even have to use "retina" high-DPI displays to beat 96 dpi. The older 27″ displays, with 2560×1440 resolution, have 109 dpi when you work out the math; the MacBook Air 11″ has 135 dpi.
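Those dpi figures are just the diagonal resolution divided by the diagonal size in inches. For example:

```python
import math

def dpi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density from resolution and physical diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(dpi(5120, 2880, 27)))  # 27-inch iMac 5K -> 218
print(round(dpi(2560, 1440, 27)))  # older 27-inch panel -> 109
```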
I think it's fair to say the world ran, past tense, on screens that were 96 dpi and under. But that's just not a very good assumption to make anymore.
There is A LOT of content that has been, and is being, built by designers for 96dpi. The reason why it isn't a concern for you is because that standard exists, and your WebKit knows you're on a 218 dpi monitor and scales the px unit accordingly.
In web design, at least, there are plenty of relative-size units available to create a scalable, responsive design. Em, rem, %, vw, and vh are all very useful for this. In my designs I almost exclusively use these units, with little to no issues.
I believe they are saying that there should be several types of sizes:
a: Relative sizes (to bounding boxes/etc)
b: Exact physical sizes (e.g. points, which are 1/72nd inch)
c: Precise device sizes (REAL pixels, not compatibility pixels)
It might be nice to have a way of specifying the minimum and maximum size among those three values, and a priority for fulfilling the min / max request ordering.
Why do you need precise pixel sizes? What if I gave you a 3000 PPI screen? Would you really care to make any feature of your UI be the size of one pixel? Wouldn’t “smallest visible solid dot” as covered by (b) suffice?
Yes, as a gamedev and as someone who has tried to do a lot of fanciness regarding human visual perception, I would love to have the ability to specify "use the absolute smallest, device-sized pixel value."
It's a special case, sure. But those special cases suddenly enable what was previously impossible.
Again, even if you are looking at a 3k/6k/9k PPI display? Even if a single pixel is in no way visible? Isn't there a limit beyond which you don't care?
"not visible" is a very tricky thing in human perception. I'd love a way to subtly mess with someone's eye. And not for trolling purposes. It matters in art.
Me? I don't need that. HOWEVER, I can understand why someone else might.
Mostly these cases revolve around implementing something in the program instead of with basic standard features. Some standards even recognize that more complex decisions need to be made and allow for 'shaders' that determine the outcome instead of basic operations.
Providing low level primitives in a responsible interface allows for the corner cases to be approached with nuance and care instead of glaring presence of omission.
At 100% zoom, yes. I should be able to take a ruler to my drawing and verify that it is exactly one physical inch (to the limits of the display). This implies the projector somehow feeds back to the computer some sort of scale factor depending on how large the screen is. When I'm doing CAD or paper layout, this is very important.
Note that an inch (or more likely cm) is rarely what I want. Most of my use case is text where physical measurements are not important. When I'm writing code I want text to scale to be readable. This means web browsers and many other editors should not have any concept of points/inches/cm.
> Fun thing is: pixels don't have a size. They are just 2d locations (with a color assigned).
Sometimes pixels are not even 2d locations. I've seen experimental hardware running hexagonal and other unusual pixel configurations. Vector images look absolutely amazing, even at low resolution, however certain font types (formats, not typefaces) look better than others and some look downright terrible.
> An inch should be an inch. A pixel should be a pixel.
We had that in Win95. You could scale the display for 1:1 dimensional matching with an on screen ruler. Then Microsoft found out how broken everything became if you scaled the display to a non-default value because invariably some GUI elements were built around hard coded pixel dimensions that couldn't adapt when the fonts weren't rendered the same as the developer's machine. So they hid all of that machinery in '98.
Guess what? People are still designing web apps today with px units and the same scaling breakage persists with browsers jumping through all sorts of hoops to hide it.
That machinery is still there, and has always been - it's exactly what happens when you muck around with "make everything smaller/bigger" in display settings in Control Panel.
The problem, rather, was that the OS didn't really do much to implement it. That ruler was basically just surfaced to the apps as an API to query, and then they were supposed to apply their own calculations as needed. One exception was the API used to create dialogs (but not other types of windows) from Win32 resources - CreateDialog etc. - these measure widget sizes in "dialog units", which is "an average width and height of characters in the system font"; the idea being that this way, the size of the widgets would naturally scale with the size of the text labels, according to DPI. However, many (arguably, most) Win32 apps don't use CreateDialog even for actual dialogs, and many frameworks in the past didn't respect the DPI setting at all.
Em may be a relative size, but it's based on a font whose size is specified in points or pixels. Thus it's indirectly a physical measurement too.
Although points are formally defined as a physical size, they really only make sense in their historical context - the printed page.
I think the W3C definitions make the most sense when relating measurements to displays. You are encouraged to use definitions that are flexible but consistent with the context of your display. https://www.w3.org/TR/css-values-3/#absolute-lengths
> Em may be a relative size, but it's based on a font whose size is specified in points or pixels. Thus it's indirectly a physical measurement too.
A size specification that can be correctly converted into any amount of meters at all is most definitely not a physical measurement. A unit whose conversion factor to the corresponding SI unit for the respective dimension can be chosen completely arbitrarily is certainly not a physical unit of measurement.
While the conversion from points/pixels to em is not fixed, it is fixed for any particular font. As the pixel and point sizes change, so will the em - they're directly proportional except for any font hinting your browser might apply.
In other words: They are not a physical size measurement.
The definition of physical units is precisely that they are relative to only one unambiguous standard which relates it to all other dimensions of the universe. Distances, for example, are related to time. As such, one meter, as a unit of distance, is the distance that light in a vacuum travels in 1/299792458th of a second. A unit that does not have such a fixed relation to time is not a unit of distance by definition. So, unless increasing the font size in your browser also slows down time, the em is not a unit of physical size.
There are many more perfectly fine ways to specify how to derive a physical size relative to a given situation, but that does not make that specification a physical size itself.
OK, I'll grant your point. But Em is much more closely tied to the other physical units than intuition would suggest. It's one step removed rather than being completely independent.
> In most cases, you have to factor in the distance to
> the viewer. This is, probably "projection angle in the
> retina" is a more useful invariant.
That's actually exactly what CSS specifies:
"The reference pixel is the visual angle of one pixel on a device with a pixel density of 96dpi and a distance from the reader of an arm's length. For a nominal arm's length of 28 inches, the visual angle is therefore about 0.0213 degrees. For reading at arm's length, 1px thus corresponds to about 0.26 mm (1/96 inch)."
Of course this means "pixel" is kind of a misnomer, but at least the semantics of the unit are about what you want.
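The spec's numbers check out; a small sketch of the arithmetic:

```python
import math

ARM_LENGTH_IN = 28.0    # nominal arm's length from the CSS spec
REF_PX_IN = 1.0 / 96.0  # reference pixel size at that distance

# Visual angle subtended by one reference pixel, in degrees.
angle_deg = math.degrees(2 * math.atan(REF_PX_IN / (2 * ARM_LENGTH_IN)))
print(round(angle_deg, 4))         # -> 0.0213

# Its physical extent at arm's length, in millimetres.
print(round(REF_PX_IN * 25.4, 2))  # -> 0.26
```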
What is written in the CSS specification can work only if CSS has some means to measure the distance of the observer from the display surface.
Like @media distance(1m) {}
Otherwise it is just a good wish like "hope you have normal eyesight".
The only information that is available to CSS is that it gets rendered on some media. And so 100mm shall be 100mm measured by the ruler on that media surface. Monitor or printer. Dot. No other options.
> And so 100mm shall be 100mm measured by the ruler on that media surface. Monitor or printer. Dot. No other options.
So, you connect a projector to your laptop, point it at a 20 m screen, open a web page, and then ... the browser has to scale the web page to a width of 10 pixels without any options to adjust things for the user?
The projector is a great example. The projector generally will know neither how far the projector itself is from the screen nor how far the viewers will be. So specifying how large the type should be on the screen in inches is obviously not a useful thing to do.
Phones already have proximity sensors and facial recognition. We're just not using them to scale pixels yet.
But for most intents and purposes, it's enough to assume a "typical" distance for the type of device in question. People with bad eyesight are free to place the device closer to their eyes, zoom in, or adjust a system setting that makes everything look larger.
Exactly. It doesn’t make sense to talk about this topic without reading and understanding that spec first. The “px” unit has already been virtualized. It’s already based on visual angle (or the idea of visual angle), and notably not absolute display distance (because of TVs, projectors, and now VR goggles).
Note that there's slightly more choice here than just that: 1px may be either (1 physical real-world inch on the output medium)/96 or set related to the reference pixel. (Though in CSS 1in always equals 96px, etc.)
TVs and projectors should be an exception. They are always mentioned as an objection when this comes up, but I think there is no problem to limit resolution to certain devices where it makes sense.
"Non-technical" people don't understand why 1cm isn't the same on screen as printed out. They hold a piece of paper to the screen and are frustrated.
That 12pt means something different on Mac than on Windows is also confusing - and the exact handling changes between apps. I think Word makes it so that the printed out size is the same (it has to be), but the displayed size is smaller. On my Mac, "100%" zoom doesn't fit a piece of paper, but "135%" does. On Windows, it is about "115%" for me. Other apps have subtly different behavior.
It would be really great for multimonitor setup if stuff would have the same physical size on each screen (and while we are at it, color! Even a very basic per-model color calibration would be better than nothing). It would be really useful if you could display something at an exact scale on a tablet, and maybe hold a real ruler up to it.
I understand why this stuff is hard, I've worked a bit on related things myself. And to be honest, most people won't probably notice. But I still think it would be a good usability and ergonomics improvement.
No, they are just extremes that make the point unmistakably clear; if you decide they're exceptions, you just refuse to accept that the issue exists at all.
The exact same issue exists going from a smartphone to a tablet to a laptop or desktop computer, just to a lower degree. If you're not solving the issue such that it works from a smartphone to a projector, you're not actually solving it; you're just providing one more bandaid, of which we already have plenty.
The difference is, on a projector I want the content to be blown up. That's the whole purpose.
On the other hand, I want text on monitors to look like printed out, and hung up. Real paper doesn't scale with viewing distance. Instead, there is foreshortening - stuff a bit further away looks smaller, but I know it is the same size.
It doesn't mean all text should be the same size ever - on the contrary. If I sit further away, I want to choose a larger font size or greater magnification.
All I want is, when dealing with physicality - stuff to print out, or stuff behind a touchscreen that pretends to be real - I want to be able to set a slider to "100%", and put a ruler or comparison document up to the screen, and have it match perfectly. I want stuff on dual monitors and large tablets to be the same size by default.
I needed these features already several times this week: I printed out a document just to get a better impression of how the font size looks on paper. I also had a scan of an architectural drawing on millimeter paper (1cm = 1m), and wanted to compare it with a paper blueprint - if it was the same scale, I could just hold a ruler against it.
I've been experimenting with the concept of using subtended angles on the retina. For devices that can sense the distance to the user, we'd use that, otherwise you make an assumption based on the kind of device.
That way you should be able to get sizes that appear roughly the same size regardless of whether they are on a huge screen far away or a small screen up close.
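Under the stated assumption (a device that can report or estimate viewing distance), keeping the subtended angle constant is one line of trigonometry. The function name and numbers below are illustrative:

```python
import math

def size_for_angle(angle_deg: float, distance_mm: float) -> float:
    """Physical size (mm) that subtends a given visual angle at a distance."""
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

# The same 0.5-degree target must be 4x larger at 4x the distance.
near = size_for_angle(0.5, 500)    # ~50 cm away (monitor)
far = size_for_angle(0.5, 2000)    # ~2 m away (TV)
print(round(near, 1), round(far, 1))  # -> 4.4 17.5
```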
IMO this is easily fixable by making any display output its exact pixel size as some sort of universal standard. For example, 1 pixel = 0.0147 cm. And vice versa. Together with the screen diagonal size (and maybe some flag whether it's a tablet or desktop, etc), you have all the information needed. You could perfectly adjust websites for different use.
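Assuming a display really did report its pixel pitch (the 0.0147 cm figure above works out to roughly a 173 ppi panel), mapping physical lengths to device pixels would be a single division. A sketch:

```python
# Sketch: if the display reported its pixel pitch, physical sizing
# becomes trivial. 0.147 mm/px (= 0.0147 cm) is the illustrative
# figure from the comment above.

def mm_to_device_px(length_mm: float, pitch_mm_per_px: float) -> float:
    return length_mm / pitch_mm_per_px

pitch = 0.147  # mm per pixel
print(round(mm_to_device_px(10.0, pitch)))  # a 1 cm ruler mark -> 68 px
```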
On a Mac (probably elsewhere, too), you can open an A4 print PDF and have it exactly the same size as the output, to the millimeter. That is so helpful.
Such a standard exists, but it measures the total display area across the entire width and height of the display. Thus the pixel to pixel gaps become negligible compared to the imprecision of humans interpreting illuminated pixels.
We should standardize on something, but it shouldn't be inches.
You might be looking at a phone screen a foot away, or a monitor two feet away, or a TV four feet away, or a projection 10 feet away, or a jumbo display 100 feet away.
How does measuring in inches help? Answer: it doesn't at all.
Measuring in points used to be a good system, back on the Mac in the old days. The user interface was in 12 pt Chicago, new Word documents defaulted to 12 pt Times New Roman -- 12 pt was the baseline for normal body text both for interface and for content, and everything else made sense relative to that.
Then what happened? For some reason, web browsers defaulted to 16 pt. So the text on web pages was too big.
Then, laptop screens started packing more pixels in (this is pre-retina) to advertise higher resolutions, and OS interface text became physically smaller -- really hard to read. 16 pt webpages were actually OK though, so now that kind of made sense.
Strangely, at the same time, interface text got even smaller -- look at the tiny text OSX now uses in a lot of dialog settings, or the small size Chrome uses for tab titles. (Menus and buttons are usually still OK.)
And then bloggers wanted to make webpages easier to read, so they started doing things like 18 pt text (e.g. Medium) and sometimes you see 20 pt or even 22 pt. So now, each letter on Medium takes up about three times the area of a letter in the title of the tab I have open on Chrome.
Scale on computer screens no longer makes any sense, and you have to constantly use some combination of monitor resolution and browser zoom to keep all the elements of your screen in any kind of reasonable proportion.
So my humble suggestion is: can't we just go back to where 12 pt meant normal computer screen UI text and body text, and everybody stick to that? Forget that points are based on inches, just make everything relative to 12 pts = body text. Then everyone can pick a resolution or zoom level for each device so it's legible for your eyes at your distance... but then everything stays in proportion!
I've argued in the past[1] (and there were a few good comments in that thread, in response) that most UI elements should be measured in the arcdegrees they should consume in the field of view of the user. That is, how large should this text appear, not how big should it be physically rendered, either in pixels or inches. (So some derived unit, like pt.)
As I stated in that thread, the display device would need some concept of its own dimensions (easy) and its intended viewing distance (harder, and would probably need to be configurable).
The one commenter in that thread notes, I think rightly, that,
> once you get to very small screens like phones, there is a tradeoff between keeping font and UI sizes comfortable, and being able to actually fit enough content on the screen without endless scrolling. I am willing to strain my eyes with smaller font sizes on my phone than on my laptop, just so that I can see more than 5 sentences of text at the same time.
I think this particularly applies as blog fonts get larger and larger.
> web browsers defaulted to 16 pt
Are you sure? I don't ever remember this being the case; it's always been 12pt (usually a serif font, often Times New Roman). Are you sure you're not mixing that up w/ 16px (which I believe in CSS, 12pt == 16px)?
You are right for projectors and TV screens, but otherwise I think I disagree.
Paper also doesn't take viewing distance into consideration.
If my screen is 80 cm away, and my colleague's is 60 cm away from him, should things be smaller in pixels on his screen? What if I walk up to his desk? It would be really hard to compare sizes. The thing is, the brain already corrects for optical foreshortening. It's normal that things that are further away use fewer arcdegrees, or less space on the retina.
Now we might be talking about different things. Sure, text in a book or on a phone can be a bit smaller than on a screen further away. That's a large-order effect, effectively you are using a different stylesheet, or viewing a different document. What I and other resolution-independence proponents are talking about are small effects, when viewing the same document on similar devices. A multimonitor setup, a large tablet, your colleagues' screens. In all these places, you should be able to hold up a ruler and have 1cm=1cm.
> In all these places, you should be able to hold up a ruler and have 1cm=1cm.
I don't necessarily agree, I guess. If all of the monitors are set up at the same viewing distance, then yes, 1cm=1cm if stuff is measured in arcdegrees. But if they're not — say one is a larger device that's typically viewed from further away — then no, the absolute physical length of a displayed item should, again IMO, increase to compensate for the distance.
I think this is a better system than today's, which seem to be mostly a hodge-podge of bad assumptions that the screen's DPI is always X, and fiddling with UI zoom settings until something respectable comes out.
I am not saying that there aren't good use cases for a display device to output actual cm/in, however: for example, if I want to preview how a printed document will actually physically look, I absolutely think there should be an API to say, "No, I don't want to display this is arcdegrees, I want this thing to be absolute 8.5 in by 11 in." and then in that case, yes 1cm=1cm.
That is, it's contextual, but I think arcdegrees provide a better default for the majority of use cases.
(0) From the display, get ≥ 2 of: physical display size, ppi, and resolution. This is the hardest step; the display hardware must accurately report its physical size or physical sizing will not work.
(1) For a glyph of logical size `x` in points, calculate the size `y` in pixels that would render it with physical size `x` on the display.
(2) Scale `y` based on the expected distance from the user's eyes to the display. Specifically, the software chooses a scaling factor such that the glyph has the same apparent size to the user as if it was displayed 18" from the user's eyes with physical size `x`. (18" is about how far away a person would hold a printout when reading it.) The exact reference point doesn't matter much; it just has to be consistent between devices. This step is so a given logical size will look pretty much the same across devices even if they have different form factors.
(3) Apply user preferences on a device, app, and document dependent basis for additional scaling as usual to accommodate user needs and preferences. Hopefully, with steps 0–2 implemented, there will be less need for users to fiddle with per-app and per-document settings.
[1] This is essentially equivalent to "use arcdegrees", just unpacked a little.
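Steps 0-2 above can be sketched roughly like this (all names are hypothetical, and step 0's display query is assumed to have already produced an accurate ppi):

```python
# A minimal sketch of steps 1-2 above; illustrative, not any real API.

REFERENCE_DISTANCE_IN = 18.0  # reading distance for a printout (step 2)

def pt_to_device_px(size_pt: float, display_ppi: float,
                    viewing_distance_in: float) -> float:
    # Step 1: pixels that render the glyph at its physical point size.
    physical_px = size_pt * display_ppi / 72.0
    # Step 2: keep the apparent (angular) size equal to the reference
    # case by scaling with the expected viewing distance.
    return physical_px * viewing_distance_in / REFERENCE_DISTANCE_IN

# 12 pt at 18" on a 96 ppi panel is 16 px; at 36" it doubles.
print(pt_to_device_px(12, 96, 18))  # -> 16.0
print(pt_to_device_px(12, 96, 36))  # -> 32.0
```

Step 3 (user preferences) would then multiply in a final per-user scaling factor.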
1. What if I need to print some form and 100mm box shall be precisely of that size on paper. How would your default-font-relative-length-system work?
2. What if I have touch screen and need to make that button to be clickable by a finger? Like: button { width:1in; height:1in }. How would font size (consumed by eyes) be related to size of the button on touch surface (consumed by a finger)?
Stating opinions like facts, the standard units should be:
1) Arcminutes at typical view distance. The sun is 32 arcminutes and a css pixel is 1.278 arcminutes. This is the primary unit for measuring content.
2) Percent. This is the primary unit for measuring layout. The number of arcminutes per percent varies from screen to screen in the same way that screen sizes vary, and can be dealt with using the same responsive design tools.
3) Pixels. A pixel is the smallest size that can be drawn in this medium (it could be an actual pixel or an inkjet dot). A 1px line is possibly useful as the finest possible thickness, but otherwise you should only use this if you have a specific reason, and mixing it with other units invites madness.
4) Meters. Literal physical size. Like pixels, there are specific reasons to use this, but even fewer of them, and mixing it with other units invites madness.
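The 1.278-arcminute figure can be double-checked from the CSS reference pixel (1/96 in viewed at the spec's nominal 28 in arm's length); a quick sketch:

```python
import math

# A CSS px is 1/96 in viewed at a nominal 28 in arm's length.
px_arcmin = math.degrees(math.atan((1 / 96) / 28)) * 60
print(round(px_arcmin, 2))  # -> 1.28, close to the 1.278 quoted above

# The sun's ~32 arcminutes then span roughly 25 CSS pixels.
print(round(32 / px_arcmin))  # -> 25
```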
What is the interaction with zooming?
You can't just deny zooming. Users want it, and if you disable it, someone will still monkey-patch it in.
A first-order solution is to have 'zooming' be pure scaling. Essentially, instead of ISO defining a millimetre, the user defines it, and it defaults to the ISO definition.
This essentially means the system has to handle a screen-size that could change at any given time. Unless you are willing to accept interpolation.
But then, what happens when real distance starts to matter? A stupid example is a ruler app, which obviously breaks upon scaling.
Generally, any app that depends on `real hardware' interacting with the screen, this is going to be an issue.
Specifically, this is an issue for touch screens. Even more so because 'human hands' are `real hardware'.
If two buttons are rendered 2cm apart, and the user scales by a factor of 2x, that would put the buttons 4cm apart. That might be too much of a spread to use comfortably.
In general, does it really make sense to treat a 14" screen scaled at 2x as a 7" screen? What about a 15" screen scaled at 3x?
If you accept the above as a problem, there are many solutions, but they are either a lot of work, or require 'monkey patching'.
Sadly, this means the quote "Resolution independent graphics is a solved problem." is not quite right. At least insofar as any chosen system will have trade-offs between ease of development and ease of use.
Taking a pessimistic view:
Those trade-offs will inevitably fall towards development on occasion, which will train users to think this is stupid anyway. Thus developers get less benefit, and we will see weird monkey patching to attempt to fix this. Maybe at some point, someone walks in and declares "this is a mess, it should have been done right from the start".
Yeah. These accessibility vs. design issues were what immediately came to my mind too. Though, as simply another tool in the bag, accurate real-world sizing capability seems like it could actually be used to help with those issues.
Would like to hear more from designers on the issues and challenges of creating accessible designs.
It is a best practice (and required by certain WCAG 2.0 guidelines) to allow users to adjust the text size to meet their needs. The 2.1 AA guideline says that websites must scale up to 400% and still be usable.
In CSS, I almost entirely use relative units so that I don't need to worry about how things will scale. If the user wants/needs their text to be larger, they will also want/need [any other element] to be larger, as well.
Of course, if you let your buttons scale as much as your text does, you can end up with buttons taking up the entire screen. I typically solve that by using max-sizing on an element, rather than using `in` or `cm`...I can't recall ever using physical units in web design.
there is a much less visible, less obvious problem that would be seriously exacerbated by pursuing this.
Most software, including iOS, OS X, and Windows, has been doing antialiasing of fonts with the wrong gamma for years. This wrong gamma has been masking another problem: scaling a glyph shape without adjusting its weight is the wrong way to set type at a particular size. Until now, font hinting and the wrong gamma have fudged the appearance of font weight to be "almost right" on low-resolution displays. But on high-resolution displays, you begin to see a font's true weight more accurately, since the effect of wrong gamma and font hinting is significantly reduced. The result is that small type sizes appear much lighter than it "feels" they should.
Apple chose to solve this in a really unfortunate way for devices with high-density displays: it applies a fake bolding effect to approximate the effect of the wrong-gamma antialiasing. Comically, it applies the wrong amount of fake bold, such that the fake bold affects type set at larger sizes far more than it should.
In conclusion: yes, let's make 12pt the same everywhere, but we will also need to fix all these other hacks on hacks at the same time.
Apple does apply gamma correct blending, at least when rendering text to a solid surface so that subpixel alpha kicks in.
If you render it to a transparent layer and force an RGBA blend, it gets blended in sRGB color space rather than linear RGB, but this is a flaw that can't be easily changed because of legacy reasons. Any design that uses transparency of any kind, text or graphics, would change appearance. I agree that we should have a body { color-space: linear; } though, but good luck in explaining why to coders and users alike.
The fattening that you are complaining about is something else entirely: it mimics the way physical ink bleeds on paper. The weights were deliberately designed light to compensate for this, and the bleed effect is then replicated virtually.
A classic example involving physical ink bleed is the Courier New font on Windows. It's so light as to be unusable for a lot of things. Why? Because it was rasterised off an IBM Selectric typeball, and whoever digitised it failed to account for the ink bleed that occurs in actual typewriter use, unlike whoever designed the typeball.
Is the "wrong gamma" problem that (pixel intensity)^gamma is proportional to luminance, but the software incorrectly does its antialiasing calculations directly on pixel intensity? Or something else?
Yes, that's basically it. I think there are at least two different mistakes that can be made.
When a font is rendered you usually calculate the per-pixel coverage. Mistake one is to interpret those coverage values as if they're sRGB colors (so 50% coverage would be #808080 which is a rather dark gray in sRGB, not mid-gray). That makes text look heavier than it should be, and makes diagonal lines look jaggy.
Mistake two is to blend sRGB colors directly rather than converting them to a linear space first. That can give colored fringes on blended images, because the RGB intensities get skewed.
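To make mistake one concrete, here is a small sketch of the arithmetic using the standard sRGB transfer function: 50% glyph coverage of black text on a white background should produce a pixel at 50% luminance, which encodes to roughly #BC in sRGB, not #80.

```python
def srgb_encode(linear):
    """Convert a linear-light value in [0, 1] to an sRGB-encoded value."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Black glyph at 50% coverage over a white background.
# Correct: blend in linear light, then encode for display.
correct = round(srgb_encode(0.5) * 255)  # 188, i.e. #BCBCBC
# Mistake one: treat coverage directly as an sRGB value.
naive = round(0.5 * 255)                 # 128, i.e. #808080
print(correct, naive)
```

The naive version is visibly darker, which is exactly the "text looks heavier than it should" symptom described above.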
They were replicating the look of small fonts on CRTs where black text on a light background needed thicker lines to compensate for the bleed from neighboring active phosphors "shrinking" the apparent weight.
Bill Gates made this same promise when Windows 1 was released: that each monitor and printer would require drivers specific to their model so that fonts would be measured in points (or picas) instead of in pixels.
Somehow we ended up with all the drivers for each device even though the original promise has long been forgotten.
We sacrificed the days of the plain printer with the Centronics hardware interface, and the versatility of the Unix lpr spooler, where you simply sent a stream out to printers containing control codes you knew would work with your device, like carriage return and linefeed. Then later the Calcomp and HPGL codes.
And what did we gain for that trade?
I think this link is great, because someone is clearly remembering the promise. Upvote for that.
I ran across this article today while looking for information on PPI Awareness of web browsers. In particular, I'm looking at Hyper, an Electron-based terminal, and how I might be able to get the same type size from monitor to monitor.
As web applications and high-DPI monitors become more common, I can see more practical need for a measurement which is truly resolution-independent. I believe points really should have been this measurement, but they were co-opted early on, with Windows and Macintosh specifying a fixed PPI (96ppi and 72ppi respectively) as a function of the OS as opposed to what the monitor actually reported.
Can we fix how points are handled, or do we need to go to mm or some other measurement at this point?
> Can we fix how points are handled, or do we need to go to mm or some other measurement at this point?
I'd be happy to go with the millimetre anyway. It seems to me that the cleanest way to look at the _point_ is as simply a unit of length, satisfying, in its modern "desktop publishing point" incarnation, 1 pt = 0.3528 mm = 0.01389 inches. From that perspective, any unit is as good as any other, and I would personally use the millimetre. Using anything other than the point might even be preferable, if it removes some uncertainty about how it should be interpreted.
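The conversions are pure arithmetic, since the desktop publishing point is defined as exactly 1/72 inch (and the CSS reference px as 1/96 inch):

```python
MM_PER_INCH = 25.4

def pt_to_mm(pt):
    """Desktop publishing point (1/72 in) to millimetres."""
    return pt / 72 * MM_PER_INCH

def pt_to_css_px(pt):
    """Points to CSS reference pixels (1 px = 1/96 in by definition)."""
    return pt / 72 * 96

print(round(pt_to_mm(1), 4))  # 0.3528 mm per point
print(pt_to_css_px(12))       # 16.0 -- why 12pt text defaults to 16px
```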
X11 allowed resolution independence (run 'xdpyinfo | grep dimensions' and the report includes the physical size, which will almost certainly be wrong) and in the days of pixel fonts shipped with two sets, for 75dpi and 100dpi, corresponding to the two common display densities of the time. The ‘free desktops’ could have built on this when vector fonts took over, but ignored it in their zeal to imitate Windows.
> but ignored it in their zeal to imitate Windows.
You are letting your zeal affect your judgement.
Of course the free desktops did explore the resolution issue. They came to similar result that the proprietary counterparts did, namely that:
1) Hardware lies about its capabilities. There are entire product lines that use the same EDIDs. For manufacturers, it is simpler to just copy an existing ROM for a new product than to bother with customizing. Customers do not care anyway.
2) Hardware may have no knowledge at all of what the pixel density is. How do you imagine a projector measures and reports its density? In the case of a projector, it might even change mid-flight, with the intent of changing the entire size of the picture, not its sharpness.
3) Meanwhile, there were practical concerns with software. What kind of bitmap assets are you going to recommend developers prepare? They won't ship for the entire continuum of densities. That's why Apple's approach was practical - just ship two or three.
This predated EDID, and digital displays generally. The point was that Gnome & KDE actually regressed from base X11 and painted themselves into a 100dpi corner.
Technologically, scaling for physical display resolution and scaling for legibility are substantially the same; done right, either would enable the other.
> I believe points really should have been this measurement, but were co-opted early on with Windows and Macintosh specifying a specific PPI (96ppi and 72ppi respectively) as a function of the OS as opposed to what the monitor actually reported.
> Can we fix how points are handled, or do we need to go to mm or some other measurement at this point?
Points and millimeters are handled identically on all systems I know of. Point is not really any special unit, it is just 1/72 inches.
As for the overall theme of the article, at least at some point Firefox was perfectly able to do completely resolution independent rendering, realizing the authors wish that 12pt should be 12pt. I haven't really followed the situation lately, but I wouldn't expect the situation to be significantly worse these days.
One problem with mm as a unit for type is that it's not immediately obvious what we're measuring the length of. The point specifically refers to the maximum height of the minuscule 'm' or something like that.
Then with things like antialiasing and hinting, the intrinsic size of a font ends up both physically and psychologically a different size.
I'm not yet convinced the problem is as clean and obvious as it seems at first glance.
Well generally typeface size is measured from the top of a capital letter (cap height) to the lowest point of the descenders, regardless of the units used for measurement.
And you don't even have to get to antialiasing and hinting to see issues with perceptual typeface size... Look across faces and you'll see wild differences in cap height, ascenders, and descenders relative to the x-height.
I started messing with getting the real pixels per inch in electron here, maybe this project helps? It's not ready for primetime except on macOS.
https://github.com/francoislaberge/lifesized
Let the user specify a zoom factor, but then 12pt text zoomed by a factor X is the same on every display.
There is some issue with how this should interact with certain things (like touch distance). Should two buttons that are 2cm apart be 4cm or 2cm apart after scaling by a factor of two?
There is a fringe case of a display-based ruler, that essentially shouldn't zoom.
Ideally you'd want 2 zoom factors, one for visuals and one for touch. But then you'd potentially have buttons that couldn't hold their labels anymore. I think one zoom factor should be sufficient.
Given that we can't support actual-size rulers today, I think we've demonstrated that we can live without them.
P.S. for low resolution displays it's obvious that clear text and graphics is more important than absolute size.
A "point" is a unit of measure. If a person wants to zoom in or out, it's up to them, but by definition a point is 1/72 of an inch.
We don't change the definition of a mile based on the type of vehicle travelling it, and we shouldn't change the definition of a point based on the type of display viewing it.
Also, a calorie may mean 1000 calories (a kcal) depending on who is writing it. It's clear that nautical miles aren't miles; not so clear for calories, and maybe points. What should we call them? Cyber points? Relative points?
The problem is that if you want this, you are now deciding that scaling happens not at integral or simple fractional scales (e.g. 2x or 2.5x), but at arbitrary floating-point scales (e.g. 2.0245x), which makes shipping bitmap assets generally pretty problematic.
We decide that, in order to make writing applications easier, we will _approximate_ font sizes, because in most applications, it's Going To Be Okay. It sucks that certain ones like DTP apps get the raw end of the deal, but at the end of the day, it's better for most people using computers that we don't try to perfectly abstract away the exact pixel density and dimensions of your display.
Floating point scaling for bitmaps sounds bad, but often it's actually OK. It helps if PPI is very high, which these days usually it is. The iPhone 6 Plus (and I imagine the 7+ and 8+) scales its screen in hardware from 1242x2208 to 1080x1920 and it looks fine.
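The 6 Plus numbers work out like this: the UI renders at a nominal 3x scale into a 1242x2208 buffer, then the whole frame is downsampled by 1080/1242 to fit the panel, for an effective scale of roughly 2.61 physical pixels per point; a clearly non-integral factor that nonetheless looks fine at that density. A sketch of the arithmetic:

```python
# iPhone 6 Plus: logical 414x736 points, rendered at a nominal 3x
# into a 1242x2208 buffer, then hardware-downsampled to the 1080x1920 panel.
logical_width = 414
render_scale = 3
buffer_width = logical_width * render_scale  # 1242
panel_width = 1080

downsample = panel_width / buffer_width      # ~0.8696
effective_scale = render_scale * downsample  # ~2.61 physical px per point
print(round(downsample, 4), round(effective_scale, 2))
```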
If you really need UI graphics to be completely crisp, you can always use vectors.
If you are designing an application today, why would you use bitmap assets? Logos, buttons, icons, and other UI are all going to be designed in a vector graphics program, so just export them as SVG and be done with it.
SVG is even supported well enough in web browsers to use without issue. Take a look at the Stripe website for example, most of the images there are SVG.
Let's back up a little and talk about pixel sizes:
Suppose I render "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX" in DejaVu Sans Mono with a normal font-weight: once at size "12px", and then again at "11.65px". (Note: those are pixel sizes, not point sizes.)
Assume that string is all a single non-wrapping line and rendered to an HTML5 canvas using a web browser, and that I have loaded some known version of my font using the "@font-face" syntax in CSS. What should the width be in each case when I measure using the "measureText" method?
If a given platform's font stack renders that string substantially wider-- say, 7 pixels wider-- than every other platform (all of which return values within half a pixel of each other), is that a bug?
If so, is there a specification somewhere that requires a platform's font stack to produce a pixel-sized result for a pixel-sized font within some threshold? Or are font stack devs within their rights to justify whatever the metrics are based on other aesthetic or perceptual qualities of their renderer?
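As a rough check of what "should" happen, assuming DejaVu Sans Mono's glyph advance of 1233 units in a 2048-unit em (a value worth verifying against the actual font file's metrics): the ideal, unhinted line width is simply advance times size times character count, and a renderer that rounds each advance to a whole pixel drifts from it quickly.

```python
# Ideal (unhinted) line width for a monospaced font.
# Assumes DejaVu Sans Mono: advance width 1233 units, 2048 units per em.
ADVANCE_UNITS = 1233
UNITS_PER_EM = 2048

def ideal_width(px_size, char_count):
    return px_size * ADVANCE_UNITS / UNITS_PER_EM * char_count

def pixel_snapped_width(px_size, char_count):
    """What a renderer that rounds each advance to whole pixels produces."""
    return round(px_size * ADVANCE_UNITS / UNITS_PER_EM) * char_count

n = 30  # the thirty 'X' characters in the example above
print(round(ideal_width(12, n), 2))     # ~216.74 px
print(round(ideal_width(11.65, n), 2))  # ~210.42 px
print(pixel_snapped_width(12, n))       # 210 -- snapping 7.22 px down to 7 px
```

Note how a per-glyph snap of a fraction of a pixel accumulates to several pixels over one line, which is the scale of discrepancy asked about above.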
That happens all the time, and unfortunately it isn't regarded as a bug. There's no specification or agreement that requires font renderers to match up.
Font rendering used to be all about bitmaps, then as TrueType matured it was all about aggressively hinting (and auto-hinting) to fit the pixel grid, and we haven't fully gotten away from that.
Of all the main OSes, I think OS X and iOS do the least hinting, but I don't know if they do no hinting; and many people dislike the blurry text on OS X. (I don't see any complaints about blurriness on iOS, though, where you always have really high screen resolution now.)
It would be technically feasible for all font renderers to produce identical output (or at least to provide such an option) but nobody seems to care enough to push for it.
I think all mainstream font renderers do have the ability to produce identical output in terms of metrics, at least. E.g. on Windows (DirectWrite), you have "ideal layout" as an alternative to "pixel snap", and that should do the same thing as OS X - not pixel perfect, but dimension-wise.
The problem is that the output sucks on low-res displays, like most PC screens. Mac users are kinda used to it (and Apple uses bigger fonts to compensate somewhat), but for Windows users, it's a major eyesore when someone tries to use ideal rendering. For a long time, WPF did that, and it was constantly reported as a bug - indeed, I filed one of those bugs myself (https://connect.microsoft.com/VisualStudio/feedback/details/...). They actually had to fix WPF to allow for pixel-snapping.
Ultimately, this problem will only go away for good once everything is high-DPI, including desktops and laptops. Ironically, the reason WPF only supported ideal layout initially is that the team thought this would be the case by the time they released the final version (the design process started somewhere in 2001, IIRC). It's obvious now that it's not going to be the case anytime soon - indeed, many cheaper laptops still use 1366x768. Hopefully we'll get there eventually.
Thanks for the reply-- it's what I gathered from doing a little digging into extant font stacks but wasn't sure if I had skipped over an obscure specification somewhere.
So here's the deal-- maybe a year ago, I tested Windows, OSX, and old-school GNU (via Ubuntu 14.04) and got font metric results within a half a pixel of each other for the test I outlined above. New-school GNU, however, was about 7 pixels wider. Again, this was for the same version of a mono-spaced font, loaded from the same font file shipped with my test program.
I also got reports from users of other GNU/Linux distros about the discrepancy-- these were distros that also used the newer GNU font stack. Since that time I've seen reports about this discrepancy affecting other software.
One final data point-- I also noticed that the newer GNU font stack seems to quantize pixel sizes that aren't whole numbers. For example, assume size "11.3px" generates a particular output. Then "11.33px" would generate the same output, and only when you get to, say, "11.37px" do you get a different output. Not sure if those numbers accurately reflect the quanta, but it was something like that. And again, this only happened with the newer GNU font stack and not with any of the other platforms. Now maybe there's some super beautifully readable output that comes from that algo, but OSX's gold standard of readability doesn't seem to do that...
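One plausible mechanism for that quantization (purely a guess, not a confirmed diagnosis): FreeType, which underlies the GNU font stack, expresses sizes in 26.6 fixed point, i.e. units of 1/64 pixel, and hinting can snap dimensions more coarsely still. A sketch of what pure 26.6 rounding does to nearby sizes:

```python
def quantize_26_6(px_size):
    """Round a pixel size to FreeType's 26.6 fixed-point grid (1/64 px)."""
    return round(px_size * 64) / 64

for size in (11.3, 11.33, 11.37):
    print(size, "->", quantize_26_6(size))
```

At 1/64 px these three sizes are still distinct, so the coarser quanta observed above would have to come from additional rounding higher in the stack (hinting to whole pixels, for example), not from the fixed-point representation alone.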
I didn't take the time to narrow this down any further than that, nor drill down to figure out what part of the font stack causes the discrepancy, or when it happened. Without a spec to which I can refer I'm 99% sure I'll get a response from a font-stack dev that this is the way fonts work and it's my app that is broken. And confirming that likely response isn't worth the effort it would take to precisely describe this bug on an issue list.
As an aside-- my solution to this in my program is to do a sanity check at startup, do some sizing adjustments to make sure fonts fit appropriately into their boxes if necessary, and print out a message for users when the oddball GNU situation is detected. I have to print the message because the diagrams printed in the program are supposed to be pixel-exact, and oddball GNU users will notice extra space at the end of the boxes. (Which is a feature, because if those boxes were tight fitting oddball-GNU-font-stack users would potentially move them closer together and generate overlaps for sane-font-stack users...)
The best size is dependent on the distance between the eye and the content. You would use letters less than 10mm on a smartphone screen but you could very well use an inch or more on a big TV screen. As a designer you have no idea whether your web page will be displayed on a tiny screen of a low-end smartphone or a big TV.
I know printing is not in vogue anymore, but there are still requirements from customers that need printing. The CSS spec for printing is beyond dismal. No two printouts from different devices, let alone browsers, look alike or have the same page breaks.
I wish default sizes didn’t matter so much. In theory, anything that’s too small can be Accessibility-sized to something that you like better. In practice, you just give up after encountering a bunch of “Butto…” with “Titl…” and decide to suffer through whatever improper size was chosen. (Or if you’re really unlucky, it’s one of those interfaces where improperly-fitting text just plain disappears without leaving anything behind.)
The number one rule with text should be: respect the user’s choice.
For a very detailed discussion of font rendering, from the historical compromises that caused severe line-length problems to how to properly render with reliable sizes at any display ppi, I recommend reading this[1] paper by Maxim Shemanarev and the AGG project.
A key reason we have font size problems today was Microsoft's decision to aggressively hint their display fonts. The hinting was so strong that many glyphs had their horizontal position forcibly quantized to the pixel grid. Thus 1% changes in font size might round to different pixels, with error accumulating unpredictably into huge differences in length for each line. Scaling the font size could make a dialog unusable, so a lot of software used fixed px sizes so the GUI was predictable.
Microsoft finally addressed this problem in recent versions of Windows, but it will take time to undo the "Only 96 ppi exists" damage. Font rendering is complicated!
But which definition of "point" should be used? It's ambiguous (see https://en.wikipedia.org/wiki/Point_(typography)#Varying_sta... ). At this point, if you demand length equality, use well-defined absolute units (i.e. mm, cm, in). However, you're going to have to make some difficult decisions between using absolute units and small-screen support. Relative units permit dynamic rescaling, but in so doing one has to give up the explicit length-equality demand. If you really want to, you can use device/media queries and varying stylesheets to get around this; but for all that is holy, please don't demand 12pt be 12pt everywhere -- we can't even all agree on what a point is without (implicit and unreliable) context.
> Computer hardware and software people could never figure out a reliable way to communicate the diagonal display size. It was just easier for the hardware and software to communicate the screen resolution and just punt on factoring in how packed the pixels were when rendering computer graphics. There was never commitment by the computer industry to deliver display output that matched real-world measurements.
No, this is a HTML/browser problem.
Since the days of DDC (late 90s) this worked plug-n-play. If, for example, you set a PDF viewer to 100 % then you will practically always get a 1:1 size reproduction that is basically dead nuts on.
I get the rant about font sizes and I agree with it; however, the OP should keep in mind that displays are of wildly varying sizes and aspect ratios, and physical measurements don't really help much.
For example, I'd rather have the 5 inch box be 3 inches on a 7" tablet, and 5 inch on my 24" monitor.
If you go the path of pretty design with typographical conventions you end up with those web pages that show 2 lines of text on mobiles and take up 15% of your screen on a 30" monitor.
A point is a terrible unit for displays. No point in attempting a solution when ‘px’ is a viable solution already. Use ‘px’ where you can and then use ‘pt’ for your print styles.
That's what puts me in the situation that caused me to start looking into this, though. I've got two displays; one high-dpi and one not. Without scaling the entire display on the High-DPI display, I can't use my terminal app on both screens because the type will either be too big or too small.
If points were actually points, I could use a 12pt or 14pt font on both displays and be perfectly happy. I'd still have to deal with how it changed the number of available columns, but that's a solvable problem.
What's wrong with scaling the entire high-DPI display? At least assuming you mean an OS-level thing that scales up font sizes and UI elements, not just scaling the raw pixels.
I don't want to scale up all of the UI controls just so that I can have type that's readable. I can fit a lot on the screen by keeping the scaling level down and just zooming into web sites when necessary, but it's a lot harder to do that with a terminal app.
Hmm, I still don't quite follow the problem, sorry.
If you need to scale up the terminal text to read it, don't you still need to scale up the other parts of the UI, so you can see those too?
On the Mac at least, it seems to me this stuff works reasonably well. You can set the overall scaling per-screen, and you can set the (scaled) font size used by the Terminal. It's really not clear to me why you'd need to change both the UI scaling and the font size on a particular screen.
You make a good point. I'm going to try scaling up the screen for a while and see how it goes. I can already see that the title bars, buttons, etc. are bigger than they need to be, but maybe not as bad as I originally thought.
Yeah, for a while I insisted on setting my "retina" Macbook to its full resolution, but once I started using a second (non-retina) screen, I grudgingly scaled it to a more reasonable size and haven't looked back.
I was kinda hoping that this would be about how HN uses too small fonts for everything so that I have to zoom to 120% before it becomes comfortable to view on my laptop.
I was going to debunk this, and suggest that you just set your default font size in your browser to bigger than 16px/12pt, but having just tested it, yes you are right, comment text is locked at about 13px in the HN CSS. That's a bit naughty. :(
So... it should be 5 inches on a 52" screen and also 5 inches on a 7" smartphone screen? I guess users are going to have to find a way to wrap the rest of the 52 inches around their face when trying to read that example text; otherwise it's wasted space.
Looking at the problem holistically is not optional when it comes to perception.
The biggest offender is Outlook on macOS. You can pinch to zoom when editing an email, and it will remember your zoom level for subsequent emails created.
There's no way to reset it to the default, and so no way to tell even roughly how big the text will look for the recipient (screen size notwithstanding).
The only reason pixels appear to work is that everyone uses pixels in display settings, so changes seem to work and people continue to use them to size things.
They might as well just call them "screen resolution" or something, because they sure aren't resolution.
So this person is proposing that every random site gets to determine the size I want to have my font and wants to make it the same regardless of device? Way to only think of the designers.
> If they're not the same, then they're not inches.
I'm not saying that UI elements should necessarily be the same size, but the option should be available. Having a point be equal from display to display doesn't mean that relative scaling is impossible; it just means that you need to use different units for those cases (which is probably most of the time). Points really aren't all that useful on computer monitors at present; you're better off using EMs, percentages or possibly pixels depending on what your goal is. This leaves a gap, however, when it actually is advantageous to know for certain what size a UI element will be at display-time.
Step one would be to persuade manufacturers not to lie about device dimensions in EDID. Then software would have a fighting chance of knowing how many pixels are in an inch.
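If EDID were trustworthy, the computation would be trivial: EDID carries the physical image size in millimetres, so pixels per inch is just the resolution divided by the width in inches. A sketch (the 344x194 mm figures are a made-up example for a typical 15.6-inch 1080p panel):

```python
MM_PER_INCH = 25.4

def ppi(pixels, size_mm):
    """Pixels per inch from a resolution and an EDID physical dimension."""
    return pixels / (size_mm / MM_PER_INCH)

# Hypothetical 15.6" 1080p panel reporting a 344 mm wide image in its EDID.
print(round(ppi(1920, 344), 1))  # ~141.8 ppi
```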