This may not be the correct answer, but remember that the "dot density" is much lower on screen than on a printer.

The most common screen density is 96 dpi, while printers work at 300 or 600 dpi. Fonts are usually designed with printer densities in mind. What complicates the issue is that glyphs are usually described by curves (a so-called vector definition), not by bitmaps (which do not scale well), and those curves must eventually be rasterized onto the much coarser screen grid.
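
To put rough numbers on it, here is a small sketch; the 10 pt size is just an illustrative assumption (1 point = 1/72 inch):

    # How many device pixels a glyph of a given size gets at a given density.
    def glyph_height_px(size_pt, dpi):
        return size_pt / 72.0 * dpi   # 1 pt = 1/72 inch

    for dpi in (96, 300, 600):
        print(f"10 pt at {dpi} dpi = {glyph_height_px(10, dpi):.1f} px")
    # 10 pt at 96 dpi = 13.3 px
    # 10 pt at 300 dpi = 41.7 px
    # 10 pt at 600 dpi = 83.3 px

A 600 dpi printer has about six times as many pixels as the screen to draw the same letter, so it can afford much finer detail.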

LO and other word processors compute the placement of glyphs to very high accuracy, but they must then cope with the physical properties of the output device. This means characters will be aligned on the output device's pixel grid: a pixel is a pixel, and you cannot draw anything at a position or size that is not a whole number of pixels.
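
As a toy illustration (not LO's actual code, and the 7.3 px advance is an invented value), here is what snapping ideally spaced glyph origins to whole pixels does:

    # Ideal glyph origins, spaced 7.3 px apart, snapped to the pixel grid.
    positions = [10.0, 17.3, 24.6, 31.9]
    snapped = [round(x) for x in positions]   # one possible rounding rule
    print(snapped)   # [10, 17, 25, 32] -> gaps of 7, 8, 7 px instead of 7.3

The uneven gaps (7, 8, 7) are the grid's best approximation of the ideal 7.3 px spacing.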

When a glyph dot must be displayed, a decision has to be made about which device pixel "bin" to put it in: the nearest one, the one that will contain most of the dot, or some other bin determined by a more sophisticated algorithm.

You mention that the same word is not painted the same in different positions. This is because the fractional part of the word's origin (in screen coordinates) is not the same at those locations, so the algorithm chooses a different "rounding".
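
A hypothetical example of this, reusing the invented 7.3 px advance: the same four-glyph "word" snapped from two origins whose fractional parts differ produces two different pixel patterns:

    advance = 7.3
    for origin in (100.0, 100.4):
        cols = [round(origin + i * advance) for i in range(4)]
        print(origin, cols)
    # 100.0 [100, 107, 115, 122]  -> gaps 7, 8, 7
    # 100.4 [100, 108, 115, 122]  -> gaps 8, 7, 7

Same word, same font, same size, but the inter-glyph gaps land differently, which is exactly the inconsistency you observe.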

What you see is a consequence of this undersampling: the smaller the character size, the fewer pixels each glyph gets, and the worse it looks.

I do not think that LO is to blame. Font rendering is usually an engine located in the OS, so all applications are affected. It simply shows more in word processors, because documents contain many different fonts at different sizes, whereas "technical" (or rather non-document-oriented) applications use a single character size in a well-chosen font and are therefore less sensitive to this phenomenon.

If this answer helps, please check the tick mark and, optionally, upvote it.

EDIT

Some font engines use a trick to provide "sub-pixel" positioning.

Since color screens are built from vertical color stripes (usually in R-G-B order), the engine can send colored dots that address individual stripes, giving horizontal positions in one-third-pixel steps. If you look at the screen from far enough away that the stripe colors blend, you get the desired effect.
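
A minimal sketch of the addressing idea only; real engines (e.g. ClearType) filter the result carefully to limit color fringes:

    # Each pixel is three stripes (R, G, B), so a row of N pixels offers
    # 3*N addressable columns.
    def stripe_index(x):
        """Map a horizontal position in pixels to the nearest color stripe."""
        return round(x * 3)

    for x in (10.0, 10.33, 10.67):
        pixel, channel = divmod(stripe_index(x), 3)
        print(f"x={x:5.2f} -> pixel {pixel}, stripe {'RGB'[channel]}")
    # x=10.00 -> pixel 10, stripe R
    # x=10.33 -> pixel 10, stripe G
    # x=10.67 -> pixel 10, stripe B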

There is also "anti-aliasing", where character edges are softened with intermediate-intensity pixels to deceive the eye and brain into imagining smaller pixels. Human image recognition does the rest: we expect characters, so we match the "inexact" shapes to glyphs we know.
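
A toy version of coverage-based anti-aliasing, with an invented vertical stem edge at x = 10.4 px:

    # Shade each pixel by how much of it the glyph covers, instead of
    # painting it all-or-nothing black.
    def coverage(edge_x, pixel_x):
        """Fraction of pixel [pixel_x, pixel_x + 1) lying left of edge_x."""
        return min(max(edge_x - pixel_x, 0.0), 1.0)

    for px in (9, 10, 11):
        c = coverage(10.4, px)
        print(f"pixel {px}: coverage {c:.1f} -> ink level {round(255 * c)}")
    # pixel 9:  coverage 1.0 -> ink level 255 (fully inside the stem)
    # pixel 10: coverage 0.4 -> ink level 102 (the "blurred" edge pixel)
    # pixel 11: coverage 0.0 -> ink level 0   (outside)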