[MPEG-OTSPEC] [EXTERNAL] Shared GSUB/GPOS notes, was Re: dmap proposal

Peter Constable pconstable at microsoft.com
Wed Jan 3 22:06:04 CET 2024


Hi, Skef. Appreciate the dialog.

> Constructing a GSUB that will work correctly downstream from different cmaps is a strange thing to even attempt.

Why so? In a TTC, one table that will almost certainly be common is the glyf/CFF/etc. table with outline data. In that case, the glyph IDs are the same across all of the font resources. So, there can be a single lookup list; if any glyph IDs appear in a lookup coverage table but aren't mapped from cmap, then that data is ignored. So it seems like the only reason for needing separate GSUB or GPOS tables would be if the scripts, language systems or features needed to be different. Is that ever necessary?
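To make the point concrete, here's a toy sketch (made-up GIDs, nothing from any real font) of one shared single-substitution lookup sitting downstream of two different cmaps; coverage entries for GIDs that a given font resource never maps to are simply inert for that resource:

    # Toy illustration, hypothetical GIDs only.
    shared_lookup = {10: 110, 11: 111}   # GID -> GID, e.g. a 'locl' single substitution

    cmap_jp = {0x4E00: 10}               # JP resource maps U+4E00 to GID 10
    cmap_cn = {0x4E00: 11}               # CN resource maps U+4E00 to GID 11

    def map_and_substitute(cmap, lookup, cp):
        gid = cmap[cp]                   # initial character-to-glyph mapping
        return lookup.get(gid, gid)      # lookup data for unreachable GIDs is ignored

    assert map_and_substitute(cmap_jp, shared_lookup, 0x4E00) == 110
    assert map_and_substitute(cmap_cn, shared_lookup, 0x4E00) == 111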


> When there are further questions of style -- and the decision by Unicode to use a common subset of codepoints for different CJK languages and regions effectively classifies those distinctions as matters of style, broadly speaking ...

Reasonable from a Unicode perspective.

> -- one needs to treat the GID initially cmapped-to as "symbolic", in a sense, in order to support multiple such languages/regions in the same (sfnt) font.

So, you're saying the initial GID should be considered an exact abstract equivalent to the Unicode character - akin to Unicode's abstract notion of glyph<https://unicode.org/glossary/#glyph>? If so, I'm not sure why. I think of GIDs as font-internal implementation details, and nothing more.


> I think there are virtues to doing either at the OTL level because it will further minimize file size, which is the only plausible goal of dmap.

Indeed, that is the only goal of dmap.

So, if I now understand your line of reasoning, the main goal is file size reduction, hence de-duplication of cmap _and also_ of GSUB/GPOS; and it's your contention that sharing GSUB/GPOS is only feasible if all font resources share a cmap (with no dmap).


> So it seems like the argument for dmap would therefore be that it fits better into existing toolchains

Or existing text display implementations.

And, I think, another factor to consider is the challenge of creating shared GSUB or GPOS tables if fonts have different cmaps (or shared cmap + different dmaps).



Peter

From: Skef Iterum <skef at skef.org>
Sent: Tuesday, January 2, 2024 6:58 PM
To: Peter Constable <pconstable at microsoft.com>; Ken Lunde <lunde at unicode.org>
Cc: mpeg-otspec at lists.aau.at
Subject: Re: [EXTERNAL] [MPEG-OTSPEC] Shared GSUB/GPOS notes, was Re: dmap proposal



On 1/2/24 13:12, Peter Constable wrote:
Happy 2024, all!

> My worry about using dmap for multi-language/region support...

First, let me repeat: multi-language/region support is not the only reason for creating TTCs / not the only motivation for a dmap table.

As for handling multi-language/region support...

If I understand correctly your line of reasoning, you're OK with the idea that distinct font resources (bundled in a TTC for de-duplication of data) can be used as the means given to the user for selecting language-/region-specific glyphs. But instead of cmap/dmap as the implementation mechanism to get different glyphs according to the font resource that is selected, you want a mechanism that integrates with GSUB/GPOS.

Yes
Please note that dmap is really orthogonal to your line of reasoning. Ignore dmap for a moment: today, a TTC can bundle language-/region-specific font resources with distinct cmaps as the mechanism for selecting distinct glyphs.

So, your suggestion is that, instead of distinct cmaps, we could provide a way for the fonts in a TTC to share a common cmap and instead have some distinct data that triggers different GSUB/GPOS actions in each font, resulting in selection of language-/region-specific glyphs. Like the dmap proposal, this would avoid a lot of duplication in cmap data, and the size savings in each case would likely be comparable. The key difference between these is


  1.  Integration into initial character-to-glyph mapping

versus


  2.  Integration into OT Layout glyph actions that occur after initial character-to-glyph mapping (which is typically done in a shaping engine).

Now, I said earlier that TTCs can be created for reasons other than multi-language/region support. We could generalize the latter to cover any situation provided the mechanism doesn't require use of registered script/langsys tags.


I think that it's an overstatement to say that they're "orthogonal". The only reason I can see for adding dmap is to reduce file size -- otherwise you could just have separate maps. It's true the reasoning I've offered for a GSUB/GPOS mechanism is based on multi-language/region support, but as (I think) you imply here, it could be extended to other cases via the use of reserved or otherwise non-conforming langsys tags.

Generally speaking, distinct cmaps (whether achieved via separate tables or dmap) will either be upstream of distinct GSUB tables or (rarely) upstream of a font without a GSUB. (Constructing a GSUB that will work correctly downstream from different cmaps is a strange thing to even attempt.) So in practice, by enshrining dmap one more or less gives up on saving the duplicated GSUB space. A typical such GSUB may be smaller, in that more of the mapping will be sorted out in the dmap, but the work of any optional features will be duplicated. So how orthogonal the topics are depends in part on what degree of file size savings is desired or expected.
I'll digress for a moment to point out that this situation is not entirely unlike the need to handle Unicode variation sequences: given a triggering condition (presence of a VS character / user selection of a particular font resource) some characters need to be mapped to different glyphs. It was about 15 years ago that Adobe approached Microsoft to work on a solution. In that case, those of us at MS thought this could simply be handled in GSUB without needing to design any new table formats. But Adobe pushed strongly, and convinced us, to create a new table format, cmap subtable format 14. While I forget all of the details, part of their argument was that this should really be handled in the initial character-to-glyph mapping, not in OT Layout glyph processing that comes later.

I can't speak to past Adobe positions beyond observing that companies change their positions all the time. As far as this specific issue is concerned, the point that Adobe may have been making then and the point I'm making now aren't necessarily in conflict. cmap, and thus dmap, are creatures of Unicode mappings, and are accordingly a context for working out various complex issues of identity. In some cases whether a question relates to identity or to, say, "style" is a matter of opinion, and people may have strongly felt that for the format 14 stuff it was the former.

In any case, when there are questions of identity but no subsequent questions of style, one can just map in cmap and be done with it. When there are further questions of style -- and the decision by Unicode to use a common subset of codepoints for different CJK languages and regions effectively classifies those distinctions as matters of style, broadly speaking -- one needs to treat the GID initially cmapped-to as "symbolic", in a sense, in order to support multiple such languages/regions in the same (sfnt) font. Doing more of that work up front with dmap is more of a "you won't have to bother with all that" sort of thing; there's no big conceptual leap.
Returning to the main topic...

> Ideally we want fonts that support different scripts and languages via [OT Layout]...

Is your preference for OTL integration because you focused on TTCs for multi-language/region support and OTL already has script/langsys mechanisms? Or would you prefer OTL integration even for TTCs not providing multi-language/region support?

If the former, then (i) I'd counter that (again) multi-language/region support isn't the only reason for creating TTCs.

In either case, I'll ask: Why is OTL integration the ideal approach?



I think there are virtues to doing either at the OTL level because it will further minimize file size, which is the only plausible goal of dmap. So it seems like the argument for dmap would therefore be that it fits better into existing toolchains, which is probably true. If that's right then the question boils down to the balance between how much reward we want from tool development vs. how much we think we can ask of it.
For VSes, Adobe argued the other way around: relying on OTL was less preferable than initial character-to-glyph mapping. One argument against OTL integration involves character-palette UI: today that can be handled using cmap data alone. With OTL integration, there are all of the GSUB formats that need to be processed, effectively invoking shaping logic. And this applies not just to character-palette UIs: platform APIs that return the initial character-to-glyph mapping (which are independently needed) also need to route through code that, to now, has been designed for use only after initial character-to-glyph mapping.

It's mixing layers that, for some implementations, could be kept cleanly separated. For example, in the GDI text stack, the GetGlyphIndices()<https://learn.microsoft.com/en-us/windows/win32/api/wingdi/nf-wingdi-getglyphindicesw> implementation would read cmap data directly without calling into Uniscribe for OTL processing. Using OTL integration to de-dup cmap data in TTCs would require the GetGlyphIndices implementation to call into Uniscribe.

So, I remain unconvinced this is ideal and would like to understand more why you think it is.


> 4. However, after thinking about how GSUB tables are structured one realizes one can probably accomplish the same thing without any spec changes...

Except you haven't proposed any way to trigger a font-specific required glyph substitution as part of the initial character-to-glyph mapping, which it seems to me is _the essence of what is required_.

The mechanism I discussed is allowing the lookup table associated with DFLT/dflt to be distinct for each TTC entry. After thinking about it a bit more, one might also want to influence the dflt entries for non-DFLT scripts to "match", but this is just a matter of a bit more "duplicated" material. This GSUB mechanism does raise some further questions about switching between language systems in TTCs built this way, but distinct cmaps raise similar questions. If a font can't support it, it can just leave every TTC slot as, in effect, DFLT-dflt-only.

Skef

A simple way to do that could be a small, font-specific table that provides an index into the GSUB lookup list, with the constraint that the lookup must be a type 1 (single substitution) lookup (else it will be ignored). But I'm still not sure this is ideal and preferable to a dmap table.
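For illustration, a minimal Python sketch of how such a (purely hypothetical) trigger table might be consumed during initial character-to-glyph mapping; the two-field layout and all names here are invented, not from any spec:

    import struct

    # Hypothetical trigger table (invented for illustration):
    #   uint16 version
    #   uint16 lookupIndex  -- index into the GSUB lookup list
    def apply_trigger(trigger: bytes, gsub_lookups, gid: int) -> int:
        _version, lookup_index = struct.unpack(">HH", trigger[:4])
        lookup_type, mapping = gsub_lookups[lookup_index]
        if lookup_type != 1:            # must be a single substitution, else ignored
            return gid
        return mapping.get(gid, gid)    # applied as part of the initial mapping

    # e.g. GSUB lookup 0 is a type 1 lookup mapping GID 10 -> GID 110
    lookups = [(1, {10: 110})]
    assert apply_trigger(struct.pack(">HH", 0, 0), lookups, 10) == 110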


Peter Constable


From: mpeg-otspec <mpeg-otspec-bounces at lists.aau.at> On Behalf Of Skef Iterum
Sent: Wednesday, December 27, 2023 4:19 AM
To: Ken Lunde <lunde at unicode.org>
Cc: mpeg-otspec at lists.aau.at
Subject: [EXTERNAL] [MPEG-OTSPEC] Shared GSUB/GPOS notes, was Re: dmap proposal


Some preliminary notes on an idea I'm looking into, starting from this line of reasoning:

  1.  My worry about using dmap for multi-language/region support is that the solution is separate from the script/language GSUB/GPOS mechanism. Ideally we want fonts that support different scripts and languages via the latter, and doing so while starting with different initial cmaps is a lot of work and QA.
  2.  This leads to the idea of allowing a TTC slot to pick a script and language to serve as the default that I brought up last week, but that will face the objection that it violates the current semantic properties of the TTC font collection spec: TTC currently works only at the SFNT level, not below.
  3.  That leads naturally to the prospect of "DSUB": the metaphorical equivalent of dmap but at the GSUB level. All this would need to do is specify a "pseudo-default" for GSUB: act like this script and this language were the defaults so they're used unless one is specified.
  4.  However, after thinking about how GSUB tables are structured one realizes one can probably accomplish the same thing without any spec changes.

Why? In GSUB and GPOS all offset fields below the header are "hierarchical" -- each is relative to the start of the subtable it appears in. And the header is basically a short list of offsets (relative to the start of the table). Together this means that one should be able to do the following with (e.g.) GSUB:

  1.  Move the table, and those below it, a bit further down in the font file
  2.  Add a new GSUB header above the existing one pointing to (literally) the same featureList, lookupList and, if relevant, featureVariations tables. (Each offset is increased by the difference between the starts of the two headers.)
  3.  Add a new top-level ScriptList table below the new header, with all offsets adjusted similarly except those for DFLT, which points to a new Script table below it.
  4.  Add the new DFLT Script table, with all offsets adjusted similarly except defaultLangSys, which is adjusted to point to the LangSys table of a different language (within the original GSUB table).

Now, with a very modest amount of added memory, you have two GSUB tables -- one with the original mapping for DFLT dflt and one with a new mapping for those. The latter will include some "junk" bytes (the former's header, ScriptList and DFLT Script tables) but nothing in it will make any use of those areas. (I haven't yet tested whether ots and such will complain about that.) And you can do this for more languages by adding more such table combinations, limited only by the Offset16 fields in the header. (One could, of course, repeat the whole GSUB table to buy more overlapping table sets if needed.)
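As a rough byte-level sketch of steps 1-4 (assuming, for simplicity, a version 1.0 GSUB header and that all shifted offsets still fit in Offset16; the new ScriptList bytes, with internal offsets already adjusted per steps 3-4, are taken as given):

    import struct

    def prepend_gsub_view(gsub: bytes, new_script_list: bytes) -> bytes:
        # Build a second GSUB "view" sharing the original FeatureList and
        # LookupList while supplying its own ScriptList.
        major, minor, _old_script_off, feat_off, lookup_off = \
            struct.unpack(">HHHHH", gsub[:10])
        assert (major, minor) == (1, 0), "sketch handles GSUB version 1.0 only"
        shift = 10 + len(new_script_list)       # new header + new ScriptList
        assert feat_off + shift <= 0xFFFF and lookup_off + shift <= 0xFFFF
        new_header = struct.pack(
            ">HHHHH", 1, 0,
            10,                  # new ScriptList immediately after the new header
            feat_off + shift,    # shared FeatureList, shifted by the prefix length
            lookup_off + shift,  # shared LookupList, shifted by the prefix length
        )
        # The original table follows untouched; its old header and ScriptList
        # become the "junk" bytes described above.
        return new_header + new_script_list + gsub

One TTC slot's table directory would then point at offset 0 of the result (the new view) and another at offset 10 + len(new_script_list) (the original table), so the two overlap in the file.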

With similar modifications to GPOS (when needed), I think all that one needs to build out the language-specific TTC slots is:

  1.  Per-slot head tables (to get the checksums right -- would be nice if this wasn't required)
  2.  Per-slot name tables (although for this one could add the font-specific name strings to the end of the string data and do something similar to GSUB with the NameRecord array, sharing the string storage)

All other tables, including cmap, would be shared in the normal way.

To be clear, unless a picky client-side validator barfs on these conventions I suspect one could build a cross-language TTC font collection in this way today, minimizing the memory cost of the additional slots. I'm currently poking at constructing an example to make this more concrete, but, of course, existing tools aren't designed with this sort of thing in mind. I'll send another note if and when I make progress.

Skef
On 12/21/23 23:58, Skef Iterum wrote:

I stand dystopianed.

However, to not yet give up entirely on this line of thinking ...

What is on the table in these messages is a further extension of an existing table, in this case cmap. Which at least suggests that the problem here isn't "system-level" support -- we think we can get those changes. What you describe is, loosely speaking, "application level" support -- allowing the context that the user interacts with to specify the needed parameters, and then educating the user to do so.

I agree that's hopeless for the foreseeable future.

These dmap ideas do have the benefit of being somewhat general (although one might worry about unusual cases). Maybe other compelling use cases, or just the value of generality itself, justify such an extension. Still, if the fundamental problems are what you describe, we might also consider addressing them directly and specifically. Instead of extending cmap, and building region- or language-specific fonts via a separate mechanism, we should at least consider extending TTC to associate a named subfont with the missing parameters. Basically: "render this set of tables using this script and this language by default". Done a bit subtly, one could just ship every cross-language font file with a "base" font with just the name, and some entries for other scripts and languages, suitably named, and otherwise sharing TTC data-structures.

From the perspective of the font engineer, that seems more productive than building a cross-language font with one set of mechanisms and then building multiple data-sharing individual language fonts using a different mechanism (assuming we still want engineers to do the former).

Skef
On 12/21/23 18:15, Ken Lunde wrote:

Skef,



I might be the only one in this discussion who clearly remembers that Version 1.000 of Source Han Sans and Noto Sans CJK, which were released on 2014-07-15, *was* utopian in that the fonts with the full set of 64K glyphs, meaning genuine Pan-CJK, expected that language tagging would be used to access the desired non-default region-specific glyphs, with the default glyphs being for Japan. Reality quickly taught us that expecting language tagging alone to solve this was completely unrealistic for the following three reasons:



1) The app must support language tagging

2) The app must support language tagging for the appropriate East Asian languages, which is now up to five for these Pan-CJK fonts

3) Assuming #1 and #2 work, the user must then language-tag the text



Going on 10 years later, not much has changed for #1 and #2.



Modern browsers supported the 'locl' GSUB feature way back in 2014, but support in authoring apps is still severely lacking today.



I use Adobe InDesign to get full language-tagging support for these fonts, which is still about the only game in town. Adobe Illustrator silently added East Asian language-tagging in the 2018 release (in 2017), but it was a "close but no cigar" outcome in that they added only "Chinese" (that turned out to be Traditional Chinese for Taiwan) and Japanese, and despite filing bugs over five years ago, Adobe Illustrator 2024 (in 2023) is still unchanged in this regard. What makes the current support even less useful for mainstream users, ignoring that three of the five East Asian regions are not supported at all, is that the two supported East Asian regions are visible only when creating Character or Paragraph styles. They are not shown in the list of languages in the Character or Properties panels. Adobe Photoshop 2024 (in 2023) still does not support language tagging for East Asian languages.



Getting back to Source Han Sans and Noto Sans CJK, Version 1.001 was released on 2014-09-12, which added separate 64K-glyph fonts for each of the four (at the time) supported East Asian regions. The 'locl' feature is still included for the benefit of those environments that support language tagging. All five regions were not supported until Version 2.000, which was released on 2018-11-19, which meant five separate sets of 64K-glyph fonts. The fifth region, of course, was Hong Kong SAR.



In other words, we are quite far from Utopia, and we are unlikely to arrive there anytime soon.



Regards...



-- Ken



On Dec 21, 2023, at 17:04, Skef Iterum <skef at skef.org> wrote:



More stuff after hitting send too fast:

I can see a set of arguments against trying to deal with these regional problems within a single mega-font grounded one way or another in GIDs being a limited resource. But we've already decided to overcome that problem. So, for example, if we need to spend a GID to, in effect, abstractly represent a given codepoint to bridge from cmap into the shaping tables, we have GIDs to spend now. (And, as implied in my other messages today, wouldn't necessarily have to pay the typical file overhead for them.)

As I understand it that's how regional variations in, e.g., Cyrillic are handled now. So I guess, other than the large number of glyphs in CJK fonts I'm not understanding what requirements are pushing the solution in such a different (and seemingly ad hoc) direction.

Skef

On 12/21/23 16:49, Skef Iterum wrote:

Maybe I'm being utopian but I can't help thinking that either there's some token ("dialect"?) that Unicode should be tracking and formalizing but isn't, or Unicode is doing that and we haven't tilted the font specifications enough in its direction to use it. There's already all of that script and language infrastructure there that is meant for this flavor of need, and it seems like a much better place to be solving these problems than wrapping stuff up in a TTC and having the client side pick out the sub-font by name or whatever.

Skef

On 12/21/23 15:00, Peter Constable wrote:

During the recent AHG meeting, I mentioned that Apple, Adobe and Microsoft, some years ago, had started discussing a 'dmap' (delta character map) table proposal. This was in late fall of 2016; the focus was on pan-CJK fonts, and in that timeframe Ken Lunde had submitted a proposal to UTC (L2/16-063, Proposal to accept the submission to register the "PanCJKV" IVD collection) to define variation sequences for ideographs that designated a range of variation selector characters to correspond to several regions for which regional glyph variants of CJK ideographs might need to be supported. I managed to find an archive of some emails from discussions at the time, so can summarize:

The aim was to be able to support distinct fonts for regional CJK variants without duplication of data. A TTC could allow de-duplication of glyph data, but there would be other duplication. We agreed the biggest concern was with 'cmap' data: if any one of the regional variant fonts in the collection were taken as a point of reference, then any of the other regional variants would have many of the same mappings (perhaps most), though not all the same mappings. But there wasn't any existing means to share common mappings across fonts while there were also some different mappings.

Dwane Robinson suggested that we define a new 'dmap' table that uses 'cmap' formats but is just used to describe the differences in mappings from a common 'cmap'. We agreed that a 'dmap' table doesn't need the duplication of different platforms/encodings, and that we can converge on only one platform/encoding (hence, no encoding records are necessary). We discussed format 4 versus 12, and agreed to allow either, but never both. Now, we had teleconfs between Apple and MS, but the emails I found indicate that Behdad was also kept informed: one of the emails records that Behdad requested that format 13 also be allowed.

We hadn't settled, however, on what to do about format 14 subtables. It wasn't a priority for Apple at the time, but it seemed like the design would be incomplete if we ignored it. Knowing that Ken Lunde was dealing a lot with VSes and also working on the pan-CJK Source Han Sans, we brought Adobe into our discussion at that point.

The issue with format 14 is that it divides variation sequences into two groups: (i) VSes that map to the same glyph already mapped in a format 4 or 12 subtable (DefaultUVS), and (ii) VSes that map to a different glyph. Certainly the default mappings would be different in the various regional variant fonts, and some of the non-default mappings could also be different. (Even if a given VS never mapped to different glyphs in the different fonts, the fonts could still differ in what VSes they need to support.) So it's necessary to resolve how a dmap/14 subtable should interact with a cmap/4 (or cmap/12) subtable, with a cmap/14 subtable, with a dmap/4 (or dmap/12) subtable, and with a dmap/14 subtable. One possible approach would be that the dmap/14 subtable completely supersedes the cmap/14 subtable (i.e., the latter is not used at all, and there is no de-duplication of that data). Another approach could be that a dmap/14 subtable complements the cmap/14 subtable by providing select replacement mappings (a delta, though there are still further details about how that would work exactly).

 There were some useful points brought up along the way:

    * Ned Holbrook pointed out that the format 14 DefaultUVS subtable is just a space-saving variant of the NonDefaultUVS subtable. A font doesn't need to have any DefaultUVS table: the same sequences could be handled in NonDefaultUVS subtables - less efficiently... _in a single font_.

    * For CJK, Ken Lunde pointed out that there are two kinds of UVSes to consider:

        * "Standardized" VSs: these are defined in the Unicode Standard (see unicode.org/Public/UCD/latest/ucd/StandardizedVariants.txt) for CJK Compatibility Ideographs. They are defined in Unicode in a region-independent manner, but most represent region-specific glyphs.

        * "Ideographic" VSes: these are VSes registered in the Ideographic Variation Database (Ideographic Variation Database (unicode.org)) in region-specific collections.

Because of the nature of each type, Ken thought there might be limited sharing across fonts. (E.g., at least some font developers would want to support a given IVS collection only in the one regional font for the corresponding region.) He did identify cases, however, in which the same SVS would need to map to different glyphs in different fonts.

    * Again, for CJK, there would be cases in which different fonts would need to support the same VSes, but they would differ wrt DefaultUVS vs. NonDefaultUVS mappings.

Ken also called out some other uses in email exchanges. It all suggested that an ideal solution would make it possible to construct a collection file in which:

- two or more fonts can share some UVS mapping data while also having some font-specific mapping data; and

- it's also possible to have other fonts that do not share any UVS mapping data with other fonts.

 That would allow the fonts to support only UVSs that are relevant for their respective markets, while also having an efficiency benefit from data-sharing between certain of the fonts.

That was in December 2016. We ran into end-of-year holidays and never resumed to close on an approach that optimizes the size of VS mapping data. The following is the last draft proposal that we exchanged. --

dmap - Character to Glyph Index Differences Table

 This table is an optional adjunct to the 'cmap' table defining differences from the nominal mappings in order to increase sharing of the 'cmap' itself across fonts in a TTC.

 If a font production tool determines that the 'cmap' tables across the fonts in a TTC are largely but not entirely identical, it can choose one font to be used as the basis for the others in terms of character to glyph index mapping, expressing the mappings of the other fonts using only the mappings that are different from those of the former font. An example would be a CJK font family with region-specific fonts, where most characters would map to the same glyph index.
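As a sketch of the production-tool side (assuming cmaps already reduced to plain codepoint-to-GID dicts; not tied to any particular subtable format):

    def dmap_delta(base_cmap: dict, variant_cmap: dict) -> dict:
        # Mappings of the variant font that differ from the reference font;
        # only these would need to be carried in the variant's 'dmap'.
        return {cp: gid for cp, gid in variant_cmap.items()
                if base_cmap.get(cp) != gid}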

 The 'dmap' table

Type    Name               Description
------  -----------------  -----------
UInt16  version            Set to 0.
UInt16  numTables          Number of offset fields to follow.
UInt32  offset[numTables]  Array of byte offsets from beginning of table to cmap
                           subtables. All subtables are assumed to use Unicode.
                           There can be at most one subtable of either format 4,
                           12, or 13.

 As in the 'cmap' table, each 'dmap' subtable shall have the same structure as in 'cmap', starting with a format field that determines the remainder. The language field for a format 4, 12, or 13 subtable must be set to zero.

The steps for determining the glyph index for a given character sequence consisting of a base character and an optional variation selector are as follows:



1. Apply the Unicode 'cmap' subtable to the base character to get the nominal glyph index.

2. If the font has a 'dmap' format 4 or 12 subtable that maps the base character to a non-zero glyph index, it will replace the nominal glyph index.

3. If the 'cmap' has a format 14 subtable, apply it in this way:

   3.1. If the Default UVS Table contains the base character, the final glyph index will be the one determined by the 'cmap'.

   3.2. Else, if the Non-Default UVS Table contains the base character, it will determine the final glyph index.

   3.3. Else, the final glyph index will remain as it was after step 2.

 Note: An earlier draft of this document allowed for a second subtable of format 14, which would allow redefinition of variation sequences. Owing to uncertainty about usefulness and the exact behavior of the Default UVS Table, however, it has been removed pending further discussion.
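For concreteness, a direct transcription of steps 1-3 above into Python, with the subtables reduced to plain dicts and sets (names invented for illustration; not an implementation of any shipping format):

    def resolve_gid(cp, vs, cmap_uni, cmap_default_uvs, cmap_nondefault_uvs,
                    dmap_delta):
        gid = cmap_uni.get(cp, 0)                 # step 1: nominal glyph index
        if dmap_delta.get(cp, 0) != 0:            # step 2: non-zero 'dmap' override
            gid = dmap_delta[cp]
        if vs is not None:                        # step 3: 'cmap' format 14 subtable
            if (cp, vs) in cmap_default_uvs:      # 3.1: Default UVS -> the 'cmap' glyph
                return cmap_uni.get(cp, 0)
            if (cp, vs) in cmap_nondefault_uvs:   # 3.2: Non-Default UVS mapping wins
                return cmap_nondefault_uvs[(cp, vs)]
        return gid                                # 3.3: unchanged after step 2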

 -

In the previous draft, a different set of steps for handling UVSes was considered:

 -

The steps for determining the glyph index for a given character sequence consisting of a base character and an optional variation selector are as follows:

1. Apply the 'cmap' to the base character to get the nominal glyph index.

2. If the font has a 'dmap' format 4 or 12 subtable that maps the base character to a non-zero glyph index, it will replace the nominal glyph index.

3. If the 'dmap' has a format 14 subtable, it will be used in place of the one in the 'cmap'.

4. If there is a format 14 subtable, apply it in this way:

   4.1. If the Default UVS Table contains the base character, the final glyph index will be the one determined by the 'cmap'.

   4.2. Else, if the Non-Default UVS Table contains the base character, it will determine the final glyph index.

   4.3. Else, the final glyph index will remain as it was after step 2.

 -

  Peter


