[MPEG-OTSPEC] Acknowledging sources

Peter Constable pgcon6 at msn.com
Sat Aug 15 20:49:03 CEST 2020


On Sat, Aug 15, 2020 at 1:05 AM Simon Cozens <simon at simon-cozens.org> wrote:
One thing which would at least mitigate some of these problems would be
for the spec to be honest about how ideas originated and were included.
The features registry at least does this, so there's precedent.

In a truly community-owned spec this would be unnecessary, as there
would be an expectation that it was the consensus of multiple
contributors. But for a spec which is maintained by one corporate
entity, which claims ownership over the spec, to include contributions
from others without acknowledging the source of those contributions is
dishonest and disingenuous.

Those of us with long memories of "embrace and extend" have some
sensitivity for how MS has taken the ideas of others and berated them
publicly before implementing them and claiming them as its own - this
destroys trust and dissuades future contributions.

I’ve seen critiques of various companies recently, including my former employer, that I don’t think are really fair. In the OpenType spec, there are many things that originated as requirements from different companies.

If there is a perception that, e.g., the HVAR table was forced unilaterally by MS because of legacy GDI limitations, then that is simply not the case. Particularly in relation to the extensions for variable fonts in OT 1.8, all of the extensions were formed by consensus between several companies including Apple, Adobe, Google, Microsoft, Monotype and others. Not everyone thought every part of the design was ideal, but all agreed that the design was workable and good enough to drive progress across the industry. And I think we can say it has succeeded.

The fact that OT includes the CBDT, SVG and sbix tables as well as COLR clearly shows that MS did not incorporate into OT only things that it claimed as its own. There are certainly some parts of the OT spec that MS developed unilaterally: in particular, the OTL tables were developed solely by MS, at a time when other companies weren’t particularly working on the same issues. (Apple was the only other company working on similar problems at the time, but various factors kept the two companies from converging.)

Is MS to be faulted for designing the OTL tables on its own? That seems completely unreasonable to me. Are there parts that MS developed on its own that could have been better if others had been collaborating? Certainly. And especially in light of 20+ years of experience using some older parts of the format, we collectively have a better understanding of requirements that could inform better designs. Does that mean that the designs implemented in the past were bad or were a mistake? Hardly: we wouldn’t have learned what we have without them, and a lot of mileage has been gotten out of them in the meantime.

Is there legacy cruft that could be dispensed with? Absolutely. But IMO it’s not at all fair to blame MS or any other company for that cruft persisting. There are _a lot_ of moving parts involved, and a sufficient number of them need to move together to deprecate cruft or replace it with something new. MS isn’t to blame for ‘post’ still being a “required” table, for name IDs 1 and 2 still being used, for legacy platform-specific language IDs in the name table, or any number of other things that are cruft. Rather, it’s broad industry inertia. A new name table (or glyf, GSUB... whatever) could be designed tomorrow, but font developers couldn’t make much use of it until it was supported on a sufficient number of endpoints.

And industry inertia is a very real hurdle to overcome. It requires resources to make major changes, and allocation of resources requires business justification. A new glyf table supporting cubic Béziers (for example) would be nice, but across the industry in general I don’t think there’s currently a perception that there is a business problem it would solve or a business opportunity it would create.

The development of variable fonts was somewhat exceptional: enough of the major companies saw an opportunity to bring new value, a technical roadmap indicating feasibility at reasonable cost was laid out, and they quickly agreed to work together to converge on designs. That kind of thing doesn’t happen often or easily. Look at the four colour font formats: from a long-term perspective, converging on one format would have been much better. Or look at other sectors of industry, such as imaging or audio formats: I’m sure at least an order of magnitude (and probably several) more investment has been made in those designs, and yet content creators and users still have to navigate a confusing array of different formats. (If anybody knows of a lossless audio format that works for both Sonos and iTunes, please little-r.)

There are plenty of innovations and improvements I’d earnestly like to see in font formats and text layout. But IMO the biggest problem impeding progress is collective industry inertia, not any one company. In the past two years, MS has particularly disappointed many of us (and certainly me, far more and for much longer than most people here probably know); but I think that’s only one symptom of a wider issue. Take glyf v2 with cubic Béziers or 32-bit glyph IDs as examples: these ideas have been around for several years, but I suspect there are not many companies that perceive them as things they should prioritize in resource allocation.
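[Editor's note: as context for the 32-bit glyph ID point above, OpenType glyph indices are stored as 16-bit unsigned integers, which caps a single font at 65,536 glyphs. This small Python sketch (an illustration added here, not from the original post; `pack_glyph_id` is a hypothetical helper) shows that ceiling:]

```python
# Why 32-bit glyph IDs keep coming up: current OT tables (cmap, loca,
# GSUB/GPOS, etc.) store glyph indices as big-endian uint16 values.
import struct

MAX_GLYPHS = 2 ** 16  # 65,536: the ceiling imposed by uint16 glyph IDs

def pack_glyph_id(gid: int) -> bytes:
    """Pack a glyph index as current OT tables store it (big-endian uint16)."""
    if not 0 <= gid < MAX_GLYPHS:
        raise ValueError(f"glyph ID {gid} does not fit in 16 bits")
    return struct.pack(">H", gid)

print(pack_glyph_id(65535).hex())  # ffff -- the last representable glyph
# pack_glyph_id(65536) raises ValueError: a large CJK + emoji + variable
# "superfont" can plausibly run into this limit, hence the 32-bit proposals.
```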



Peter

