[MPEG-OTSPEC] Could software used for GSUB decoding be adapted to decode localizable sentence codes please?
wjgo_10009 at btinternet.com
Fri Apr 3 19:58:54 CEST 2020
Could you consider the document linked here please?
http://www.users.globalnet.co.uk/~ngo/A_List_of_Code_Numbers_and_English_Localizations_for_use_in_Research_on_Communication_through_the_Language_Barrier_using_encoded_Localizable_Sentences.pdf
That document, a slide presentation and a description of the encoding
space used are available from the following web page.
http://www.users.globalnet.co.uk/~ngo/localizable_sentences_research.htm
The webspace is hosted on a server run by PlusNet Plc, an internet
service provider. The webspace is not hosted on my computer.
If the code numbers were each decoded using a GSUB table in a font so as
to display a glyph, then existing software in any OpenType-aware
application would do the task. That is well known.
Yet these codes would instead be decoded using a Unicode text file named
sentence.dat specific to the chosen target language. That is, there
would be a sentence.dat file for French, a sentence.dat file for German,
a sentence.dat file for Japanese, a sentence.dat file for Welsh, and so
on, with the end user deciding which one to use.
Software would be needed to decode from the codes to localized text
using data from whichever version of the sentence.dat file is in use on
the receiving device, yet the process seems to be very similar in
concept to glyph substitution in the sense of recognising a sequence of
characters in an input text and replacing the sequence with something
else in the on-screen presentation.
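As a minimal sketch of that decoding step, the following assumes a
hypothetical sentence.dat format of one tab-separated "code, localized
sentence" pair per line, and an illustrative code pattern of digits
followed by an exclamation mark; both are assumptions for illustration,
not part of any published format.

```python
import re

def load_sentences(lines):
    """Parse sentence.dat-style lines into a code -> sentence mapping.

    The one-pair-per-line, tab-separated layout is an assumed format.
    """
    table = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        code, _, text = line.partition("\t")
        table[code] = text
    return table

def decode(text, table):
    """Replace each recognised code with its localized sentence.

    Unknown codes are left unchanged, much as an undefined glyph
    substitution leaves the input sequence as it was. The pattern
    "digits then '!'" is purely illustrative.
    """
    return re.sub(r"\d+!", lambda m: table.get(m.group(0), m.group(0)), text)

# Toy example with a two-entry French sentence.dat
french = load_sentences(["1!\tBonjour.", "2!\tMerci beaucoup."])
print(decode("1! 2!", french))  # -> Bonjour. Merci beaucoup.
```

Swapping in a German or Welsh sentence.dat would change only the table
that is loaded, not the decoding logic, which is what makes the process
resemble glyph substitution driven by interchangeable lookup data.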
How difficult would it be to adapt a copy of the original software to
this application please?
William Overington
Friday 3 April 2020