Questions

This page answers some common questions about the Musa Alphabet.

Artificial (or Constructed) languages have never succeeded

The Musa Alphabet is not a language; it is a way to write languages. Many constructed alphabets have succeeded brilliantly, notably Hangul (for writing Korean), the Cyrillic alphabet (used to write Russian and many other languages in north Asia), and Canadian Syllabics (used to write Cree and Inuktitut). To some degree, all writing systems are artificial, and the main distinction between those we consider "constructed" and those we consider "evolved" is the degree of development since the original invention.

But there are also some technological factors that make an endeavor like Musa more likely to succeed than ever before. We no longer need to produce and distribute mechanical keyboards - instead, anybody can download one of several Musa keyboards for free and start using it immediately. We no longer need to produce and distribute metal type for printing - instead, anybody can download one (or more) of several Musa fonts for free and start using it immediately. We no longer have to convert a huge library of material written in the traditional orthography - instead, anybody can run the text in digital form through a transcriber and see it in Musa immediately, or vice versa. We no longer have to print and distribute educational material - instead, anybody can download it or simply use it online. It has become much easier to change alphabets!

Orthography is so closely linked to each language that a universal alphabet is impossible

Few of the world's languages are written using an alphabet that was developed for them - almost all of the world's writing is done with symbols that have been adapted from some other language. Even the exceptions prove this rule: for example, all of the Chinese languages, including zhōngwén, the national language of the People's Republic of China, are written using characters developed for Old Chinese, a very different language.

When people first tried to write English using the Roman alphabet (itself adapted from the Greek alphabet, which was in turn adapted from the Phoenician alphabet, and so on), they lacked letters for many English sounds. For some of these missing letters, they made distinctions between variants, like separating j from i and v and w from u. For others, they adapted letters from Greek (k y z) or the Futhark, the runic predecessor alphabet: these letters include the wynn Ƿ ƿ, the thorn Þ þ, the eth Ð ð, and the yogh Ȝ ȝ, which we English-writers (but not the Icelanders) later lost again. For other missing sounds, especially vowels (of which English has many more than the five in the Roman alphabet), they used digraphs, diacritics and rules. The end result, far from being closely adapted to English, is a crazy patchwork of jury-rigged quick fixes.

If we write in Musa, would everybody spell words according to their own, individual pronunciation?

No, we would all spell to a standard, just as we do now. For example, there is a dialect of English called General American, and most Americans would spell to a standard for that dialect. But if people in Boston want to leave the r's out of their non-rhotic dialect, or if people from Texas want to spell as they speak, they can spell to a regional standard. The British, on the other hand, might spell to their own standard dialect (local, regional, or national), so that Brits and Yanks might spell words differently, as they do now: colour versus color, lorry versus truck, and even bath versus bath, whose vowels differ and so would be spelled differently in Musa.

The establishment of these standards is the job of academies like the Académie française or the Real Academia Española. For languages without academies, like English, we rely on lexicographers to publish reference dictionaries with standard spellings.

Doesn't everybody already use the English alphabet?

Well, no. Only about one quarter of the world writes in a Roman alphabet. Another quarter uses other alphabets, including the widespread Arabic and Cyrillic alphabets and many used for only one or two languages, like Greek, Hebrew, Armenian, Georgian, Coptic and Ge'ez. Another quarter uses Brahmic scripts, and another quarter uses scripts based on Chinese characters. Several others, like Korean, use other scripts, and a few, like Japanese, use hybrid writing systems which mix several scripts.

We could imagine extending the Roman alphabet to provide letters for the same set of sounds that Musa now offers, and in fact the IPA (see below) does that. But one of the problems is that the Roman letters don't even stand for the same sound in the languages that now use them. For example, the letter j represents an affricate dj in English, a sibilant zh in French, a semivowel y in German, and a fricative kh in Spanish. The sound written sh in English is spelled ch in French and Portuguese, x in Catalan and Portuguese, sch in German, sci in Italian, s in Hungarian, ś in Polish, š in Czech, Slovak, Slovene, Croatian, Latvian and Lithuanian, kj in Swedish, and ti in English words like nation! The situation with vowels is even worse, since the Roman alphabet doesn't have very many.

By the way, I'll use the terms alphabet to refer to all writing systems or scripts, Latin alphabet to refer to the 23-letter alphabet used to write the Latin language, English alphabet to refer to the 26-letter alphabet which adds j v w, and Roman alphabet to refer to the 1350-letter extended Latin alphabet included in Unicode. Transcribing a script into the Roman alphabet is called romanization.

Why don't we just write using the International Phonetic Alphabet?

Good question. The IPA, as it's known, is a script for recording phonetic values, itself adapted from the Roman alphabet. Here are the first ten languages from the Home page, written in IPA:

'ɪŋglɪʃ, ʈ͡ʂɤŋ˥wən˧˥, 'ɦɪndiː·'ʊrdu, espa'ɲol, fʁɑ̃sɛ, al ˁara'bijja, 'baŋla, 'ruskij, portu'ges, me'laju

Even though the IPA and Musa are both phonetic, the IPA was not designed to be used as a primary orthography. A universal script could be developed based on a broad IPA transcription, but it would share all the flaws of romanizations.

Writing the world's languages in romanization or IPA is an idea in the same spirit as Musa. Musa is just a better choice: it's featural and iconic. Even though Musa has about twice as many letters as the IPA, the Musa letters are combinations of only 26 shapes. This makes keyboards simpler and gives you a strong hint about the pronunciation of a letter you don't recognize.

Here is a page which discusses this question in more depth.

Is Musa in Unicode?

Musa is compatible with Unicode, using the Private Use Area, which was intended for uses like this. So you can freely mix Musa text with Unicode text, without any problems.
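As an illustrative sketch of what that compatibility means in practice: a program can recognize Private Use Area characters mixed into ordinary Unicode text just by checking codepoint ranges. (Which PUA codepoints Musa actually occupies is an assumption here, not something this page specifies; the Musa characters below are arbitrary placeholders.)

```python
# The three Private Use Area ranges defined by Unicode.
# Which of these Musa actually occupies is an assumption, not documented here.
PUA_RANGES = [
    (0xE000, 0xF8FF),      # Basic Multilingual Plane PUA
    (0xF0000, 0xFFFFD),    # Supplementary PUA-A (plane 15)
    (0x100000, 0x10FFFD),  # Supplementary PUA-B (plane 16)
]

def is_private_use(ch: str) -> bool:
    """True if ch falls in any Unicode Private Use Area."""
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in PUA_RANGES)

# Hypothetical Musa letters (arbitrary PUA codepoints) mix freely
# with ordinary letters in a single, perfectly valid Unicode string:
mixed = "see \uE123\uE456 below"
pua_chars = [c for c in mixed if is_private_use(c)]
print(len(pua_chars))  # 2
```

PUA codepoints all carry the Unicode general category Co, so `unicodedata.category()` gives the same answer without hard-coding the ranges.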

For now, we're not interested in seeing Musa in Unicode. For one thing, Musa is still evolving, and Unicode is intended to prevent change, not accommodate it.

Unicode is a great idea - a single standard encoding for everything. And it accomplishes that goal ... with some flaws. For example, Nigerian Yoruba uses four letters that have both a subdot and an accent: é̩ è̩ ó̩ ò̩. The subdot is usually printed as a small vertical line so that it's not covered by underlining, and Unicode has a combining diacritic for it, but no precomposed letters for e̩ o̩ (or s̩), much less for the four accented combinations. There are precomposed letters for ẹ ọ ṣ with a simple underdot, and they're sometimes used, but they don't have precomposed accented versions, either. No problem: the accented versions can be composed using combining diacritics ... in four different ways! The result is that Yoruba terms or names with these letters cannot be searched reliably: searches may miss results that use a different encoding. Here's a similar example from Vietnamese, showing five different encodings for the same letter:

ặ       ặ            ặ            ặ                 ặ
1EB7    0103 0323    1EA1 0306    0061 0323 0306    0061 0306 0323

Unicode has a solution for this problem: the notion of canonical equivalence and the use of normalized forms for searches. But in practice, few search engines are equipped to implement this solution. In fact, many word processors are not even equipped to handle combining diacritics. This deficiency is hidden from speakers of well-supported languages like French or German by the precomposed letters, but not from speakers of Yoruba, even though it is the most-spoken native language of the largest language family in the world! Meanwhile, the precomposed letter Ù, used rarely in Italian and in one word in French, is on the first page of Unicode.
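Canonical equivalence can be demonstrated in a few lines of Python with the standard-library unicodedata module, using the five canonically equivalent encodings of Vietnamese ặ: naive string comparison sees five different strings, but normalizing them (here to NFC) collapses them to one - which is exactly the step a search engine has to perform to find them all.

```python
import unicodedata

# Five canonically equivalent ways to encode Vietnamese ặ
# (a with breve and dot below):
variants = [
    "\u1EB7",              # precomposed ặ
    "\u0103\u0323",        # ă + combining dot below
    "\u1EA1\u0306",        # ạ + combining breve
    "\u0061\u0323\u0306",  # a + dot below + breve
    "\u0061\u0306\u0323",  # a + breve + dot below
]

# Codepoint-by-codepoint, all five are distinct strings...
print(len(set(variants)))  # 5

# ...but NFC normalization maps them all to the precomposed form.
normalized = {unicodedata.normalize("NFC", v) for v in variants}
print(len(normalized))         # 1
print("\u1EB7" in normalized)  # True
```

Note that normalization even repairs the two orderings of the combining marks: NFC first sorts the diacritics by canonical combining class, then recomposes.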

Unicode defenders can say that this is an implementation problem, not a problem with the encoding per se. But a cleaner encoding wouldn't have that implementation problem. Behind it lies a much deeper question - "what is a letter?" - that was never adequately addressed as Unicode was being designed. Part of the problem was the decision to incorporate legacy encodings that, by and large, were thrown together with no design, by different groups in different eras. Even today, two different legacy encodings for Chinese are more popular than Unicode.

Unicode also suffers from poor design choices, like wasting a sixth of all the BMP codepoints on Hangul syllables when Hangul is an alphabet. Or like maintaining the legacy order of Thai vowels - written order, not spoken order. Defenders will point out that the benefits are so great as to outweigh these small quibbles, and they're right! Citizens of the old DDR used to be happy to have a Trabant when the alternative was no car at all, but they abandoned them as soon as they had an alternative. So let's just think of the current Unicode as the "Trabant of encodings" - better than nothing - but Musa will wait until a better alternative comes along.

Changing alphabets is too much trouble. Why don't we just fix English spelling without changing alphabets?

People have been trying to fix English spelling for centuries, with almost no success (Noah Webster got us to write honor, center and defense). There are several pages on this site which discuss the problems with the English or Latin alphabets in more detail (see below). Here, let me make a different point.

The idea of changing English spelling doesn't appeal to anybody. People who have already learned our crazy spelling don't want to throw away those years of education, nor do they want to lose all their books. Meanwhile, the people who would most benefit from a change - children (born and unborn) and foreigners who have yet to learn to read - would prefer a better alphabet, like Musa, not just a patched-up version of an alphabet whose only justification is historic. And in fact both groups would prefer the solution that Musa offers: that we introduce Musa but continue using the current alphabet in parallel for a transition period. That satisfies everyone! And it's much easier to use Musa in parallel than a reformed version of the current alphabet.

This site has four pages which discuss this question in more depth. This one talks about why we need a new alphabet, while this one, this one, and this one show you some of the problems with the lazy approach.

Who needs Musa, and why?

Written language is the key to most of humanity's knowledge, and the recent advent of media that can handle images and sound hasn't changed that. So anything we can do to make reading and writing easier would offer big benefits.

For example, Japanese has probably the worst writing system in the world, with four different scripts: kanji, katakana, hiragana and romaji. In addition, most kanji have multiple readings, some of which are Sino-Japanese onyomi, while others are native Japanese kunyomi, so that 日 can be read hi, bi, ka, nichi or jitsu, for example. There are many more little oddities that must be learned and deciphered when encountered, including nanori, gairaigo, okurigana, furigana, dakuten, yo-on, soku-on, cho-onpu, odoriji, three different systems of romaji and two different notations for numbers. And the whole thing can be written horizontally or vertically!

The Japanese estimate that their students must spend an additional two years in school, compared to American children, to acquire the same proficiency in their written language. And yet, the Japanese are among the world's most educated people. Imagine how much more they could do with those two years back! There have been many attempts to reform written Japanese, but none has had much success. Meanwhile, the English Spelling Society claims that replacing traditional English orthography would save English-speaking students three years.

Japanese is an extreme example, but almost every writing system has serious flaws.

Do you really think that everybody is going to give up their current scripts, in which so much education has already been invested and in which so much material is already written, for some utopian idea?

Most of what has ever been written was written in the last 30 years, and that is going to continue to be true unless some catastrophe overtakes us. We are still in a very early phase in the history of writing, and now is the time to fix the problems we already know about. Many languages are now changing scripts or adding an auxiliary script, and Musa has many advantages.




© 2002-2024 The Musa Academy musa@musa.bet 02nov23