The language school I attended all but banned romanization. The idea was to learn, practice, and finally internalize kana and kanji as quickly as possible. Hepburn is just a band-aid when it comes to language study.
For people not interested in learning Japanese, however, a unified romanization could have its benefits. It just never struck me as particularly inconsistent to begin with, even after so many years living there.
Curiously enough, Hepburn romanization fixes some ambiguities in Japanese (Japanese written in kana alone) while introducing others.
The ō in Hepburn could correspond to おう or おお or オー. That's an ambiguity.
Where does Hepburn disambiguate?
In Japanese, an E column kana followed by I sometimes makes a long E, like in 先生 (sen + sei -> sensē). The "SEI" is one unit. But in other situations it does not, like in a compound word ending in the E kana, where the second word starts with I. For instance 酒色 (sake + iro -> sakeiro, not sakēro).
Hepburn distinguishes these; the hiragana spelling does not!
This is one of the issues that makes it very hard to read Japanese that is written with hiragana only, rather than kanji. No word breaks and not knowing whether せい is supposed to be sē or sei.
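The せい ambiguity can be made concrete with a toy converter. The mini kana table and the "every e+i is a long vowel" rule below are made up purely for illustration (not any real library); the point is that without morpheme boundaries the naive rule cannot be right for both words:

```python
# Illustrative sketch: a naive "ei -> ē" rule gets 先生 right but 酒色
# wrong, because kana alone don't show the boundary between sake and iro.
# The tiny kana table here covers only the two example words.

KANA = {"せ": "se", "ん": "n", "い": "i", "さ": "sa", "け": "ke", "ろ": "ro"}

def naive_hepburn(kana: str) -> str:
    romaji = "".join(KANA[ch] for ch in kana)
    return romaji.replace("ei", "ē")  # treat every e+i as a long e

print(naive_hepburn("せんせい"))  # sensē  -- correct
print(naive_hepburn("さけいろ"))  # sakēro -- wrong; should stay sakeiro
```

Hepburn output encodes a decision (long vowel or not) that the kana spelling simply does not record.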
There are curiosities like karaage which is "kara" (crust) + "age" (fried thing). A lot of the time it is pronounced as karāge, because of the way RA and A come together. Other times you hear a kind of flutter in it which articulates two A's.
I have no idea which romanization to use. Flip a coin?
Use Ruby text alongside kanji, maybe?
Hepburn is poorly supported in some input methods, like on Windows. If you want to type kōen or whatever, you really have to work for that ō. It's better now on mobile devices and macOS (what I'm using now): I just long-pressed o and picked ō from a pop-up.
That's one aspect I really love about macOS. I'm from a small country so nearly no one makes hardware with our exact layout, but with macOS I can always just long press to fill in the gaps. I just wish all apps used native inputs, not some weird half-baked solution they built themselves.
> I just wish all apps used native inputs, not some weird half-baked solution they built themselves.
I find this often with apps and websites, and I speak/write British English (or attempt to).
Why effort is put into making a worse interface is baffling.
What's the best way to type Japanese on Windows? (I have a QWERTY keyboard)
On mobile I just switch to the hiragana keyboard, but that obviously isn't a sane option on desktop unless I'm clicking all the characters with a mouse?
Using the example from the top-level comment, you would install an IME, switch to hiragana mode, start typing "kouen" and convert to kanji when you see the right suggestion.
It might sound complicated at first, but you can do it pretty fast once you get used to it.
https://learn.microsoft.com/en-us/globalization/input/japane...
I don't know about now, but for the longest time, Google made a much better Japanese IME for Windows than Microsoft did ("Google Japanese Input"). I started using it after running into reliability issues with the Microsoft one, like the kanji dictionary disappearing, or switching between roman and hiragana freezing.
Assuming Microsoft's Japanese IME is still a dumpster fire, and the Google one has not succumbed to Googleshitification, that would be a way to go.
To enable the Microsoft IME there are some rituals to go through like adding the Japanese language and then a Japanese keyboard under that. It will download some materials, like fonts and dictionaries. A reboot is typically not required, I think, unless you make Japanese the primary language.
Once you have the keyboard, the LeftShift + LeftAlt chord cycles through the input methods. Ctrl + CapsLock toggles hiragana/romaji input. I think these are the same for the Google and MS inputs.
Is that part of Hepburn? It is not mentioned in the article, nor by most explainers that I’m familiar with.
The article says the new style allows either a macron or a doubled letter, but it's not clear whether that's supported for keyboard input on various platforms.
But in the case of ō, you can only use a doubled letter if the underlying word is おお. If it is おう then you don't have a doubled letter you can use; you need "ou" and that's not Hepburn any more. It is "wāpuro rōmaji" (word processor romaji).
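A small sketch of why the reverse mapping is lost. The kana table and converter below are illustrative toys, not a real transliteration library; they just show that both おう and おお collapse into the same ō, so the macron alone can't tell you which kana spelling produced it:

```python
# Illustrative sketch: naive kana-to-romaji with long-o collapsing.
# The tiny table covers only the example words.

KANA = {"こ": "ko", "う": "u", "え": "e", "ん": "n", "お": "o", "と": "to", "り": "ri"}

def to_hepburn(kana: str) -> str:
    romaji = "".join(KANA[ch] for ch in kana)
    # collapse both spellings of the long o into the macron form
    return romaji.replace("ou", "ō").replace("oo", "ō")

print(to_hepburn("こうえん"))  # kōen (公園, long o written おう)
print(to_hepburn("とおり"))    # tōri (通り, long o written おお)
print(to_hepburn("おう") == to_hepburn("おお"))  # True: distinction gone
```

Going the other way, an IME given "kōen" would have to guess between こうえん and こおえん, which is exactly why wāpuro input spells out "ou" and "oo".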
Compose o dash. Windows doesn't have an easy way to map in the compose key (usually ralt)?
big if true, jesus christ microsoft
https://github.com/ell1010/wincompose is like the first thing I install on any new Windows machine.
Note: bitwize is talking about how to do it on Linux, which is the best way, in my biased opinion. It's perhaps not the best mapping for people who use it regularly, but it's awesome for those who use it irregularly: we can usually guess how to type weird diacritics without having to look them up.
Nope. When on Windows I tend to use one of the US International or the Pseudo VT320 layout from https://keyboards.jargon-file.org/ .
Hepburn also allows the use of the double vowel, in this case: kooen
I'm honestly surprised Hepburn wasn't the official standard yet. It sounds way closer to the spoken sounds, at least to my western ears.
> The council’s recommendation also adopts Hepburn spellings for し, じ and つ as shi, ji, and tsu, compared to the Kunrei spellings of si, zi and tu.
I could imagine si, zi and tu sound closer to the spoken sounds to Mandarin speakers.
The old official system arguably makes more sense from a Japanese perspective.
If you look at the kana, the Japanese syllabic writing system, they have this ordering: ka ki ku ke ko, sa shi su se so, ta chi tsu te to, etc. Where the regular pattern calls for a "ti" sound, there is no "ti"; the kana in that slot happens to be pronounced "chi".
One common analysis holds that the underlying phonemes really are: ta ti tu te to. Traditional Japanese grammarians usually analyzed it this way. And they were historically pronounced that way: it has arisen out of relatively recent sound change. Somewhat like how some British speakers pronounce "Tuesday" such that it sounds much like "Chews-day" to speakers of other dialects. Affrication in a fixed context. The t phoneme triggers that kind of affrication obligatorily in Japanese, before the i vowel or y glide.
Some disagree with this as overly theoretical and based excessively on historical linguistics, and they insist that sh and f and ch are distinct phonemes in Japanese. But the Japanese writing system itself treats them as if they were not.
If you are learning Japanese it makes sense to pick a system that reflects the internal logic of kana spelling. If you want to just approximately pronounce Japanese words in English then you want something that reflects the logic of English spelling.
These two goals are always in tension. Mandarin pinyin, for example, was designed to reflect the logic of Mandarin phonology in a consistent way. It's not meant to be easily pronounceable by English speakers. It's to enable Mandarin speakers to look up words in a dictionary or for students of the language to study Mandarin. Though it has ended up used as a pronunciation guide for English speakers. And that often doesn't go well; a lot of English speakers don't know what to do with the q's and x's.
In pinyin, these are very different sounds; the pinyin spellings whose pronunciations are closest to し, じ, つ would be xi, ji, cu. I'm not as familiar with romanization systems closer to Latin pronunciations, but in Wade-Giles they would probably be written more like shi, chi, tsu.
Not closer to the spoken sounds, closer to English orthography.
It works better with other European languages' orthography too.
Native German speaker here. It fits very well here, too
The popularity of Hepburn has a lot more to do with the English language than the Japanese language
You mean that if you applied the inverse of the standard romanization of Mandarin, the resulting sound would be closer to the Japanese sound when starting from the Kunrei spelling than when starting from the Hepburn spelling?
> It sounds way closer to the spoken sounds, at least to my western ears.
That's the thing... to some other non-English language speakers, the existing/old romanization method actually is more accurate regarding how the letters would be pronounced to them, especially coming from languages that don't have the same e.g. [ch] or [ts] sounds as written with Hepburn.
The one technical downside I'd note with this change: 1:1 machine transliteration is no longer possible with Hepburn.
I don't know the detailed history of the system's development, but I notice that with Kunrei every spelling is a neat 2 characters, while with Hepburn it may be 2 or 3 characters:
Kunrei: ki si ti ni hi mi
Hepburn: ki shi chi ni hi mi
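The 1:1 transliteration point can be checked mechanically. One caveat: the fully lossless system is really Nihon-shiki (Kunrei also merges じ/ぢ and ず/づ). The mappings below follow the standard tables; the uniqueness check itself is just a toy illustration:

```python
# Toy check of the round-trip property. Nihon-shiki gives every kana its
# own spelling, so kana -> romaji -> kana is lossless; Hepburn merges
# じ/ぢ into "ji" and ず/づ into "zu", so it is not 1:1.

NIHON   = {"し": "si",  "ち": "ti",  "つ": "tu",  "じ": "zi", "ぢ": "di", "ず": "zu", "づ": "du"}
HEPBURN = {"し": "shi", "ち": "chi", "つ": "tsu", "じ": "ji", "ぢ": "ji", "ず": "zu", "づ": "zu"}

def is_one_to_one(mapping: dict) -> bool:
    """A 1:1 transliteration needs each kana to get a unique spelling."""
    return len(set(mapping.values())) == len(mapping)

print(is_one_to_one(NIHON))    # True
print(is_one_to_one(HEPBURN))  # False: ji and zu each cover two kana
```

This is why machine pipelines that must round-trip (dictionaries, input methods) tend to carry the kana, not the Hepburn romaji, as the source of truth.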
The politics of the issue are obvious: Hepburn is the older, American system, while Nihon and Kunrei are very purposely domestic (Nihon "is much more regular than Hepburn romanization, and unlike Hepburn's system, it makes no effort to make itself easier to pronounce for English-speakers" [1]). Apparently, Hepburn was later imposed by the US occupying forces in 1945.
Perhaps 80 years is long enough and suitable to effect the change officially with no loss of face.
[1] https://en.wikipedia.org/wiki/Nihon-shiki
Politics aside, Hepburn is better. You can’t seriously say you prefer “konniti-ha” and “susi-wo tabemasu”
"Better" depends on what you care about. _konniti-wa_ (the Kunrei-siki romanization of こんにちは; _konniti-ha_ is the Nihon-shiki form, which preserves the irregular use of は as the topic-marking /wa/) and _susi-o_ (again, Kunrei-siki ignores a native-script orthographic irregularity and romanizes を as _o_, not _wo_) are more consistent with the native phonological system of Japanese. In Japanese, coronal consonants like /t/ and /s/ are regularly palatalized to /tʃ/ and /ʃ/ before the vowel /i/, and there's no reason to treat _chi_ and _ti_ as meaningfully different sequences of sounds. Linguists writing about Japanese phonology use Kunrei-siki instead of Hepburn for good reason.
Obviously, being more transparent to English readers is also a reasonable goal a romanization system might have, and if that's your goal, then Hepburn is a better system. I don't have a strong opinion about which system the Japanese government should treat as official, and realistically neither one is going to go away. But it's simply not the case that Hepburn is a better romanization scheme for every purpose.
Should we also change other languages’ orthographies to make them easier to pronounce for English speakers? “Bonzhoor” instead of “Bonjour”?
Japanese people don't read romanized Japanese. Even Japanese learners don't read romanized Japanese.
Romanization is, by and large, a thing that exists for people who already know European/Western languages.
We could start by standardising English, so that pronunciation was always the same for a given letter order.
> Should we also change other languages’ orthographies to make them easier to pronounce for English speakers? “Bonzhoor” instead of “Bonjour”?
Already done.
- Komen ça va? - Mo byin, mærsi.
We don't have anything against https://en.wikipedia.org/wiki/Louisiana_Creole, do we?
> “Bonzhoor” instead of “Bonjour”
English is already heavily Norman-ized. Half of our vocabulary - including the word pronounce - comes from French.
If French didn't use the Roman alphabet natively, you might have a point.
At some point you might as well use Roman characters the way the Cherokee alphabet does - which is to say, uses some of the shapes without paying attention to what sounds they made in English.
English is the most widely spoken language in the world; it would be lovely to facilitate better communication with that population.
And the way English generally uses the Roman alphabet (obviously excluding the zillions of irregularities) isn't that far off from how most European languages use the Roman alphabet.
I'd expect that Spanish, German and French speakers would benefit just as much as English speakers from these changes.
> And the way English generally uses the Roman alphabet (obviously excluding the zillions of irregularities) isn't that far off from how most European languages use the Roman alphabet.
Saying it's not far off from the union of how all other European languages use the Roman alphabet would be closer to accurate.
Sure, but the point is this isn't really making romanized Japanese more English-like. It's making it more similar to how just about every other language already uses the Roman alphabet. This isn't an Anglo-centric thing, it's just good common sense - unless your goal is to make it harder to pronounce your language properly, which seems like an obvious own-goal.
>English
Use *h₂enǵʰ-ish please.
The political aspect might be a big part of why and how the systems are chosen. Didn't know about that!
I live in Thailand and I cannot get over the fact that romanization is (seemingly?) completely unstandardized. Even government signage uses different English spelling of Thai words.
In the first place, "romanization" of English is unstandardized! Or was that unstandardised?
It tends to be standardized within a single country.
Standardizations can be notoriously inconsistent[1], disregarded[2] or evolve fast[3].
There’s a surprising amount of interesting articles on wikipedia about that.
[1]: https://en.wikipedia.org/wiki/Ough_(orthography)#Spelling_re...
[2]: https://en.wikipedia.org/wiki/Eye_dialect
[3]: https://en.wikipedia.org/wiki/Sensational_spelling
Whoosh :)
No I understood. I just failed to see the relevance.
You should have seen Taiwan in the 1990s. It was a hot mess of older Western romanization systems, historical and dialectical exceptions, competing Taiwanese and pro-China sensibilities, a widely used international standard (pinyin), and lots of confusion in official and private circles about the proper way to write names and locations using the Latin alphabet. In 1998, Taipei even made up its own Romanization system for street names.
The chart halfway down this blog post lays out some of the challenges once a standard was instituted about 18 years ago:
https://frozengarlic.wordpress.com/on-romanization/
Thailand, famously, was never colonized by European powers. Everywhere else, some colonial administrator standardized a system of romanization.
Japan was not colonized, although it was briefly occupied.
Oh there are plenty of standards, including an official one. The problem is nobody uses them. Thai writing is weird, and between the tones and the character classes and the silent letters, you might as well just make some shit up. My birth certificate, driver's license, and work permit all had different spellings of my name on them.
IIRC, the road signs for “Henri Dunant Road” were spelled differently on either end, which was ironic, because at least that did have a canonical Latin form.
They need to do the same for a bunch of languages, e.g. Arabic.
La li lu le lo?
Oh no.
This is going to make finding specific Japanese game ROMs even more annoying.
Elaborate? I’m not following.
For people not familiar with Japanese, finding any info about a Japanese-language game can be a pain. They may have a Japanese representation, an official romanized name, a community romanized name using a different system… plus may also go by an outright English-language name, in some circles, which may (or may not) overlap with the name of an English-language port (if it exists). Then consider that some games have pretty extreme and confusing name variants in various editions or on different platforms, and those may go by different names in different contexts.
You can see the same game go by three different names on a community forum, Wikipedia, and a catalogue of games + md5sums for a system (you might think the md5sum could act as a Rosetta Stone here… but less so than you’d think, especially in the specific context of an English speaker and Japanese games, as you sometimes need some specific, old, oddball and slightly-broken dump of a game to get the one a particular English patch requires… and god knows what name you’ll find that under, but probably not the same md5sum as a clean dump)
The only bright spot in this is that if you can find a Japanese game on Wikipedia the very first superscript-citation almost always lists the official Japanese title in Japanese script on hover. That’s a life saver. (Presumably all of this is easier if you know at least some Japanese)
Though after I posted my comment I realized they mean they’re switching to another existing system (which I think is already widely used in gaming circles? Not sure though) which isn’t so bad. At least it’s not another one being added to the mix.
Please bring back Fraktur.
Previously in 2024 (?):
https://news.ycombinator.com/item?id=39624972
Thanks! Macroexpanded:
English-friendly Romanization system proposed for Japanese language - https://news.ycombinator.com/item?id=42606969 - Jan 2025 (23 comments)
Japan to revise official romanization rules for first time in 70 years - https://news.ycombinator.com/item?id=39624972 - March 2024 (97 comments)
The Japanese writing system is such a mess that they might as well redo everything from scratch and create a proper writing system.
That would work nicely in an abstract spherical Japan in pure vacuum.
The hardest bit about redoing something from scratch is not designing the new system, but getting it adopted. Many societies have tried things like that. Social inertia, especially paired with learning barriers (the steeper, the worse) and cultural and political attachments (and Japan values and tries to preserve its history and culture quite a lot), is not something that can just be dismissed.
That's not to say that there weren't countries that had writing system overhauls, just that it's difficult and of questionable value and not entirely without negative effects.
You are getting downvoted, but I have heard Japan has surprisingly low literacy rates (well below the 99% stated by the government) for just this reason.
Japan has an extremely high literacy rate.