Four reviews by eminent authorities in the fields of linguistics and languages are given below.
Review by Dr. Nicholas Ostler
- Author, most recently, of The Last Lingua Franca: English Until the Return of Babel (Walker & Company, 2010); Empires of the Word: A Language History of the World (Harper Collins, 2005); and of several other works in the fields of language and linguistics.
- Founder and Director of the Foundation for Endangered Languages.
- Dr. Ostler received degrees in Greek, Latin, philosophy and economics from Oxford University and a Ph.D. in linguistics and Sanskrit from the Massachusetts Institute of Technology under Prof. Noam Chomsky.
- Dr. Nicholas Ostler on Wikipedia
Review by Prof. Mark Newbrook
- Formerly Professor of Linguistics in various universities in Singapore, Hong Kong and Australia (U. Western Australia and Monash U.), Prof. Newbrook has carried out wide-ranging research in the fields of dialectology and “fringe” languages. He has served, most recently, as a linguistics consultant to various organizations.
- Dr. Newbrook studied classics at Oxford and linguistics at Reading, taking his Ph.D. there in 1982.
Review by Prof. S.R.S. Jaafar
- Professor of Linguistics at the School of Language Studies & Linguistics, Universiti Kebangsaan Malaysia, MALAYSIA. Prof. Jaafar is widely published in varied fields of linguistics.
Review by Dr. Christopher Moseley
- Editor-in-chief of UNESCO Atlas of the World’s Languages in Danger (UNESCO Press, 3rd Edition, Paris, 2010; includes an online interactive version); Editor of Encyclopedia of the World’s Endangered Languages (Routledge, 2007); Author of Writing Systems (Critical Concepts in Linguistics) (Routledge, 2014); Co-Editor of Atlas of the World’s Languages (Routledge, 1994).
- Board Member, World Oral Literature Project (Cambridge University, Yale University).
- Dr. Moseley is a world-recognized expert in writing systems of the world.
Review by Dr. Nicholas Ostler
Navlipi (despite its Sanskritic name, which means “new-script”) is a systematic extension of Roman script, with a number of aims in view: To be a practical (legible and writable) script for all the world’s languages, but at the same time to represent the languages’ sounds exactly and consistently, making no compromises on the phonemic principle. In this ambitious goal, it goes beyond existing scripts: Beyond ordinary Roman scripts, because it requires that its symbols be interpreted the same way everywhere; beyond phonetic scripts such as the International Phonetic Alphabet, by representing phonemes singly, rather than as a set of phones; and beyond all the other scripts, by attempting to replace every single one of them without loss of significant phonetic detail. (Chandrasekhar resigns himself to the loss of any historical and etymological traces that may survive in some languages’ writing systems.) As such, it aims to be a technical tool for the analysis of languages (for linguists), at the same time as it serves as a practical orthography for every language in the world.
This is a stupendous aim for a single system created by a single scholar, and its author, Prasanna Chandrasekhar, realizes that his chances of success are slim. Nevertheless, the fact that a single human vocal tract is capable – with the right exposure in youth – of articulating any one of the world’s languages perhaps encourages us to believe that it would be possible for a single written script – with enough of the right diacritics – to encompass every language, all without compromise in showing the significant distinctions to be made in each language.
The main obstacle to Chandrasekhar’s achievement is the phenomenon of “phonemic idiosyncrasy”, whereby the actual speech sounds are organized into different, and cross-cutting, significant sets in various languages: For example, p, whether aspirated or unaspirated, is the same phoneme in English, but the two versions belong to contrasting phonemes in Hindi, where (however) f is heard as the same sound as aspirated-p. By juxtaposing letters, Chandrasekhar conjures up new symbols that represent directly the complex phonemic reality. As a result, no language can be successfully written in Navlipi unless its phonemic system has been structurally analysed, which is perhaps no bad thing. Unfortunately, though, phonemic analyses tend to be controversial. This could put a brake on implementation.
The world may well be “too much with us” for Navlipi to stand very much chance of widespread adoption as a practical script: One is reminded of the very short-lived success of even Khubilai (Kublai) Khan when he commissioned the brilliant scholar ‘Phagspa to create a common script for all the languages of his empire, from Persian and Tibetan to Mongolian and Mandarin. The world tends to set its communication standards for historical reasons, and to suit the powers that be, rather than any academic ideal. But the script is also a dramatic object lesson in the constraints of phonemic analysis, and so may enjoy some popularity among linguists for its technical aims.
The attempt to have all the possible virtues of a phonetic writing system at once – on the basis of a single man’s ideal – is what makes this a heroic endeavour.
Review by Prof. Mark Newbrook
As its title suggests, this book presents a proposal for a new cross-linguistic alphabetic writing system, which could, in principle, replace both a) the International Phonetic Association Alphabet (IPAA) as employed by linguists for the transcription and teaching of all languages and b) existing language-specific scripts as used for everyday purposes, including the current spelling of English and other languages. Chandrasekhar’s system does not seek to reflect i) etymologies or other aspects of linguistic history (‘diachronic’) or ii) more abstract relationships between the forms of words and word-parts; it is strictly ‘synchronic’, and it is grounded in the phonetics of words and in ‘shallow’/’surface’ aspects of the phonology (as reflected in the usual phonemic representations). The core of the Navlipi system involves the 26 letters of the most familiar version of the Roman Alphabet (that used for English) together with five novel symbols. These 31 characters are subject to various modifications of form which systematically correspond with phonetic variations of the phone-types in question so as to represent the very many specific phones (sounds) found across the languages of the world. Chandrasekhar also includes further devices for showing phonemic tone – needed for transcribing ‘tonal’ languages such as Chinese – and other such ‘supra-segmental’ features.
Chandrasekhar’s title itself appears overstated (he cannot have examined all of the world’s 6,000+ languages, even through the work of others), and he is not himself a linguist (he is a chemist); but he has studied many languages and aspects of linguistics, and his actual discussion emerges as much more sophisticated about linguistic matters than that of most non-linguists who have proposed reforms. He is well-informed, his scope (while understandably displaying a particular focus upon India) is wide, and many of his individual points (general and specific) are themselves correct. Indeed, the book deals interestingly with methodological issues involving phonetics and script-design. Overall, it has to be taken seriously. And the Navlipi system itself does have important strengths; for instance, it is indeed (predictably) more systematic than IPAA (which has ‘evolved’ over many decades in the hands of many linguists).
Chandrasekhar is especially concerned to address what he sees as an ‘urgent issue’: the phenomenon which he (oddly) describes as the phonemic idiosyncrasies of different languages. This involves the fact that, even where some specific phones are shared between languages, said languages often group them differently into phonemes. However, Chandrasekhar appears confused as to the views of mainstream linguists regarding the general cross-linguistic patterning in respect of these matters, and attacks a ‘straw man’; and (unless his wording is very poorly chosen) his own position on this issue is clearly mistaken.
The most important general problem with Chandrasekhar’s work involves the distinction between, on the one hand, a) phonetic transcription systems such as IPAA (normally language-neutral and intended for technical linguistic work or the teaching of foreign-language phonetics), and, on the other, b) language-specific phonemic transcription systems (such as those based on IPAA) intended both for technical work and (by spelling-reformers and by linguists inventing new scripts) for actual everyday use. Chandrasekhar appears to be attempting to cover both of these sets of requirements at once, with no reasonable grounds for expecting success proportional to the efforts involved. Systems which are suitable in one of these contexts may not be at all suitable in the other. There are also some other problems with Navlipi.
Nevertheless, as noted, this book is very much worthy of attention by all with a serious interest in writing systems.
[For a fuller version of this review, see The Skeptical Intelligencer (ASKE, UK), 15 (2012), pp. 11-14.]
Review by Prof. S.R.S. Jaafar
This amazing book is a welcome addition to the literature on phonetics. It addresses meticulously the problems of phonemic idiosyncrasy across language families. Previous research and books on phonetics have mostly focused on the phonetic system of a particular language. This book sheds light on the world of phonetics; it pays formal attention to phonemic idiosyncrasy by examining and considering the problems in all the world’s languages. Most of the material discussed in the book is suitable for all levels of readers, i.e. from a beginning student of linguistics to the layman. However, some chapters, mostly in part three, require readers to have a basic knowledge of phonetics so that they can easily follow the discussion.
A secondary aim of the book is to provide a universal script for India that bridges the ‘Aaryan/ Dravidian’ (North/ South) divide. These two parts of India, as claimed by the author, have different ways of treating aspirated and unaspirated plosives. In a Dravidian South Indian language such as Tamil, unaspirated and aspirated plosives like [t] and [th], respectively, are claimed to have no distinction between them. We are most likely to hear a word like tatta ‘a hit’ pronounced as thatta by speakers from this part of India. As these phones carry no distinction in pronunciation, speakers might have difficulty distinguishing aspirated from unaspirated phones, in contrast to northern India, where the aspirated and unaspirated phonemes are clearly distinct. On top of that, the book also seeks to address existing Romanised transcriptions, which are still inadequate. The considerable adaptation of the ancient Chinese ideographic script to twenty-first-century use is said to be insufficient for intellectual discourse, and many of today’s Chinese have requested a new ‘Romanised’ script that is easier and more palatable to learn. Besides that, the book also looks to potential new markets, e.g. for the Turkic languages.
The book engages in a very good phonetics discussion which considers the problems raised by phonemic idiosyncrasy across languages. The author brings up the idea that phonemic idiosyncrasy can also lead to variations or accents in human utterances. In phonology, for example, variation has largely been discussed as a phenomenon of different grammars having different phonological systems; thus, there are differences in pronouncing words. Variation was initially seen from the viewpoint of sociolinguistics, i.e. as due to external factors such as sex, age, style, register and social class (Anttila 2002, p. 206). As well as external factors, variation is also due to internal factors, such as morphology, phonology, syntax and lexicon (Anttila 2002). Phonemic idiosyncrasy, a topic of discussion in phonetics, could be one of the internal factors that also condition variation. The author puts forth this great idea. The book stresses the importance of having a single writing system or universal orthography in order to comprehend other languages. A universal orthography would be able to assist non-native speakers, e.g. by showing English speakers that [v] can also be pronounced as [w] when reading Hindi/Urdu. The book is divided into four parts which consist of ten chapters altogether. The first part of the book summarises all the Navlipi tables. Second is the Introduction, which comprises three chapters, Chapters 1 to 3. The third part is a presentation and discussion of NAVLIPI. Finally, fourth is the part of the book containing the glossary, literature cited, index and details about the author. Here now is an overview of what each of the first three parts discusses.
Part 1: The reader is provided with all the NAVLIPI tables before going any further. A summary of the tables is presented in this part in the hope that the readers will get an overview of what the discussion in the book focuses on. In the middle of the discussion, particularly in Part 3, the reader is asked to refer to the summary tables presented earlier for a better picture of the topic discussed.
Part 2: Chapter 1 mainly focuses on the need for a universal orthography (script) for all of the world’s languages, as well as discussing and presenting critically the drawbacks of earlier orthographies such as those of Graham, Watt, Lepsius and many others. In discussing the need for a universal script for all of the world’s languages, the book raises the issue of so-called phonemic idiosyncrasy, which is defined by the author as the existence of very different sets (usually pairs) of phones, whereby allophones of the same phoneme in one language exist as distinct phonemes in another language. The idea of phonemic idiosyncrasy, which is discussed in depth in the book, offers a different point of view regarding the different pronunciations produced by different speakers. For example, the English word fly may be pronounced as fry by a Japanese speaker, as the alveolar lateral [l] is said not to exist in the Japanese phoneme inventory; it is thus always replaced by the alveolar central flap [r]. Based on the example given, I am in agreement with what the author claims. A similar situation occurs in Malay, when Malay speakers pronounce Arabic words like ‘fikr’ and ‘fahm’. Since the labial fricative [f] does not exist in the Malay phoneme inventory, the words are therefore pronounced as [piker] and [paham], respectively, by Malay speakers.
Meanwhile, Chapter 2 highlights the objectives of a universal orthography and how NAVLIPI satisfies them. This chapter discusses in some detail ten major requirements that a universal orthography must meet. These requirements are: universality and completeness; recognisability; distinctiveness; simplicity and intuitive nature; ease and rapidity of transcription from three points of view – keyboard, cursive and print; systematic scientific classification and accuracy; discretization; practical phonemics rather than phonics; voice recognition compatibility; and the ability to accommodate the phonemic idiosyncrasies of all the world’s languages. The discussion of each requirement is supported by the drawbacks found in earlier orthographic systems, such as that of the International Phonetic Association (IPA). As all the requirements are well discussed, the chapter makes clear why a new universal script for all the world’s languages is required.
New scripts which are based on a scientific or systematic classification of phones, as well as those which are newly created or entail considerable innovation, are also discussed in the book. Tibetan, South-East Asian (Khmer, Thai, Burmese, etc.), Mongolian Phagspa and also Haangul (or Hangul) scripts are examples of this type. Haangul, for instance, is claimed to be an innovative and scientific Asian script, as it displays several major advances and startling, innovative features. The use of a specific symbol which is iconic or notational for a specific phonic property is one of the scientific elements of the script. The script does, however, have some drawbacks, as the author shows: it lacks recognisability and cursive forms for writing Haangul. Another example is the Graham alphabet. It has been shown that the Graham alphabet is an incomplete script, since the vowel sounds occurring in English in the words air and ale, or up and cur, are given the same glyph. The vowels in these words should each have been given a glyph corresponding to the way they are pronounced. There are many more examples to be found as the chapter proceeds. All three chapters of Part 2 convincingly demonstrate the deficiencies found in earlier orthographies, which have posed quite a number of problems in representing the phonetic forms of human speech sounds.
Part 3: The third part is the kernel of the book. It consists of seven chapters altogether, i.e. Chapters 4 to 10, which present and discuss NAVLIPI. The discussion begins with how the new NAVLIPI script was made. A shell matrix or template, comprising a phonological classification, was used before choosing letters. At this stage, appropriate example words are given in an empty matrix to illustrate phones, but no glyphs (letters or symbols) are assigned to phones yet. Only once the shell matrix is full are glyphs for the phones chosen carefully. A five-dimensional vowel classification matrix and a sixth variable for vowel duration are also taken into account in the new NAVLIPI script, as they have caused some difficulties. In constructing the new script, five trial scripts were tested: a geometric script; a script based entirely on POST-OPS, i.e. post-positional operators; a simple version and a complex version or adaptation of the DEWANAAGARI script; and an adaptation of PITMAN SHORTHAND. None of these except POST-OPS achieved the aims of a new universal NAVLIPI script, as they lacked universality and recognisability. The POST-OPS script was therefore referred to when constructing the new universal NAVLIPI script. In NAVLIPI, however, only a limited number of post-ops is used, and it uses fewer new letters than the other scripts.
The book is well structured and organised, as it begins by presenting all the problems with earlier orthographies (scripts) for the world’s languages before discussing the proposed NAVLIPI script. This presentation makes it easy for the reader to follow and understand the discussion, and yet still be able to see the advantages in proposing NAVLIPI. In the course of the discussion, the book provides plausible and critical arguments and the reasons for selecting the letters for NAVLIPI. In short, every glyph that was chosen for the new universal script of NAVLIPI is clarified and its selection justified.
This new universal NAVLIPI script has come at the right time, as the IPA Revised 1993 (International Phonetic Alphabet), which has undergone quite a number of revisions, does indeed need to be replaced by a complete set of phonetic symbols. The IPA chart is seen to have many drawbacks in representing the sound systems of a number of languages. This most likely stems from the circumstances of the IPA’s creation: the original alphabet was based on English spellings, and, with the aim of making it usable for other languages, its symbols were then allowed to vary.
- Anttila, Arto. 2002. Variation and phonological theory. In Jack Chambers, Peter Trudgill, and Natalie Schilling-Estes (eds.), Handbook of language variation and change. Blackwell, Oxford, U.K., and Malden, Massachusetts. 206-43.
- https://en.wikipedia.org/wiki/History_of_the_IPA. Accessed 12 November 2012
(This review first appeared in the journal 3L: The Southeast Asian Journal of English Language Studies, 2012, Vol. 18(4), pp. 227–230.)
Review by Dr. Christopher Moseley
The name Navlipi is one of a number of new terms which are introduced in this unique volume. In it, Dr. Prasanna Chandrasekhar proposes a most ambitious scheme, one which has eluded linguistic science for centuries: A method of reducing all the world’s major languages to writing in a uniform way. The name itself derives from Sanskrit, and means ‘new script’. It is nothing less than a new script that Dr. Chandrasekhar is offering the world in the pages that follow.
Dr. Chandrasekhar has made a study of all the world’s more commonly used scripts and compared their efficiency in rendering the languages they represent. His project was originally a unified script for the languages of India, but he soon extended its mandate to cover all the languages of the world. Navlipi, however, is not based on Devanagari or any Indian script, but on Roman, the most widely adapted script in the world. Furthermore, he has devised a scheme whereby the 26 standard Roman letters are supplemented by only five more. Tones and other supraphonemic features are also catered for, by a system of ‘post-ops’ (postpositional operators). In other words, it differs from the International Phonetic Alphabet (which aims at the same comprehensive universality) in not attempting to greatly extend the range of distinct graphemes, but rather, aims at the most economical use of the existing inventory, very modestly extended. What is more, its inventor claims and demonstrates that it can be used in a cursive version in handwriting, in addition to the inventory of letters for printing.
Dr. Chandrasekhar’s academic background is in chemistry, and his current work is in the defense contracting industry, but his ethnic background places him in a multilingual, multiscriptal society. An idea like Navlipi was most likely to arise in India, where numerous scripts compete for the eye’s attention in everyday life, and an inquiring mind such as the author’s was moved to try to distil them into a single uniform writing system.
The author sets out his alphabet in the form of tables which clearly show the phonemes represented by each letter, grouped by place and manner of articulation, rather like a phonetic chart. He does not comment in the tables on the frequency of each phoneme in the world’s languages, except to state where it is negligible. There have been many claims of a perfect fit between the written script and the spoken form of some of the world’s languages (such as Hangul and Korean), but so far no claim has been made for a perfect fit of a single script for all the world’s languages.
The Navlipi script has been put extensively to the test on a wide range of languages, and the test transcriptions make up a large part of the original text of the volumes presented here. Its accuracy in rendering the phonemic distinctions in each language will be weighed, by a native-speaker audience, against the possible sacrifices of etymological transparency. However, it is one of the objects of Dr. Chandrasekhar’s project that phonological consistency outweighs etymological or phonemic ‘idiosyncrasy’, as the author calls it. In terms of phoneme-grapheme correspondence, Navlipi is demonstrated to be faithful to the sounds of a language while not being over-complex to write. Dr. Chandrasekhar has made exhaustive comparisons even with scripts which are confined to use with one language (Cherokee, Varang Kshiti and many others).
And what is remarkable about the author’s researches is that he has given each of these scripts a rigorous test for universalising it – applying it to the full range (as he sees it) of the world’s contrastive phonemes – and in each case he finds them wanting. Their lack of adaptability lies not merely in the impossibility of reassigning redundant graphemes (in other words, a restricted range of possible written signs), but also in less quantifiable, or more ‘relative’ ways, such as recognisability and intuitiveness. The primacy of the alphabet in its traditional guise – that is, with upper and lower case letters, and cursive variants – is clearly evident to Dr. Chandrasekhar. However, it is to his credit that he adopts this option only after thoroughly testing the alternatives. What is also attractive to him, one can’t help feeling (and this relates to his concept of ‘intuitiveness’) is the perfectly-balanced degree of contrast between letter-shapes in an alphabet like the Roman one. Perfectly balanced, that is, in terms of visual perception, brain-to-eye co-ordination.
This brings the user of Navlipi to the issue of variant letters vs. diacritics. Dr. Chandrasekhar has deliberately avoided diacritical marks of the accepted type (cedilla, acute, &c.) above and below the base-letters to indicate a change of phonemic quality from the base. Rather, each letter is to be considered as a complete, separate and organic unit. But that is not to say that the new symbols are not clearly derived from older ones, or that they bear no organic relationship to letters without these extensions. The extensions are of two main types – bars within the letter, and the so-called ‘post-ops’, which are actually adjuncts written to the right of the base letters. These indicate non-segmental features such as tone, nasalization and the like. This may be taken to be the minimal distinctive variation in an alphabet consisting only of primary symbols, with no secondary or optional members of the inventory. Yet of course these newly created symbols will in some cases be optional – for those languages that do not possess the phonemes in question.
The author’s coverage of the range of possible ‘post-ops’ will give an indication of how comprehensive a range of languages and their contrastive phonemes can be accommodated by this scheme. The ‘post-ops’ are the simplest and readiest solution to the problem of adapting what is essentially a 31-letter alphabet to all possible phonemic environments. It is interesting to speculate on the effect on literacy in many rarely-written languages that this scheme would have. What looks at first like an attempt at an accurate transcription system for linguists could, the author suggests, be a useful vehicle for everyday writing in any conceivable language.
The author’s guiding principle in creating the alphabet and its attendant ‘post-ops’, then, has been to take note of the frequency of phonemes, in the major languages of the world with which he is familiar, in assigning, reassigning or creating the distinct letters, while allowing for the less frequently occurring ones in his maximally economic system of ‘post-ops’. It is, in this sense, primarily an alphabet for practical everyday use with any language. The forms of the new letters themselves have been created bearing in mind their associations with already existing letters.
In dealing with tone, the author has had to be especially thorough. Each language where tone is contrastive (Chinese, Vietnamese, Igbo, to take some obvious examples) has its own set of contrastive oppositions. Dr. Chandrasekhar demonstrates exhaustively the unwieldy nature of the renderings of these contrastive tones (ranging from the mandatory system, in Vietnamese, through the semi-optional marking system of Igbo, to the official and semi-official transcription systems of Mandarin and Cantonese) and posits his own uniform system for showing tone. He goes further, and shows how tone could be marked in Navlipi even for languages where tone is predictable but not completely phonemic, such as Swedish. The number of speakers of tone-languages in the world is formidable, and Navlipi is presented as lending itself especially well to the consistent rendering of these languages.
Who, then, is the potential ‘user of Navlipi’? The author contends that his original aim was to bridge the gap, using a Romanized system, between the discrete scripts of India – the Devanagari and its variants that emanate from Braahmi, including the scripts of the Dravidian languages of south India, as well as the smaller scripts such as those for the Munda languages. He soon realised, however, that his invention had potential use, with easy adaptation, to many other non-Roman, or at least non-alphabetic, scripts – Arabic and Chinese for instance – as well as those national alphabets that are still wavering between systems, such as some of the languages of former Soviet Central Asia. Thus he looks forward to adaptation on a national, indeed a multinational, scale, if not a fully international one. He does not use the term ‘auxiliary’ and does not entertain the notion of a traditional script continuing alongside the use of Navlipi for teaching purposes.
Persuading the world to adopt Navlipi presents quite a challenge, of course, and the author is well aware of the difficulties he will meet. He presents, in one chapter, both the arguments for and the arguments against its adoption. Who are the actual decision-makers in such cases? The national Academies, where they exist? Governments? Common popular usage? The press? There is no single answer, and the author addresses himself to both the linguistic scientist and the lay reader, and rests his case.
The author’s vision stretches both backward and forward in time, as concerns the implications of this script: It can be used to transcribe ancient languages; and on the other hand, it can be adapted to voice recognition technology.
It is refreshing to find this basic issue in linguistics tackled from the point of view of someone versed in the physical sciences. What you find in the following pages and the ensuing volumes is a comprehensive exposition of a theory which is put to rigorous testing. The author does create his own terminology, which might meet with some resistance from those used to the terminological conventions of linguistic science – but he is internally consistent. Some of his terms – such as phonochromaticity – are directly analogous with terms in the physical sciences. Where a new term is introduced, it is explained fully.
I commend this book to any reader who is interested in the age-old problem of rendering all languages uniformly in writing. It has been tried before, by Lepsius in the nineteenth century and several others, but the present volume may prove to be the most comprehensive attempt yet made.