Is it ok for linguists to hate new words?

Linguists are famously very cool with words changing their meaning, new words arising, and basically language just doing whatever the hell it wants, irregardless (heh) of what the language pedants would prefer.

‘That’s not what the dictionary says!’, the pedant bleats.

‘Ah’, retorts the wise linguist, ‘but a dictionary is simply a record of usage, not a rule book’.

Fun fact by the way:

The earliest English dictionaries in the early 1600s, like Robert Cawdrey’s Table Alphabeticall, didn’t actually list all the words, only the most difficult ones, including the rush of words being borrowed into English from French, Latin and Greek – which were much more scientifically and culturally interesting back then than boring old backwater English.

Dictionaries change

Contemporary dictionaries do change their definitions, as language itself changes. Take the English words shall and will, which used to occupy very different territories (for example shall typically appeared with ‘I’ and ‘we’, will with other grammatical subjects) but nowadays will has largely usurped shall. That’s just natural language change, and the Cambridge English Dictionary now marks shall as ‘old-fashioned’. Will is hot; shall is not.

And this is still happening today. In 2019, a petition was launched for the Oxford English Dictionary to update its definition of ‘woman’, to remove sexist wording and to include “examples representative of minorities, for example, a transgender woman, a lesbian woman, etc.”. This caused quite a stir at the time, but the dictionary folk did what they always do – investigated changing language usage.

The Cambridge Dictionary moved first, adding to its definition the entry ‘an adult who lives and identifies as female though they may have been said to have a different sex at birth’. The OED has also moved, but rather more circumspectly, simply adding an example of usage under its definition: ‘Having trans women involved added so much to the breadth of understanding what it means to be a woman.’ In this case we’re witnessing dictionaries catching up in real time, at different paces. But they do catch up. That’s their job, not telling us how to speak proper!

‘Cambridge Dictionary’s definition of ‘woman’, updated to be transgender-inclusive’

Prescriptivists and descriptivists

In academic parlance, those who wish language would just sit still and behave itself are prescriptivists. They prescribe how language should be used (just as your doctor prescribes the medicines you should take).

Linguists, by contrast, are descriptivists, simply describing language as it is actually used without passing judgement.

Or are they?

And/or, do they have to always be?

Naming no names, I have heard unguarded comments from professional linguists, irked by this or that slang term their teenage offspring come out with. Linguists are humans, and they live in a human society that is full of that kind of sneering. Some of it slips through. But strictly speaking this is very much a faux pas, and might provoke a subtle change of subject at the conference dinner table.

Quotative like

A widely discussed example from recent decades is a new use of like to quote someone (‘He was like, I don’t care!’). I reviewed and modelled the research into this new ‘quotative like’, which showed teenagers leading the innovation. This new usage quickly ruffled pedant feathers far and wide. Indeed, many schoolteachers heavy-handedly banned its use under the pretence of reinforcing standard literacy. ‘You’ll never get a job speaking like that!’ etc. etc.

But the linguistic research told another story. Quotative like was doing something very special, and more importantly something previously unavailable in English. It allowed you to relate what someone said, but without claiming those were the precise words they used. Compare ‘He was like, I don’t care’ and ‘He said, I don’t care’. The first makes a weaker claim: not that he said exactly that, simply that he said something like that.

It’s actually a very efficient and strategic conversational device, and linguists sprang to its defence as a novel and intriguing innovation. Eyebrows were increasingly raised at those few linguists who continued to privately grumble about it and other youth lingo.

A strip in the webcomic XKCD about research on quotative like

Evasive so

But other linguistic innovations garner more divided opinion among linguists, particularly some quirks of politicians, corporate bigwigs, and other denizens of elite circles. A widely discussed example which gained pace in the early 2010s is the use of the word so to begin a sentence. Historically a rather dull grammatical bolt simply plugging together chunks of sentences, this unassuming two-letter word has been promoted to higher tasks in recent years, much to the dismay of the pedants. As a 2015 NPR article notes,

Many of the complaints about sentences beginning with “so” are triggered by a specific use of the word that’s genuinely new. It’s the “so” that you hear from people who can’t answer a question without first bringing you up to speed on the backstory. I go to the Apple Store and ask the guy at the Genius Bar why my laptop is running slow. He starts by saying, “So, Macs have two kinds of disk permissions …”

British journalist and BBC radio presenter John Humphrys long marshalled opinion against this use of so. Indeed his listeners frequently echoed the same grumble. Others went on the defensive, urging that so has been used to begin sentences for centuries.

But that defense somewhat misses an important nuance of this irritation. The new usage here is not simply beginning a sentence, but beginning a reply to a question, especially a challenging question, often with something that is not really an answer at all, and often uttered by someone in a position of power, who really should know the answer.

A famous example of that little nuance was a 2015 New York Times interview of Mark Zuckerberg in which he gibbered out some bizarrely rambling answers to very straightforward questions, for example what his new toy ‘Creative Labs’ was supposed to be. Simple question. Define the product. He responded:

So Facebook is not one thing. On desktop where we grew up, the mode that made the most sense was to have a website, and to have different ways of sharing built as features within a website. So when we ported to mobile, that’s where we started — this one big blue app that approximated the desktop presence.

But I think on mobile, people want different things. Ease of access is so important. So is having the ability to control which things you get notifications for. And the real estate is so small. In mobile there’s a big premium on creating single-purpose first-class experiences.

So what we’re doing with Creative Labs is basically unbundling the big blue app.

This spectacularly circuitous response not only patronised a professional journalist and their audience – who might just understand what a website is – but it also did something more sinister. It shirked responsibility and accountability; it kicked up a cloud of corporate haze when a simple product definition was required.

Slippery circuitousness, after all, is an important corporate skill, whether you’re not answering a journalist or not answering a Senate committee.

One reactionary pedant, Bernard Lamb, President of the Queen’s English Society, retorted of this new so: “It’s not being used as a conjunction to join things up, which is how it should be used. … It’s just carelessness, it doesn’t have any meaning when used this way.”

But he was wrong. It does have meaning, just in a new and rather more sinister way.

Doing bad things with words

‘So’, as it’s used here and in other such corporate media interviews (‘How can you justify this kind of oil spill?’ – ‘So oil spills are uncommon and we work very hard to prevent…’) is doing a huge amount of ultimately rather grubby work. Its former career as a conjunction (‘X happened so Y happened’) conditions us to see logical relevance between X and Y. Zuck and other corporate and political bigwigs use this to their advantage, to imply relevance when there is none.

And in the process, in a small but important way, that adds to their aura of elite untouchability.

Powerful people using language to trick their audiences is of course not new. Classical rhetoric gives us the term paradiastole, when a reply to a question turns a negative into a positive, or otherwise deflects and diffracts the focus of the question. (Socrates famously hated political rhetoric, inspiring his student Plato similarly.) Reply-initial so could simply be the new rhetorical kid on the block, the latest ruse in a very long tradition of ruses to distract from not having a good answer, or having one but wanting to avoid it.

Statues of Plato (left) and Socrates (right) by Leonidas Drosis at the Academy of Athens (Wikimedia Commons, CC-BY-SA-4.0)

And this brings us to where linguists might get justifiably annoyed, more so than at their teenage kids’ slang.

If a linguistic innovation is achieving something sinister, then perhaps it’s ok to hate on it. Linguists, after all, are not simply interested in sanctifying any and all words as precious gems. Linguists skillfully dissect other language use that is more obviously doing bad things – racist, sexist, homophobic and transphobic, and other discriminatory discourse.

Calling out nefarious language is ok

Laying bare when a linguistic innovation is doing something sinister, calling it out for what it is, can simply be an extension of that same important critical insight.

Funnily enough, that reply-initial so has actually been picked up by media training organisations. Corporate elites are always carefully groomed on their language, and since this particular innovation has picked up so much ire, it is now carefully ironed out. You may be hearing it less nowadays as a result.

You’ll still hear ‘I was like…’ though, because teenagers don’t have spin doctors to manage their comms, nor are they interested in fooling the public to buy their widgets or vote for them. Their interest is in being cool, as it should be.

So, criticising linguistic innovations does have its place when there are more shady forces at work. It’s like the principle in comedy that a joke is funny as long as it’s ‘punching up’, i.e. poking fun at those higher on the social ladder. As soon as the jokes begin ‘punching down’, mocking those who are already looked down upon without a comedian piling in, then it stops being fair game.

New words can be fun and useful, or they can hide other more nefarious intentions. For the latter, linguists should feel comfortable punching up. It’s part of the job, alongside calling out more obviously discriminatory language. Linguists are ideally placed to pick those apart – celebrating the grammatically ingenious irreverence of teens while also throwing tomatoes at sneaky elites. So there.

Will technology make language rights obsolete?

Something has been nagging at me recently. I read a lot of tech news, and it seems automated translation is about to get a whole lot better, and a whole lot more mobile. Meanwhile there is the burgeoning prospect of augmenting ourselves with technology to enhance our squishy human brains – the so-called ‘singularity’ of machine-human interaction. What’s nagging me is that these developments will surely have a massive effect on how people speaking different languages interact. That in turn will drive an existential shift in our approach to ‘language rights’.

But so far nobody has really said anything about all this, perhaps because the technology seems so distantly futuristic. It really isn’t though. It’s basically here already. There is some way left to go, and right now the market is still more hype than reality. So I’ll start by reviewing where we actually are now, then I’ll join the dots from there to everyone being seamlessly equipped with reliable, universal live translation. Then I’ll think through what that might mean for the field of language rights.

First is the question of reliability. Computer translation has long been a bit of a clichéd joke – accidentally translating your holiday request for a medium spiced latté into an insult about the barista’s mother. Computers can do the basics, but only humans get all the nuances right. Right? Well, that’s changing.

Current machine translation is not yet as good as humans, especially between very dissimilar languages. A recent university study of Korean-English translation, pitting various translation programs against a fleshy rival, came out decisively in favour of us air-breathers. But the programs still averaged around one-third accuracy. That’s pretty good. Meanwhile another recent controlled test, comparing the accuracy of automated translation tools, concludes that “new technologies of neural and adaptive translation are not just hype, but provide substantial improvements in machine translation quality”.


Technology is promising seamlessly connected reliable universal live translation (Source: gadgetguy.com.au)

A recent article in The Economist shows incremental improvements to accuracy over recent decades. Translation is increasing in accuracy more quickly than ever, fuelled by advances in artificial intelligence, neural networks, and machine learning – essentially computers learning on their own, not waiting for frail humans to gradually program them during waking hours and between meals. Computers can now independently chew over vast databases of natural language, compare common patterns, and refine their own algorithms – see for example this pre-review academic paper outlining the Google Neural Machine Translation (GNMT) system. The more data that goes in, the more accurate it becomes. A recent update to Google Translate in November 2016 improved the system “more in a single leap than we’ve seen in the last ten years combined”.
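For readers who want to see what this looks like from the outside, here is a minimal sketch of calling a pretrained neural translation model from Python, using the open-source Hugging Face transformers library and the freely available Helsinki-NLP OPUS-MT models. To be clear, this is my own illustrative toy, not the GNMT system described above; the model name and example sentence are assumptions chosen purely for the demo.

from transformers import MarianMTModel, MarianTokenizer  # pip install transformers

model_name = "Helsinki-NLP/opus-mt-en-fr"   # a freely available English -> French model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

sentences = ["Where is the nearest Italian restaurant?"]
batch = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**batch)           # decode with the learned model
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

A few lines like this download a model trained on millions of human-produced sentence pairs and apply it to new text – which is exactly the ‘learning from data’ point: nobody hand-coded the French.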

So, highly reliable real-time automated translation doesn’t seem such a distant mirage.

And importantly, since it’s learning from spontaneous human input, it’s not just consulting dictionary-like formal grammars and vocabulary, but rather the way people really speak and write. This is not just good news for speakers of different languages but also of different non-standard dialects. The computer doesn’t care if you speak proper! It only cares if you speak in a way that’s approximately comparable to how other people have spoken.

This kind of live translation is going mobile too, thanks to clever phone apps like Speak & Translate, or Google Translate. Point your phone at the Ristorante Italiano, and on the screen you’ll see an Italian Restaurant – not just as blocky subtitles but as text that actually overlays the text on the sign in front of you, as if the sign were written in your language.

The next piece of the puzzle is voice recognition. Growing up in the 1980s and 1990s, I remember my dad using auto-dictation. And. Painstakingly. Reading. Out. Each. Word. But again, times have changed. The Economist article I cited earlier also reviews recent leaps forward in voice recognition, to work with unfamiliar voices and rapid speech.

Combining that with live translation gives you Skype’s new real-time translator. Search online for videos of that, and once you get past the slick corporate promotional videos you should find examples of multilingual friends taking it for a spin. Their reactions tend to range between impressed, confused and amused, so there’s room for improvement. But as I noted above, a lot of time, effort and money is being pumped into this. Expect further big advances, soon.

And again, this technology is going mobile. With apps like Google Translate you can speak into your phone, and an automated voice will speak a translation aloud. As that gets more accurate, it’ll be easier and easier to have reasonably natural conversations between languages.
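To give a feel for how such a feature chains together – speech recognised into text, text translated, translation read aloud – the sketch below wires up that pipeline from open-source Python pieces (the SpeechRecognition and pyttsx3 libraries, plus the same kind of translation model as in the earlier sketch). It is a rough illustration under my own assumptions, not a description of how Google Translate or any commercial app is actually built.

import speech_recognition as sr             # pip install SpeechRecognition (needs PyAudio for the microphone)
import pyttsx3                              # pip install pyttsx3
from transformers import MarianMTModel, MarianTokenizer

tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-fr")

recognizer = sr.Recognizer()
with sr.Microphone() as source:             # listen for one utterance
    audio = recognizer.listen(source)
text = recognizer.recognize_google(audio)   # speech -> English text (free web API, needs internet)

batch = tokenizer([text], return_tensors="pt", padding=True)
translation = tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True)[0]

engine = pyttsx3.init()                     # translated text -> synthesised speech
engine.say(translation)
engine.runAndWait()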

But… translating your voice isn’t much use when there’s lots of other noise around. Technology has two answers here. Firstly, noise filtering techniques are improving (Kirch & Zhu 2016), and are the focus of much innovative energy – search Google Scholar for ‘voice audio noise’ and you’ll find a flurry of recent patents. Secondly, machine lip reading is advancing rapidly too: comparing human sounds with their corresponding mouth movements – so-called ‘visemes’. Your phone can’t hear you? No problem if it can at least see you (and phones needn’t limit themselves to the puny smear of light that us squinting humans rely on).
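As a flavour of the simplest end of that spectrum, a fixed band-pass filter just keeps the frequencies where speech carries most of its energy and discards the rest – far cruder than the adaptive techniques in the patents and papers above, but it shows the basic idea. The sampling rate and band edges below are my own illustrative choices, not drawn from the cited work.

import numpy as np
from scipy.signal import butter, lfilter    # pip install scipy

def bandpass(signal, fs=16_000, low=300.0, high=3_400.0, order=5):
    # Keep roughly the telephone voice band; attenuate low rumble and high hiss.
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return lfilter(b, a, signal)

fs = 16_000
t = np.arange(fs) / fs                      # one second of audio
voice_like = np.sin(2 * np.pi * 440 * t)    # stand-in for a voiced sound
noisy = voice_like + 0.5 * np.random.randn(fs)
cleaned = bandpass(noisy, fs=fs)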

Artificial Intelligence is being applied here too, similarly outpacing clunky mammalian programmers clicking away one key at a time.

If voice recognition is improving, what about voice production? That Economist article I mentioned notes the application of machine learning to understand pronunciation. That in turn points to a future auto-translator that not only translates your words but could even mimic your actual voice.

The next piece I’ll put into the mix is perhaps the weirdest. We’ve all seen dubbed movies where the actors’ lips don’t match the translation. Pretty soon, that mismatch will disappear. Enter Face2Face, a computer algorithm developed by researchers at the University of Erlangen-Nuremberg, the Max Planck Institute for Informatics, and Stanford University. It works by filming someone making facial expressions, and then dynamically mapping those expressions onto a moving face in a video, in real-time. The absolutely bizarre result is the ability to force anyone in any video to assume any facial expression you wish, including mouthing out different words.

Useful in movies for now, but think about the likely direction of the technology. The research I noted above on ‘visemes’ could straightforwardly lead to a database of mouth movements needed for all human sounds. A computer could then artificially map facial expressions onto a moving video – that is, onto your face, as you speak into your phone or webcam.

Of course, you’d still have to speak through a device. That’s a bit awkward. But that leads me to the last piece of this increasingly futuristic (but, as I’m trying to convince you, not that futuristic!) puzzle: Augmented Reality.

Watch this 2016 TED talk demonstrating a current AR headset, and think about how that could combine with live audio translation and Face2Face. Just think it through. You could meet someone speaking an unfamiliar language; the headset could translate their voice while also augmenting their moving face; and you would hear and see them speaking your language.


Factory worker uses Google Glass on the assembly line (Source: NPR)

You’d look a bit weird with that thing strapped to your head; but there are already more compact AR headsets, like Google Glass – now in everyday use by many factory workers to flash up details of products in front of them. Apple is making plenty of noises about moving into AR. So is Facebook. Wearing clunky glasses is still pretty awkward, and uncool – probably why they never caught on outside factories – but in 2016 a patent was filed for a tiny AR implant that fits inside your eyeball. Who might have filed that patent? Surprise surprise: Google.

Then there are recent advances in data storage and miniaturised processing power, for example a technique to write data to single atoms (Natterer et al. 2017), and the newly created ‘LI-RAM’ microchips promising supercomputer-like power inside tiny devices. In-ear technology is already available, of course. So what if that unwieldy headset was instead little more than a glint in your eye and a bud in your ear, Black Mirror style? Not so awkward anymore. And if it featured reliable live translation and Face2Face, suddenly Babel disintegrates completely in a puff of pixelated smoke.

This is the ‘singularity’ I mentioned at the start, the merging of wobbly human parts with synthetic improvements. This is predicted in the next few decades, and is currently the subject of active venture capital. The market research firm Global Market Insights predicts a $165bn market in AR by 2024.

My point is that, once you imagine all these pieces of nascent technology floating around and rapidly improving, their journey into a single new gadget doesn’t seem remotely unlikely. And you know what rampant neoliberal capitalism really likes? New gadgets!

So, that’s part 1, the gadgetry. I give it ten years before live, unnoticeable, automated translation between anyone anywhere is utterly trivial. Ok, let’s be really cautious: twenty years. Now on to part 2: what does all this mean for how we currently think about language rights?

We move now from the field of technology, into the academic sphere of sociolinguistics and political philosophy. In broad terms, the field of language rights has three overarching aims that relate to speakers of lesser-used minority languages:

  • To pursue basic freedoms by preventing discrimination on the basis of the language you speak.
  • Beyond basic freedoms, to create a world in which speakers of minority languages aren’t alienated from normal life. This means ensuring accessibility of services in different languages. That can also mean training people to speak minority languages, to aid communication.
  • Promoting languages as important and valuable goods in themselves, emblems of cultural diversity, with a value that transcends their material benefit to particular groups.

These three broad goals are referred to by François Grin, respectively, as “negative rights”, “positive rights”, and a “third pillar … [which] cannot be understood strictly in terms of rights” (2003:84). If these are the current aims of language rights, let’s relate each one to the technological leaps outlined above.

In this future scenario, negative rights are essentially no longer relevant. Speak whatever language you like! Outright language bans tend to be based on chauvinistic nationalism and the jealous wish to hear nothing but the One True Language all around (and that One language has a funny tendency to be different in each country). Even the most ardent linguistic nationalist could simply set their translator to filter everything into beige monolingual monotony. Best of all, they could do it without bothering anyone.

Positive rights would be trivially easy to achieve, but only if all languages are included in the translation database. This then would be a new area of debate for language rights: ensuring inclusion of minority languages. That actually leads on pretty smoothly from current debates about inclusion of minority languages in education and civic life. Remember though: machine learning has the potential to make that process a lot cheaper and quicker, so it could be less burdensome than current debates over manual translation.

If all that were made a reality, then minority language speakers need never feel isolated or excluded again. Just like the majority language speakers I mentioned under negative rights, so too could minority language speakers translate everyone into their language.

The same applies to speakers of non-standard vernaculars. I pointed out earlier that machine learning works on natural input, not standard language. Your translation device could translate into any language variety you like. In fact, since your device would understand your own language patterns and your own voice best of all, it could even make everyone sound exactly like you.

As I noted above, positive rights currently also involve training other people in minority languages, so that minority language speakers can interact seamlessly with different organisations. That would change in this future scenario, if minority language speakers could hear, see and speak their language all around them at the flick of a (virtual) switch. There would be no need for anyone to be trained in minority languages, at least not to lift barriers faced by their speakers.

And this of course cuts both ways. Universal translation removes the need to bother learning ‘majority’ languages or standard varieties. It’s immaterial, if the technology enables us to understand one another regardless of the actual noises coming out of our faces.

So what about the “third pillar”? Celebrating languages as goods in themselves, regardless of whether that necessarily delivers material benefits. Actually, this one more or less gets a free pass. Even if we can all understand each other, if languages still have some other transcending value, then that’s not affected by the sorts of material barriers or benefits that concern negative and positive rights. Included within this is learning a language for personal or emotional rewards – for example this touching recent article about a First Nations Canadian learning her heritage language despite its deathbed status, or this account of a similar effort in Singapore. That rationale could continue unchanged.

The third pillar is not just something that affects individuals; it is also a mainstay of many governmental policies to revitalise minority languages as important bearers of culture and heritage, above and beyond material benefits they might bring. That motivation may endure, just without the need for anyone to learn the language who didn’t really want to.

But what about achieving literacy in the first place? It doesn’t help being part cyborg and translating all the text that surrounds you if you can’t understand writing at all. Achieving literacy is easiest in your first language, so surely there remains a need for provision in minority languages, at least in terms of teaching? Well not necessarily; not if each child could simply see learning materials augmented to appear in their language, while their teacher’s voice could be auto-translated too, even to sound like one of their parents. There may come a day when there is no need to translate textbooks or train teachers in minority languages; it could all be virtually delegated.

And as I noted above, that means you can learn to read in any dialect. That in turn means the imperative to approximate any kind of standard dialect begins to fade from view. That has massive implications for other areas of language rights in relation to language standardisation and ‘correct’ language.

Technologically enhanced character in “Black Mirror: The Entire History of You”

Peering further into the future (though perhaps not very much further), there lies the possibility of your little translation gadget no longer relying on you wobbling your gooey speech organs at all. It could just read your thoughts directly. Again, this is not science fiction but a predictable advance of existing technology. It is already possible to read basic yes/no responses with electrodes mounted on a head cap (Chaudhary et al. 2017). One neurologist went further and surgically implanted electrodes on his brain, then recorded which neurons fired up as he spoke certain sounds, words and phrases. He had to remove the electrodes after a few weeks for safety reasons; and ethical approval has not yet been granted for wider testing. Nevertheless, his preliminary results suggest clear potential. Vaunted tech maestro Elon Musk has caught the scent, and launched a company, Neuralink, dedicated to the brain-machine merger. Hot on Musk’s heels, Facebook has announced similar plans. The potential end point of this tech is word-free communication, where written and spoken language are seen as mere quaint extravagances.

But wait a minute. This talk of ubiquitous live translation is all very nice, but not everyone gets the latest gadgets for Christmas. Rampant neoliberal capitalism loves new gadgets, but it also seems to love stark and growing socioeconomic inequalities. It also loves those gadgets to improve so fast that, even when disadvantaged folks get hold of them, there’s already a better model for those higher up the global elite food chain.

Still, if this is true of gadgets then it’s also true of public funding for literacy programmes and provision of services in minority languages. So today’s arguments about funding for minority language literacy programmes could be tomorrow’s arguments about equitable rollout of translation gadgets.

That might seem beyond the largesse of governments, but think about the internet. Only two decades ago it was a rather exclusive luxury; but today it’s the subject of huge government subsidy, philanthropic investment in poorer countries, and even a UN resolution.

Certainly, there would be stark inequalities in access to AR translation technologies; but it seems unlikely that the response would be simply not to support greater access to them, and just to continue supporting old-fashioned literacy programmes. How would that look, while the global elite lorded their ability to understand all humanity over everyone else? One kind of inequality (in literacy and availability of services) would be replaced by another (in live translation), but both would be addressed by very similar politics.

So, equal access to live translation could be just another new area for the field of language rights.

Reprising the three goals of language rights outlined above, negative and positive rights could in time be subsumed and transformed into debate over access to translation technologies. Meanwhile the “third pillar” could remain largely intact, though constrained to language learning as a meaningful and rewarding leisure pursuit, not urging others to learn minority languages for reasons of accessibility.

Overall then, the future may be very different; but then in other ways, the more things change the more they may stay the same. The gradual global push to equalise global internet access is bearing fruit. If the same happened with live translation facilities, the inequalities I just outlined could be overcome in time.

Given the pace of technological improvements, and the steady spread of access to other technologies, how long will it really be before we all simply and instantly comprehend each other? There will no doubt be decades of teething troubles, and people would still find plenty of ways to misunderstand each other and start fights, just as speakers of the same language do now. But there is reason for optimism in a future of worldwide mutual understanding, with freedom to speak how you like, and nobody coercing anyone else to speak a certain way. If any of those positive outcomes are possible, then I for one welcome our robot overlords.

References

Chaudhary, U., B. Xia, S. Silvoni, L.G. Cohen & N. Birbaumer. 2017. Brain-Computer Interface-Based Communication in the Completely Locked-In State. PLoS Biology 15(1). PMID: 28141803

Grin, F. 2003. Language Policy Evaluation and the European Charter for Regional or Minority Languages. New York: Palgrave Macmillan.

Jones, Carwyn. 2015. Letter to David Melding AM ‘Committee for the Scrutiny of the First Minister: Meeting on 13 March 2015’. www.senedd.assembly.wales/documents/s44696/CSFM402-15ptn2.pdf

Kirch, Nicole & Na Zhu. 2016. A discourse on the effectiveness of digital filters at removing noise from audio. Journal of the Acoustical Society of America 139, 2225. https://dx.doi.org/10.1121/1.4950680

Natterer, Fabian D., Kai Yang, William Paul, Philip Willke, Taeyoung Choi, Thomas Greber, Andreas J. Heinrich & Christopher P. Lutz. 2017. Reading and writing single-atom magnets. Nature 543: 226–228. https://doi.org/10.1038/nature21371

Sayers, D. 2016. Exploring the enigma of Welsh language policy (or, How to pursue impact on a shoestring). In R. Lawson & D. Sayers (eds.), Sociolinguistic Research: Application and Impact. London: Routledge. 195–214. http://www.routledge.com/books/details/9780415748520/

Getting past the ‘indigenous’ vs. ‘immigrant’ language debate

“The English” migrated to their “ancestral homeland” in the first few centuries of the Common Era (Source: Wikipedia)

“Indigenous languages” and “immigrant languages” are much discussed in language policy research, but surprisingly little time is spent actually defining those terms. In general, “indigenous” tends to encompass two features: a long heritage in a place; and some form of contemporary disadvantage, usually associated with prior colonisation/invasion. But those criteria are seldom explicated.

An example comes from Nancy Hornberger (1998). She compares the languages of “indigenous groups” and “immigrants”, and efforts to protect these languages – focusing principally on education. But no space is given to defining “indigenous groups”, or indeed “immigrants”. And these blurry defining criteria mean that the two are not clearly distinguished. From here some wrinkles open up, and people can get trapped inside them (more on that later).

Now compare popular articulations of indigeneity. The English (to pick a completely random example) like to see themselves as immemorially Anglo-Saxon (see Reynolds 1985), but try telling that to the sixth-century Britons being shoved westward by waves of Angles, Saxons, Jutes and Franks (who were themselves later shoved around by the Vikings, and so on). The Anglo-Saxons were once invaders, but at some point in the popular consciousness became indigenous. At which meeting was that agreed?

As I noted above, “indigenous” is not just historically significant. It relates to present-day disadvantage (by no means limited to language). This is perhaps why “indigenous” is less frequently used in European countries, whose homegrown ethnolinguistic minorities might be marginalised but not as acutely as the indigenous people of the always delightfully euphemistic “New World” – who drag behind them nasty histories of dispossession, and carry on top of them desperate social exclusion in the present (relative poverty, disproportionate incarceration, shorter life expectancy, etc.).

There are, then, deeply political resonances behind the mobilisation of a term like “indigenous”.

"Indigenous" European Minority Languages (Source: Barbier Traductions)

“Indigenous” European Minority Languages (Source: Barbier Traductions)

Now consider a piece of governmental language policy, the European Charter for Regional or Minority Languages. For “indigenous” it prefers “autochthonous”, and for “immigrant” it uses “allochthonous”. Autochthonous languages are defined vaguely as “traditionally used within a given territory of a State”, while the latter, the “languages of migrants”, are excluded from the Charter’s remit. Here we come closer to defining and distinguishing “indigenous” and “immigrant”, but not much closer.

Perhaps the clearest deconstruction of indigeneity is Anthea Fraser Gupta’s book chapter, ‘Privileging Indigeneity’ (2002). She pertinently asserts that “groups do not remain discrete, but merge, especially through marriage. Migration, language shift, and intermarriage are long established human practices. They have not stopped. It is dangerous to solidify this fluidity into policy.” This throws things into sharp relief: if “traditionally used” is a definition of indigeneity, then how long, in years, is “traditional”?

Consider Hindi in the UK. It’s a minority language with a centuries-long tradition, but happens to be associated with an ethnic group whose migration is ongoing, not ancient history.

Of course, Hindi is not a minority language everywhere, but what about, say, Potwari in the UK, ‘traditionally’ spoken in Pakistan but a minority language there and everywhere else too?

What’s that? Not traditionally spoken in the UK? Oh, sorry.

Gupta’s 2002 chapter has never been followed up substantially, or even cited more than a few times – mostly pretty superficial citations too (judge for yourself: https://scholar.google.co.uk/scholar?cites=14790778410856718429). One other useful contribution comes from Lionel Wee. In his book Language Without Rights, he argues that “the communicative needs of immigrants cannot be appropriately addressed by … the collective right of an ethnic minority group to a heritage language. … In this regard, the traditional notion of language rights will need to be recast as an individual’s communicative right to be heard and understood” (2010: 143). This is the beginning of a much needed fundamental debate in language policy research. (Sadly this point of Wee’s is something of a diamond in the rough; his book is otherwise not very good – see my rather scratchy review here.)

But this rabbit hole gets deeper. What about languages that are not only associated with migrants, but that don’t even have an intuitive ethnolinguistic heritage or a long history?

"Le nouchi ivoirien, une langue à défendre!" (Source: http://www.lebabi.net/actualite-abidjan/le-nouchi-ivoirien-une-langue-a-defendre-14233.html)

“Le nouchi ivoirien, une langue à défendre!” (Source: http://www.lebabi.net/actualite-abidjan/le-nouchi-ivoirien-une-langue-a-defendre-14233.html)

Take the creole Nouchi, in the Ivory Coast, arising in the 1980s through contact between French and various Ivorian languages. Nouchi is indigenously Ivorian but has no obvious ethnic pedigree. It arose because street traders, itinerant workers, and others in the Ivorian grey economy – who didn’t share a common language – needed to communicate. From a rich mix of diverse people striking deals, talking shop, agreeing, disagreeing, socialising, eating, dancing and falling in love, came about a more distinctive set of words, phrases, and grammatical features. This story of language genesis is as old as human speech itself. And in the worldwide context of overwhelming language death, Nouchi could be celebrated as a new indigenous minority language.

So is it celebrated? Not quite. Although a vibrant feature of Ivorian popular (sub-)culture, Nouchi is typically looked down on by mainstream media and other guardians of all that is right and good in the world, as broken French and/or a subversive subaltern code. That even includes minority language sympathisers. In a book-length discussion of Ivorian minority languages, Ettien Koffi (2012) mentions Nouchi only once (p. 207) and then only as a kind of curiosity. (See my somewhat irritable review of Koffi’s book here.)

The same fate has befallen Tsotsitaal in South Africa, another recently born creole “including elements of Zulu and Afrikaans … from the working class outskirts and townships of Johannesburg … used by (would-be) gangsters and rebellious township youth. … [L]anguages like Tsotstitaal are not legitimated … and their speakers are marginalized” (Stroud & Heugh 2004: 202).

Dynamic urban vernaculars also have a tendency to change and transform much more quickly than older languages. That is of course part of the appeal for their speakers, but another reason for indifference among those who prefer languages to sit still.


No maps exist for emergent “indigenous” languages (Source: Sueddeutsche)

This kind of sneering at emergent contact-based vernaculars is common elsewhere, for example Rinkeby Swedish (Milani & Jonsson 2012), Kiezdeutsch (Wiese 2015), and Multicultural London English (Kerswill 2013, 2014) – even though, like “indigenous languages”, these are also used by minorities, spoken nowhere else on earth, and associated with poor, marginalised ethnic groups. Because they lack an identifiable ethnic lineage, and because they arose in the grubby dirt of modern cosmopolitanism – not the sacred dust of bygone ages – they paw at the lowest rung of the linguistic hierarchy.

This is perhaps the biggest problem for poorly defined terms like “indigenous” and “immigrant”: people get caught in the wrinkles between them. Speakers of emergent vernaculars are so disdained they don’t even get a term of their own.

So the meaning of “indigenous” in language policy is complex, seldom explicitly defined, and even more rarely problematised. But whatever its meaning, it clearly isn’t just “us what was here first”. That in turn raises the more important question for “immigrants”: if the Anglo-Saxons ultimately became indigenous, then how long will others take to qualify? How many centuries do you have to be around? Why not decide, in years, how long it takes to be counted as indigenous, traditional, autochthonous, etc.? I hope it’s clear that I’m sketching a rather large red herring. The answer is neither possible nor desirable.

Perhaps a better solution would be to balance consideration of indigeneity with other factors, not least socioeconomic disadvantage. “Indigeneity” as currently discussed is still important: historically unjust land grabs followed by centuries of being disgracefully screwed over – continuing into the present – still need redress. But combining this with a broader focus on material wellbeing could yield greater parity with speakers of “immigrant languages”, and even of emergent vernaculars.

“[A] frequent critique of language endangerment discourse is that it displaces concerns with speakers on to a concern with languages” (Heller & Duchêne 2007: 4–5). In the wider social sciences, debate crackles and sparks over whether the “cultural turn” has over-interpreted inequality as culturally driven, stealing attention away from social class and other structural barriers (e.g. Crompton 2008: 43–44). That kind of debate in language policy is well overdue. Since her 1998 article (cited earlier), Nancy Hornberger and others have managed to dislodge the constrained focus on education in promoting minority languages. Surely the next advance should be to get beyond “indigenous”/“immigrant” as the prime categorisation, even to get beyond languages as such (an unsettling thought for a linguist), and to consider more fully the lives of the people who speak them.

Related posts: The diversity of the Other, Inventing languages.

References

Crompton, Rosemary. 2008. Class and stratification. Bristol: Polity Press.

Gupta, Anthea Fraser. 2002. Privileging indigeneity. In John M. Kirk & Dónall P. Ó Baoill (eds.), Language Planning and Education: Linguistic Issues in Northern Ireland, the Republic of Ireland, and Scotland. Belfast: Cló Ollscoil na Banríona. 290-299. [Pre-print available: http://anthea.id.au/papers/belfast.pdf.]

Heller, Monica & Alexandre Duchêne. 2007. Discourses of endangerment: sociolinguistics, globalization and social order. In A. Duchêne & M. Heller (eds.), Discourses of endangerment: Ideology and Interest in the Defence of Languages. London: Continuum. 1–13.

Hornberger, Nancy. 1998. Language policy, language education, language rights: Indigenous, immigrant, and international perspectives. Language in Society 27(4): 439–458.

Kerswill, Paul. 2013. Identity, ethnicity and place: the construction of youth language in London. In P. Auer et al. (eds.), Space in Language and Linguistics. Berlin: De Gruyter. 128–164.

Kerswill, Paul. 2014. The objectification of ‘Jafaican’: the discoursal embedding of Multicultural London English in the British media. In Jannis Androutsopoulos (ed.), Mediatization and Sociolinguistic Change. Berlin: De Gruyter. 428–455.

Koffi, Ettien. 2012. Paradigm Shift in Language Planning and Policy: Game-theoretic Solutions. Berlin: De Gruyter Mouton.

Milani, Tommaso M. & Rickard Jonsson. 2012. Who’s Afraid of Rinkeby Swedish? Stylization, Complicity, Resistance. Journal of Linguistic Anthropology 22(1): 44–63.

Stroud, Christopher & Kathleen Heugh. 2004. Lingusitic human rights and linguistic citizenship. In Jane Freeland & Donna Patrick (eds.), Language Rights and Language Survival: Sociolinguistic and Sociocultural Perspectives. Manchester: St. Jerome. 191–218.

Wiese, Heike. 2015. “This migrants’ babble is not a German dialect!”: The interaction of standard language ideology and ‘us’/‘them’ dichotomies in the public discourse on a multiethnolect. Language in Society 44(3): 341–368. https://doi.org/10.1017/S0047404515000226

Are Finns saying no to Swedish?

Bilingual Swedish-Finnish monument in Helsinki commemorating globally beloved children’s author Tove Jansson, a Swedish-speaking Finn (Source: vanderkrogt.net)

50,000 people have signed a petition against mandatory Swedish classes in Finnish schools, triggering a parliamentary debate on the issue.

To assess the likely outcome of this, it’s instructive to consider some details of the sociolinguistic context (both historical and contemporary). Currently, Swedish first-language speakers make up approximately 6% of Finland’s population of five-and-a-half million, whereas the figure for Finnish sits at around 90%. These figures are almost exactly reversed in the Åland Islands (a small autonomous Finnish region located between Sweden and Finland), where Swedish is the only official language.

By Finnish national law, Swedish instruction begins, at the latest, in the three years of lower secondary school, with a minimum of 228 hours of instruction over those years. Provision in upper secondary schools varies greatly, and can be as low as 16 hours in total. As a result of this variation in demography and education, levels of proficiency acquired in Swedish are very mixed. There is also a good deal of resistance from pupils who become uninterested in Swedish, most notably in areas where Swedish use is low.

Now consider the historical context. From the Middle Ages until the 19th century, Finland was ruled and governed as a part of Sweden. During this period, especially the later stages, Swedish was the language of the ruling class. In 1809, Finland was conquered by Russia, but still retained Swedish as the language of administration, justice, and higher education.

During the late 19th and early 20th century, Finnish gained ground in social and official domains due to growing nationalist sentiment. The first language law providing equal status for Finnish and Swedish was approved in 1902. Finland gained independence in 1917; its constitution of 1919 declared co-official status for Finnish and Swedish (partly in order to see off Russian), and the Language Act of 1922 put that status into practice. In Finnish society today, Swedish is generally spoken more in the coastal southern, south-western and western regions, as well as in larger cities due to migration.

The petition reflects heated civic debate with passionate arguments on both sides. Ultimately though, it seems likely that the Finnish Parliament will not actually grant the wishes of the petitioners. There are several reasons…

First and most obvious is the co-official status for Finnish and Swedish, enshrined at the highest level in the national constitution. Mandatory Swedish education was not explicitly specified in the constitution, but subsequent laws have formalised that requirement. Whether a constitutional amendment were deemed necessary or merely the repeal of individual laws, decisive consensus would be needed from Finnish MPs – in a relatively diverse multi-party system ill-suited to radical change.

Second, mandatory Swedish in education began with a compromise in the 1970s involving reciprocal mandatory Finnish in Swedish-speaking municipalities – and so any change could affect both languages, which may be unappealing to Finns and seen as a risk to national unity.

Third, Finland is a signatory of the Declaration of Nordic Language Policy which aims to strengthen the teaching of Scandinavian languages. Finnish is not a Scandinavian language, and although Finland is a Nordic country (along with Iceland, Denmark, Norway and Sweden), it is not consistently seen as part of Scandinavia (which tends to refer to just Denmark, Norway and Sweden) and so this could be seen as weakening Nordic ties – one may also speculate about Finnish consequently losing favour in Sweden’s schools, where it is taught in many border and coastal areas.

So, radical change may seem unlikely. Nevertheless, having said all this, it is worth pausing for a moment to assess the weight of opinion behind this petition. The Finnish Parliament’s established threshold of 50,000 signatures might seem modest, but that is almost 1% of the Finnish population – the equivalent of requiring around 600,000 signatures in the UK, or around 3 million in the USA. For further perspective on this weight of opinion, the most signed petition on the UK’s official petition site currently has 266,327 signatures – around half the level of support for this Finnish petition as a proportion of the population. So this is no fringe movement. Meanwhile, the Association of Finnish Culture and Identity runs periodic surveys showing broad support for removing the mandatory provision of Swedish. Then there’s the conspicuous rise of the nationalist ‘True Finns’ party (a bulwark of the anti-compulsory Swedish campaign), who now hold about a fifth of Parliamentary seats.
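For anyone who wants to check the arithmetic behind those comparisons, it works out roughly as follows; the population figures are loose, circa-2013 estimates of my own, used purely for a back-of-the-envelope calculation.

finland, uk, usa = 5_400_000, 64_000_000, 316_000_000   # rough 2013 populations (my assumptions)
threshold = 50_000                                       # Finnish parliamentary petition threshold
share = threshold / finland                              # roughly 0.009, i.e. just under 1%
print(f"{share:.2%} of Finland's population")
print(f"UK equivalent: about {share * uk:,.0f} signatures")    # ~600,000
print(f"US equivalent: about {share * usa:,.0f} signatures")   # ~2.9 million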

The lively critiques of mandatory Swedish range from utilitarian arguments about the global usefulness of Swedish, all the way through to conspiratorial grumblings about powerful shadowy Swedish-speaking élites skewing Finnish corporate hiring practices. This latter aspect is troubling not least because it is so reminiscent of the sorts of malevolent conspiracies peddled elsewhere throughout history, about minorities seen as secretly pulling invisible strings.

In the end, the petition, the right-wing electoral upsurge, and the heating up of this old debate, could just be a historically familiar insular reaction to economic woes. It could just be a cloud that lifts with economic recovery. Nevertheless, that recovery is not expected imminently: real-terms declines in earnings are projected for years to come in Finland. So, at the very least this debate will lumber on for some time. Add to this the growth of migration to Finland – in particular Russian-speakers, projected to outweigh Swedish-speakers by 2050 – and the debate becomes even more complex and diffuse.

Whichever route Finland eventually chooses, it is unlikely to resolve the debate definitively. Finns are a judicious and cautious people. The trajectory of the debate can be summed up by an old Finnish proverb, which roughly translates as ‘better to go a mile in the wrong direction than take a dangerous shortcut’.
