

No, it’s not ‘awesome!’ when a bot corrects my thoughts and my spelling

It's helpful that Gmail fixes errors, but is its jaunty little feature taking over our thinking?

Olivia Rudgard


Ever responded to a work e-mail with “Sounds great!”, or simply “Sure!” or “Yes!”? Me neither. But that doesn’t stop Gmail suggesting I do so, almost every day.
Google’s e-mail service, which is used by more than a billion people worldwide, began adding these jaunty suggestions to the end of our e-mails in 2017, and its cheerful robot has now expanded its repertoire to finishing our sentences in the form of Smart Compose, launched in 2018. Now, if you put someone’s e-mail address, with their name, in the address field it will suggest you start your message with their name. If you begin a sentence such as “Looking forward to meeting you later this ... ”, it may suggest “week”, or “month”, to save you the trouble of writing it.
Recently our happy e-mail assistant has gained a new skill – subject lines, arguably the worst part of any e-mail. How do you explain what your e-mail is about and get the person to open it, all without being irritating or intrusive? So when it started suggesting them to me I was relieved, especially as the first one, for an e-mail asking a tech company spokesperson to comment on a story, was sensible and sober: “Request for comment: [the subject of my story]”.
It’s not perfect though. On another, very carefully worded e-mail to a firm asking them to respond for a story, the software suggested: “A bit of advice.” Not quite what I was going for.
The system is, in many ways, just the latest technology eager to help us express ourselves since spellcheck – first used on mainframe computers in the late 1970s. Since then auto-correct has emerged on smartphones, fixing spelling mistakes and sometimes changing the words we use to what it thinks makes sense, whether or not it’s what we meant to type. It can be useful, but it’s also frustrating.
Speakers of British English, invariably with American English installed as default on their computers, are constantly entreated by wiggly red lines to “fix” perfectly correct spellings of words such as theatre, defence, organise and neighbour. As a result, we are anxious that US tech firms are pushing a standardised version of English across the Atlantic, to stamp out our older, less phonetic – sometimes less logical – spellings.
Perhaps we can be heartened by the idea, from UK-based American linguist Lynne Murphy, that the British have responded with “orthographic patriotism”: they are more aware than ever of the difference between their lexicon and that of their American cousins, and keener than ever to preserve it.
The language we use is part of our identity, and it makes us feel like we belong somewhere. But can it also change the way we think?
At Facebook’s F8 conference in California last week, Instagram revealed a “nudge” feature. It stops short of outright censorship, but will tell users when they write something which others might not like – in the firm’s presentation, a user writing “you suck” is told by the software that others might find this offensive.
This is all in a very good cause. The app is used overwhelmingly by young people, and teen users have endured horrific bullying there, including being inundated by nasty comments or mocked on accounts created specifically for the purpose of humiliation. By admonishing customers and trying to change their language, Instagram is seeking to make them better, more polite people. At least online.
The jury is still out, academically, on how much our language affects the way we think. Linguistic determinism – the idea that we are entirely constrained by the structure of our mother tongue – is generally thought to be too strict. But a weaker version is still debated: whether the words we use affect our thoughts in some way. Similar ideas are used by proponents of political correctness. If we stop using words that are demeaning to women or to ethnic minorities or disabled people, the argument goes, we might improve our society (as well as gaining the short-term benefit of making those groups feel more comfortable).
I wrote this piece in Gmail, so at least part of it – the end of a sentence, a few extra words – was suggested by a robot. It’s helpful, yet I worry that all of our writing is being subtly moulded to become less distinctive and more predictable, more in line with what the algorithms tend to see.
If we let computers be our voices, do we also risk letting them constrain the way we think? For one thing, the insincere enthusiasm of “Awesome!”, “Great!” and “Cool!” could become standard; a vision horrifying to anyone unused to Silicon Valley communication, or who just prefers more low-key words.

Google’s automatic e-mail bot is designed to be helpful – in its blog post announcing the subject line feature, the company said Smart Compose had already saved people from having to type a billion characters every week. The suggestions, a spokesperson told me, are intended to be a “rare and delightful feature”.

But it isn’t very good at letting people down. Probably not the robot’s fault – while my positive e-mails tend to be to the point, I rarely type just “No”, or “I don’t think so”, in reply to a request. Letting your boss down is a tricky social challenge for any human, let alone a Gmail robot with an unhealthy fondness for exclamation marks.

– © Telegraph Media Group Limited (2019)
