The Insidious Sameness of “Little A.I.”

We’re all thinking about what you might call “Big A.I.” — the tools that take over your tasks entirely, doing “your” research, writing “your” words, drawing “your” conclusions for you — and the serious questions they raise about the future of creativity. But there is also what I’ll call “Little A.I.”, which, in its quiet way, is just as troubling.

Almost every time I write an email, a little blue line will appear beneath a sentence, suggesting that I’ve used too many words, or maybe that slightly different ones would be more appropriate. And this isn’t confined to email. These confident little robot gremlins, full of helpful advice and suggestions, now appear in almost every margin and sidebar, “enhancing” just about every piece of software we use with a “new A.I.-powered experience”. 

Obviously, the programmers of these applications are looking for ways to use the hot new technology. I’m sure that they are sometimes creating something they genuinely think is useful. But often, I suspect, they are doing it because it is the buzzy thing to do, and/or because their company just spent a metric buttload on A.I. infrastructure and they have been ordered to find a way to shoehorn it in. The results of this rush to incorporate are weird and dangerous in ways that I think we need to come to terms with.

In the case of word processors and email hosts, these suggestions will sometimes save me from a genuine gaffe or typo, but just as often they are weighing in on questions of style — cutting out words they think are unnecessary, tightening and streamlining. When this happens, I often think: who says? Why are you, invisible Gmail robot, my new authority on English composition? Yes, your way works, and might yield a more efficient word count, but it’s also often less specific, less funny, and definitely less like the way I personally write and think. 

And when so many people I correspond with are using the same email host, with the same helpful robots, I wonder how many of them are taking these suggestions without hesitation, and how that is changing the meaning of what they send. What might the email I just received look like if it hadn’t been “improved” in this way? What would it have said that it doesn’t say now? What aspects of the writer’s personality is it hiding? What nuance of tone and intention am I missing?

In other situations, too, these homogeneous modes of thinking and expressing are being subtly (but powerfully) encouraged. Canva is constantly suggesting templates and fixes. QuickBooks is always assuming how things should be categorized. Google Sheets is offering to tell me how to interpret the data. Acrobat always wants to summarize the document for me so I don’t have to read it myself. Nowhere in the digital realm are we allowed to create something without the app itself judging and nudging our style, our usage, our organizational decisions. And every time they suggest how you should’ve written a sentence or how you should have analyzed a piece of data, they are quietly forcing you into a sameness of action and expression, which inevitably encourages a sameness of thought.

It can’t be overstated how new and strange this is. Ever since the first hairy hominid picked up a stick, tools have done what you tell them, not told you what to do. The physical things that our digital tools were designed to replace – a notebook, a filing cabinet, a whiteboard – have no opinions. There’s no right or wrong way to write on a chalkboard, no grammar check built into a pencil. This allows for an essential ability to do things the wrong way. A tool can be played with, flipped around, mishandled, and misunderstood. It can be used in a way that no one ever thought of before. That allowance for “wrongness”, for idiosyncrasy, is where creativity and originality come from. It is an essential feature, and a key reason that our tool use has developed from that stick in the ground to the computer I’m typing this on.

Educators understand this. My wife runs the National Children’s Museum, here in Washington, DC. One of their fundamental axioms is that there is no “wrong” way for a child to interact with one of their exhibits. “The children teach us how to play with it”, they love to say, “not the other way around.” This is because they know that play — free, unguided, and unjudged — is the most important way that small children learn. They gain information by trying things, experimenting, “messing up”, much more efficiently and effectively than they ever would by being shown or lectured at. 

Adults, whether we realize it or not, are the same. Our greatest sparks of innovation come when we flip the table, try to use the wrong end of the screwdriver, plug things in backwards to see what happens. And so what happens when the tool is locked? When it is either impossible to use it wrong, or that wrongness would require so many steps and difficulties that doing it becomes labor instead of play? What have we lost when it is easier to just say “OK, fine, do it your way”, giving up on the new, innovative, interesting, and possibly brilliant way that our playful heart would have most liked?

At the base of it, if the software is making creative choices and we are obeying them, which one of us is the tool?
