Every week there is some new AI thing introduced. The latest is ChatGPT, which people are worried students will use to write essays. Oh, let’s face it, they probably will – but those essays will be easy to spot because they will be too well written, if the software is any good, that is. Why? Because the reality is that most people in high school or post-secondary suck at writing. I’m not saying that to be mean; it’s a reality. They have few if any life experiences, and very little experience with writing, so of course their work will not exactly win a Pulitzer Prize. Yes, there will be people with exceptional writing, but those people love to write, and would never use an AI to cheat at it. People who take the humanities seriously won’t use AI either.
But people who use these shortcuts are just cheating – and they are mostly cheating themselves. Cheating themselves out of actually learning something, instead letting yet another piece of technology do it for them. There are too many people looking for shortcuts because they can’t handle the work put in front of them. It’s no different in computer science – people think they can use shortcuts to get things done, but eventually realize they have little or no ability to solve problems, and the best they can do programming-wise is HTML (and that’s not even programming).
The saviour might be other software that can detect the use of AI. I mean, it’s not super hard. Writing generated by AI will conform to a particular style. There won’t be any form of individualism in the junk these things pump out. After four years in a history program, a person will develop a writing style, something unique to them. That’s the point of writing – to develop the ability to make your ideas resonate – to inform, persuade, explain or entertain. AI can’t write based on the intrinsic experience of humans. AI works by formulating tasks as prediction problems, then uses statistical techniques and a profusion of data to make those predictions. Good for small bits of writing, not really that great for long-form, coherent, interesting text.
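To illustrate the point about prediction, here is a minimal sketch of statistical next-word prediction – a toy bigram model, not anything ChatGPT actually uses (that is a far larger neural network), but the core idea is the same: learn from data which word tends to follow which, then predict.

```python
# Toy illustration of prediction-based text generation: a bigram model.
# It learns, from a tiny corpus, which word follows which, then generates
# text by repeatedly predicting a plausible next word. Real systems use
# huge neural networks, but the principle (predict the next token) holds.
import random
from collections import defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8):
    """Generate text by repeatedly picking a statistically likely next word."""
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the snow fell on the quiet town and the snow kept falling"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Fed only one sentence, it can already stitch together snippets that sound vaguely like the source – which is exactly the problem: it recombines what it has seen, with no experience behind the words.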
The thing with AI is that it is not sentient. It has no clue about the context of the essays it is producing. AI does its job by using billions of pieces of data, most of it human-generated. So it writes essays based on a plethora of digital data from many sources… but its undoing is the fact that not all the world’s information is digital. There is a lot of information, in many differing languages, from many time periods, that has not been digitized. It also can’t include personal experiences, because it doesn’t have any – it’s only an algorithm.
Algorithms still can’t craft a narrative the way a person can, and maybe they never will, which in my book will be a good thing for humans. Because if we can’t document our own experiences, can’t express ourselves in words, we lose one of the characteristics of being human. If you rely on a machine to do your writing, then you are losing a means of communication. While technology is said to promise society so much, we must be mindful of the cost to humanity. Technology was meant to serve people, yet we increasingly find ourselves subservient to it, and many people don’t even realize it. People have become lazy, their attention fading quickly, their lives reduced to a 6-inch screen and a streaming service. Is it even possible for many to think outside the pale, glowing box?
2 thoughts on “As AI gets smarter, humans get more stupid”
Whilst I don’t disagree with your premise that reliance on technology can make people lazy, with consequently poorer writing, am I wrong in conflating AI with machine learning? “Algorithms still can’t craft a narrative the way a person can, and maybe they never will”: if AI systems are coded to automatically learn in use, would ‘their’ writing style not improve, over time? Cheers, Jon.
Machine learning and AI are strongly connected; in fact, ML is really an application of AI. ML is the machine using data to learn. Now, while writing style may improve over time, and machines may indeed “learn” to have their own style, it won’t be the same as a human writing, as machines don’t have consciousness, i.e. they can’t feel, have emotions, or think outside the box. The writing will end up like something out of academia. Ever read journal articles in a science field? They all read roughly the same (and are pretty boring), primarily because a lot of academics (STEM in particular) can’t really write anything exciting and tend to follow a formulaic approach. So ML will do well in these areas (and some experiments have shown as much). Writing something interesting to read? Unlikely anytime soon. For example, I could look out the window and intimately describe the snowy scene I see outside today. A machine first has to realize the scene is snowy, and identify the scene, which relies on human-input data (thousands or millions of tagged images), then has to try and describe it. Now, it may be able to describe it using data it has on how other writers have described snowy scenes, but it won’t be able to describe the scene I’m looking at with any clarity.
And if I turn off the power it doesn’t work at all.
Thanks for reading!