A running joke in 21st-century media is the fabled technology apocalypse, or robot uprising. When the artificial intelligence assistants Siri and Alexa came out, people were astonished by their ability to fulfill our requests. But while they handle simple tasks well, those robotic assistants were only ever effective on a small scale. Recently, new AIs capable of performing complex tasks have been making news — from producing realistic paintings based on existing art styles to writing an essay on whatever topic you need. While that sounds great in theory, these seemingly innocent innovations might bring more problems than solutions.
Whether you are a tech enthusiast or just someone with an internet connection, chances are you’ve seen some new AI program wowing everyone with its abilities. While innovative, these AI programs aren’t unionizing … yet. But the bigger problem at hand is the near-endless range of what these programs can create. Where does it end?
OpenAI, the company behind ChatGPT, said on their website that in order to create the program, “we trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides — the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses.”
In other words, the creators taught the AI how to take the information from its internet-sourced databases and convey it in a way that reads like another human wrote it.
Lensa, a photo-editing app that launched in 2018, rapidly gained popularity when it unveiled its “magic avatar” feature in late November. The feature uses a few pictures of the user’s face to produce a variety of AI-drawn works of art that resemble the user.
Prisma Labs, Inc., the creators of Lensa, said in a Q&A that “Each copy is trained individually, meaning it learns specific features of a user’s appearance, it interacts with at a given time. The training process takes on average 10 minutes and requires an enormous amount of computational power — the machine is making approx. 120 million billion [sic] mathematical operations to analyze a single set of photos.”
While these two AIs are undoubtedly amazing innovations, they’ve also posed issues because they create creator-less content.
It’s a student’s dream for a super-intelligent robot to type up their English paper in a few seconds while they rewatch “Grey’s Anatomy” for the fifth time. Now, with AIs like ChatGPT, that is completely possible. Now that AI can write an essay, how can educators adapt?
“I have already modified my course to have in-person quizzes to avoid the issue altogether,” said Professor Elizabeth Cantalamessa, a philosophy professor at the University of Miami. As both a teacher and a philosopher who focuses on the use of language, Cantalamessa has been observing these AIs and their rise in popularity.
Thankfully, she hasn’t had a student turn in AI-generated work, but she did point to a predicament faced by a professor at Furman University in Greenville, S.C.
Darren Hick, a philosophy professor at Furman, was reading a student’s submission for a homework assignment and noticed that something felt slightly off. He later took to Facebook, where he explained the situation by saying, “It was perfectly readable — even compelling. [But] to someone familiar with the material, it raised any number of flags.”
The student fessed up when asked about their submission, and no disciplinary action was taken since it’s technically not an act of plagiarism.
“Using a chatbot to complete an essay assignment reveals that the student merely wants to satisfy criteria, rather than develop or hone their own skills and express their own thoughts,” Cantalamessa explains. “I understand, it’s not easy to do either of these things, and it’s costly to have your own thoughts scrutinized by someone else — so it makes sense that we’d seek out a low-cost, quick ‘tool’ for doing work that we either do not value for its own sake or do not feel as if we can complete it with our own skills.”
In Hick’s case, this holds true. His assignment was a 500-word paragraph about horror — relatively simple, and something that could probably be finished in 30 minutes. But why take the time to knock out that quick task when you could type a prompt into a website and have it do the work for you in just 30 seconds?
Art of The Real
As previously mentioned, Lensa, ChatGPT’s more artistic cousin, is one of the recent image-based AIs which uses your face as the input. While it seems harmless — who wouldn’t want impressive portraits for $7 — this AI has rattled the art community.
Arianna Nicolás, a UM alumna and character artist for the School of Communications, highlighted some of the issues these AIs are making artists face.
“The way that [these AIs] work is that somebody programs them, and they have to learn how to do the art,” said Nicolás. “And they learn how to do the art by taking it from other artists. They ‘learn’ in the way someone else would learn from other artists.”
While Prisma Labs states that Lensa generates art based on your face, the model still needs other works of art to learn from. Like other AIs, it draws on the internet’s vast databases to “teach” itself how to “draw.”
While the text bots are causing problems for academic integrity, the image-based AIs come with a different problem: artistic integrity.
“AI art could be really cool, but you need to focus on the people who are already doing art because your prompt could’ve been a commission,” Nicolás says. “Their art is their source of income and an AI learning from their art can hurt their income and shift the focus onto the robot and not them.”
Depending on the medium and the complexity of a design, commissioning an artist can cost anywhere from around $30 to several hundred dollars. Now there are apps and websites where you can generate almost anything for free, with even the more complex requests costing little.
“The definition of art is something that evokes emotion in you,” said Nicolás. “If art is being made by something that doesn’t have any emotions, is it even art?”
Cantalamessa has a similar outlook: “With the introduction of photography and digital art, [we reconsidered our] assumptions about originality and creativity, as well as [our] role in artistic value.”
Recently, Microsoft invested a few billion dollars into ChatGPT, so it seems these AIs are here to stay. Does this mean the robot revolution is finally upon us?
Definitely not. While there’s no debating that these inventions are very smart, they’re not perfect. In reference to his student’s botched assignment, Hick said: “[ChatGPT] did say some true things about [the topic], and it knew what the paradox of horror was, but it was just bullsh*tting after that.”
Nicolás brought up another AI imperfection.
“You can look at [the art] and see they aren’t perfect because they always mess up the hands and the feet,” said Nicolás.
On his website, Professor Geoff Sutcliffe in the Computer Science Department at UM, stated that while “right now, [these AIs] seem like a pretty astounding tool,” so did other technology when it was initially invented.
“10 years from now, intelligent search and chat engines will be normal,” said Sutcliffe.
In the cycle of technological innovation, it’s common to greet new inventions with shock and confusion. I’m sure if you showed TikTok to a Victorian child, they would have a heart attack. While these AIs can be a little scary, welcome their use with an open mind while also being responsible with them. To quote Uncle Ben’s advice to Peter Parker: with great power comes great responsibility.
Do you trust this computer? Because philosophers sure don’t. These programs are causing people to rethink and reevaluate how we view sentience. While Siri and Alexa are classified as AI, the general population seems to be far more trusting of these applications than of other programs. So why exactly are these new AIs sending people into a philosophical spiral?
“I have a view of thought that is essentially social,” Cantalamessa said. “It’s a matter of how others take and treat you, and so it isn’t something that could be specified ‘from the armchair’ or in principle. Instead, ‘sentience’ is revealed through practical doings.”
Which is to say, even though these AIs can “think” and “learn,” they aren’t real beings because they don’t have a form that allows them to exist in the physical world. In other words, they are code in a server somewhere that was given the ability to process information by humans, not humans themselves.
On the flip side, some think this lack of a body will allow AI to flourish in a way the human mind cannot, perhaps granting it unlimited intellectual potential.
“Others, such as Paul Humphreys, have suggested that computational methods in science could develop to a point where computers are engaging in ‘autonomous science’ that humans can neither fully comprehend or control,” Cantalamessa said.
How does one assess artificial intelligence, exactly? The Turing Test usually comes up in this moral quandary, though many see it as unreliable. The test simply records whether a human can tell they are interacting with an AI pretending to be a human being — if the human can’t see through the AI’s facade, it passes.
The problem with this evaluation is that it doesn’t check whether the AI is thinking like a human being, only whether it can sound like one. For all we know, the AI could be programmed to respond to every question with “human-sounding” answers rather than actually understanding speech and how to converse.
Professor Sutcliffe uses himself as an example of identifying intelligence.
“Walking from my office to the Smoothie King shop on campus — at each step I have lots of possible movements, forwards, backwards, sideways, no movement, etc.,” said Sutcliffe. “Thus, the problem is exponentially hard, unbounded, and yet I always manage to get Smoothie King — I have heuristics that guide me in a good direction at each step.”
A heuristic is the shortcut thought process you use for everyday decisions: you know not to leave the fork on the plate when you put it in the microwave because knowledge from others or past experiences tells you it’s a bad idea.
With this definition, intelligence isn’t just the ability to think, but the ability to process the thoughts you have and choose those that are logical.
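Sutcliffe’s Smoothie King walk can be sketched in code. The toy greedy search below is my own illustration, not anything from the article or from Sutcliffe: at each step a heuristic (here, straight-line distance to the goal) scores every possible move and keeps the best one, so the walk reaches its goal without exploring the exponential space of all possible routes.

```python
def distance(a, b):
    """Straight-line (Euclidean) distance between two grid points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def greedy_walk(start, goal, max_steps=100):
    """At every position there are many possible moves (forward, backward,
    sideways); the heuristic picks whichever lands closest to the goal."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    pos = start
    path = [pos]
    for _ in range(max_steps):
        if pos == goal:
            break
        # Score each candidate step with the heuristic; keep the best.
        pos = min(((pos[0] + dx, pos[1] + dy) for dx, dy in moves),
                  key=lambda p: distance(p, goal))
        path.append(pos)
    return path

# Hypothetical coordinates: office at (0, 0), Smoothie King at (3, 2).
route = greedy_walk((0, 0), (3, 2))
```

Greedy steps work here only because the grid is open; with obstacles in the way, a fuller search such as A* would be needed, but the principle is the same one Sutcliffe describes: a heuristic guiding you in a good direction at each step.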
Sure, these AIs can think and make choices, but they still lack that natural core processing. Until an AI develops that ability on its own or bypasses the limitations of its code, it cannot follow the path to consciousness.
words_ sal puma. design_ isa márquez
This article was published in Distraction’s Spring 2023 print issue.