
Terms & Conditions

Artificial intelligence. It’s everywhere, and its rise seems unstoppable. While news outlets find new reasons to make you fear artificial intelligence, there are plenty of practical ways, especially for university students, to use it to enhance our lives. Nonetheless, even practicality comes with an unwritten rule book, and ethics come into play. Just because you can doesn’t mean you should. So, where do we draw the line between helpful and hurtful?

How the Hell Did We Get Here?

Before we can begin to mention the abuses, it’s important to note what AI is. We hear the term all the time, and because of that, it’s likely nobody has ever bothered to ask you, “what is AI?” since we all assume we have a basic understanding of it.

AI is more than robots and ChatGPT. According to Columbia Engineering, AI refers to the development of technologies that simulate human abilities and can even go beyond what humans can do.

As humans, we inherently believe that no species is better than us, especially not more intelligent. So, how is it that AI can do more than we can, even though we created it? Machine learning. ML is considered a subcategory of AI. It’s the part of a system that recognizes patterns in the data it’s given and uses them to improve the results it provides.

Think of your YouTube search history. If you are constantly searching for cooking videos, more cooking videos will appear on your home screen without you having to search for them. Any platform with an “algorithm” that caters to what its audience wants (TikTok, Instagram and the like) builds on advances in ML.
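As a rough illustration (not how YouTube’s actual system works), here is a minimal Python sketch of that idea: count which topics a hypothetical viewer watches most, then rank made-up candidate videos by those counts. The titles, topics and viewing history below are all invented for the example.

```python
from collections import Counter

# Toy watch history: each entry is the topic of a video the user watched.
watch_history = ["cooking", "cooking", "travel", "cooking", "music"]

# Candidate videos the platform could show on the home screen (title -> topic).
candidates = {
    "15-minute pasta recipes": "cooking",
    "Backpacking through Peru": "travel",
    "Lo-fi study mix": "music",
    "Knife skills for beginners": "cooking",
}

# "Learn" the user's preferences by counting how often each topic appears.
topic_counts = Counter(watch_history)

# Score each candidate by how much the user has engaged with its topic,
# then recommend the highest-scoring videos first.
ranked = sorted(candidates, key=lambda title: topic_counts[candidates[title]], reverse=True)

for title in ranked:
    print(title, "->", topic_counts[candidates[title]], "related views")
```

Real recommendation systems use far more sophisticated models, but the core principle is the same: past behavior becomes data, and the system’s output shifts to match it.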

In the summer of 1956, computer scientist John McCarthy organized a workshop at Dartmouth College to explore what Alan Turing had earlier called “machine intelligence.” Those who organized and attended the Dartmouth Summer Research Project on Artificial Intelligence, including McCarthy, Marvin Minsky and Arthur Samuel, among several others, are considered the founding fathers of AI. Their initial years of research, known as the “golden years” of theoretical AI, are the foundation for all AI and ML today.

 

Class in Session

Over the past year, the University of Miami has taken the potential misuse of AI systems into consideration. While clauses in syllabi regarding plagiarism clearly state that it’s prohibited and will result in school disciplinary action, there is no single, explicit policy for using AI in the classroom.

UM computer science professor Ubbo Visser, who has researched AI and robotics for over 25 years, believes that the decision to impose repercussions should be left to the discretion of the professor or department that creates the class. Visser does not believe in bringing higher school authority into the mix for the classes he teaches.

“You are all adults, and it is up to you to learn something or not. You hand in your assignments generated by something, and maybe you get enough points to survive the class. And then what? Did you learn anything? No,” Visser said.

Lewis Walker, a senior majoring in motion pictures, was previously prohibited from using AI in his film classes. However, this semester he is taking a Photoshop class that allows him to use it.

“ChatGPT is one of the craziest search engines and tools I’ve ever seen. You can seriously use it for almost any part of your life. I think when Google fails in one area, then using AI can help,” Walker said.

 

To Be or Not to Be Afraid

Disclaimer: This section of the article mentions sexual misconduct and abuse. Do not continue reading if you are uncomfortable with this topic.

Because AI technologies are gradually becoming a new normal, it’s become more challenging to establish boundaries. Deepfakes, a combination of “deep learning” and “fake,” have attracted worldwide attention for their harmful uses and the damage they cause. They are images or videos manipulated with AI to depict something that never happened.

Chidera Okolie, a Nigerian writer and cybersecurity analyst, published an essay on deepfake use in 2023 for Bridgewater State University titled “Artificial Intelligence-Altered Videos (Deepfakes), Image-Based Sexual Abuse, and Data Privacy Concerns,” which focuses on how the technology is used for sexual abuse, especially in the pornography industry.

Okolie’s essay mentions several victims of deepfake pornography, including Northern Irish politician Cara Hunter, who was running for the Northern Ireland Assembly in 2022 when a pornographic video of what appeared to be her engaging in oral sex began to circulate on the internet.

“I was at a family party, it was my grandmother’s 90th birthday. I was surrounded by family and my phone was just going ding, ding, ding. I remember my cheeks flashing red and thinking: ‘who is this person? did I have sex with this person?’ Two days after the video started doing the rounds, a man stopped me in the street when I was walking by myself and asked for oral sex,” Hunter said.

Are there laws in place to prevent further situations like Hunter’s? Yes, but they have not entirely resolved the issue.

The US Malicious Deep Fake Prohibition Act of 2018 has been criticized for its overbroad definition of deepfakes as “any audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.”

So, why have the laws yet to be altered?

Okolie wrote it’s because “a law is only an effective regulator where there is a named perpetrator. In this instance, a law can only protect victims when the creators of the pornographic deepfakes can be easily found. And with the current technological tools that help to generate anonymity in the face of crime, it will be rather difficult to bring the abusers within the ambit of the law.”

 

Where Do We Go from Here?

Let’s face the inevitable: AI is not going anywhere, so in some cases it could be worth experimenting with.

Duncan MacLellan, a junior economics major, recently took a practical AI course, where he learned more about the future of AI.

“AI could be as intelligent or more intelligent than a human and a major support in research, innovation and learning. In this state, it could be dangerous if not controlled, but also it could significantly improve the quality of our lives,” MacLellan said.

Business technology professor Nina Huang also sees an optimistic future with AI.

“The positive aspects of AI and ML can be multifold — these technologies can augment human capabilities, improve productivity, and possibly enrich humans’ life experiences. For everyday users, AI tools offer entertainment and help with interesting discoveries,” Huang said.

However, Huang acknowledges it’s not all sunshine and rainbows.

“I believe awareness, understanding, and education are important to prevent technological misuse. We need to know what specifically we are dealing with before coming to conclusions or seeking solutions,” Huang said.

Thanks to government action, it’s hard to ignore the harm AI has caused. Nonetheless, you hopefully know what’s inherently right and wrong without a law having to remind you.

 

Playing by the Rules

For those of you who need an actual rule book, guess what? Someone wrote one. UNI (Union Network International) Global Union, which represents over 20 million workers in the skills and services sectors across more than 150 countries, published “Top 10 Principles for Ethical Artificial Intelligence” in October 2023. UNI wrote in the introduction to its principles, “with an urgency of now, UNI calls on all companies and governments to engage with the union movement, to co-create a just transition to a future of decent work. From the design of new technologies, AI and algorithms to the impact on the end-user, ethical and social considerations must be made that put people and planet first.” So, before you dive headfirst into the world of AI, make sure you know the facts and proceed with caution.

 

words_amanda mohamad. illustration_rachel farinas. design_sal puma & uyanga erdenebayar.

This article was published in Distraction’s Spring 2024 print issue.

 
