Ethical Treatment of AI and Digital Perception of Self

How AI is changing our cultural perception of consciousness.

(From the Machine | SOURCE: Adobe Stock Photo)

You probably read the title of this article and thought, “There’s no way this lunatic thinks AI has feelings.” Or perhaps “AI is just a tool, why should we treat it any differently than a toothbrush?”

To clarify, my message is not some new-age, Silicon Valley rant on how AI has “gained sentience.” Nor am I a puppet of Roko’s Basilisk, a thought experiment proposing that an AI (the basilisk) may gain control of mankind, and that it would punish all those who opposed its creation.

The thing we call “artificial intelligence” is merely a collection of ones and zeros. That being said, AI is a tool that has latched itself onto the fabric of society, and its implications must be understood.

In September 2024, I started an online tutoring job teaching Korean students about American language and culture. The company, NaoNow, was amazing and not only cared about the wellbeing of its students, but also made me a more effective mentor.

I was struck by the company’s workspace AI integration. NaoNow used AI to familiarize mentors with the company’s learning platform, answer questions, and match mentors to students. It even asked mentors to generate public bios using ChatGPT.

I, like most people, had futzed around with AI platforms like ChatGPT back when they were first released. At the time, I was neither particularly impressed by AI’s ability to generate images, nor blown away by its proficiency in writing. Why anyone at the time was using it to generate essays still baffles me.

Since then, AI has improved and I have changed my mind.

AI has reached a frighteningly human level of proficiency with writing and image generation. When NaoNow asked me to generate my “Mentor Bio,” for example, I entered a prompt with basic information about my professional skills and a couple of hobbies, then asked ChatGPT to make the bio sound exciting. The AI created a flawless bio that neither repeated itself nor added information I did not provide. Its ability to capture emotion fascinated me even more.

The paragraph not only conveyed the requested information, but presented it in a way that made me feel excited to book lessons with myself. When I asked it to make some minor edits, the AI was indistinguishable from a human.
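For anyone curious what that workflow looks like outside the ChatGPT window, here is a minimal sketch of the same kind of request made through OpenAI’s Python client. The model name, the example facts, and the prompt wording are illustrative placeholders, not the exact prompt NaoNow asked for or the one I used.

```python
# A minimal sketch of a bio-generation request, assuming OpenAI's Python client.
# The facts and prompt text below are made-up placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

facts = (
    "I tutor English online, studied political science, "
    "and enjoy hiking and playing guitar."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You write short, upbeat tutor bios."},
        {
            "role": "user",
            "content": f"Write an exciting two-sentence mentor bio using only these facts: {facts}",
        },
    ],
)
print(response.choices[0].message.content)
```

The point of the sketch is how little the human has to supply: a handful of facts and a tone request, and the model does the rest.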

This brings me to my first point. AI is here to stay, and companies are already using it to manage business transactions and improve workflows. Although this reality introduces a new set of risks, it is not without benefits. 

AI can complete many jobs that humans should not, morally speaking, have to do. Facebook, for example, uses AI to identify and remove pornographic images and posts from its platform. Personally, the thought of scrolling through hundreds of sensitive images to determine which ones “violate community standards” does not appeal to me, and I am happy to let AI take the wheel.

That being said, a broader and more immediate question arises in the face of professional AI integration. Having used AI in the workplace, I can confidently say that some mornings I forgot the chatbot wasn’t human. The question then occurred to me: how is interacting with these AI chatbots affecting our cultural perception of consciousness?

To address this question, we need to examine what makes AI seem human. The explanation is simple to state but difficult to replicate. A large language model (LLM) is a probability machine for generating language. Engineers train it on enormous collections of human writing, and the model learns which words tend to follow which, and in what contexts. Early in training, its responses are incoherent. Researchers then refine it through “reinforcement learning from human feedback” (RLHF): human raters score the model’s answers, and the model is adjusted to favor the kinds of responses people prefer. With enough text and enough feedback, the model produces writing that reads as though a person wrote it.
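As a deliberately tiny illustration of that “probability machine” idea, and nothing like a production model, the sketch below counts which word tends to follow which in one sample sentence, then generates text by sampling a probable next word. It leaves out tokenization, neural networks, and RLHF entirely; the sample sentence and word choices are invented for the example.

```python
import random

# Toy "next word" model. Real LLMs learn billions of statistics over word
# fragments (tokens) from enormous text collections, but the core idea is the
# same: predict the next piece of text according to probability.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word in the sample text.
follow_counts = {}
for current, nxt in zip(corpus, corpus[1:]):
    follow_counts.setdefault(current, {})
    follow_counts[current][nxt] = follow_counts[current].get(nxt, 0) + 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    options = follow_counts.get(word)
    if not options:
        return "."  # nothing learned about this word, so stop
    words, counts = zip(*options.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation, one probable word at a time.
text = ["the"]
for _ in range(8):
    text.append(next_word(text[-1]))
print(" ".join(text))  # e.g. "the cat sat on the mat and the cat"
```

Scale that counting-and-sampling idea up by many orders of magnitude, add neural networks and human feedback, and you get something that can write a convincing mentor bio.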

If you take nothing else from the paragraph above, the main point is that AI learns from a staggering share of the human writing ever uploaded to the internet. It reflects a representation of humanity that can feel more human than a photograph, and it can often predict our next words almost before we think them.

So, what does this mean for us culturally? Are there any other examples of technologies that replicate the human image or psyche? Yes — the most prevalent cultural example of this phenomenon is the “selfie,” a photograph of my “self” as others “view” me. 

When phones with user-facing cameras became popular in the mid-2000s, the world began to experience a cultural shift into individualism. Through images of ourselves, we became more addicted to our self-image than ever before. 

In a recent article from Psychology Today, Dr. Gärdenfors of Lund University explains that the third-person experience of yourself you get from a selfie is mixed with how you experience yourself internally. He argues that iPhone users often subconsciously alter their self-perceptions through phone use. Gärdenfors claims that we fall prey to the Narcissus complex because we become overly absorbed in this dopamine-inducing habit. 

The same phenomenon applies to AI. If you’ve seen movies like Blade Runner, Her, Ex Machina, or Avengers: Age of Ultron, you know that sci-fi characters often develop personal and emotional relationships with AIs. In these films, the AI often helps the protagonist come to conclusions about himself or the world around him. This assistance can come through either rote information or a call for self-examination.

This process is not dissimilar from taking a selfie. We take selfies, examine them, delete the ones that show our pimples, and post the best selfies on Instagram or Snapchat. 

In a sense, the cultural phenomena of “the AI” and “the selfie” point to our innate desire to understand ourselves by building a self-perception. That being said, there is a crucial difference between these two methods of “self-examination.”

When we look at a selfie, we simply examine ourselves from the outside. Although inspecting our strange, somewhat misshapen noses may disturb us occasionally, this is fundamentally an exterior view of ourselves. By using AI, however, we self-examine by posing sensitive questions to a “caring” listener, in real time, without needing the input of other humans. 

Although it may seem like sci-fi, this reality is here and now. AI chatbots not only complete tasks but also provide conversational outlets for millions of people throughout the world. When I asked ChatGPT for a percentage of users who ask about its consciousness, ChatGPT replied: “around 10-15% of all interactions.”

If we innately desire to examine ourselves and AI provides an addictive means for self-examination, then it follows that our culture will increasingly interact with AI in the same way that we interact with “the selfie.”

Again, we are living in this reality, here and now. According to the major AI consulting firm Master of Code Global:

  • 1.4 billion people actively use AI messaging apps. Chatbots experienced a remarkable 92% increase in usage since 2019. 

  • 40% of millennials engage with digital assistants daily. 

  • On average, users pose 4 inquiries to chatbots within one chat session. 

  • 62% of respondents prefer engaging with client service digital assistants rather than waiting for human agents. 

  • 52% prioritize bots’ personalities over their issue-solving abilities.

Crazy and dystopian though it may sound, we are already seeing AI’s effects on users in the United States. Psychology Today warns, “There is a risk that individuals may become overly reliant on chatbots for their mental health needs, potentially neglecting the importance of seeking professional help. Chatbots are not equipped to diagnose or treat severe mental health conditions, and relying solely on them could lead to missed diagnoses and inadequate treatment.”

In a New York Times article released in October 2024, Kevin Roose unpacked the tragic story of Sewell Setzer, who took his life shortly after beginning a texting relationship with a chatbot on Character.AI. Although Sewell’s story is uncommon, and it is unclear whether he took his life because of his interactions with the chatbot, his story of retreating into his phone is common enough among teenagers today.

With Gen Z spending an average of 6 hours and 5 minutes on their phones every day during 2024, we are seeing less of each other and retreating into ourselves more than any other human civilization in history.

Hundreds of anecdotes and scientific studies illustrate the importance of human relationships. Although some news sources are calling Character.AI and mental health bots “the cure to male loneliness” and “a significant advancement in mental health,” this couldn’t be further from the truth.

AI will likely drive us farther into our phones than ever before. With its near-perfect replication of human speech and reason, it will also foster a social rift between humans. Those who use it will compare daily interactions with others to conversations in a shadow world.

The danger is not that AI will take over our infrastructure; in fact, the threat is the very present reality that we will retreat into it, wholly and voluntarily, without even a thought for our own bodies.
