LESSON PLAN

Level B2

Navigating Authenticity in the Age of AI and Deepfakes

Lesson overview

In an era of advanced AI, the quest for authenticity is challenged by deepfakes and digital impersonation. A personal experiment revealed how difficult it is to distinguish a real voice from an AI-generated one.

READING TEXT

Title: The Search for Realness in the Age of AI: A Personal Experiment

In today's world, artificial intelligence (AI) is becoming more advanced, and this raises important questions about what it means to be 'real.' Recently, I had a conversation with my aunt Eleanor that made me think deeply about this issue. I decided to conduct a little experiment to see if someone who knows me very well could tell the difference between my real voice and a voice created by AI. I was curious to find out how well AI could mimic human speech.

I called Eleanor and casually mentioned that I was writing an article and needed her help. She had no idea that I was going to test her ability to recognize my voice. I told her, 'You might be talking to me or to an AI version of me.' At first, she was doubtful. 'It sounds like you,' she said, but added, 'I think a real person has more emotion in their voice than an AI would.' I appreciated her thoughts but couldn't help but think about how advanced AI technology has become.

As our conversation continued, I could feel her uncertainty growing. 'I was about 90% sure it was you,' she finally said, 'but something felt a bit strange.' This moment of doubt made me think about a larger issue in society: the rise of deepfakes and the potential for AI to trick us. We often hear about the dangers of AI impersonations being used for scams, misinformation, and even political manipulation. But what happens if someone accuses you of being a deepfake? How can you prove that you are real?

This question was recently faced by Israeli Prime Minister Benjamin Netanyahu. He became the center of a conspiracy theory when a video appeared to show him with a glitchy sixth finger, which is often seen as a sign of AI-generated images. The internet exploded with wild rumors, suggesting that he had died in a missile strike and that the Israeli government was hiding the truth. To counter these rumors, Netanyahu posted a video from a coffee shop, showing his hands to prove he had the usual number of fingers. However, many people still believed he was dead, and his attempts to prove he was alive were met with skepticism.

This situation made me wonder: if a world leader struggles to prove his reality, what chance do the rest of us have? I reached out to experts in AI and digital forensics to gain insights into this confusing issue. They all agreed that Netanyahu's videos were real. Jeremy Carrasco, co-founder of Riddance, an independent publication focused on AI-generated media, stated clearly, 'They are all real.' He explained that the supposed sixth finger was just a reflection of light on Netanyahu's hand, a common optical illusion that can happen in videos.

Further analysis showed that current AI technology has difficulty replicating the continuity of sound and visuals in videos. Hany Farid, a digital forensics professor at the University of California, Berkeley, examined the videos and confirmed their authenticity. 'There is no evidence that this is AI-generated,' he stated. Yet, despite this expert validation, many people continued to doubt, highlighting a worrying trend: in a world where AI can create convincing fakes, real evidence can be dismissed as fake.

Reflecting on my own experience, I remembered a recent incident when I shared a link in my family group chat about a Google privacy setting. My excitement was met with immediate suspicion from my mother. 'How do I know this is really you and not a scammer?' she asked, making me think quickly. I eventually mentioned a childhood nickname, which eased her suspicion, but it struck me how hard it is to establish trust in our digital communications, especially with people who know us less well.

This brings us back to the larger implications of AI and deepfakes. As technology improves, the line between reality and fabrication becomes less clear. A phenomenon known as the 'liar's dividend' emerges: because convincing fakes exist, those in power can dismiss genuine evidence as fake. Politicians can wave away real footage by claiming it is a deepfake, fostering an environment of distrust in which even authentic proof loses its force.

So, what can we do to navigate this confusing landscape? Experts suggest that establishing codewords or secret phrases within families and close friends can help protect against impersonation. This is similar to a digital form of multi-factor authentication, ensuring that when we talk about sensitive topics, we have a way to verify each other's identities. 'My wife and I have a codeword we use for unusual calls,' Farid shared, emphasizing the importance of this simple but effective measure.

As I continued my conversation with Eleanor, I learned that she had already set up a codeword system for her family, without me knowing. She shared stories about how voices can be cloned from social media videos, expressing her worries about the authenticity of our interactions. I laughed at some jokes she read to me, hoping to convince her of my humanity, but even that wasn't enough to completely ease her doubts.

In the end, I had to confess to her that it was really me on the line, not an AI. However, as we hung up, I could sense her lingering uncertainty. 'I can’t be sure,' she said, and that feeling resonated with me. In a world filled with digital impersonations, the search for authenticity feels more challenging than ever.

As we navigate this new reality, it is essential to stay alert and proactive in our communications. The rise of AI brings both challenges and opportunities, but it also requires us to work together to build trust and authenticity in our interactions. So, the next time you find yourself questioning the reality of a conversation, remember: it’s not just about proving you’re real; it’s about creating a culture of trust in an increasingly complex digital world.

DISCUSSION PROMPTS

  • 1. What are some ways we can verify the authenticity of information we receive online?
  • 2. How do you feel about the use of AI in creating content? Is it mostly positive or negative?
  • 3. Can you think of a time when you doubted the authenticity of something online? What happened?
  • 4. What measures do you think individuals should take to protect themselves from deepfakes?
  • 5. How can families and friends establish trust in their digital communications?

Key vocabulary

Match each numbered word with the correct lettered definition.

Words

  • 1. authenticity
  • 2. deepfake
  • 3. impersonation
  • 4. skepticism
  • 5. conspiracy
  • 6. optical illusion
  • 7. forensics
  • 8. mimicry
  • 9. safeguard
  • 10. trust

Definitions

  • a. a secret plan by a group to do something unlawful or harmful
  • b. the action of imitating someone or something
  • c. a visual phenomenon that tricks the eye
  • d. firm belief in the reliability or truth of someone or something
  • e. the act of pretending to be someone else
  • f. a synthetic media in which a person’s likeness is replaced with someone else’s likeness
  • g. a measure taken to protect someone or something
  • h. the quality of being real or genuine
  • i. an attitude of doubting the truth of something
  • j. the application of scientific methods to solve crimes

MULTIPLE CHOICE QUESTIONS

Question 1

What was the purpose of the author's experiment?

  • A) To test AI technology
  • B) To see if Eleanor could recognize his voice
  • C) To write an article
  • D) To prove AI is dangerous

Question 2

What did Eleanor initially think about the voice she heard?

  • A) It was definitely AI
  • B) It sounded real
  • C) It was a deepfake
  • D) It was a recording

Question 3

What did Netanyahu do to prove he was real?

  • A) He showed his ID
  • B) He posted a video from a coffee shop
  • C) He called the media
  • D) He wrote a statement

Question 4

What is the 'liar's dividend'?

  • A) A type of investment
  • B) A way to prove authenticity
  • C) Doubt cast on genuine content
  • D) A method of communication

Question 5

What did the author suggest to improve trust in digital communications?

  • A) Use codewords
  • B) Avoid using technology
  • C) Share more personal information
  • D) Trust everyone online

TRUE / FALSE QUESTIONS

Question 1

Eleanor was completely convinced that the voice was real.

  • True
  • False

Question 2

The article discusses the challenges of proving authenticity in the digital age.

  • True
  • False

Question 3

Netanyahu's video was proven to be AI-generated.

  • True
  • False

Question 4

The author believes that AI technology is harmless.

  • True
  • False

Question 5

The author and Eleanor have a codeword system in place.

  • True
  • False

SHORT ANSWER QUESTIONS

Question 1

What is the main concern regarding AI and deepfakes?

Question 2

How did the author feel about Eleanor's skepticism?

Question 3

What did experts suggest to protect against impersonation?

Question 4

What was the reaction of the internet to Netanyahu's video?

Question 5

What does the author mean by 'building a culture of trust'?

GRAMMAR EXERCISES

Selected Grammar Point: Reported Speech

Brief Explanation: Reported speech is used to convey what someone else has said without quoting them directly. It often involves changes in pronouns, verb tenses, and adverbials of time and place. For example, "She said, 'I am happy'" becomes "She said that she was happy."

Exercise Questions:

1. Change the following direct speech into reported speech:
   "I think AI is becoming more advanced," she said.
   ___________________________________________________________

2. Transform this sentence into reported speech:
   "You might be talking to me or to an AI version of me," I told her.
   ___________________________________________________________

3. Correct the errors in the following reported speech:
   He said that he will go to the store yesterday.
   ___________________________________________________________

4. Rewrite the following sentence in reported speech:
   "It sounds like you," she said.
   ___________________________________________________________

5. Change the direct speech into reported speech:
   "I appreciate your thoughts," I said to her.
   ___________________________________________________________

Answer Key:

1. She said that she thought AI was becoming more advanced.
   (Explanation: The verb tense changes from present to past.)

2. I told her that she might be talking to me or to an AI version of me.
   (Explanation: The modal verb "might" remains the same in reported speech.)

3. He said that he would go to the store the day before.
   (Explanation: "Will" changes to "would" and "yesterday" changes to "the day before.")

4. She said that it sounded like me.
   (Explanation: The verb tense changes from present to past.)

5. I told her that I appreciated her thoughts.
   (Explanation: The verb tense changes from present to past.)

Answer key

KEY VOCABULARY

  • 1. authenticity → h
  • 2. deepfake → f
  • 3. impersonation → e
  • 4. skepticism → i
  • 5. conspiracy → a
  • 6. optical illusion → c
  • 7. forensics → j
  • 8. mimicry → b
  • 9. safeguard → g
  • 10. trust → d

MULTIPLE CHOICE

  • 1. B) To see if Eleanor could recognize his voice
  • 2. B) It sounded real
  • 3. B) He posted a video from a coffee shop
  • 4. C) Doubt cast on genuine content
  • 5. A) Use codewords

TRUE / FALSE

  • 1. False
  • 2. True
  • 3. False
  • 4. False
  • 5. False

SHORT ANSWER

  • 1. The main concern is that AI can create convincing fakes, leading to distrust and difficulty in proving authenticity.
  • 2. The author appreciated Eleanor's skepticism but also reflected on the broader societal issues it raised.
  • 3. Experts suggested using codewords or secret phrases within families and close friends.
  • 4. The internet reacted with wild speculation and conspiracy theories about his authenticity.
  • 5. Building a culture of trust means creating an environment where people can rely on the authenticity of communications.
