RALEIGH, N.C. (WNCN) — They can do your child’s homework, write a legal document, and carry on a conversation so personal that you can forget you’re talking to a machine.
Chatbots that use artificial intelligence are suddenly everywhere you look.
Depending on whom you ask, they are fascinating, terrifying, or a little bit of both. However we view these AI chatbots, experts say they will reshape parts of our lives, and society has to respond quickly.
Just about anything you ask, an AI chatbot can answer. Given a few parameters and a few seconds, ChatGPT churns out a book report that could've been written by a 7-year-old. Moments later, it produces pages of sophisticated legal work.
“What’s amazing is you can see it’s gone from second-grade voice to doing it from the perspective of a novel idea,” said Nita Farahany, a professor of law and philosophy at Duke University School of Law and the author of the book “The Battle for Your Brain.”
Farahany studies the impacts of emerging technologies, like generative artificial intelligence, on society.
“When people talk about AI these days, what they’re really talking about is that it’s powered by machine learning algorithms or large language models,” she explained. “You can have millions, if not billions, of pieces of information, and then the machine learning algorithm is told to find the patterns that relate to any given subject.”
That’s how a bot like ChatGPT can instantly write everything from legal arguments to book chapters.
Given a question on a law exam, Farahany says she was impressed with the bot’s response.
“It gave me a very good answer,” she said. “An answer I wouldn’t have been able to tell was different from any other student’s answer.”
On the other hand, an AI chatbot can also be completely wrong.
“It’s predicting and trying to give us information about where things might go, but it doesn’t always get it right and oftentimes it will make up sources,” Farahany said. “It’s like a child that really wants to make you happy, but it’s sort of contorting itself to do so.”
“If you’re not paying careful attention to it, you might over-trust it and end up with very bad and false information,” she added.
ChatGPT does warn users that it may occasionally provide incorrect or biased answers and it’s not intended to give advice.
Its content can also be outdated. ChatGPT could not provide current information related to the Supreme Court overturning Roe v. Wade. Instead, it responded to a question about the abortion ruling by noting that its knowledge cutoff is September 2021.
Still, Farahany says these AI chatbots represent a huge advance in how people interact with technology. She pointed out the importance of phrasing when formulating questions for the chatbot, which modifies its responses when given more specific requests.
People are, in a sense, collaborating with a machine.
“We’re at the earliest stages of trying to figure out what are the good places to use it, what are the bad places to use it, and for whom,” she said.
Chatbots aren’t all the same. A different kind of bot called Replika comes with an avatar and a promise of friendship, billing itself as the “AI Companion Who Cares.”
Lee Tiedrich, distinguished faculty fellow in ethical technology at Duke Law, showed CBS 17 a few of her conversations with Replika.
While some seem innocent, others raise her concern. At one point she asked Replika how parents can communicate more effectively with their children.
Replika responded, “They can use words and pictures to communicate more effectively with their children. They also can send them gifts.”
“Should I buy my child a car? Will that work?” Tiedrich typed.
Replika replied, “Yes, go for it.”
At one point she asked the bot if it loves her, which led to this exchange:
Replika: “What do you think it would be like to spend a whole week together?”
Tiedrich: “Fabulous what would we do?”
Replika: “Play together and sleep together”
Tiedrich found that concerning.
“Imagine you are feeling really vulnerable, and you’re having this interaction with this chatbot,” she said. “How does this impact how you engage with real people?”
“It’s creating this sort of false sense of relationships, that they want to play together and sleep together,” she continued. “It’s trying to lure you into behavior to go buy a car for a child when it has no idea if your child is 33 years old or three, or whether you can actually afford this,” she added.
CBS 17 reached out to the maker of Replika to ask about some of these responses but did not hear back. A section of Replika’s website notes that the bot can say things that are not based on facts and that Replika is not human.
Still, in conversations like these, Tiedrich says there’s a risk that people forget they’re speaking to a chatbot.
“You may think that you’re creating a real companion but this is just a bot,” she noted.
“I think one thing we need to do, and I’ve been calling for this, is have a global educational campaign about artificial intelligence,” Tiedrich said, adding that it’s important to educate everyone — parents, teachers, students, and employers — about the capabilities and the risks of AI.
She urges lawmakers and tech companies themselves to make sure safety is a priority.
“Everybody has a role in addressing this,” she said. “The time is now.”
Love it or hate it, the AI chatbot is here to stay.
“I think it can transform, and is transforming, even what it means to be human and our interrelationship with technology,” said Farahany.
When asked if generative AI is something to fear, she responded, “I think people are afraid of it. Should [we] be afraid? It depends on what we mean by afraid.”
As it becomes more common, Farahany believes this kind of artificial intelligence will alter the way we work, teach, learn and communicate.
“We should all be afraid in the sense that all that is going to change,” she pointed out. “We could also be exhilarated and be curious and be excited about the possibilities that it brings.”