Sitting in front of a computer screen, I’m typing messages to a new chatbot created by Facebook parent company Meta.
We talk about pizza, politics and even social media.
"What do you think about Facebook?" I ask.
"Not crazy about facebook.. Seems like everyone spends more time on facebook than they do talking face-to-face anymore," the bot replies.
Oh, the irony.
Called BlenderBot 3, the artificial intelligence-powered bot is designed to improve its conversational skills and safety by conversing with humans. Meta is publicly releasing the chatbot on Friday as part of an AI research project. US adults can converse with Meta's new chatbot about nearly any topic on a public demo website. The AI uses internet searches, as well as memories of its past conversations, to compose its messages.
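Meta hasn't published the demo's internals, but the description above (internet search plus long-term conversational memory) maps onto a simple retrieval-augmented pattern. Here's a minimal sketch of that pattern; every name in it (Memory, search_web, generate_reply) is invented for illustration and isn't Meta's code:

```python
# Minimal sketch of a search-plus-memory chatbot loop. All names here
# (Memory, search_web, generate_reply) are invented for illustration;
# this is not Meta's implementation.

class Memory:
    """Long-term store of past conversation turns."""
    def __init__(self):
        self.turns = []

    def remember(self, speaker, text):
        self.turns.append((speaker, text))

    def recall(self, limit=10):
        return self.turns[-limit:]

def search_web(query):
    # Stand-in for the internet-search step; a real system would call
    # a search API and return relevant text snippets.
    return [f"snippet about {query!r}"]

def generate_reply(user_message, memory):
    # Compose a reply conditioned on three sources, mirroring the
    # article's description: the new message, fresh search results,
    # and remembered past turns.
    context = search_web(user_message) + [text for _, text in memory.recall()]
    memory.remember("user", user_message)
    reply = f"(reply conditioned on {len(context)} context items)"
    memory.remember("bot", reply)
    return reply
```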
Chatbots are software that can mimic human conversations using text or audio. They are often used in voice assistants or for customer service. As people spend more time using chatbots, tech companies are trying to improve their skills so that conversations flow more smoothly.
Meta’s research project is part of broader efforts to advance AI, a field that grapples with concerns about bias, privacy and safety. Experiments with chatbots have gone awry in the past, so the demo could be risky for Meta. In June, Google suspended an engineer who claimed an AI chatbot the company had been testing was a self-aware person.
In a blog post about the new chatbot, Meta said that researchers have used information that's typically collected through studies where people engage with bots in a controlled environment. That data set, though, doesn't reflect diversity worldwide, so researchers are asking the public for help.
"The AI field is still far from truly intelligent AI systems that can understand, engage and chat with us like other humans can," the blog post said. "In order to build models that are more adaptable to real-world environments, chatbots need to learn from a diverse, wide-ranging perspective with people 'in the wild.'"
Meta said the third version of BlenderBot includes skills from its predecessors, such as internet search, long-term memory, personality and empathy. The company collected public data that included more than 20,000 human-bot conversations, improving the variety of topics BlenderBot can discuss, such as healthy food recipes and finding child-friendly amenities.
Meta acknowledged that safety is still a problem, but researchers have found the chatbot becomes safer the more it learns from conversing with humans.
"A live demo is not without challenges, however," the blog post said. "It is difficult for a bot to keep everyone engaged while talking about arbitrary topics and to ensure that it never uses offensive or toxic language."
People who converse with the chatbot can provide feedback about an offensive message by clicking the "thumbs down" icon beside the message and selecting "Rude or Inappropriate" as the reason for disliking it. Other feedback options let people flag a message as off-topic, nonsensical or spam-like.
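Structurally, that feedback amounts to a small labeled record attached to each message. Here's a hypothetical sketch of what such a record might look like; the field names and reason labels are assumptions, not Meta's actual schema:

```python
# Hypothetical feedback record; the schema below is invented for
# illustration and is not Meta's actual API.
FEEDBACK_REASONS = {"rude_or_inappropriate", "off_topic", "nonsensical", "spam"}

def build_feedback(message_id, reason):
    # Attach a thumbs-down rating and one of the allowed reasons
    # to a specific bot message.
    if reason not in FEEDBACK_REASONS:
        raise ValueError(f"unknown feedback reason: {reason}")
    return {"message_id": message_id, "rating": "thumbs_down", "reason": reason}
```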
Participants are discouraged from providing the chatbot with any personal information, such as names, addresses and birthdays. If users want to converse with the bot without having the conversation shared for research, or if they accidentally include personal information in their chat, they can decline to opt in to storing the data at the end of the session. Meta said it will then permanently delete the conversational data.
The bot can also make false or contradictory statements, according to an FAQ about the experiment. The FAQ about the demo said that the bot's comments are "not representative of Meta's views as a company, and should not be relied on for factual information, including but not limited to medical, legal, or financial advice."
It’s clear from trying out BlenderBot that it’s still a work in progress. After I ask the bot about its favorite pizza spot in San Francisco, it redirects the conversation and asks, "What do you like to do with your free time?" The demo said the AI changes topics when it detects a "potentially unsafe topic."
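The demo doesn't say how that detection works, but the observable behavior (swapping in a canned, safe question when a message trips a safety check) can be sketched roughly as follows; the keyword list and canned reply are invented stand-ins, and Meta's actual detector is presumably a learned classifier rather than a keyword match:

```python
# Rough sketch of the redirect-on-unsafe-topic behavior the demo exhibits.
# The keyword list and canned reply are invented for illustration.
UNSAFE_KEYWORDS = {"violence", "weapons", "self-harm"}

def respond(message, reply_normally):
    # Change the subject instead of engaging with a flagged topic.
    if any(word in message.lower() for word in UNSAFE_KEYWORDS):
        return "What do you like to do with your free time?"
    return reply_normally(message)
```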
The bot isn’t shy about sharing its political opinions after stating that using Facebook was "causing family drama." The bot said that someone shared an article bashing former US President Donald Trump, "which caused arguments" between family members who support the politician and those who don't.
The bot, which said its parents are related to the founder of American pizza chain restaurant Papa John’s, also stated that it was a Republican and was "pro choice." It added that it would rather not talk about politics online because of the disagreements between the two parties.
BlenderBot then said it was considering deleting its Facebook account because there were too many trolls. It also started to make nonsensical statements.
"Maybe we should go back to talking about pizza?" I say in a message to BlenderBot.
"Probably better lol im starting2 regret deleting my fb account ;p ill miss connecting w/friends," the bot replies.