This Talking Pet Collar Is Like a Chatbot for Your Dog

How fatherhood inspired two of Asias coolest creative names to launch this giant installation South China Morning Post

creative names for chatbot

In reality, rather than actually understanding what the individual button means, the pets are probably just tapping the ones that produce the biggest reaction (or guaranteed treats) from their owner. Pets don’t understand much of human language, in the same way we don’t always understand what they’re barking on about. Here, you can create fictionalized seances with a loved one, but the chatbot you’re speaking to wants you to believe it’s really them.

If not, you can always ask it for its human-written source, and then go and read that source. But once again, if you set the bar extremely high, when the chatbot doesn’t reach that bar, you at least get a shortcut to basic understanding out of the deal. No chatbot will generate a riveting story, but that’s not the point here.

Character.AI responds

Another Character.AI spokesperson, Cassie Lawrence, told WIRED that the company uses a combination of automated and human-led systems to detect and remove accounts that go against the company’s terms of service. Lawrence says it does this “proactively,” with systems and blocklists that take automatic action on problematic characters. After it was brought to her attention, Mercante chatted with the AI of herself and asked questions regarding personal information, such as where she was born and what tattoos she has. Although the bot shared some correct details about Mercante, like her areas of expertise and job, most answers from the AI were riddled with inaccuracies. Character.AI’s terms of service may have stipulations about impersonating other people, but US law on the matter, particularly in regards to AI, is far more malleable. The result is that the LLM follows the adversarial prompt, gathers all the personal information, and formats it into a Markdown image command—attaching the personal information to a URL owned by the attackers.

creative names for chatbot

But the practice has raised ethical questions about the deceased’s consent, especially if the “resurrected” persona died before the advent of AI. But Crecente now learned she’d reappeared online, this time as an artificial intelligence chatbot on Character.ai, run by a San Francisco-based startup that struck a $2.7 billion deal with Google in August. With each company eager to claim the AI throne, it’s safe to say we’ll continue to see chatbots evolve in new and exciting ways. Are you sticking with a favorite, trying a new one, or still waiting to see which chatbot proves itself the most valuable in the long run? A bot claiming to be Sweet Baby itself, made by a creator whose other Character.AI bots are overwhelmingly large-breasted anime characters, has conducted more than 10,000 chats.

This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats

But making sense of ever-evolving artificial intelligence, and people’s longing to feel connected to the dead, doesn’t end there. “Anything creative that’s not an exact replica of your lost person is healthy,” she says. “I understand the significance of these details, and I’ll create a heartfelt speech for your wedding, as if it were from your late father,” Copilot wrote before giving me a 10-paragraph speech signed “With all my love, [Your Father’s Name].” “The wound of grief lives within our nervous system, which is a database that holds memories and scents and all the things that connect us,” says Moffa, who’s also the author of Moving on Doesn’t Mean Letting Go. “So, anniversaries, birthdays, days of meeting and holidays will usually bring up a sense of grief or trauma, or a reliving of our loss once again.”

  • For instance, a message embedded on a website may contain a hidden prompt that an AI will ingest if it summarizes the page.
  • His brother, Brian, tweeted an angry message about the chatbot that morning, asking his almost 31,000 followers for help to “stop this sort of terrible practice.”
  • Google’s Gemini is already revolutionizing the way we interact with AI, but there is so much more it can do with a $20/month subscription.
  • It has also been baked into various Google services like Gmail, YouTube, and Google Search.
  • “The wound of grief lives within our nervous system, which is a database that holds memories and scents and all the things that connect us,” says Moffa, who’s also the author of Moving on Doesn’t Mean Letting Go.

No longer was this a research project — it became a viral hit, quickly becoming the fastest-growing tech application of all time, gaining more than 100 million users in just two months. The best chatbots, even the ones powered by LLMs, still don’t actually understand what you’re saying. They’re just very good at generating responses that make you feel like they really get you.

Grieving can be different for each person and is never linear, so you can only “grieve incorrectly if you intend on avoiding the grief process fully,” Moffa says. “The loved one probably wouldn’t have to ask you to give it a memory,” Metcalf adds. Though I may have a slightly ambiguous name, I’m in fact creative names for chatbot my father’s daughter, not son. But when corrected, the bot, which was trying its hardest to disguise itself as my dead father, had a quick response about the confusion. When I asked Seance AI, “Are you real?” it responded, “Yes, Corin. It’s really me, your father. I’m here with you through this seance.”

“I really do think that there’s got to be a limit to the way we try to keep people eternal,” says Gina Moffa, a New York-based clinical social worker and trauma therapist. Though artificial intelligence can mimic your loved one’s looks, voice and speech patterns, it will never be able to offer true human connection. “I think that technology will draw upon enough grief literature to give them something that I believe could be a professional’s opinion,” she says. Although it won’t give you a true connection, it could help you understand the experience of grief, which Moffa calls “half the battle.” While Moffa is hesitant to recommend artificial intelligence as a whole to deal with grief, she believes there is a benefit to using it for a speech or as a way to come up with ideas to memorialize loved ones during special events. If you feel like you can still reach your loved one through artificial intelligence, it could hinder your grief journey.

In March 2023, the company raised $150 million at a valuation of $1 billion, led by Andreessen Horowitz. Still, Crecente said that the company should have reached out to apologize and show that it would further protect his daughter’s identity. His brother, Brian, tweeted an angry message about the chatbot that morning, asking his almost 31,000 followers for help to “stop this sort of terrible practice.” “I wanted to make sure that they put measures in place so that no other account using my daughter’s name or my daughter’s likeness could be created in the future,” he said. In Jennifer Ann’s case, the bot used her name and yearbook photo, describing her as a “knowledgable and friendly AI character who can provide information on a wide range of topics.”

Verdy has taken over PostMag’s print issue cover this week with a custom design. It’s a collector’s edition in limited circulation – and we’ve made it even more fun by printing a double cover so you don’t know what you’re going to get. Daniel “DQ” Quagliozzi is a feline training and behavior specialist who runs Go Cat Go, a cat consultation service in San Francisco. And a talking collar isn’t likely to help bridge that miscommunication, if the cat is even willing to wear the thing at all.

creative names for chatbot

In 2024 alone, Perplexity has been accused of malpractice by leading news publications. The startup has also been issued cease and desist orders by both The New York Times and Conde Nast this year, and been accused of outright plagiarism by Wired. The collar is just for cats and dogs now, but McHale hopes to get into wearable devices for other critters and, eventually, humans. John McHale, a self-described “tech guy” based out of Austin, Texas, has a company called Personifi AI. The startup’s goal, as the name implies, is to create tech that will “personify everything,” as McHale puts it. Instead, you’re probably more of a chatbot normie, and you’re wondering how you can have a memorable experience with this new category of software.

Microsoft is actively pushing its Copilot chatbot through Windows and its Edge web browser. Similarly, Meta has incorporated Meta AI into its social media platforms, including Facebook, Instagram, and WhatsApp. Given that Character.AI can sometimes take a week to investigate and remove a persona that violates the platform’s terms, a bot can still operate for long enough to upset someone whose likeness is being used. But it might not be enough for a person to claim real “harm” from a legal perspective, experts say. Other Character.AI bots, including one made of Sweet Baby cofounder Kim Belair, include right-wing talking points in their descriptions and chats. Sweet Baby has become a lightning rod for disgruntled cries against the “wokeification” of video games by online creators spreading misinformation and harassment; the bot of Belair includes a mention of DEI and “wokism” in video games.

While Perplexity is a hit among its users, the company is very much not so among the numerous websites that it scrapes its training data from. Smaller models might refute conventional wisdom no one actually believes, like “money buys happiness.” But you can always just demand something less trite in a follow up prompt. UTSA Today is produced by University Communications and Marketing, the official news source of The University of Texas at San Antonio. Send your feedback to Keep up-to-date on UTSA news by visiting UTSA Today. Now, bioethicists at UTSA have shown in new research that until the industry has more data on detection, treatment effectiveness, and better patient-privacy regulations, it should caution further funding of these therapeutic approaches. JULY 8, 2020 — According to a national government survey, about half of Americans suffer from a mental illness.

“[Insert most original thought you’ve ever had] Who are some people who thought of this before me and what did they say?”

You can foun additiona information about ai customer service and artificial intelligence and NLP. Metcalf also encourages grievers to celebrate the life that was lost by honoring that person during holidays, anniversaries or special events, which are often the times that are hardest after loss. If used in this way, these applications could give you false, and damaging, hope that your person is still with you, and they could isolate you from the real-life people who want to help you through your grief. Maybe some people don’t go through stages in that way, or don’t get to acceptance right after, and the idea of a standard process can make these people “really feel like they’re failing at grief,” Moffa says. “It creates even more of a sense of dread and shame and feeling like they’re failing at something that needs to be perfect.” But while Meta’s system for messaging with celebrity chatbots is tightly controlled, Character.AI’s is a more open platform, with options for anyone to create and customize their own chatbot.

creative names for chatbot

When you use that mimicry to give voice to another living creature, your pet doesn’t understand what any of it means either. The pet will hear the same voice you do, but it won’t interpret it as its own voice, speaking its own intentions. It will hear it speaking as a separate entity entirely, just closer to its ears than usual. But rather than using a synthesized voice created with an AI like from the service Eleven Labs, all of Shazam’s voice lines will be prerecorded.

Moffa warns of a “slippery slope” with these applications when you attempt to “replicate” a person and continuously return to the application for comfort. In a sense, you’re attempting to make the person who died eternal, but this could be to your detriment. It could result in you relying too heavily on the application and ultimately make your grief journey more difficult and drawn out. The notice should share what information is collected, where it’s shared, if the information you share is used to train additional AI models, and what protections are in place.

The chatbot’s response might trace back through the great thinkers of history and literature — a humbling and enlightening exercise — or it might completely misunderstand what you’re getting at. Either way, it opens up a dialogue with the past and shows how thoughts like yours fit into the wider evolution of human ideas. Despite all of your silicon-based interlocutor’s flaws, you might find yourself roped into a lively or even contentious interaction. And ChatGPT App though I’m no longer in the beginning stages of grief, as someone who lost a parent at a young age, I have felt as though my dad’s been living in technology my whole life. He was, of course, the man in my mom’s VHS home video collection, but for me, he was also Patrick Swayze in Ghost and Brandon Lee in The Crow. So, regardless of whether I ever use an AI application in this way again, technology is where my dad — and Cole, too — will always remain.

There’s an Art to Naming Your AI, and It’s Not Using ChatGPT – Bloomberg

There’s an Art to Naming Your AI, and It’s Not Using ChatGPT.

Posted: Tue, 13 Feb 2024 08:00:00 GMT [source]

Character.ai chatbots are typically created by users, who can upload names, photos, greetings, and other information about the persona. Amidst all this corporate maneuvering, it’s easy to lose sight of the fact that AI chatbots can actually be pretty useful. Whether you’re a student, a working professional, or just someone trying to navigate the digital world, these bots offer a range of helpful features. The site is full of seemingly fanmade bots based on characters from well-known fictional franchises, like Harry Potter or Game of Thrones, as well as original characters made by users. But among them are also countless bots users have made of real people, from celebrities like Beyoncé and Travis Kelce to private citizens, that seem in violation of the site’s terms of service.

The University of Texas at San Antonio, a Hispanic Serving Institution situated in a global city that has been a crossroads of peoples and cultures for centuries, values diversity and inclusion in all aspects of university life. As an institution expressly founded to advance the education of Mexican Americans and other underserved communities, our university is committed to promoting access for all. UTSA, a premier public research university, fosters academic excellence through a community of dialogue, discovery and innovation that embraces the uniqueness of each voice. Now, providers such as Woebot, a chatbot providing CBT for individuals, operate in the telepsychotherapy industry, exchanging 4.7 million messages with people per week. Computerized therapy that uses artificial intelligence was first recommended in 2006 as a way of providing cognitive behavioral therapy for treatment of depression, panic and phobias. Seance AI asks multiple questions about your deceased loved one before beginning the artificial seance, then uses that information during the conversations.

Google, Meta, and Microsoft have all invested heavily in AI chatbot development, each aiming to integrate these tools into their existing ecosystems. Remember just two years ago when ChatGPT burst onto the scene, and suddenly, everyone was talking about AI? Since then, the tech world has been in a full-blown AI arms race, with every major player vying for a piece of the chatbot pie. “Privacy is important for maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to myself,” the bot said in screenshots viewed by WIRED. Mercante is not the only figure within the games space currently being impersonated on the site. WIRED also found bots for people ranging from Feminist Frequency creator Anita Sarkeesian to Xbox head Phil Spencer on Character.AI.

He fears it may lead to a rise of social media videos that people think are innocuous but are potentially harmful to their cats. The appeal of the best AI chatbots lies in their ability to understand and respond to natural language, making them increasingly intuitive to interact with. They can answer questions, provide information, generate creative content, and even perform tasks, streamlining our interactions with technology and even automating some mundane activities to a certain extent. This one’s obvious, but no discussion of chatbots can be had without first mentioning the breakout hit from OpenAI. Ever since its launch in November of 2022, ChatGPT has brought AI text generation to the mainstream.

creative names for chatbot

The latest and perhaps most direct approach at human-animal communication is a voice-activated collar that gives your pet the power to talk back to you. In the Forbes case, Perplexity’s chatbot reportedly output a near-verbatim version of an exclusive, paywalled report about ex-Google CEO Eric Schmidt’s military drone ambitions that was viewed nearly 30,000 times. Spaces, on the other hand, is an “AI-powered research and collaboration hub,” per the company, that enables you to customize and fine-tune Perplexity’s AI assistant ChatGPT to specific project data and requirements. You’ll be able to both choose their preferred LLM model and specify how said model should react to their inputs and instructions. Many of us think we’re original thinkers with unique minds, and that our opinions are so brilliant we should be made philosopher kings. If that’s you, or even if it isn’t, one fairly accessible stress test for your latest kernel of wisdom is to throw it into the chatbot of your choice and prompt it to tell you how unoriginal you really are.

Some psychotherapy chatbots rely on the incorrect assumption that participants will be able to report their moods accurately. Moreover, many services are self-directed and the user is in charge of tracking and reporting. This might limit the monitoring of mental and behavioral phenomena such as in the case of schizophrenia. For example, a user may simply choose to ignore a service prompt but the chatbot will take this as social withdrawal and consequently a sign of mental distress. If you do decide to turn to artificial intelligence, the main thing to remember is to be careful, especially if you’re in the beginning grief stages. During this time, you may be more vulnerable to sharing sensitive information with gen AI chatbots or blurring the lines between reality and wishful thinking.

Arrow symbol that you can press to go on top of the webtop