By Will Knight | 05.04.23
Greetings, future-gazers. This week I bring news of a startup working to fix some of the bugs plaguing advanced chatbots like OpenAI's ChatGPT and Google's Bard. Inflection AI hasn't yet revealed a plan for making money, but the company believes its chatbots are so likable that people will treat them as confidants and mentors.
Meet Pi, ChatGPT's More Sensitive Sibling 😻 🤖
ChatGPT and its brethren are both surprisingly clever and disappointingly dumb. Sure, they can generate pretty poems, solve scientific puzzles, and debug spaghetti code. But we know that they often fabricate, forget, and act like weirdos. Inflection AI, a company founded by researchers who previously worked on major artificial intelligence projects at Google, OpenAI, and Nvidia, built a bot called Pi that seems to make fewer blunders and be more adept at sociable conversation. Inflection designed Pi specifically to address some of the problems of today's chatbots. Programs like ChatGPT use artificial neural networks that try to predict which words should follow a chunk of text, such as an answer to a user's question. With enough training on billions of lines of text written by humans, backed by high-powered computers, these models can come up with coherent and relevant responses that feel like a real conversation. But they also make stuff up and go off the rails.
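The next-word prediction described above can be sketched in miniature. Real chatbots use huge neural networks rather than word counts, but the underlying idea (predict a plausible continuation from observed text) is the same. Everything here, from the toy corpus to the bigram counts, is an illustrative assumption, not Inflection's or OpenAI's implementation:

```python
import random
from collections import defaultdict, Counter

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then sample continuations in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample a likely next word, weighted by observed frequency."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("the"))  # one of: "cat", "mat", "fish"
```

Note that the sketch also shows why such models "make stuff up": they optimize for plausible continuations, not for truth.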
|
Mustafa Suleyman, Inflection's CEO, says the company has carefully curated Pi's training data to reduce the chance of toxic language creeping into its responses. "We're quite selective about what goes into the model," he says. "We do take a lot of information that's available on the open web, but not absolutely everything." Suleyman, who cofounded the AI company DeepMind, which is now part of Google, also says that limiting the length of Pi's replies reduces, but does not wholly eliminate, the likelihood of factual errors. Based on my own time chatting with Pi, the result is engaging, if more limited and less useful than ChatGPT and Bard. Those chatbots became better at answering questions through additional training in which humans assessed the quality of their responses. That feedback is used to steer the bots toward more satisfying responses.
Suleyman says Pi was trained in a similar way, but with an emphasis on being friendly and supportive, though without a human-like personality, which could confuse users about the program's capabilities. Chatbots that take on a human persona have already proven problematic. Last year, a Google engineer controversially claimed that the company's AI model LaMDA, one of the first programs to demonstrate how clever and engaging large AI language models could be, might be sentient. Pi is also able to keep a record of all its conversations with a user, giving it a kind of long-term memory that is missing in ChatGPT and is intended to add consistency to its chats. "Good conversation is about being responsive to what a person says, asking clarifying questions, being curious, being patient," says Suleyman. "It's there to help you think, rather than give you strong directional advice, to help you to unpack your thoughts."
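The long-term memory described above can be imagined as a simple per-user transcript store whose recent turns are fed back into the model's prompt. This is a minimal sketch under assumed names (`MemoryStore`, `remember`, `recall`); Inflection hasn't published how Pi's memory actually works:

```python
from collections import defaultdict

class MemoryStore:
    """Minimal per-user conversation memory (illustrative only)."""
    def __init__(self):
        # user_id -> ordered list of (speaker, text) turns
        self.history = defaultdict(list)

    def remember(self, user_id, speaker, text):
        self.history[user_id].append((speaker, text))

    def recall(self, user_id, limit=5):
        """Return the most recent turns, e.g. to prepend to a prompt."""
        return self.history[user_id][-limit:]

store = MemoryStore()
store.remember("alice", "user", "My dog is named Rex.")
store.remember("alice", "bot", "Rex sounds lovely!")
print(store.recall("alice"))
```

Even a scheme this simple lets a bot stay consistent across sessions, which is the property Suleyman is pointing at.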
Pi adopts a chatty, caring persona, even if it doesn't pretend to be human. It often asked how I was doing and frequently offered words of encouragement. Pi's short responses mean it would also work well as a voice assistant, where long-winded answers and errors are especially jarring. You can try talking with it yourself and let me know what you think by replying to this email. The incredible hype around ChatGPT and similar tools means that many entrepreneurs are hoping to strike it rich in the field. Suleyman used to be a manager within the Google team working on the LaMDA chatbot. Google was hesitant to release the technology, to the frustration of some of those working on it who believed it had big commercial potential. So far, Inflection has raised $225 million in funding from investors that include LinkedIn cofounder Reid Hoffman, and it is reportedly seeking hundreds of millions more. The company hasn't revealed a plan for making money, but it isn't hard to imagine one of the deep-pocketed tech companies that doesn't have its own ChatGPT (Amazon or Apple, say) paying handsomely to acquire the company's technology and talent.
Inflection is just one of several companies building powerful AI chatbots with a more emotional side. Character AI, which recently raised $150 million in funding and attained a valuation of over $1 billion, offers chatbots that can assume a wide range of personas and which, unlike Pi, are free to make things up. Noam Shazeer, Character's CEO, told me recently that many people use his company's bots for emotional support, and even romantic connections, although the company blocks sexual content. He says users like to post examples of jokes their bots have come up with on social media. The advances demonstrated by ChatGPT have many now worried about the long-term risks posed by AI. But if large numbers of people start chatting with friendly, emotionally engaging chatbots, we could see unpredictable results relatively soon. What will happen if companies like Inflection and Character make chatbots more persuasive and potentially addictive to chat with? I'm not sure, but I do know a couple of bots that would be only too happy to talk it over.
|
|
AI research pioneer Geoffrey Hinton is leaving Google, and he is voicing concerns about the technology he helped create. It's interesting to see people who were largely positive about AI's benefits, and doubtful about its potential to ever match human intelligence, now growing concerned about its power. (The New York Times)

Arvind Krishna, CEO of IBM, says the company will shed some 7,800 jobs because of AI. IBM's consulting and services business may be particularly well suited to automation from tools like ChatGPT. However, a lot of economic research suggests companies that are early to embrace AI and automation often avoid losing jobs. (Bloomberg)

AI might seem like the hottest thing in tech, but OpenAI's CEO, Sam Altman, is also investing heavily in the promise of nuclear fusion. Other tech moguls enamored with the technology's promise include Jeff Bezos and Bill Gates. (The Wall Street Journal)

The most immediate risks posed by AI are not superintelligence but dumb deployment. And it would be a terrible idea to incorporate the technology into the systems that operate nuclear weapons. Thankfully, as I have written previously, the Pentagon agrees that this is a line we should not cross. (The Atlantic)
That's all from this emotional chatbot. Subscribe to WIRED's new podcast, Have a Nice Future, hosted by our esteemed editor in chief, Gideon Lichfield, and the ridiculously talented Lauren Goode. And feel free to reply with feedback and ideas for future editions. See you next week!
|
|
|