Microsoft is trying to push Bing into the future with OpenAI technology
The new version of Bing is designed to allow users to type queries in conversational language and receive both traditional search results and answers to questions on the same page. It will use a new “generation” of an artificial intelligence model debuted by OpenAI, the company that released the popular chatbot ChatGPT.
The underlying technology that powers the new Bing will be more powerful than the core of ChatGPT, Microsoft executive Yusuf Mehdi said at an event at the company’s headquarters on Tuesday. Bing will have a “search” function and a “chat” function built into its homepage.
Already there is an artificial intelligence arms race between the tech giants, which was reignited by the recent arrival of ChatGPT — an AI system that can answer questions and generate human-like text, such as marketing copy or student essays. It became an instant hit with people even outside the tech industry.
It’s a splashy move for Microsoft, which has for years remained a stalwart of business software and cloud computing, but hasn’t dominated in consumer-facing products such as social media. The company made a major investment in ChatGPT’s developer last month, and had previously incorporated its technology into other Microsoft services, such as workplace chat service Teams.
Microsoft will also incorporate the AI function into its web browser, Edge, so it can be used to pull information and answer questions while users browse different webpages. Example queries from the new version of Bing are available online now, and a select group of people will get access to the full version starting Tuesday.
According to Microsoft, users of the remodeled search engine will be able to ask questions in a more natural way. Alongside the traditional list of results, a box will try to answer questions conversationally. Users can also ask follow-up questions to refine the answer — or even ask it to do something creative with the information, such as turning it into a poem.
ChatGPT burst into public consciousness at the end of November and has already dazzled millions. Early adopters have used the text tool to write school essays and professional emails, to explain physics, and to spin up movie scripts, typing in random prompts to test the limits of its abilities.
The AI system is able to interpret a user’s question and generate human-like responses — language capabilities it developed by ingesting vast amounts of text scraped from the internet and finding patterns between words. The system’s developers, the San Francisco-based research lab OpenAI, built the chatbot by fine-tuning one of its older models, called GPT-3.5. Using feedback from human contractors, OpenAI finessed ChatGPT so that responses were more accurate, less offensive, and sounded more natural. Still, users found that ChatGPT sometimes confidently delivers inaccurate answers, spouts nonsense, repeats harmful racial bias, and can be manipulated to violate its own safety rules.
Microsoft said it spent significant resources trying to make the model safer. That included working with OpenAI as an adversarial user to probe the system for potential problems, as well as training the AI model to police itself by rooting out biases — in part by teaching the system to recognize offensive content so it can, ideally, avoid producing it.
Both ChatGPT and GPT-3.5 are known as large language models, so-called for the massive amount of data they require. These models are part of a new wave of AI, including text-to-image generators such as DALL-E 2, that allow users to interact with the system using conversational English, with no technical skills necessary. All have raised similar safety issues around misinformation and racial and gender bias.
Geoffrey A. Fowler contributed to this report.