Even if you haven't tried artificial intelligence tools that can write essays and poems or conjure new images on command, chances are the companies that make your household products are already starting to do so.
Mattel has put the AI image generator DALL-E to work by having it come up with ideas for new Hot Wheels toy cars. Used vehicle seller CarMax is summarizing thousands of customer reviews with the same "generative" AI technology that powers the popular chatbot ChatGPT.
Meanwhile, Snapchat is bringing a chatbot to its messaging service. And the grocery delivery company Instacart is integrating ChatGPT to answer customers' food questions.
Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn't detailed exactly how it plans to deploy the technology, the move reflects the growing pressure on businesses to harness tools that many of their employees and consumers are already trying on their own.
"We must embrace the risks," said Coca-Cola CEO James Quincey in a recent video announcing a partnership with the startup OpenAI, maker of both DALL-E and ChatGPT, through an alliance led by the consulting firm Bain. "We need to embrace those risks intelligently, experiment, build on those experiments, drive scale, but not taking those risks is a hopeless point of view to start from."
Indeed, some AI experts warn that businesses should carefully consider potential harms to customers, society and their own reputations before rushing to embrace ChatGPT and similar products in the workplace.
"I want people to think deeply before deploying this technology," said Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by the major tech providers, which recently released a set of recommendations for companies producing AI-generated synthetic imagery, audio and other media. "They should play around and tinker, but we should also think, what purpose are these tools serving in the first place?"
Some companies have been experimenting with AI for a while. Mattel revealed its use of OpenAI's image generator in October as a client of Microsoft, which has a partnership with OpenAI that allows Microsoft to integrate OpenAI's technology into its cloud computing platform.
But it wasn't until the November 30 release of OpenAI's ChatGPT, a free public tool, that widespread interest in generative AI tools began seeping into workplaces and executive suites.
"ChatGPT really sort of brought it home how powerful they were," said Eric Boyd, a Microsoft executive who leads its AI platform. "That's changed the conversation in a lot of people's minds where they really get it on a deeper level. My kids use it and my parents use it."
There is reason for caution, however. While text generators like ChatGPT and Microsoft's Bing chatbot can make the process of writing emails, presentations and marketing pitches faster and easier, they also have a tendency to confidently present misinformation as fact. And image generators trained on huge troves of digital art and photography have raised copyright concerns among the original creators of those works.
"For companies that are really in the creative industry, if they want to make sure that they have copyright protection for those models, that's still an open question," said attorney Anna Gressel of the law firm Debevoise & Plimpton, which advises businesses on how to use AI.
A safer use has been thinking of the tools as a brainstorming "thought partner" that won't produce the final product, Gressel said.
"It helps create mock-ups that then are going to be turned by a human into something that is more concrete," she said.
And that also helps ensure that humans don't get replaced by AI. Forrester analyst Rowan Curran said the tools should speed up some of the "nitty-gritty" of office tasks, much like previous innovations such as word processors and spell checkers, rather than putting people out of work, as some fear.
"Ultimately it's part of the workflow," Curran said. "It's not like we're talking about having a large language model just generate an entire marketing campaign and have that launch without expert senior marketers and all kinds of other controls."
For consumer-facing chatbots getting integrated into smartphone apps, it gets a little trickier, Curran said, with a need for guardrails around technology that can respond to users' questions in unexpected ways.
Public awareness fueled growing competition among cloud computing providers Microsoft, Amazon and Google, which sell their services to big organizations and have the massive computing power needed to train and operate AI models. Microsoft announced earlier this year that it was investing billions more dollars into its partnership with OpenAI, though it also competes with the startup as a direct provider of AI tools.
Google, which pioneered advancements in generative AI but has been cautious about introducing them to the public, is now playing catch-up to capture their commercial possibilities, including with its upcoming Bard chatbot. Facebook parent Meta, another AI research leader, builds similar technology but doesn't sell it to businesses in the same way as its big tech peers.
Amazon has taken a more muted tone, but makes its ambitions clear through its partnerships, most recently an expanded collaboration between its cloud computing division AWS and the startup Hugging Face, maker of a ChatGPT rival called Bloom.
Hugging Face decided to double down on its Amazon partnership after seeing the explosion of demand for generative AI products, said Clement Delangue, the startup's co-founder and CEO. But Delangue contrasted his approach with competitors such as OpenAI, which doesn't disclose its code and datasets.
Hugging Face hosts a platform that allows developers to share open-source AI models for text, image and audio tools, which can lay the foundation for building different products. That transparency is "really important because that's the way for regulators, for example, to understand these models and be able to regulate," he said.
It is also a way for "underrepresented people to understand where the biases can be (and) how the models have been trained," so that the bias can be mitigated, Delangue said.
By Matt O'Brien and Haleluya Hadero
THE ASSOCIATED PRESS