Google Warns Staff Over AI Chatbot Usage
Google, a prominent supporter of AI, has issued warnings to its own staff regarding the usage of AI chatbots.
The company is taking precautionary measures to ensure the responsible handling of chatbot technology.
Alphabet, Google’s parent company, has advised its employees against entering confidential material into AI chatbots. The company confirmed this guidance, citing its longstanding policy of protecting sensitive information. Alphabet has also cautioned its engineers against directly using computer code generated by chatbots, according to sources familiar with the matter.
Chatbots, including Bing, Bard, and ChatGPT, are designed to analyze and learn from extensive training data. Human reviewers can access these conversations, and the AI system could inadvertently reproduce information it has learned, creating a risk of data leakage. Many users may not realize that their conversations with an AI chatbot are recorded by default and used to train the system.
Google’s warnings highlight its efforts to limit any potential harm to its business from Bard. They also align with an emerging corporate security standard of advising personnel against using publicly available generative AI tools. Google further stated its commitment to transparency about the limitations of its technology.
In response to a recent report by Politico, Google confirmed to Reuters that it has held discussions with Ireland’s Data Protection Commission and is actively addressing regulators’ questions about Bard’s impact on privacy. As a result, Google has postponed Bard’s launch in the EU this week. Bard is currently available in 180 countries and territories.
AI chatbots continue to raise privacy concerns within the European Union, and companies are still grappling with understanding the exact requirements placed upon them. Similar issues have been encountered with ChatGPT, resulting in its temporary ban in Italy and ongoing investigations in Germany, France, and Spain.
Other privacy issues surrounding chatbots include inadequate safeguards for minors and the lack of an option to opt out of the data collection processes that fuel these systems.