ChatGPT aims to help users find answers without Google, but it has raised some concerns
Two days ago, the American company OpenAI introduced a new chatbot, ChatGPT. It is built on the GPT-3.5 model, which enables it to hold a dialogue with users, answer questions, write poetry, scripts, essays, songs, notes, and sonnets, as well as explain concepts and help find bugs in code.
One prospective use of ChatGPT quickly became clear to many users: it can take on the role of the Google search engine.
Ask ChatGPT about the strengths and weaknesses of a certain Pokemon, for instance, and it will promptly reply with all the necessary information. Google, by contrast, will only return a list of links, none of which guarantees you will actually find what you are looking for. At the same time, an AI this knowledgeable causes some concern.
Many users worry that ChatGPT could be put to nefarious use, for instance, by terrorists asking how to build a weapon or assemble homemade explosives. OpenAI says it is working to address this: according to the company, the model is designed to avoid racial bias, religious pronouncements, and messages endorsing immoral or unlawful behavior.
In addition, ChatGPT cites the limits of its knowledge base as the reason it cannot discuss recent global events. This may be intended to keep the bot from becoming politically biased.
Anyone can chat with the ChatGPT bot on OpenAI's website.