In Brief
Generative AI has been a major trend in the world of AI for the past several months.
It is being used to create chatbots that are able to converse using natural language.
Impressed by the current state of generative AI, experts are warning of runaway technology.
Generative AI has taken the world by storm, earning a feature on CBS's "60 Minutes" and reportedly sitting on the verge of becoming unstoppable. Experts say the technology is not only impressive but may be even more advanced than is commonly understood.

Researchers from Microsoft and Columbia University have reported that chatbots show evidence of AGI (artificial general intelligence), or brain-like intelligence. This is a significant development, as AGI was believed to be years or even decades away. Researchers have since organized labs specifically to study this idea.
Millière suggests that the chatbot's reasoning reflects a sophisticated multistep process: the chatbot improvised a kind of memory within its network in order to interpret words according to their context. This behavior resembles the way nature repurposes existing capacities for new functions, such as feathers evolving for insulation before they were used for flight.
A number of experts have issued warnings about the dangers of advanced artificial intelligence, saying it could be so destructive that it might even destroy democracy or humanity as a whole. This group includes Geoffrey Hinton, the "Godfather of AI." They fear that by building a superhumanly smart AI, "everyone on Earth" will die.
The leaders of top AI companies believe regulation is necessary to avoid potentially damaging outcomes. Casey Newton, author of the Platformer newsletter, wrote a piece arguing that coverage should focus on the positive aspects of AI, on the hope that AI represents the best of us and will solve complex problems, rather than dwell constantly on every possible negative, up to and including projections of the destruction of trust and, ultimately, of humanity as a whole. In response, some commentators point to the popularity of "tech doomerism."
AI is being developed to help humans better understand their world, yet many people fear it will take over the world precisely because they do not understand how it might affect it. Demonstrating how in thrall we are to these themes, a new movie expected this fall pits "humanity against the forces of AI in a planet-ravaging war for survival."
Humans have already tamed more than one potentially destructive force. First we learned to harness the benefits of fire while mitigating its dangers; last century, we learned to use the power of the atom for good. Now let's hope we can do the same with AGI before we are burned by its sparks.
AI already features in many areas of day-to-day life, and video games are among those that have benefited most. The technology is being used to level up reinforcement learning, image processing, and procedural content generation techniques, and it is essential to functional augmented reality platforms, which could mean a booming AR market if the AI explosion arrives soon.
Recently, Anthropic announced an expanded "context window" for its Claude bot, meaning the bot can now process and respond to longer text. A context window is, in effect, the "memory" a system has for a given analysis or conversation. Claude's window is now roughly three times larger than ChatGPT's.
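To make the idea concrete, here is a minimal sketch of why a context window matters in practice: older turns of a conversation must be dropped once they no longer fit the budget. The token counting and the budget value below are illustrative stand-ins, not how Claude or ChatGPT actually tokenize or manage memory.

```python
# Sketch of a "context window": the model only attends to the most recent
# tokens, so older conversation turns are dropped when the budget is exceeded.
# Tokens are approximated here by whitespace-split words; real systems use a
# proper tokenizer, and real budgets are tens of thousands of tokens.

CONTEXT_WINDOW = 15  # hypothetical token budget for this example

def count_tokens(text):
    """Crude stand-in for a real tokenizer."""
    return len(text.split())

def fit_to_window(turns, budget=CONTEXT_WINDOW):
    """Keep the most recent turns whose total token count fits the budget."""
    kept, used = [], 0
    for turn in reversed(turns):          # walk backward from the newest turn
        cost = count_tokens(turn)
        if used + cost > budget:          # oldest turns fall out of "memory"
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))           # restore chronological order

history = [
    "User: summarize this contract for me",
    "Bot: the contract covers licensing terms and liability",
    "User: what about the termination clause",
]
# With a 15-token budget, the earliest turn no longer fits and is forgotten.
print(fit_to_window(history))
```

A larger window simply raises the budget, so more of the earlier conversation (or a longer document) stays visible to the model at once.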
Google also announced significant upgrades to its Bard chatbot last week, including a move to the new PaLM 2 large language model, which allows the bot to perform more efficiently.
However, the downsides of the AI boom are already visible: Amazon, one of the world's largest direct employers, will reportedly replace thousands of HR professionals globally with an AI system. The system will predict which job candidates will perform best in particular departments and guide them through the process up to the interview, with a human connecting only at the end to confirm the candidate can join the team and that the company has not been misled about their technical abilities.