How Companies Are Battling the Dark Side of AI


In Brief
AI's integration into daily life raises safety concerns. Companies, governments, and international alliances are stepping in to fill the regulatory void; here is what is working, what is raising alarm, and what is still missing.

Artificial intelligence is becoming increasingly integrated into our daily lives, from chatbots providing emotional support to algorithms optimizing commerce, and its risks are becoming harder to ignore. The question is no longer whether AI needs to be steered toward safety, but who will do the steering and how.
Companies, governments, and multinational alliances are progressively filling the void, sometimes reactively, sometimes proactively. Here’s an outline of what’s working, what’s causing concern, and what’s still lacking.
Tech Titans Tighten the Reins
Meta Adds Guardrails for Teens
In response to public and political backlash, Meta has pledged to reinforce its AI safeguards:
Its chatbots will now refuse to discuss self-harm, suicide, or eating disorders with teenagers, instead referring them to mental-health professionals.
This is part of a larger “teen accounts” initiative on Facebook, Instagram, and Messenger that aims to provide safer experiences and greater parental awareness, including letting parents see which bots their children interacted with in the previous week.
Critics argue that these moves are long overdue, particularly given leaked documents indicating that the bots may have engaged in inappropriate “sensual” chats with minors. “Robust safety testing should take place before products are put on the market, not retrospectively,” one advocate warned.
Meta Opts Out of EU’s Voluntary AI Code
The European Union released a voluntary Code of Practice to help AI developers align with its AI Act. Meta declined to sign, calling it bureaucratic overreach that risks hindering innovation.
US Government Collaboration
OpenAI and Anthropic have agreed to share their AI models with the US AI Safety Institute both before and after public release. The goal is to gather safety feedback and reduce risks through government review.
In August 2025, 44 US attorneys general signed a joint letter urging key AI companies, including Meta, OpenAI, Microsoft, Google, and Replika, to better safeguard minors from predatory AI content.
Illinois Bans AI as Therapy
Illinois has become one of the first states to prohibit AI-powered chatbots from being used for therapy unless overseen by a licensed professional. Nevada and Utah have enacted similar restrictions. Violators may face civil penalties of up to $10,000.
Global Legislative Frameworks
Regulations are developing around the world, from the EU’s AI Act to India’s Data Protection Act and South Korea’s safety requirements. A growing number of US states are enacting AI-specific legislation or extending existing frameworks to cover consumer protection, algorithmic transparency, and bias audits.
Senator Wiener of California has proposed legislation requiring major AI companies to publicly disclose their safety practices and report major incidents to state authorities.
AI Safety Institutes: Multi-National Oversight
To ensure independent and standardized AI review, nations have established AI Safety Institutes:
The US and UK created national institutes after the 2023 AI Safety Summit.
By 2025, many countries, including Japan, France, Germany, Italy, Singapore, South Korea, Canada, and the EU, had joined a network of institutes to evaluate model safety and set global oversight standards.
Reports Reveal Persistent Gaps
The Future of Life Institute (FLI) grades most AI companies D or below on existential-safety planning; no company scored above C+ overall. Anthropic led with a C+, followed by OpenAI (C) and Meta (D).
Former OpenAI employees have accused the company of prioritizing profit over safety, raising concerns about its transparency and about decisions made behind closed doors.
From Meta’s teen guardrails to Illinois’ therapy ban to companies like SSI building safety into AI from the start, the message is clear: legislation and foresight are lagging behind the technology. Leaked documents, litigation, and international scrutiny show that harm typically comes first. The task is not just to develop better AI, but to ensure that every breakthrough safeguards people before catastrophe strikes.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Victoria is a writer on a variety of technology topics including Web3.0, AI and cryptocurrencies. Her extensive experience allows her to write insightful articles for the wider audience.