AI Will Power Next-Gen Scams, Says Wozniak
Apple cofounder Steve Wozniak has expressed concerns over the misuse of AI-powered tools by criminals to create convincing online scams.
He worries that the technology will fall into the wrong hands, making online fraud much harder to spot.
According to a report by Goldman Sachs, the technology is expected to impact an estimated 300 million workplace roles in the coming years, though many of these roles will likely be assisted by AI rather than replaced.
Wozniak has called for regulating AI technology to limit its use by bad actors who impersonate others to trick individuals into revealing sensitive information. The use of artificial intelligence is growing rapidly, Wozniak says. Businesses are turning to AI-powered tools to automate their processes, improve efficiency, and create new products and services. A host of generative AI tools, including OpenAI’s ChatGPT and Google’s Bard, can converse with humans in a natural, deceptively humanlike fashion.
Wozniak believes cybercriminals could abuse AI to create voice clones that trick unsuspecting victims. However, AI could also be trained to detect such scams and alert potential targets to keep them safe.
In March, around 1,000 technology experts signed a letter calling for a six-month pause on the development of some AI tools so that safety guidelines regarding their creation and use could be drawn up. Wozniak was among the signatories. He believes that “bad” tech companies that “get away with anything” should be regulated to keep them within certain boundaries. However, Wozniak also wondered whether such regulation would be effective, stating that “the forces that drive for money usually win out, which is sort of sad.”
It is critical that artificial intelligence technology is regulated to prevent cybercriminals from using it for fraudulent purposes while at the same time ensuring that it is used responsibly and safely.
The advent of artificial intelligence heralded a new era in technology; today, AI-enabled online fraud threatens to define it. The ethics of artificial intelligence remain unsettled, and numerous frauds involving the technology have already surfaced.
- Artificial intelligence is a revolutionary advancement that could become uncontrollable or be weaponized by hackers. AI could be used to create autonomous malware that selects and engages targets without human intervention, or to bolster the capabilities of cybercriminals more broadly.