From Dolls to Social Matchmaking, AI Wants to Get Data in Every Way Possible


In Brief
AI’s popularity is increasing, but trust in it is declining. Concerns about data ownership, usage, and behind-the-scenes handling are growing. The real problem is not what AI is capable of, but who it is built for and how it is trained.

Artificial intelligence is no longer confined to productivity tools or enterprise software — it’s showing up in fun, viral forms that feel harmless on the surface. From personalized AI dolls to real-time video matchmaking, apps are gamifying interaction while quietly collecting massive amounts of data.
But as these tools grow in popularity, so does unease. Trust in AI is plummeting. While a growing majority of people use AI in some form, a significant portion remain uncomfortable with it. Concerns about who owns their data, how it’s used, and what happens behind the curtain are growing louder.
Experts in the field say the real problem isn’t what AI is capable of — it’s who it’s built for and how it’s trained. Some warn that even if current AI companies appear trustworthy, there’s no telling how that data might be handled if ownership or agendas shift.
The promise of AI isn’t going anywhere. But neither is the growing demand for control, transparency, and ethical alternatives. The question is no longer what AI can do — but who it truly serves.
The Rise of “Casual AI”: Fun on the Surface, Serious Underneath
In the past, interacting with AI usually meant engaging with tools for productivity, search, or automation. But that’s changing fast. Increasingly, AI is arriving in the form of entertainment — seemingly innocent applications that invite users to relax, connect, or play.
This is the era of “casual AI” — a term that fits apps that turn heavy data collection into light-hearted user experiences. AI-generated dolls were a prime example: they drew people in with creativity and personal expression, but in reality those apps were quietly gathering facial data, aesthetic preferences, and behavioral feedback.
All of this helped improve algorithms — not just for image generation, but for advertising, psychological profiling, and beyond.
AI-Powered Matchmaking: The New Frontier
As AI integrates further into everyday life, its next frontier appears to be human connection. Platforms like Amewz.me are exploring AI-powered social matchmaking — spontaneous, camera-on video chats enhanced by real-time filters and effects. It’s fast, dynamic, and feels more authentic than swiping through static profiles. But what makes this innovation possible is also what raises important questions about user data and trust.
Unlike traditional social platforms, these tools can process not just what you say, but how you say it — collecting behavioral cues like voice tone, facial expression, and interaction patterns. Amewz.me, for instance, notes in its privacy policy that user content may be used to improve the platform and train underlying models. While this is increasingly common in AI services, it also brings trust to the forefront of the conversation.
Johanna Cabildo of the Data Guardians Network puts it simply: people don’t just want innovation — they want inclusion and clarity.
Whether it’s matchmaking, AI avatars, or productivity tools, the platforms that prioritize ethical data use and user empowerment are likely to win lasting trust.
The Trust Dilemma: Data Use and Transparency in AI
AI’s growing influence is reshaping industries, from shopping to security, but the key to its future success lies in trust. Despite its widespread use, many people remain uneasy about AI’s role in their lives.
A significant 82% of users interact with AI in some form, but only 57% feel comfortable doing so, revealing a gap between adoption and trust. This unease deepens when it comes to data usage.
What’s more, a staggering 64% of consumers express concern that their data is being collected without consent or transparency.
The real issue, according to Cabildo, isn’t AI’s potential but how it serves its users. As she puts it, “People want to be part of the system, not exploited by it.” As AI systems collect vast amounts of personal information—from habits to preferences—users are left feeling powerless without clear, accessible information on how their data is being used.
Tech ethicist Dr. Kate Crawford adds, “The problem isn’t just how much data is gathered, but who benefits from it, and at what cost.” Crawford’s viewpoint underscores the growing demand for platforms that not only prioritize security but offer users full control over their data.
Consumers today expect transparency and ethical handling of their information—expectations that, if met, can enhance trust and foster greater AI adoption moving forward.
Who Does AI Really Serve?
As the use of AI proliferates, one fundamental question remains: who actually benefits from AI?
On the surface, AI-powered platforms like matchmaking tools or recommendation algorithms promise a more personalized, efficient user experience. However, when you dig deeper, the answer isn’t always so clear. Behind these technologies, there are often complex data pipelines, black-box training models, and a profit-driven agenda that may not align with the needs or interests of the users.
Incentives Unclear
Many AI models are controlled by large companies whose profit motives can shape the very algorithms they deploy. Decisions aren’t just based on lines of code; they’re heavily influenced by incentives like advertising revenue, surveillance capitalism, and corporate shareholders looking for returns on their investments.
As Cabildo points out, trust is precarious: “The real problem isn’t what AI can do — it’s who it serves and what it’s trained on. Even if you trust the current AI firms, you never know who the next shareholders or buyers might be.”
Vague Data Pipelines
Opacity in how data is collected, used, and monetized is a significant issue. Most users have little insight into the data pipelines that feed these AI systems, and even less understanding of how their personal data might be sold or used for unintended purposes.
The shift of ownership through mergers and acquisitions can further muddy the waters, creating a landscape where today’s ethical company might not be tomorrow’s ethical platform.
This highlights a crucial point: if the incentive structure of AI firms is profit-driven rather than user-centric, it becomes harder to know whose interests are being served when we interact with these tools.
Trusting these firms may seem like the most convenient option today, but as Cabildo emphasizes, the question remains: who owns the models, and who profits from our data?
The Alternative: Ethical AI That Serves the People
What if the future of AI could look different? Rather than serving the interests of large corporations or data brokers, an ethical AI would prioritize consent, transparency, and user control.
In this future, people wouldn’t just be passive participants but active stakeholders in the technologies they engage with. This shift could be achieved through decentralized models that put users in charge of their own data and ensure it’s used only with explicit consent.
There are already organizations working to build a more ethical version of AI. The Data Guardians Network (D-GN), according to Cabildo, is one such initiative, focusing on giving individuals control over their data and promoting transparency in how AI models are trained and utilized. “The next era of AI will be built by people with ethical data, real incentives, and platforms that put users first,” Cabildo notes.
Imagine if platforms like Amewz.me, currently focused on AI-powered matchmaking, could be redesigned with user agency at the center. With clear, ethical data usage, users could trust that their information wasn’t being monetized without their knowledge. This version of AI would be built on mutual trust and responsibility—offering a system where the benefits are shared fairly and everyone wins.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Victoria is a writer covering a variety of technology topics, including Web3, AI, and cryptocurrencies. Her extensive experience allows her to write insightful articles for a wide audience.