Interestingly, growth has been fastest in lower-income countries (OpenAI notes adoption growth rates in the lowest-income nations were >4× those in the highest-income countries by May 2025). These regions often report higher everyday reliance on mobile-first AI utilities (translation, tutoring, coding help), which may explain the gap between emerging and advanced economies on “regular use.”
Net-net for product and policy
• For builders: Make trust a feature by publishing evals, showing error rates, reducing hallucinations, protecting data, and building in recourse (appeals, human-in-the-loop). That’s what moves adoption from “try it” to “rely on it.”
• For communicators/journalists: Don’t just say “AI works”—show where it works and where it fails, and explain safeguards in plain English.
• For policymakers: The global public wants regulation: ~70% say national/international rules are needed; only ~2 in 5 think current laws are adequate. Co-regulation (government + industry) is the preferred model in most countries.
Why this moment is tricky
Trust in “AI companies” has slipped even as usage soars: tech remains broadly trusted, but AI sits at a crossroads. The mismatch between breathtaking capability demos and messy real-world outcomes is the friction point. Close that gap with safety work, transparency, and measurable benefits, and public opinion will follow.
Sources:
- Reuters on 100M MAU in Jan 2023; Business Insider on 800M WAU (Oct 2025).
- KPMG/Melbourne “Trust, attitudes and use of AI: A global study 2025” (47 countries; use, trust, domain differences, regulation).
- MITRE–Harris “AI Trends” (U.S. safe/secure 39%, −9 pts since 2022).
- Pew Research (2025) on U.S. public vs. AI experts & rising concern.
- OpenAI blog on adoption growth in low-income countries.