ChatGPT, Gemini and the Year Consumer AI Apps Became Default
Three years after the November 2022 launch of ChatGPT, the consumer artificial-intelligence application has matured from a novelty into a default category. ChatGPT reached roughly three hundred million weekly active users by the end of 2024, across its web and mobile applications. Google’s Gemini, available both as a standalone application and as an integrated feature across Android and the Google application family, reports comparably large active-user counts. Anthropic’s Claude has built a smaller but engaged user base centred on technical and professional use cases. xAI’s Grok, available through the X platform and as a standalone application, has grown rapidly. Microsoft’s Copilot has been deeply integrated into Windows, Office and Edge, reaching hundreds of millions of users through bundled distribution. For tens of millions of consumers, a generative-AI assistant has become as routine a daily tool as a search engine or a calculator.
The ChatGPT Launch That Restructured the Industry
OpenAI’s release of ChatGPT on 30 November 2022 was widely characterised as a research preview rather than a product launch. The application reached one million users within five days and one hundred million users within two months, becoming the fastest-growing consumer application in internet history at the time. The pace of adoption surprised OpenAI’s leadership, including Sam Altman, who stated publicly in subsequent interviews that the company had not anticipated the volume of early traffic.
The launch’s most immediate strategic effect was the acceleration of generative-AI product development across the technology industry. Google declared a “code red” within weeks and reorganised significant resources to accelerate the development of Bard, the predecessor to Gemini. Microsoft’s existing partnership with OpenAI deepened, with the company integrating GPT-based capabilities into Bing, Office and Windows. Meta, Anthropic, Amazon and many smaller competitors launched their own consumer-facing AI products in the following eighteen months. The category went from speculative to mainstream within a quarter.
The Mobile Application Strategy
OpenAI launched the ChatGPT mobile application for iOS in May 2023 and for Android in July 2023. The mobile applications added several features beyond the web version, including voice conversations, image upload and analysis, and integration with the device’s camera. The voice feature in particular has been described as transformative for the assistant category, allowing users to carry on extended spoken conversations with natural-sounding responses built on OpenAI’s speech-synthesis research.
The mobile application’s design emphasised simplicity. Unlike many enterprise AI products, ChatGPT’s mobile interface presents a single conversation surface without complex tabs, sidebars or feature menus. The design choice has been credited with the application’s accessibility to non-technical users — for many, ChatGPT was the first AI product they had used, and the conversational interface lowered the cognitive barrier that more elaborate user interfaces would have presented.
The Subscription Tiers
OpenAI introduced ChatGPT Plus in February 2023, a paid subscription priced at twenty dollars per month that offered access to higher-quality models, faster response times and higher usage limits. The product reached substantial subscriber numbers within months. Subsequent tiers — ChatGPT Team for small businesses, ChatGPT Enterprise for larger organisations, and the higher-priced ChatGPT Pro for power users — expanded the company’s revenue model significantly.
The subscription business produces substantial revenue independently of the API-licensing business that monetises OpenAI’s models through third-party applications. The combined revenue allowed OpenAI to reach annualised revenue rates exceeding several billion dollars within two years of ChatGPT’s launch, though the company’s overall financial structure remains complex due to its hybrid for-profit, capped-profit governance arrangement and its substantial compute-cost obligations to Microsoft Azure.
Gemini and the Google Integration
Google’s response to ChatGPT has been to integrate generative AI across the company’s existing product surfaces. The Gemini application — successor to the earlier Bard product — is available as a standalone mobile application and is integrated into Google Search through AI Overviews, into Workspace applications, into the Pixel phone series and into the broader Android ecosystem. The integration strategy reflects Google’s view that AI should be present wherever users are already operating rather than confined to a separate product.
The strategy has produced trade-offs. The breadth of integration has reached enormous user numbers but has also concentrated the AI experience around use cases where Google’s existing products already dominate. ChatGPT has been described by some users as a more deliberately “AI-first” product, with a clearer identity than Gemini’s diffused presence across multiple Google surfaces. Whether the breadth or depth strategy will prevail commercially remains an open question that successive product cycles will continue to test.
Anthropic’s Claude and the Safety Frame
Anthropic’s Claude has positioned itself as a more deliberately safety-focused AI assistant. The product is built on Anthropic’s Constitutional AI methodology, which involves training models against a set of explicit behavioural principles. The mobile and web applications have grown a substantial user base, particularly among technical professionals, writers and users with extended document-analysis needs.
The company’s emphasis on safety research has produced both commercial advantages and limitations. Some enterprise customers have chosen Claude specifically for the safety positioning, particularly in regulated industries. The same positioning has also produced more conservative model behaviour in certain creative-writing and edge-case scenarios, which some users perceive as a limitation. Anthropic’s challenge has been balancing the safety frame with commercial competitiveness across the breadth of consumer use cases.
The Use Cases That Have Stabilised
Three years of consumer use have clarified which AI-application use cases have proven durable. Writing assistance — drafting emails, polishing documents, generating outlines for longer pieces — is the most consistent and widely used category. Programming assistance — generating, explaining and debugging code — has become essential for many software developers. Research and synthesis — summarising long documents, comparing options, exploring unfamiliar topics — has become a substantial use case for students, professionals and curious users.
Image generation has been a significant additional category, with applications including ChatGPT (via DALL-E and the newer image models), Midjourney, Google’s ImageFX, and a range of competitor products. The use cases include marketing collateral, social-media content, design exploration and casual creative play. The category has produced its own controversies around copyright, attribution and the labour implications of AI-generated imagery.
The Limitations That Persist
The consumer-AI category has also clarified its limitations. Hallucination — the production of confident but incorrect information — remains a persistent issue across all major models, with implications for use cases where accuracy is essential. Tasks requiring multi-step reasoning or up-to-date factual knowledge frequently produce errors that the models present with the same confidence as accurate responses. Users have developed informal practices for verifying AI output against authoritative sources, but the practices have not been widely formalised.
Privacy considerations have produced their own constraints. Several countries have introduced regulations restricting how AI applications can be used with personal or sensitive information. Healthcare, financial and legal use cases have particular constraints. The European Union’s AI Act, which entered into force in 2024 and is being progressively implemented, has produced compliance work for all major providers.
The Mobile Form Factor and Voice
The mobile application’s voice interface has been particularly significant for the category. ChatGPT’s voice mode, expanded substantially in 2024 and again in 2025, allows users to conduct extended spoken conversations with the assistant in dozens of languages, with natural-sounding vocal responses. The feature has been described by users as a fundamental shift in how they interact with AI — text-based chat is a deliberate, considered interaction, while voice conversations resemble more casual dialogue.
The voice interface has particular implications for users with limited reading or typing ability, for users in contexts where typing is impractical (driving, cooking, exercising) and for older users who may have struggled with text-based AI products. The strategic significance is that voice-driven AI may produce different competitive dynamics than text-driven AI, with implications for hardware, distribution and partnership strategies that are still being worked out.
The Enterprise Adoption
Alongside the consumer-application growth, enterprise adoption of AI products has expanded substantially. ChatGPT Enterprise, Microsoft Copilot, Google Workspace AI features and various specialised enterprise products have been deployed across hundreds of thousands of organisations. The use cases include customer support, sales operations, software engineering, marketing operations and document analysis.
The enterprise adoption has been faster than many earlier technology cycles. Many organisations that had not adopted earlier cloud or mobile technologies have implemented AI products quickly. The faster adoption reflects partly the dramatically lower switching costs of AI products — they typically integrate into existing workflows rather than requiring full replacement of incumbent tools — and partly the immediate productivity gains visible in early pilots.
The Open-Source Counter-Movement
An important strand of the AI-application landscape is the open-source movement. Meta’s Llama models, released in progressively more capable versions through 2023 and 2024, have produced a substantial ecosystem of derivative products. Mistral, the French company founded by former DeepMind and Meta researchers, has released competitive open models. Several Chinese AI companies — including Alibaba, Baidu, ByteDance, Moonshot AI and Zhipu — have released open or partially open models that have attracted substantial international developer adoption.
The open-source counter-movement has been important for both technical and political reasons. Technically, the availability of open models has produced a more competitive landscape than would otherwise exist, accelerating innovation and reducing the bargaining power of the leading proprietary providers. Politically, the open-source movement has provided alternatives to U.S.-controlled AI infrastructure for organisations and countries with reasons to avoid that dependency.
The Question of Default Position
The most strategically significant question for the next several years of the AI-application category is which products will become the default for which use cases. Operating-system integration — Apple’s integration of OpenAI’s ChatGPT into Apple Intelligence on iOS, Google’s integration of Gemini into Android, Microsoft’s Copilot for Windows — gives certain products structural distribution advantages. Application-level integration — AI features built into existing applications like Notion, Slack, Photoshop and many others — produces a different kind of default-position competition.
The competition among default positions will probably produce different outcomes for different use case categories. The default for “general AI assistant when I have a question” may be different from “AI for writing my email” or “AI for editing my photos” or “AI for searching the web.” The product that wins each of these positions may well be different, and the cumulative landscape will look more like a set of specialised relationships than a single dominant assistant.
The Cultural Adjustment
The broader cultural adjustment to generative-AI applications is ongoing and uneven. Some user populations have integrated the tools into their daily workflows with limited friction. Others remain uncomfortable with the tools’ implications and have continued to avoid using them. Schools, universities, employers and many other institutions are working through how to incorporate the tools into their existing practices.
The cultural questions are likely to remain unresolved for years. The honest assessment is that consumer AI applications have become routine for a substantial share of educated adults in wealthy countries within three years of ChatGPT’s launch, and that the trajectory of adoption suggests they will become routine for a much broader population within the next several years. The structural implications — for labour, education, creativity, communication and the production of culture itself — are still being measured. The next decade will reveal how those implications resolve.