You can’t avoid artificial intelligence (AI) even if you want to, courtesy of Meta AI. The era of AI everywhere, all the time, began Monday with the rollout of an AI-powered smart assistant across Meta apps, including Instagram, Facebook, and WhatsApp, in India, one of Meta’s most critical markets. Billions of users will now interact with the technology for the first time, as Meta puts AI front and centre in the interfaces of the applications people engage with every day.
Meta AI is pitched as a conversational chat window where you can ask questions and generate images. It works and behaves much like OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini. But despite its popularity and large user base, ChatGPT is not yet part of most people’s everyday routines. That is what sets Meta AI apart: it is embedded deep in the app interfaces we interact with most throughout the day on our smartphones, making the technology unavoidable.
The AI chatbot is integrated into the search and messaging features across Meta’s family of apps, with processing done in the cloud. If you don’t see Meta AI yet, check back later. The easiest way to spot it is its logo: a blue-and-pink ring that occasionally glows. On Facebook, tap the search icon at the top and you will find the search bar replaced with one that says, “Ask Meta AI anything.” On Instagram, WhatsApp, and Messenger, Meta AI sits prominently above the search bar and also appears as another chat. Start typing questions to interact with it. There is a lot you can do with Meta AI: animate an image, request a summary of news, or search for things like Reels. It is essentially the same idea as ChatGPT, now built inside WhatsApp or Instagram.
AI is everywhere
Meta’s aggressive push to bring artificial intelligence to its most popular apps shows how eager big tech companies are to insert the technology into our digital lives. The social media giant started slowly, adding generative AI to its apps as a beta feature in select markets to gauge how users respond to a technology that is still in its infancy and whose broader implications remain unclear. But because Mark Zuckerberg’s Meta aims to scale its AI, it now wants to expand it across its most popular apps as fast as possible, a strategy it hopes will put it ahead of its peers by making the technology mainstream through a massive user base.
But Meta is not alone in betting on artificial intelligence and meaningfully integrating it into the most-used apps and consumer devices. Last week, Apple unveiled Apple Intelligence, its implementation of AI. Apple Intelligence is not a generative AI app or a chatbot like ChatGPT. Rather, it is a layer of artificial intelligence embedded in Apple’s upcoming operating systems. Its features include a revamped Siri, summaries of web pages, help composing emails, and even the ability to create unique emojis on demand, among other things.
Microsoft, too, is integrating its Copilot AI chatbot into Windows laptops as well as its core applications. Google is not far behind either, speeding up the development of AI models and bringing them to products like Search, Docs, Gmail, and Android smartphones. Well-funded startups including OpenAI, Anthropic, and Perplexity are also aiming to integrate their cutting-edge AI models into more apps and services.
Meta’s installed base is seen as an advantage
Meta’s bet that it can reach billions of people with its AI at scale also rests on the fact that users don’t need to buy a new device or pay a fee to use Meta AI. That gives Meta leverage over other tech companies, at least for now. Take Apple Intelligence, for example. Technically, Apple is not charging users for AI features, but the catch is that because Apple Intelligence runs on-device, it requires a lot of processing power, so it is limited to Apple’s latest devices with select Apple Silicon chips. This approach, which the company has said is a technical constraint rather than an intentional one, will force users to upgrade to newer devices to use Apple Intelligence.
Microsoft is doing something similar. Although it is putting various Copilot features in everything from Office to Teams, its big consumer push is to integrate Copilot into the Windows and Surface experience. To get the full Copilot experience, users need to buy premium Copilot+ PCs featuring Qualcomm’s Arm-based chips, released last week. These devices have powerful neural processors that enable on-device AI and unlock the complete AI experience. Meanwhile, smartphone makers bringing AI features to their phones typically integrate Google’s AI, which gives them access to the latest technology, but consumers may still have to pay to upgrade to newer devices to use the latest generative AI features. Other companies, such as Amazon, are contemplating a fee for their AI-powered Alexa voice assistant.
Meta’s ‘open source’ AI model approach is different
Meta’s approach to unlocking generative AI’s potential to its advantage also comes down to how the company builds the AI models powering its AI features. Meta is among the few tech companies to “open source” its AI models, which means anyone can incorporate its AI tech into apps and services for free. That gives Meta a huge leg up against OpenAI and other leading AI companies, which are not willing to open-source their proprietary models.
Meta describes Llama 3, its latest large language model, which powers the Meta AI assistant, as the “best open models that are on par with the best proprietary models available today.”
In AI, large language models and foundation models play an important role. They are complex pieces of software trained on massive piles of data to make predictions or generate text. Tech companies need to constantly train and refine these models to improve their capabilities, and training does not come cheap: OpenAI’s most recent model, GPT-4, is estimated to have cost around $100 million to train, several times more than GPT-3.
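To give a sense of where such figures come from, a common back-of-envelope heuristic puts training compute at roughly 6 × parameters × training tokens in floating-point operations, which can then be converted into GPU hours and dollars. The sketch below uses illustrative, assumed numbers (not figures reported by Meta or OpenAI):

```python
def training_cost_usd(params, tokens, flops_per_gpu_hour, usd_per_gpu_hour):
    """Rough estimate of the dollar cost of one training run.

    Uses the widely cited ~6 * N * D heuristic for transformer training
    compute (N parameters, D tokens). All inputs are assumptions.
    """
    total_flops = 6 * params * tokens              # heuristic, not exact
    gpu_hours = total_flops / flops_per_gpu_hour   # compute -> GPU time
    return gpu_hours * usd_per_gpu_hour            # GPU time -> dollars

# Illustrative inputs only: a 70-billion-parameter model trained on
# 15 trillion tokens, on GPUs delivering an assumed ~4e17 useful
# FLOPs per hour, rented at an assumed $2 per GPU-hour.
cost = training_cost_usd(70e9, 15e12, 4e17, 2.0)
print(f"~${cost / 1e6:.0f} million")
```

With these assumed inputs the estimate lands in the tens of millions of dollars, which is consistent with the order of magnitude of the publicly reported figures above; real costs depend heavily on hardware efficiency, failed runs, and experimentation.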
But Meta’s open-source move is not new. Tech companies have long used open-source technology either to catch up or to get ahead of the competition. Google did the same when it open-sourced its Android mobile operating system to better compete with Apple’s iPhone, a move that gave it a lead in the market and a large chunk of the smartphone segment.
Privacy concerns are still an issue
Privacy watchdogs and advocates, however, have raised concerns about how Meta uses data to train its AI services. The company’s track record on privacy has been questionable, and with Meta placing AI chatbots across its apps to make the much-hyped technology omnipresent, questions are being raised about what the social media giant will do with people’s information. Meta says it complies with privacy laws and respects people’s privacy while training its generative AI models.
“We didn’t train these models using people’s private posts. We also do not use the content of your private messages with friends and family to train our AIs,” Meta said in a blog post last year.
Meta has been using public posts to train AI models in the US and other markets where Meta AI has been live. However, after a backlash from users, the company had to pause training AI models on data from its users in the European Union and the UK. For now, Meta has delayed the release of Meta AI in Europe amid privacy and regulatory hurdles.
“We’re disappointed by the request from the Irish Data Protection Commission, our lead regulator, on behalf of the European DPAs, to delay training our large language models (LLMs) using public content shared by adults on Facebook and Instagram—particularly since we incorporated regulatory feedback and the European data protection authorities (DPAs) have been informed since March,” Meta wrote in a blog.
The lack of trust in Meta and what it will do with public data to train AI models may be the biggest barrier to the social media company’s mission to put AI everywhere. While India and the US have less strict privacy laws, data protection regulations in Europe and the UK put consumer privacy first. Europe’s GDPR and DMA have been tough on big tech, especially Meta, which needs vast amounts of data to train its AI systems. The same applies to every tech company looking to roll out AI features on devices and apps.
Faith in tech companies is already at a low after years of questionable practices, and new laws and regulations are forcing them to change their business models and products. Apple is reportedly delaying its Apple Intelligence generative AI tools in Europe, citing EU law. This is despite the company’s assurance, while unveiling Apple Intelligence at its recent developer conference, that most AI processing will happen on-device on Apple’s chips, with some requests routed through cloud servers to use larger, more complex language models. Apple says it has created secure private servers that will not store information, but it has not clarified whether users will know when their data leaves their device, or whether they can turn the feature off. Its tie-up with OpenAI’s ChatGPT has also sparked privacy concerns, although Apple says no personal data is stored by the ChatGPT maker.