In a recent announcement, Meta made a striking claim about the expansion of its content recommendation algorithms: the company is preparing to build behavior analysis systems significantly larger in scale than the large language models behind ChatGPT, GPT-4 included. Is such an immense scale truly necessary?
Periodically, Meta reaffirms its commitment to transparency by shedding light on how some of its algorithms work. Sometimes these explanations offer valuable insight; other times they raise more questions than they answer. This instance falls somewhere in between.
In addition to the “system cards” that explain how AI is used in specific contexts or applications, the social and advertising network published an overview of its AI models, noting, for example, that accurately distinguishing roller hockey from roller derby in visually similar videos helps it serve appropriate recommendations.
Meta has certainly been active in multimodal AI, which draws on data from multiple modalities (such as visual and auditory) to improve content comprehension.
Only a few of these models are publicly released, though select researchers are granted access; internally, they come up most often in connection with improving “relevance,” a term often synonymous with targeting.
What makes this intriguing is Meta’s mention of its computational resources. The company says its recommendation models are designed to understand and model people’s preferences using tens of trillions of parameters, orders of magnitude more than the largest language models in use today.
When pressed for specifics, Meta clarified that these tens-of-trillions models are still theoretical: the company believes its recommendation models have the potential to reach that size. The phrasing is a bit like promising 16-ounce burger patties while still serving quarter-pounders. Nonetheless, Meta states its clear intention to train and deploy these extremely large models efficiently and at scale.
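For a sense of what that scale would imply, here is a back-of-the-envelope sketch. The parameter counts and the two-bytes-per-parameter (fp16) assumption are illustrative, not figures Meta has disclosed:

```python
# Rough, illustrative arithmetic: how much raw memory would the weights
# of a model at this scale occupy? Assumes 2 bytes per parameter (fp16)
# and ignores optimizer state, activations, and serving replicas --
# real training needs several times more than this.

BYTES_PER_PARAM = 2  # fp16; an assumption, not a disclosed figure

def weight_footprint_tb(num_params: float) -> float:
    """Raw weight storage in terabytes."""
    return num_params * BYTES_PER_PARAM / 1e12

for label, params in [
    ("GPT-3-class model (175B)", 175e9),
    ("10T-parameter model", 10e12),
    ("50T-parameter model", 50e12),
]:
    print(f"{label}: ~{weight_footprint_tb(params):,.2f} TB of weights")

# GPT-3-class model (175B): ~0.35 TB of weights
# 10T-parameter model: ~20.00 TB of weights
# 50T-parameter model: ~100.00 TB of weights
```

Even at the low end, merely holding the weights would require sharding across dozens of accelerators, before counting optimizer state or a serving fleet, which is presumably why Meta is talking about infrastructure at all.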
It seems highly unlikely that a company would build expensive infrastructure for software it has no intention of creating or using. Meta has neither confirmed nor denied that it is actively pursuing models of this magnitude, but the implication is that it is working toward exactly that.
“Understand and model people’s preferences” should be read as behavioral analysis of users. In reality, an individual’s preferences could probably be captured by a concise list of a hundred or so words. Why a model of this size and complexity should be necessary to handle recommendations, even for billions of users, is puzzling at a fundamental level.
The problem space, however, is undeniably vast. There are billions of pieces of content, each with metadata, plus a multitude of complex vectors encoding correlations, such as followers of Patagonia tending to donate to the World Wildlife Fund and to buy increasingly expensive bird feeders. So it is perhaps not surprising that a model trained on all of this would be large; indeed, Meta claims it is “orders of magnitude larger” than language models trained on practically every accessible written work.
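To make those “complex vectors” concrete, here is a toy sketch of how recommenders typically encode such correlations as embeddings scored by similarity. The item names, dimensions, and vectors are invented for illustration; this shows the general technique, not Meta’s pipeline:

```python
import numpy as np

# Toy illustration: users and items as dense embedding vectors, with
# candidates scored by cosine similarity. All names, dimensions, and
# vectors here are invented; this shows the general technique only.

rng = np.random.default_rng(0)
DIM = 8  # real systems use hundreds of dimensions and billions of rows

items = {
    "patagonia_page": rng.normal(size=DIM),
    "wwf_donation_drive": rng.normal(size=DIM),
    "premium_bird_feeder": rng.normal(size=DIM),
}
# Nudge correlated items toward each other, mimicking learned structure.
items["wwf_donation_drive"] += 0.8 * items["patagonia_page"]
items["premium_bird_feeder"] += 0.6 * items["wwf_donation_drive"]

# A user embedding is often an aggregate of items they engaged with;
# here the user has only interacted with the Patagonia page.
user = items["patagonia_page"].copy()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank all items for this user; correlated items score higher.
for name, score in sorted(
    ((n, cosine(user, v)) for n, v in items.items()),
    key=lambda pair: -pair[1],
):
    print(f"{name}: {score:+.2f}")
```

One design note that makes trillion-scale counts less mysterious for this class of model: in industrial recommenders, such as Meta’s published DLRM architecture, the bulk of the parameters typically sits in enormous embedding tables like these, one row per user, item, or feature value, rather than in deep “reasoning” layers.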
There is no official parameter count for GPT-4, and parameter count alone is not a definitive measure of performance, but GPT-3, the model family behind the original ChatGPT, has approximately 175 billion parameters, and GPT-4 is believed to exceed that while falling well short of the wild claims of 100 trillion. Even if Meta’s statements are somewhat exaggerated, the scale remains overwhelming.
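As a quick sanity check on “orders of magnitude,” here is the arithmetic, using the reported 175-billion figure and two hypothetical points within Meta’s “tens of trillions” phrasing:

```python
import math

# Quick check of the "orders of magnitude" claim. The 175B figure is
# the reported GPT-3 count; the trillion-scale figures are hypothetical
# points within Meta's "tens of trillions" phrasing.
gpt3_params = 175e9

for label, claimed in [("10T", 10e12), ("50T", 50e12)]:
    ratio = claimed / gpt3_params
    print(f"{label}: {ratio:.0f}x GPT-3 (~{math.log10(ratio):.1f} orders of magnitude)")

# 10T: 57x GPT-3 (~1.8 orders of magnitude)
# 50T: 286x GPT-3 (~2.5 orders of magnitude)
```

So even the most conservative reading puts the claim at nearly two orders of magnitude beyond a GPT-3-class model.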
Consider the implications: an AI model as large as, or larger than, any yet built, whose input is every action you take on Meta’s platforms and whose output is a prediction of what you will do and want next. That is, undoubtedly, a disconcerting notion.
Meta is not alone in this field; TikTok pioneered this kind of algorithmic tracking and recommendation, building its social media empire on an addictive feed of “relevant” content that keeps users scrolling relentlessly, to the open envy of its competitors.
Meta’s aim is to dazzle advertisers: hence the boast of building the biggest model and the passages thick with technical jargon, all meant to persuade them that Meta leads in AI research and truly excels at “understanding” people’s interests and preferences.
For those who doubt, Meta offers a number: “more than 20 percent of content in a person’s Facebook and Instagram feeds is now recommended by AI from people, groups, or accounts they don’t follow.” Apparently this is what we wanted all along, and the AI is delivering it remarkably well.
But it is also a reminder of the apparatus at the core of Meta, Google, and the other companies whose primary business is selling ads with ever more precise targeting. The value and legitimacy of that targeting must be constantly reasserted, even as users revolt against it and advertising multiplies and grows more pervasive rather than improving the experience.
Meta never tried the sensible approach of simply asking users to pick from a list of ten brands or hobbies they like. Instead it prefers to watch you browse for a new raincoat and then tout it as an advanced feat of artificial intelligence when raincoat ads follow you around the next day. Whether that approach is actually superior to the former, and if so by how much, remains unclear. The entire web was built on a collective belief in precise ad targeting, and now the latest technology is being deployed to prop that belief up against a new wave of skepticism in marketing budgets.
Evidently, a model with tens of trillions of parameters is deemed necessary to determine what people like. How else could one justify spending billions of dollars training it?