Welcome to WhatsApp Watch from the Digital Witness Lab at Princeton University. Sign up for our newsletter, where we’ll be sharing our work monitoring one of the most complex and least understood communications platforms, and one that has become the go-to source of information for many people across the globe, especially in the global majority.
Part of the difficulty in assessing what role WhatsApp plays in the growth of familiar ills like propaganda, misinformation, and incitement to violence is that the app, with its ever-growing set of features, serves a wide range of messaging needs in people’s lives.
Once thought of merely as an economical way to send text messages, WhatsApp is now referred to by some as the “world’s favorite social media platform.” Its 2.8 billion users exchange some 100 billion messages through the app each day. And increasingly, contrary to the popular notion, those messages are not just between individuals and small groups of family and friends, but often flow among groups of strangers united by interest, geography, politics, or simply by accident. And while WhatsApp’s ubiquitous chat interface makes the app easy to use, it has the side effect of making it harder to distinguish between private and public communications. Its end-to-end encrypted transport layer protects against interception attacks, but it does not protect users from bad actors in the groups they are part of.
The popularity of the platform, and the way it has embedded itself in its users’ daily lives, have also made it particularly ripe for spreading misinformation and inauthentic content. This is of particular concern in the global majority, where the platform is enormously popular: 20 percent of WhatsApp’s user base comes from India and Brazil alone. There have been a number of documented instances of the platform spurring misinformation-fueled violence in these countries. In these countries, as well as in others such as Nigeria and Mexico, the platform is also used for political propaganda, sometimes innocuous and sometimes not. Political parties increasingly use WhatsApp to communicate quickly, and relatively privately, with potential voters and to influence their worldviews. Journalists and researchers have documented the role the platform played in enabling the rapid transmission of false information during previous national election cycles in these countries.
In response to public criticism, WhatsApp took various steps to limit the spread of inauthentic content, such as launching a tipline, introducing forwarding limits, and capping group sizes. The platform claims these measures have had some effect in reducing the spread of inauthentic content. However, the decentralized and private nature of the platform makes the efficacy of these measures hard to study.
Content is shared by people in chats, not recommended by algorithms in feeds. The shared content is viewable only by other members of the chat and is not accessible on the public internet. It is also end-to-end encrypted, meaning that no one outside the chat, not even WhatsApp, can view the content shared within it.
Because of this design, the techniques typically used to study social media platforms independently, such as web scraping, are ineffective. And unlike other messaging applications such as Telegram, WhatsApp offers no API that developers can rely on for programmatic data collection. To tackle this problem, researchers have developed novel methods for studying the platform, ranging from sock-puppet accounts to data donation. However, traditional academic researchers aren’t incentivized to invest in the engineering effort required to build a platform that collects large volumes of high-quality data while also protecting users’ privacy.
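To make that contrast concrete, here is a minimal sketch of the kind of programmatic collection Telegram’s API permits, using the third-party Telethon library. The credentials and channel name below are placeholders; WhatsApp offers no comparable interface.

```python
# Minimal sketch: Telegram exposes an API that supports programmatic
# collection of public channel content. Requires: pip install telethon.
# API_ID/API_HASH come from my.telegram.org; the channel is a placeholder.
import asyncio
from telethon import TelegramClient

API_ID = 12345                      # placeholder credentials
API_HASH = "0123456789abcdef"       # placeholder credentials

async def collect(channel: str, limit: int = 100) -> None:
    # The context manager connects and signs in with the saved session.
    async with TelegramClient("research_session", API_ID, API_HASH) as client:
        # Iterate over the most recent messages in the channel.
        async for message in client.iter_messages(channel, limit=limit):
            # Each message carries an ID, timestamp, text, and media metadata.
            print(message.id, message.date, (message.text or "")[:80])

if __name__ == "__main__":
    asyncio.run(collect("some_public_channel"))  # placeholder channel name
```

Nothing like this exists for WhatsApp, which is why researchers have had to build their own collection infrastructure.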
This is where WhatsApp Watch comes in.
In 2024 we launched our platform and began persistently monitoring what is being shared in publicly accessible WhatsApp groups. Our focus is on measuring how misinformation spreads through the platform, how effective WhatsApp’s virality-limiting measures are at mitigating the rapid spread of false information, and how the platform is being harnessed by political parties and their consultants to surveil, profile, and target voters.
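To give a sense of what measuring spread can look like in practice, here is a minimal sketch of one technique used in prior WhatsApp research: detecting when the same image resurfaces across groups by comparing perceptual hashes. This illustrates the general approach, not our production pipeline; the group IDs and file paths are placeholders.

```python
# Sketch of cross-group duplicate detection via perceptual hashing.
# Requires: pip install pillow imagehash. Paths and group IDs are
# placeholders; this shows the general technique only.
from collections import defaultdict
from PIL import Image
import imagehash

def track_duplicates(images: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Map each perceptual hash to the set of groups it appeared in.

    `images` is a list of (group_id, image_path) pairs.
    """
    seen: dict[str, set[str]] = defaultdict(set)
    for group_id, path in images:
        # pHash is robust to re-encoding and minor edits, so the same
        # forwarded image hashes (near-)identically across groups.
        h = str(imagehash.phash(Image.open(path)))
        seen[h].add(group_id)
    return seen

hits = track_duplicates([
    ("group_a", "img_001.jpg"),   # placeholder data
    ("group_b", "img_002.jpg"),
])
# Hashes seen in more than one group are candidates for viral content.
viral = {h: groups for h, groups in hits.items() if len(groups) > 1}
print(viral)
```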
WhatsApp Watch builds on our experience working in hybrid research-journalism roles and aims to enable the independent study of WhatsApp by journalists and researchers who don’t have the means or capacity to take on the engineering challenges of collecting data from the platform. Moreover, it is designed to minimize risk and protect the privacy of data donors. Our work is also informed by the pioneering research of Kiran Garimella and Simon Chauchard, as well as others who first built tools to meaningfully study the platform.
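As one illustration of the kind of safeguard involved in protecting data donors, offered as a sketch rather than a description of our actual system, identifiers such as phone numbers can be pseudonymized with a keyed hash before they are ever stored:

```python
# Illustrative sketch of a standard privacy safeguard: pseudonymizing phone
# numbers with a keyed hash (HMAC) so raw numbers never enter the dataset.
# This shows the general technique, not our system.
import hmac
import hashlib
import os

# In practice the key would live in a secrets manager; generated here for demo.
SECRET_KEY = os.urandom(32)

def pseudonymize(phone_number: str) -> str:
    """Return a stable pseudonym for a phone number under SECRET_KEY.

    Unlike a plain hash, an HMAC cannot be reversed by brute-forcing the
    small space of possible phone numbers without knowing the key.
    """
    return hmac.new(SECRET_KEY, phone_number.encode(), hashlib.sha256).hexdigest()

print(pseudonymize("+15551234567"))  # placeholder number
```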
We are kicking off this project in India, where elections are already underway, and will expand to other countries later in the year. In the coming weeks we will begin to publish our investigations, methodology, and data on our website. If you are interested in following along, please sign up for our newsletter on our website.
We are also going to work closely with journalists to support their investigations with our tools and data. If you are a journalist or editor interested in studying the role WhatsApp plays, please reach out to us at info@digitalwitnesslab.org.