Challenges in Analyzing WhatsApp Data: Insights from Research Associate Qazi Firas in India

11/25/2024
[Illustration: a man sitting at his computer, seen from behind, with a laptop and a desk full of papers.]

At the Digital Witness Lab, we build platforms and tools in service of public-interest research. This means a lot of our work—coding, building, and labeling—happens behind the scenes. Our WhatsApp Watch project has led us to create one of the largest data sets of WhatsApp messages. To understand what we’re learning from the data, we turn to our team members working on it. First up: Qazi Firas, one of our India-based Research Associates.

The conversation has been edited for clarity.

What do you do at the Digital Witness Lab? I work on the WhatsApp Watch project. We collect data from public WhatsApp groups across India and then analyze it. I’ve spent a lot of time reading and labeling messages to find common themes, patterns, and categories that naturally emerge.

Why WhatsApp? WhatsApp, at least in India, is used by everyone. It’s our primary means of disseminating all kinds of information—not only regular people’s private messages but also news and political campaigning. It facilitates discourse to a large extent, but WhatsApp doesn’t closely moderate content on the platform.

What is your research process like? For some of our analyses of the themes showing up in WhatsApp groups, I looked at highly forwarded messages. These are messages that have been forwarded five or more times. I read all of the highly forwarded messages in our data set, analyzed what the conversation was, and labeled the messages. I’ve built a broad list of labels—for example, if there is hate speech, it can be against Hindus, Muslims, or another religious minority. If there is political content, it can be in favor of or against the ruling Bharatiya Janata Party (BJP), in favor of or against the Indian National Congress party, or something else. The end goal is to build a large enough pool of keywords so that we can monitor these themes automatically, but right now it’s all manual labeling.
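The interview doesn’t describe the lab’s pipeline in code, but the idea of a keyword pool maps naturally onto a simple lookup. Below is a minimal sketch in Python; the labels and keywords are illustrative placeholders, not the lab’s actual taxonomy.

```python
# Minimal sketch of keyword-based theme flagging, assuming a hand-built
# keyword pool per label. Labels and keywords are illustrative only.
KEYWORD_POOLS = {
    "political_pro_bjp": ["vote bjp", "support modi"],
    "political_pro_inc": ["vote congress", "support gandhi"],
}

def flag_message(text: str) -> list[str]:
    """Return every theme label whose keyword pool matches the message."""
    lowered = text.lower()
    return [
        label
        for label, keywords in KEYWORD_POOLS.items()
        if any(kw in lowered for kw in keywords)
    ]

print(flag_message("Please vote BJP this Sunday"))  # ['political_pro_bjp']
```

A real pool would need per-language keyword lists, which is part of why the labeling is still manual.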

Are you only reading the text of messages? How do you deal with messages that have images, emojis, or other elements? We’re looking at all aspects of a message. In fact, roughly 60 to 70 percent of the messages we collect have some sort of media, whether it’s an image, GIF, or URL. Understanding context is very important in these cases. For example, with emojis, sometimes they’re used to hide what people truly want to say. People know words can be flagged, while emojis can be interpreted in many ways. We found the pig face emoji being used as a slur against Muslims. In everyday usage in India, people rarely use this emoji, but understanding the context in which it was sent allowed us to see the anti-Muslim message. Now, when researching hate speech on our platform, I know to look out for the pig emoji.
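As a small illustration of how such a cue might be surfaced at scale (an assumption on our part, not the lab’s published method), messages containing watch-listed emojis could be routed to manual review:

```python
# Sketch: surface messages containing watch-listed emojis for manual review.
# The watch list is illustrative; whether a given use is hateful still has
# to be judged from context by a human reader.
EMOJI_WATCHLIST = {"\U0001F437"}  # pig face emoji

def needs_review(text: str) -> bool:
    """Flag a message if it contains any watch-listed emoji."""
    return any(ch in EMOJI_WATCHLIST for ch in text)

print(needs_review("look at this \U0001F437"))  # True
```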

What are some other methods you use to understand context? India is a very diverse country—we have WhatsApp messages in at least 15 or 16 languages coming in—so context is essential. But sometimes when we’re pulling data into Excel sheets, we can only see each individual message, not the full conversation in which that message was shared. So the lab built a tool called the WhatsApp “Context Viewer,” which allows us to look up a particular message that we’re analyzing and then see the messages that came before and after it within its WhatsApp group. This would be very difficult to do without the tool, and the user interface even replicates a WhatsApp chat, making it easy to navigate.
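The core lookup behind such a viewer is straightforward to sketch. Assuming, hypothetically, that the messages sit in a pandas DataFrame with message_id, group_id, and timestamp columns, a context window might be fetched like this; the lab’s internal tool is a full UI, so this is only an approximation of the idea:

```python
import pandas as pd

def context_window(df: pd.DataFrame, message_id: str, k: int = 5) -> pd.DataFrame:
    """Return the k messages before and after message_id within its group."""
    target = df.loc[df["message_id"] == message_id].iloc[0]
    # Restrict to the same WhatsApp group and order chronologically.
    group = (
        df[df["group_id"] == target["group_id"]]
        .sort_values("timestamp")
        .reset_index(drop=True)
    )
    pos = group.index[group["message_id"] == message_id][0]
    return group.iloc[max(0, pos - k) : pos + k + 1]
```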

What has the Context Viewer tool allowed you to see? This tool is crucial because unlike other platforms like Facebook or Instagram, WhatsApp invites conversation around whatever you share. You’re not just sending things out into the world; you’re expecting some sort of engagement where people respond. In a lot of the groups we’re tracking, that looks like a constant validation of the content that has been pushed out, even if it’s disinformation or hate speech. But one time while going through the data I found a message of disagreement: Someone was asking another member of a community group not to share certain content. Thanks to the Context Viewer I was able to plug in that message and see that this person was asking a community member to stop sharing hate speech against Muslims, as well as other members’ reactions to that request. Hate speech, misinformation, and problematic content exist across all platforms. But on WhatsApp, these messages don’t exist on their own—we’re often reading conversations. Having a lab that can build tools like the Context Viewer helps us see what the discourse around a certain topic really looks like.

Has the lab built any other tools to support this research? The team has also built a “Reverse Image Search” tool. Basically you can upload an image and see whether it exists within our WhatsApp data or not, and even adjust your search along a scale of similarity. The tool also tells us the virality of certain images. For example, we can see the most shared image in our data set over a week or over a month. This helps us see what’s gaining traction.
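A common way to implement this kind of search is perceptual hashing, which assigns visually similar images nearby hash values. The sketch below uses the open-source imagehash library as an assumed stand-in; the interview doesn’t say what the lab’s tool uses internally. The max_distance threshold plays the role of the “scale of similarity.”

```python
# Sketch of a reverse-image-search lookup via perceptual hashing.
# Assumes a precomputed corpus of {image_id: pHash} for the data set.
from PIL import Image
import imagehash

def find_similar(query_path: str,
                 corpus: dict[str, imagehash.ImageHash],
                 max_distance: int = 8) -> list[str]:
    """Return corpus image IDs whose pHash is within max_distance bits."""
    query_hash = imagehash.phash(Image.open(query_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return [
        image_id
        for image_id, h in corpus.items()
        if query_hash - h <= max_distance
    ]
```

Counting how many distinct messages match each stored hash over a week or a month would give the kind of virality ranking described above.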

How do you keep labels consistent across the team? Are there discrepancies? We use inter-rater reliability testing, which measures agreement between labelers. We do this by taking a set of 100 random messages that we haven’t seen before, labeling each of them, and then meeting as labelers to see where we agreed and disagreed.
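The interview doesn’t name a specific statistic, but Cohen’s kappa is a standard way to quantify agreement between two labelers beyond chance, and scikit-learn ships an implementation:

```python
# Sketch: Cohen's kappa for two labelers over the same set of messages.
# Labels are illustrative; 1.0 means perfect agreement, 0.0 means no
# agreement beyond what chance alone would produce.
from sklearn.metrics import cohen_kappa_score

labeler_a = ["hate_speech", "political", "neutral", "political", "neutral"]
labeler_b = ["hate_speech", "neutral", "neutral", "political", "neutral"]

print(round(cohen_kappa_score(labeler_a, labeler_b), 2))
```

For more than two labelers, a multi-rater statistic such as Fleiss’ kappa would be the usual generalization.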

What happens when you disagree? This showed up while working with Rest of World to analyze how India’s ruling BJP uses WhatsApp as part of its campaign strategy. We did three rounds of inter-rater reliability testing. One type of message we didn’t agree on was messages that featured news articles. WhatsApp group members would sometimes send screenshots or PDFs of news articles about the upcoming elections, and labelers disagreed about whether this was political content in favor of a particular political party or neutral. There was a lot of debate about it. We ended up deciding to create a new label for news media and agreed to only consider these messages political if the article featured extremely manipulated headlines or propaganda.

The Lab has collected more than three million WhatsApp messages—how do you manage that amount of data? What advice would you have for others who want to study WhatsApp? One thing I would say is to not get lost in the data. Sometimes I’ve got an Excel sheet with 4,000 individual messages to sift through, so you can get lost easily. It’s important to be very precise about what you’re looking for, and if you don’t find it after a reasonable amount of time, you need to learn to let go.

You’ve been researching the dissemination of misinformation for quite some time now. What makes your work at the Digital Witness Lab different? We’ve all known how influential WhatsApp has been for political actors in India in terms of changing discourse—but through the lab we’re finally getting to see the strategies and mechanisms that they use. We’re able to see that the increase in division amongst certain communities isn’t happening by itself, but that there’s a very systemic way that people are being targeted by this larger political machine through WhatsApp.
