MCA and Meta collaborate to introduce WhatsApp Helpline


The helpline will be available for the public to use in March 2024.


Mumbai: The Misinformation Combat Alliance (MCA) and Meta are working on launching a dedicated fact-checking helpline on WhatsApp to combat media generated using artificial intelligence that can deceive people on matters of public importance, commonly known as deepfakes, and to help people connect with verified and credible information. The helpline will be available for the public to use in March 2024.

The industry-leading initiative will allow the MCA and its associated network of independent fact-checkers and research organisations to address viral misinformation, particularly deepfakes. People will be able to flag deepfakes by sending them to the WhatsApp chatbot, which will offer multilingual support in English and three regional languages (Hindi, Tamil and Telugu).

The MCA will set up a central ‘Deepfake Analysis Unit’ to manage all inbound messages received on the WhatsApp helpline. The unit will work closely with member fact-checking organisations as well as industry partners and digital labs to assess and verify the content and respond to the messages accordingly, debunking false claims and misinformation.

The focus of the program is a four-pillar approach – detection, prevention, reporting and driving awareness – around the escalating spread of deepfakes, along with building a critical resource that allows citizens to access reliable information to fight such misinformation. With millions of Indians using WhatsApp, the collaboration between Meta and the MCA represents a continued effort to empower users with tools to verify information on the service.

Commenting on the partnership, Shivnath Thukral, Director, Public Policy India at Meta, said, “We recognize the concerns around AI-generated misinformation and believe combatting this requires concrete and cooperative measures across the industry. Our collaboration with MCA to launch a WhatsApp helpline dedicated to debunking deepfakes that can materially deceive people is consistent with our pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. As a company that has been at the cutting edge of AI development for more than a decade, we remain committed to working with industry stakeholders to introduce common technical standards for AI detection, transparency solutions and policies, along with empowering people on our platforms with resources and tools that make it simpler for them to identify content that has been generated using AI tools and curb the spread of misinformation.”

“The Deepfakes Analysis Unit (DAU) will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users in India. Its formation highlights the collaboration and whole-of-society approach to foster a healthy information ecosystem that the MCA was set up for. The initiative will see IFCN signatory fact-checkers, journalists, civic tech professionals, research labs and forensic experts come together, with Meta's support. We hope the DAU will become a trusted resource for the public to discern between real and AI-generated media, and we invite more stakeholders to be a part of the initiative,” said Misinformation Combat Alliance president Bharat Gupta.

Meta’s robust fact-checking program in India includes partnerships with 11 independent fact-checking organizations that help identify, review and verify information and prevent the spread of misinformation on its platforms. On WhatsApp, we encourage users to double-check information that sounds suspicious or inaccurate by sending it to WhatsApp tiplines. People can also follow dedicated fact-checking organizations on WhatsApp Channels to receive verified, accurate and timely updates. In addition to the fact-checking program, WhatsApp addresses misinformation by limiting forwards and actively constraining virality on the platform.

Our approach to addressing deceptive synthetic media at Meta has several components, including working to investigate deceptive behaviors such as fake accounts and misleading manipulated media; our third-party fact-checking program, in which fact-checkers rate misinformation, including content that has been edited or synthesized in a way that could mislead people; and engaging with academia, government and industry. We have recently announced an AI labeling policy. In the coming months, we will label images that users post to Facebook, Instagram and Threads when we can detect industry-standard indicators that they are AI-generated.

We have also pledged to help prevent deceptive AI content from interfering with this year’s global elections. The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories, including Meta, pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps.

The MCA is a cross-industry alliance bringing companies, organizations, institutions, industry associations and entities together to collectively fight misinformation and its impact. Currently, the MCA has 16 members, including fact-checking organizations, media outlets and civic tech organizations, and is inviting strategic partners to collaborate in this industry-wide initiative to combat misinformation and create an enlightened and informed society.