All four companies (Google, Meta, TikTok and Twitter) engaged in the Tech Policy Design Labs in 2021 are working on product innovations and prototypes related to OGBV and in line with their commitments. The following changes have been made since the platforms’ engagement with the TPDL in April 2021. Most of the progress updates listed below are based on public announcements made by each of the four companies.
Twitter has made substantial progress against the commitments in both areas. On curation, it has announced the following features – now all available globally:
On reporting, Twitter began testing an overhauled reporting process in December 2021, aiming to make it easier for people to alert the company to harmful behavior. This new Report Tweet flow is now available globally (Twitter communicated it was available “in all languages” – further research is needed to clarify this). Building on human-centered design, the new approach lifts from the individual the burden of interpreting the violation at hand; instead, it asks them what happened. This method is called “symptoms-first”. Twitter communicated that this enabled a 50% increase in actionable reports during testing.
Twitter’s policy update in December 2021 was seen by the sector as an important change to previous reporting mechanisms, which did not centre the experience of the victim/survivor.
Beyond their specific TPDL commitments around curation and reporting, one should note the following work done by Twitter over the past year:
Twitter also has a Trust and Safety Council with a relatively stable membership over the last year. However, civil society experts questioned whether the format of relatively short calls with Trust and Safety Council members constituted meaningful engagement with CSOs, particularly where OGBV is not a primary focus of discussion in these forums. Some Global South members of the Trust and Safety Council explicitly expressed frustration that they are not engaged as co-creators: they are presented with policy recommendations after Twitter has produced them and asked for advice, with no accountability as to whether their input is integrated.
Problem addressed: When attacks against politically-active women are channelled online, the expansive reach of social media platforms magnifies the effects of psychological abuse by making those effects seem anonymous, borderless, and sustained, undermining women’s sense of personal security in ways not experienced by men. Research, advocacy, and policy development must ensure that women are able to meaningfully participate in two key spaces – the political realm and the online world – and the areas where they intersect.
Approach: Working with in-country partners, NDI developed lexicons of both gender-based harassing language and the political language of the moment, in order to examine the online violence experienced by politically-active women. These lexicons, each developed in local languages (Bahasa for Indonesia, Colombian Spanish, and a mix of Swahili and English in Kenya), were then used to conduct data scraping of a sample group of Twitter accounts within the target population of college-aged women and men who took part in the research.
Twitter was selected as the social media platform for quantitative data analysis because the majority of Twitter interactions are public, thereby enabling NDI to analyze large data sets retrospectively. The timeframe for the Twitter scraping was set within a six-month window of a significant political event – an election, referendum, political scandal or crisis – in each country. This quantitative Twitter analysis was complemented by qualitative analysis of the workshop discussions and responses from surveys administered to the same populations.
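The lexicon-based analysis described above can be sketched in simplified form. Note that NDI's actual lexicons and matching pipeline are not public: the function names, lexicon terms, and simple token-matching logic below are all hypothetical, shown only to illustrate how a lexicon of harassing language can be applied to scraped tweet text.

```python
# Hypothetical sketch of lexicon-based flagging of tweet text.
# NDI's real lexicons (in Bahasa, Colombian Spanish, and Swahili/English)
# and its matching pipeline are not public; everything here is illustrative.

def load_lexicon(terms):
    """Normalise lexicon terms for case-insensitive matching."""
    return {t.lower() for t in terms}

def flag_tweet(text, lexicon):
    """Return the lexicon terms that appear as tokens in a tweet's text."""
    tokens = set(text.lower().split())
    return sorted(t for t in lexicon if t in tokens)

# Placeholder lexicon; real lexicons are contextually and
# linguistically specific, as the research itself recommends.
harassment_lexicon = load_lexicon(["insult", "threat"])
```

A real pipeline would need language-aware tokenisation and multi-word phrase matching; simple token lookup is used here only to keep the sketch self-contained.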
Stakeholders involved: A diverse range of civil society partners from each of the respective countries – Indonesia (5 partners), Colombia (13 partners), and Kenya (16 partners).
Impact: The research (conducted in 2021) called for interventions that were later echoed in the TPDL, including the following:
• Contextually- and linguistically-specific lexicons of online violence must be created and then evolve
• Attention to minority communities and intersecting identities is essential
• If women’s rights initiatives are action- and solution-oriented, otherwise fatigued partners are eager and enthusiastic to engage
• Under-reporting of VAW-P in online spaces exists and merits investigation
On 28th July 2022, Twitter announced that it “aims to continually evaluate and improve the way we share information with the public. This year, we are launching the Twitter Moderation Research Consortium (TMRC). Through the Consortium, Twitter shares large-scale datasets concerning platform moderation issues with a global group of public interest researchers from across academia, civil society, NGOs and journalism studying platform governance issues.”
Based on public announcements, Meta’s progress against the commitments over the past year seems mostly to relate to curation on Instagram, with the following new features available globally:
On reporting, Meta communicated “more prominent” reporting features on Messenger.
Beyond the commitments but linked to OGBV, Meta has been working closely with the UK Revenge Porn Helpline on the launch of stopNCII.org to support victims of Non-Consensual Intimate Image (NCII) abuse.
Problem addressed: Non-consensual intimate images (NCII) are sexually explicit images and videos that are captured, published or circulated without the consent of one or more persons in the frame. They can have a lasting and devastating impact on victims.
For years, photo- and video-matching technology has been used to remove NCII. Victims and experts expressed the need for a stronger approach, adopted across tech companies, that puts victims first and reduces the risk of further spread of an image or video.
Approach: Launched in December 2021, StopNCII.org is a free tool designed to help victims stop the proliferation of their intimate images. When someone is concerned their intimate images have been posted or might be posted to online platforms like Facebook, they can create a case through StopNCII.org to proactively detect them.
The tool uses ground-breaking technology that enables an image to be identified and blocked without the user having to send the photo or a link to anyone: it creates hashes (digital fingerprints) of intimate images directly on the user’s device. StopNCII.org then shares only the hash with participating tech platforms so that they can detect and block the images from being shared online.
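The privacy property at the heart of this flow — only a fingerprint, never the image itself, leaves the device — can be sketched as follows. This is illustrative only: StopNCII.org's matching technology is designed to catch copies of an image, whereas this sketch uses a plain cryptographic hash, and all function names are hypothetical.

```python
# Illustrative sketch of a StopNCII-style flow: the image is hashed
# locally, and only the hash is shared with participating platforms.
# StopNCII.org's real matching technology differs; SHA-256 is used
# here purely to demonstrate the privacy property. Names are hypothetical.
import hashlib

def hash_image_on_device(image_bytes: bytes) -> str:
    """Compute a fingerprint locally; the image itself is never uploaded."""
    return hashlib.sha256(image_bytes).hexdigest()

def platform_should_block(uploaded_bytes: bytes, shared_hashes: set) -> bool:
    """A participating platform re-hashes an upload and checks the shared list."""
    return hash_image_on_device(uploaded_bytes) in shared_hashes

# A user creates a case: only the resulting hash is shared with platforms.
case_hash = hash_image_on_device(b"<image bytes stay on the device>")
shared_hashes = {case_hash}
```

An exact-match cryptographic hash would miss resized or re-encoded copies, which is why production systems rely on image-matching techniques rather than the simple digest shown here.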
Stakeholders involved: This work is the result of a collaboration between Meta and “more than 50 civil society organizations from across the world”. Building on technology developed by Facebook and Instagram NCII pilots, the tool is operated by the UK Revenge Porn Helpline.
Impact: Participating tech platforms are currently limited to Facebook and Instagram. The initial version was launched in English. Meta and UK Revenge Porn Helpline are in the process of working with partners across the globe to translate the tool and adapt it to local contexts. One of the first non-English languages the tool was launched in was Urdu.
There is no data yet available on the impact of this tool. The operational partner – UK Revenge Porn Helpline – has successfully removed over 200,000 individual non-consensual intimate images from the internet.
It is also important to highlight Meta’s Oversight Board as a relevant development around transparency for Meta. Meta’s Oversight Board is an independent group looking to work with civil society organizations internationally, while exploring particular issues around safety and privacy. This Board published its first quarterly update in June 2021. While civil society organizations noted the potential impact of such a Board, its effectiveness, particularly in relation to OGBV, is unclear and yet to be evidenced.
TikTok has not publicly announced any changes to reporting processes or timelines. However, the development of clearer guidance on how to report different forms of content does show progress in supporting users to navigate existing processes.
Beyond their specific commitments, the following initiatives by TikTok over the past year are positive steps towards a better response to OGBV:
Problem addressed: Transgender and non-binary people face content promoting hateful ideologies and explicitly dismissive targeting, including “misgendering or deadnaming,” according to TikTok’s guidelines. Deadnaming refers to the act of calling a transgender person by a name that they no longer use.
TikTok stated that such content was already prohibited. However, creators and civil society organizations pressed TikTok to bring further clarity to its Community Guidelines.
Approach: The policy change was part of a broader update designed to promote safety and security on the platform and to support the well-being of the TikTok community, particularly users identifying as transgender and non-binary. A feature was also added to allow users to choose and highlight their preferred pronouns on their profiles. Additionally, for transparency purposes, TikTok published its most recent quarterly Community Guidelines Enforcement Report: more than 91 million videos – about 1% of all uploaded videos – were removed during the third quarter of 2021 because they violated the guidelines.
Stakeholders involved: The policy update follows pressure from GLAAD, an LGBTQ media advocacy nonprofit, and UltraViolet, a US national gender justice advocacy group. According to GLAAD, the new policy incorporates recommendations that they made to TikTok for how they could better protect women, people of color and the LGBTQ community through an open letter signed by more than 75 stakeholders.
Impact: TikTok’s move to expressly prohibit this harmful content in its Community Guidelines raises the standard for LGBTQ safety online and sends a message that other platforms which claim to prioritize LGBTQ safety should follow suit with substantive actions like these.
TikTok also set up its first Trust and Safety advisory council for the Middle East, North Africa and Turkey (MENAT) region in February 2022, having previously set up similar councils in the US (March 2020), Asia-Pacific (September 2020) and Europe (March 2021). There is yet to be any TikTok advisory council for Sub-Saharan Africa or Latin America.
Google’s progress against the commitments is more difficult to assess based on public information, given the variety of entities within the company. For YouTube, we have not seen any announcements suggesting new positive steps on curation or reporting in relation to OGBV. Very recently, YouTube announced its YouTube Research Program, providing access to its data and tools to external researchers. The potential of this program to support OGBV-related research is yet to be explored.
Jigsaw – which is a Google entity – developed the Harassment Manager tool in collaboration with Twitter and civil society.
Problem addressed: Women journalists, activists and politicians are facing disproportionate risks of online harassment. 63% of women journalists said they had been threatened or harassed online. Of those, roughly 40% said they avoided reporting certain stories as a result.
Although reporting mechanisms exist on social media platforms, their processes and language can make it difficult for victims of abuse to take action. There was a need for a tool that helps users deal with toxic comments following an incident of harassment, and document their experience.
Approach: The open source tool Harassment Manager has been developed by Jigsaw (part of Google), as announced in March 2022. This tool aims to help women journalists document and manage abuse targeted at them on social media, starting on Twitter.
More specifically, it helps users identify and document harmful posts, mute or block perpetrators of harassment, and hide harassing replies to their own tweets. Individuals can review tweets by hashtag, username, keyword or date, and the tool leverages the Perspective API to detect comments that are most likely to be toxic.
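As an illustration of the kind of toxicity check described above, the sketch below builds a request body for the Perspective API's comments:analyze endpoint and extracts a toxicity score from its response. The payload and response shapes follow the API's public format, but the helper names and the 0.8 threshold are illustrative assumptions, and the HTTP call itself (which requires an API key) is omitted.

```python
# Hedged sketch of a Perspective API toxicity check. The request and
# response shapes follow the public comments:analyze format; helper
# names and the 0.8 threshold are illustrative, and the network call
# (requiring an API key) is deliberately left out.

def build_analyze_request(text: str) -> dict:
    """Build the JSON body for a comments:analyze request."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response: dict) -> float:
    """Extract the summary TOXICITY probability from a response body."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def is_likely_toxic(response: dict, threshold: float = 0.8) -> bool:
    """Flag a comment whose toxicity probability meets the chosen threshold."""
    return toxicity_score(response) >= threshold
```

In a tool like Harassment Manager, a check of this kind would be run over a user's mentions and replies to surface likely-toxic content for review, rather than to remove anything automatically.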
Stakeholders involved: This initiative is the fruit of a collaboration between many stakeholders, starting with two tech giants (Google and Twitter). According to Jigsaw, journalists and activists with large Twitter presences have also been involved throughout the whole development cycle. Many NGOs in the journalism and human rights space were also part of this work, including: Article 19, Code for Africa, European Women’s Lobby, Feminist Internet, Glitch, International Center for Journalists (ICFJ), Online SOS, Paradigm Initiative, PEN America, Right To Be (formerly Hollaback!), The Thomson Reuters Foundation.
TPDL may have played a role in the development of this tool. Patricia Georgiou, Director of Partnerships and Business Development at Jigsaw, referred to their post-TPDL commitments as an incentive for them:
“Harassment Manager is the result of several years of research, development, and cross-industry collaborations to deliver on our commitment to tackle online violence against women.”
Impact: The code is now available on GitHub, open-sourced for developers to build on and adapt for free. As a first implementation partner, the Thomson Reuters Foundation announced in July 2022 the launch of TRFilter, which builds on Harassment Manager’s code.