Taking stock of Big Tech: the 2025 RDR Index
It’s been over a decade since Ranking Digital Rights (RDR) began analyzing the human rights performance of the world’s largest tech companies. The current geopolitical context, with authoritarianism, political instability, and conflict rising globally, makes that assessment more urgent than ever. Add in the fact that these companies are racing to develop and deploy powerful artificial intelligence (AI) technologies before protective standards and guardrails are in place, and it becomes clear that the need for responsible operations is dire. The billions of people around the world who rely on these companies’ services cannot trust them without real transparency and accountability.
The 2025 RDR Index: Big Tech Edition assesses 14 tech giants on their policies and practices related to corporate governance, freedom of expression, and privacy. The RDR Index evaluates multiple services offered by these companies and enables users to see how each company rates in protecting human and digital rights, highlighting performance trends and areas of particular concern. While there are some significant improvements from past years, none of the companies earned an overall score above 50. That’s not good enough.
As Access Now has done in the past, we reached out to the companies about their performance on the RDR Index to push them to do better. We highlighted key recommendations from the RDR Index for each company and asked them to respond to the areas for improvement we flagged.
Below, we share our key recommendations and reflections on the responses we have received so far.
The recommendations
Alibaba
Alibaba should be transparent about how it handles user information. The company should disclose all types of user data it infers and shares, as well as the retention periods for each type of user data it collects. It should also provide users with sufficient options to control how the company uses their data.
Alphabet
Alphabet should conduct robust and regular human rights impact assessments to assess and mitigate risks associated with its processes for policy enforcement, targeted advertising policies and practices, and its deployment and development of algorithmic systems.
Amazon
Amazon should disclose policies on government requests for content and data restrictions and publicly outline how it evaluates and responds to government demands, as well as provide transparency reports with relevant data.
Apple
Apple should improve its algorithm-related transparency by adopting human rights-centered principles and frameworks to guide its development and use of algorithmic systems and publish policies for algorithmic system development.
Baidu
To increase transparency on the use and development of algorithms, Baidu should disclose more information about how its algorithmic recommendation and targeted advertising systems function, as well as the mechanisms behind its use and development of algorithms.
ByteDance
ByteDance’s TikTok should publish a policy that outlines and explains its approach to algorithmic system development. Additionally, the platform should give users the ability to control whether their data is used in algorithmic system development, including for model training.
Kakao
Kakao should begin reporting service-by-service data about its content moderation. Currently, these numbers are aggregated across its many services.
Meta
Meta should ensure that its remedy mechanisms cover all freedom of expression violations and enable users to submit privacy complaints. It should also provide Messenger and WhatsApp users with the ability to appeal content moderation actions.
Microsoft
Microsoft should conduct robust and regular human rights impact assessments to assess and mitigate risks associated with its processes for policy enforcement, targeted advertising policies and practices, and its deployment and development of algorithmic systems.
Samsung
Samsung should show evidence of basic human rights due diligence processes beyond its data privacy risk review. It should also ensure these processes extend to its own policy enforcement, targeted advertising practices, and use and development of algorithmic systems.
Tencent
To improve its human rights due diligence, Tencent should add risk assessments examining its policy enforcement and targeted advertising practices, as well as its use and development of algorithmic systems beyond AI.
VK
VK should reaffirm its commitment to human rights by explicitly committing to upholding freedom of expression and privacy as fundamental human rights, ensuring that these values are clearly embedded in its corporate policies, as it did at the time of the 2020 RDR Index.
X
X should conduct human rights impact assessments to identify risks that its business operations and services may pose to freedom of expression, privacy, and the right to non-discrimination. The scope of these assessments should include targeted advertising policies and the development and use of algorithmic systems.
Yandex
Yandex should publish detailed reports on government censorship demands and how it enforces its content moderation policies.
The responses
Unfortunately, most companies did not respond. As of October 20, we have heard only from Meta and Microsoft.
Meta was quick to respond. The company welcomed its top ranking among peers on governance and provided detailed thoughts on the highlighted recommendations, noting its existing remedy options for users. It also explained that its end-to-end encrypted offerings present somewhat different challenges from its public platforms. We particularly appreciate its offer of continued dialogue on these issues, given the critical importance of improving remedies for privacy and freedom of expression violations on Meta’s platforms. You can read Meta’s full response here.
Microsoft was also responsive, providing a very detailed overview of its impact assessments and responsibility reporting. The company told us it was conducting a third-party review “to assess the human rights most at risk through the company’s supply chain, operations, and product use,” and promised to share a summary of those findings. Of particular interest to us is its assessment of the use of its products for broad or mass surveillance in the West Bank or Gaza. Microsoft said it planned to integrate the Gaza findings into its practices; it has since shared those findings publicly and taken action, a first in the sector. We look forward to the company expanding such reviews to examine the use of its technology in conflicts around the world. You can read Microsoft’s full response here.
It’s time to break the silence
We commend Meta and Microsoft for engaging in this process, while their U.S. peers, including Google and Amazon, have not. The silence from the other companies is deafening, particularly given the grave risk of facilitating human rights violations, including genocide. Independent, third-party analyses like the one from RDR, and follow-up dialogues with civil society, are a critically important path to improving corporate policies and practice.
We wish to underscore the urgent need for transparent reporting, impact assessments, and engagement with both local and global human rights experts on risks in operations and supply chains. As companies and technologies move faster and grow ever more complex, such collaboration is essential for protecting people’s human rights and safety.