Three Recommendations on Digital Technologies and Data Privacy for the WHO Pandemic Agreement
Tomaso Falchetta, Molly Pugh-Jones and Tinashe Rufurwadzo
In December 2021, the World Health Assembly established an intergovernmental negotiating body to draft and negotiate a convention to strengthen pandemic prevention, preparedness, and response. The negotiations of this WHO Pandemic Agreement are now entering their final stages, and a text may be agreed upon at the World Health Assembly by the end of May. Regrettably, with each round of negotiations the language around human rights has been weakened, and in the latest draft the principle on privacy, data protection, and confidentiality has been deleted. We present here three recommendations for inclusion in the agreement to address the gaps and challenges in pandemic response mechanisms.
1 Transparency, accountability, and data processing
Over the last few decades, global health emergencies have revealed large inadequacies in access to medicines and health services. More recently, the use of digital technologies has introduced new challenges. Although digital technologies for health can help fulfil the right to health, they also pose specific risks which must be recognised and mitigated.
The UN Special Rapporteur on the Right to Health has outlined the ways in which digital technologies in health can impinge on human rights, including the right to privacy. Similarly, the WHO policy on data sharing in public health emergencies articulates that security and confidentiality remain central pillars of decision-making even in emergencies. During the COVID-19 pandemic, WHO and other United Nations entities noted the importance of security and confidentiality, insisting that data protection safeguards be lawful, necessary, proportionate, time-bound, and justified by legitimate public health objectives, in line with the understanding that “the same rights that people have offline must also be protected online.”
However, from the onset of the pandemic, we observed governments and the private sector justifying the processing of health data without clearly articulating the grounds for doing so, or demonstrating that it met the test of necessity and proportionality.
Governments and companies expanded the purposes for which data was originally collected, including by allowing wide data-sharing arrangements, ignoring the data protection principle of purpose limitation. Reports of function creep, with contact tracing apps repurposed for law enforcement goals, emerged in various countries.
The effective protection of people’s data is a necessary precondition to ensure uptake of and compliance with public health measures during pandemics, and to support trust in public health as well as cooperation among states. Without robust protection of privacy, pandemic prevention and responses are bound to fail the people they are meant to protect. Hence, the deletion of a principle on privacy, data protection, and confidentiality sends the wrong message, implying that the well-documented abuses of data during COVID-19 are not an issue worth preventing and addressing in future pandemics.
2 Mitigating healthcare inequities
Inequity in the global digital health landscape extends beyond technology access to encompass issues of data, information, surveillance, and discrimination. The digital divides, influenced by existing inequalities such as racism and gender disparity, disproportionately affect marginalized communities.
Digital divides and the associated inequalities were raised by many participants in a participatory action research study with young adults and communities in Vietnam, Ghana, and Kenya. For instance, one participant stated that ‘apps are creating invisible injustice between people…because there are people who don’t use smartphones, and those apps are creating distance for the poor who can’t access basic needs’. The research found that study participants, including women and girls, people living with HIV, and some key populations such as sex workers, experienced less access to the internet, less privacy (due to the need to share mobile phones with family and partners), and generally less familiarity with online health platforms and tools.
Governments tying access to public services, including healthcare, to a national digital identity exacerbates discrimination, entrenching digital divides and continuing the cycle of health inequality. Surveillance practices associated with digital initiatives contribute to mistrust among stigmatized communities, such as migrants and displaced persons, putting individual and public health outcomes at risk. Racial biases evident in technologies, such as those used during the COVID-19 response, also exacerbate inequities. For example, an independent review conducted in the UK found biases at every stage of the development and use of medical tools and devices. These biases are then magnified in algorithm development and machine learning, confirming, for example, the link between racial bias in pulse oximetry devices and poorer COVID-19 outcomes.
Lastly, the increased use of social media to source health and other information also increases the risk of stigma and discrimination because of the proliferation of dis- and misinformation on these platforms. For example, social media posts on sexual and reproductive health often include stigmatising content targeting the LGBTQ+ community.
To protect human rights, including the right to privacy, to health, and to nondiscrimination, the WHO Pandemic Agreement must acknowledge and mitigate these risks.
3 The role and regulation of industry
Companies worldwide pitched data-intensive products and services to governments facing the COVID-19 pandemic. They often repurposed tools to surveil and track individuals, including location data collected by Big Tech and telecommunications companies, and surveillance and data analysis tools developed for law enforcement agencies.
The complexity of these technologies and the expertise needed to develop and maintain them have led to the growth of a ‘government-industry complex’, whereby governments often rely on a single company for the delivery of a health service, in return for which the company gains access to valuable data it can then use for commercial purposes. This ‘government-industry complex’ has evolved in a regulatory void, and it is very likely that it will play a significant role in future pandemics.
The WHO Pandemic Agreement should not squander this opportunity to establish the obligations of states to regulate and monitor the role of industry in health sector emergency responses.
As negotiations draw to a close, member states have little time left to ensure that human rights are paramount within the Pandemic Agreement. It is imperative that the stand-alone principle on privacy and data protection is reinstated, and that the provisions on human rights accountability and redress mechanisms, including for industry, are strengthened.
At these early stages of digital health, it is critical that we understand and act on the benefits and risks that data and technology pose to human rights and global health.
Tomaso Falchetta is Global Advocacy Coordinator at Privacy International, London, UK.
Molly Pugh-Jones is Advocacy Manager at STOPAIDS, London, UK.
Tinashe Rufurwadzo is the Communications and Digital Engagement Manager at Global Network of People Living with HIV, Amsterdam, Netherlands.
Each of these affiliated organisations is a member of the Civil Society Alliance (CSA) for Human Rights in the Pandemic Treaty.