4.4.2024

Blog: AI and Epistemic Rights

Image created with Adobe Firefly, with the prompt: Epistemic Rights and Artificial Intelligence.

AI is everywhere and has been working successfully for a while. For example, it helps us search for content online, personalize social media feeds, and spellcheck texts. Behind the scenes, it forecasts the weather and events in financial markets. It also streamlines processes in fields as different as journalism and medicine.

Still, AI debates have been ubiquitous for the past year, whether in the news or in policymaking circles. To be sure, guidelines and even regulations for AI have been under consideration for some time: UNESCO’s Recommendation on the Ethics of Artificial Intelligence was adopted by its Member States in 2021. That same year, the European Union drafted its AI Act, which the European Parliament approved in mid-March 2024.

In particular, the rise of generative AI (GenAI) as an everyday tool has recently prompted increasing concern about its development. Worries include potential new forms and massive volumes of maliciously created disinformation and hate speech, as well as biased and “hallucinating” content that AI produces by accident due to the limits of its data. Indeed, many existing policies and regulatory ideas, including the AI Act, already focus on mitigating the risks posed by AI.

The Less Visible View: Citizens’ Epistemic Rights

A less-considered aspect is that AI development is intricately intertwined with the expansion of tech giants’ power over technological infrastructures. This further solidifies global technological dependencies that regulatory efforts such as the AI Act cannot address. At the same time, certain AI developments, such as those concerning military technology, remain outside public and policy debates. When these dimensions of the proliferation of AI are not covered in the media or debated in parliaments, citizens have no opportunity to understand, or have a say in, the directions AI development should take.

More fundamentally, AI, in its many forms and applications, significantly impacts how we access, gather, and process information. Because AI-generated information is “synthetic”, i.e., it has no identifiable origin, citizens lack the ability and the tools to assess its veracity. In addition, the data used by large language models (LLMs) carries linguistic, cultural, ethnic, and other biases and inequalities – partly because the material is largely based on English-language data, machine-translated into the user’s desired language.

These features of AI contribute to an epistemic crisis: they fundamentally alter something embedded in the idea and ideal of democracy – common forums where we can exchange trustworthy knowledge and culture. In the mass media and early digital media eras, what enabled such forums to form, and citizens to participate in them, was the access to and availability of diverse content, together with privacy and dialogical modes of public communication.

Even before the proliferation of everyday AI, digitalization had challenged the existence and function of these shared communicative spaces. In his recent book Epistemic Rights in the Era of Digital Disruption (Palgrave 2024; open access), Hannu Nieminen argues for securing citizens’ rights to trustworthy, accurate, and accessible knowledge that they have the competencies to understand and use:

For democracy to adhere to its normative principle, citizens must have fundamental epistemic rights related to knowledge and understanding. These include: 

  • Equality in access to and availability of all relevant and truthful information that concerns issues of will formation and decision-making, 
  • Equality in obtaining competence in critically assessing and applying knowledge for citizens’ own good as well as for the public good, 
  • Equality in public deliberation about will formation and decision-making in matters of public interest, 
  • Equal freedom from external influence and pressure when making choices. 

Promoting Epistemic Rights to AI, with AI

Epistemic rights are not a new concept but have a long history in philosophy and related fields. Recently, however, discussions about the right to knowledge have focused on communication and media. Concerned voices have warned of an epistemic crisis in the public realm and public discussion, caused by the avalanche of online content – often indiscriminate in quality and veracity – and by the way we process that information. In the era of AI, these rights can be argued to be central – yet they are seldom debated in academia, let alone in public arenas or policy-making circles. And yet, AI could also be seen as a promoter and protector of citizens’ epistemic rights:

Equality in access to and availability of all relevant and truthful information can be promoted, for instance, by allowing users to understand – and influence – the ways in which algorithms provide content to them. This means people can make informed choices about the information and entertainment they consume. In addition, citizens can be provided access to and included in transparent policy debates and decisions on how and where AI is used in public and private sectors. AI applications can enhance people’s abilities to participate in politics and other forms of engaged citizenship. A prime example is machine translation, which can support knowledge-sharing and interaction for groups outside of linguistic majorities. 
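To make the idea of algorithmic transparency concrete, here is a minimal sketch in Python. All names (`FeedRanker`, the `recency`, `engagement`, and `diversity` criteria, the weights) are invented for illustration and do not describe any real platform’s system; the point is only to show what a feed could look like if its ranking criteria were inspectable and adjustable by the user:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    recency: float      # 0..1, newer is higher (hypothetical criterion)
    engagement: float   # 0..1, predicted clicks/likes (hypothetical)
    diversity: float    # 0..1, distance from the user's usual topics (hypothetical)

@dataclass
class FeedRanker:
    # The weights are visible to and editable by the user, not hidden.
    weights: dict = field(default_factory=lambda: {
        "recency": 0.4, "engagement": 0.4, "diversity": 0.2})

    def score(self, item: Item) -> float:
        # Weighted sum of the ranking criteria.
        return sum(getattr(item, k) * w for k, w in self.weights.items())

    def explain(self, item: Item) -> str:
        # Show the user *why* an item ranks where it does.
        parts = {k: getattr(item, k) * w for k, w in self.weights.items()}
        detail = ", ".join(f"{k}: {v:.2f}" for k, v in parts.items())
        return f"{item.title} -> score {self.score(item):.2f} ({detail})"

    def rank(self, items: list[Item]) -> list[Item]:
        return sorted(items, key=self.score, reverse=True)

ranker = FeedRanker()
feed = [
    Item("Local election explainer", recency=0.9, engagement=0.3, diversity=0.8),
    Item("Viral celebrity clip", recency=0.5, engagement=0.9, diversity=0.1),
]
# The user can see how each item earned its place in the feed...
for item in ranker.rank(feed):
    print(ranker.explain(item))
# ...and change the trade-off themselves, e.g. favouring diverse content:
ranker.weights["diversity"] = 0.6
```

The specific weights are beside the point; what matters is the interface. When the criteria behind a feed are inspectable and adjustable, users can make the kind of informed choices about the information and entertainment they consume that the paragraph above calls for.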

Still, AI can help ensure epistemic rights only if it is accompanied by sufficient AI literacy and, relatedly, by people’s opportunities to decide what kind of AI they want. Policy-makers and developers of AI systems should ensure that citizens have real opportunities to influence the direction of the development of systems that utilize artificial intelligence. To do so, citizens need the capabilities to understand the forms and uses of AI, to recognize their own opportunities to benefit from useful developments, and to protect themselves against the harmful consequences of AI. Ensuring literacy and participation in AI decision-making are the most urgent tasks for policymakers and others wishing to promote AI for democracy.

Writers:

Hannu Nieminen of DECA is a Professor at Kaunas University, Lithuania, and Emeritus Professor at the University of Helsinki. 

Kirsi Hantula is a Leading Specialist for the Digital Power and Democracy project at the Finnish Innovation Fund Sitra.

Minna Horowitz of DECA is a Docent (Adjunct Professor) at the University of Helsinki, Finland, and a Fellow at St. John’s University, New York.
