AI and Equality


Community Commentary: 

Artificial Intelligence, Cultural Rights, and the Right to Development

AI & Equality Community Commentary in response to OHCHR’s Call for input for the EMRTD study “Artificial Intelligence, Cultural Rights, and the Right to Development”

This AI & Equality Input Comment responds to the Office of the High Commissioner for Human Rights (OHCHR)’s call for input on its thematic study “Artificial Intelligence, Cultural Rights, and the Right to Development.”
It offers a comprehensive analysis of the potential benefits, existing risks, and necessary regulatory steps concerning Artificial Intelligence (AI) and cultural rights, with a specific focus on the implications for the right to development in developing and least developed countries.
The community commentary highlights a significant divergence of potential long-term outcomes, ranging from profound cultural impoverishment due to homogenization and control to a more optimistic scenario of democratized creation and enhanced preservation. Ultimately, it strongly argues that effective, binding AI regulation is essential to safeguard cultural rights, address systemic inequalities, and ensure that technological progress genuinely enhances cultural diversity and equitable participation globally.

✍️ Community Authors: Emma Kallina, Abdullah Hasan Safir, Amina Soulimani, Anesu Makina, Anna-Maria Gueorguieva, Anna Neumann, Ann Borda, Ariane Bar, Chandrashekar Konda, Cinthya Vergara, Francesca Lucchini, Majiuzu Daniel Moses, Özge Çağlar, and Warren Bowies

Key Findings and Recommendations

The input identifies several key areas where AI can benefit cultural rights and the right to development, as well as areas where it poses significant risks:

  • Potential Benefits of AI for Cultural Rights

    • Preservation and Sharing of Local Knowledge: AI systems can be utilized to capture and analyze local knowledge, such as traditional agricultural practices and information on local flora and fauna.

    • Low-Resourced Languages: Community-led projects that train Large Language Models (LLMs) on low-resourced languages are of particular interest for language preservation. Examples include Lelapa.ai in South Africa, which designs LLMs for various African languages, and the Traductor Rapa Nui project on Easter Island.

    • Increased Accessibility: AI can lower language barriers for underrepresented communities and increase accessibility for people with disabilities, for instance, through tools for the visually impaired or automatically generated alt texts.

    • Cultural Heritage: AI can help remove barriers posed by time or space through virtual or augmented reality tours of museums and cultural heritage sites.

  • Disproportionate Risks and Drawbacks

    • Exacerbated Digital Divide: Developing and least developed countries are largely excluded from AI’s benefits due to inadequate infrastructure, limited resources, and severe language barriers, as most online training programs and LLMs are optimized for English.

    • Cultural Bias and Homogenization: AI models, especially Generative AI (GenAI), are dominated by the cultural norms and aesthetics of English-speaking countries, leading to misrepresentation, stereotyping, or outright failure to render local forms of art and architecture from non-Western cultures. This perpetuates a “white default” and constitutes representational harm.

    • Appropriation and Exploitation: GenAI models are predominantly trained on vast datasets of publicly available cultural content (text, art, video) without adequate consent or compensation for rights-holders, placing the burden of protection on the creator. This is particularly severe when traditional knowledge and religious symbols of certain communities are appropriated and violated.

    • Devaluation of Creative Labour: AI risks replacing creative workers, devaluing their labour, and making creative careers economically less viable and less stable, thereby undermining the cultural right of creators to benefit from their work.

    • Algorithmic Bias and Censorship: Algorithmic bias generates systematic errors reflecting existing societal discriminations and reinforcing Western cultural frameworks, forcing minority cultures to conform to dominant technological paradigms or face digital exclusion. AI-driven content moderation systems, lacking cultural competence, disproportionately censor cultural expressions from the Global South.

    • Disproportionately Affected Groups: Independent artists, small creative businesses, and artists from marginalized or non-Western cultures face a “triple exclusion”: limited access, extensive cultural data extraction without benefit, and massive under-representation/distortion in AI systems, leading to “algorithmic erasure”.

       
  • Protecting Cultural Rights Through Regulation

    • Regulation is Necessary: Self-regulation by technology companies is fundamentally insufficient due to market concentration and the conflict between corporate interests and the public good. Binding AI regulation is deemed effective and necessary.

    • Key Regulatory Mechanisms: A multi-layered regulatory framework, combining international treaties with diverse domestic regulations, should be implemented. Key mechanisms include:

      • Algorithmic Impact Assessments: Requiring comprehensive evaluation of AI systems’ effects on cultural rights, involving non-technical experts.

      • Opt-in Mechanisms: Legislating for an opt-in mechanism as the default for using cultural data in commercial AI models.

      • Transparency Requirements: Mandating AI companies to disclose the training data sources used.

      • Licensing and Compensation Schemes: Establishing frameworks that require GenAI developers to negotiate licenses and implement royalty or compensation systems.

We’re creating a global community that brings together individuals passionate about inclusive, human rights-based AI.

Join our AI & Equality community of students, academics, data scientists and AI practitioners who believe in responsible and fair AI.