Contestability tool could enhance digital rights and build trust in AI

14 November 2024: A standardized contestability or feedback tool built into all AI systems could enhance digital rights and build trust and confidence in AI as a force for good, provided questions of accessibility, cost, liability, scalability and how to satisfactorily resolve complaints can be addressed.

The findings come in a paper published by BSI following workshops that brought together technical specialists, academics, consumer rights experts and others to explore the challenges involved in developing an AI reporting or contestability tool, and the benefits of such a tool over mechanisms designed by individual providers. According to separate BSI research surveying 4,000 people in the UK, India, Germany and China, 62% want a standard way of flagging concerns, issues or inaccuracies with AI tools so they can be addressed.

BSI, the business standards and improvement company, is at the forefront of promoting responsible AI, having recently published the first-of-its-kind global certifiable AI governance standard and certification package (ISO/IEC 42001). Providers are already subject to local data and privacy requirements, yet there is no standardized way of flagging when AI is displaying bias or producing problematic outcomes.

The paper finds that, given the complex international AI supply chain, a shared responsibility model could empower all parties to address issues effectively and transparently. It emphasizes potential benefits to providers: a standardized tool offers a way to gather user feedback to improve tools, to avoid backlash or reputational damage when issues are identified, and to strengthen user trust.

The authors advise that a contestability tool should be simple, using non-technical language suitable for users of varying digital literacy. Raising public awareness of the tool would be critical, as would communicating any wider legal rights. Similarly, clarity would be needed on who receives a contest report, who is liable for harms, and the responsiveness users can reasonably expect. Mechanisms would need to be proportionate to a situation’s severity and maintainable as AI models change, so that issues are not reproduced. For a tool to be effective, the paper says, there must be confidence that contests will be kept confidential to avoid reprisals and assessed independently, consistently and impartially, with assurance that, where required, reports will lead to tangible technical improvements and the possibility of redress.
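To make these requirements concrete, here is a minimal sketch of what a standardized contest report might look like as a data structure. The field names, the severity scale and the ContestReport class are illustrative assumptions for this article, not part of the BSI paper or any published standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class Severity(Enum):
    """Illustrative severity scale; a real standard would define its own."""
    MINOR = 1        # e.g. a factual inaccuracy with limited impact
    SIGNIFICANT = 2  # e.g. repeated biased outputs
    CRITICAL = 3     # e.g. outputs causing concrete harm


@dataclass
class ContestReport:
    """Hypothetical standardized contest/feedback record.

    Captures the elements the paper calls for: a plain-language
    description, severity (so responses can be proportionate),
    a route to the responsible party, and confidentiality.
    """
    system_id: str                # which AI system the contest concerns
    description: str              # non-technical account of the issue
    severity: Severity
    recipient: str                # who receives the report (provider, regulator, ...)
    confidential: bool = True     # protect the reporter from reprisals
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    resolution: Optional[str] = None  # filled in once the contest is assessed


# Example: a user flags a suspected biased output in plain language.
report = ContestReport(
    system_id="example-chat-assistant",
    description="The assistant gave different loan advice based on my name.",
    severity=Severity.SIGNIFICANT,
    recipient="provider",
)
```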

Mark Thirlwell, Global Digital Director at BSI, said: “AI has the potential to be a dynamic force for good, transforming society and providing new ways of delivering healthcare, building homes or producing food, and more besides. But this must be underpinned by confidence amongst users that the guardrails are in place for the safe and ethical use of AI.

“Our focus on building trust led us to this research study into contestability mechanisms and the benefits of a standardized approach. There are certainly many challenges to this, but there is no doubt that such a tool built into all AI systems could enhance digital rights and build trust in AI as a force for good. BSI is committed to playing a role in shaping how businesses globally embrace AI to build a positive future for all.”

Given the international nature of AI development, the interconnectedness of ICT, the dominance of major players, and the likelihood of multiple tools being used together, the paper discusses the complexity of accountability and transparency in the AI supply chain. Issues raised include cases where some actors in the chain adhere to standards while others do not, and the limitations AI providers face in accessing necessary data due to privacy constraints. In some cases, changing a particular AI feature may be out of the provider’s hands entirely.

Other barriers to a standardized tool include the reputational implications of embracing transparency and public reporting, the possibility of contestability being exploited to inflict reputational harm, the enforcement of legal rights across different jurisdictions, liability questions, and keeping pace with AI advancements. To minimize costs, one suggestion was that AI automation could be used to triage contests.
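As an illustration of that triage suggestion, building on the hypothetical ContestReport sketch above, here is a minimal example of how automated triage might route contests by severity before any human review. The keyword heuristic and queue names are invented for the example; the paper does not prescribe any particular mechanism.

```python
# Illustrative only: a trivial rule-based triage step standing in for
# whatever AI automation a real contestability pipeline might use.

URGENT_TERMS = {"harm", "discrimination", "safety", "medical", "financial loss"}


def triage(report: ContestReport) -> str:
    """Route a contest report to a queue before human review."""
    text = report.description.lower()
    if report.severity is Severity.CRITICAL or any(t in text for t in URGENT_TERMS):
        return "human-review-priority"   # escalate immediately
    if report.severity is Severity.SIGNIFICANT:
        return "human-review-standard"
    return "automated-acknowledgement"   # minor issues get a logged response


print(triage(report))  # -> "human-review-standard"
```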

The paper also explores desired features for such a tool, among them a process that allows for joint or collective claims, which can be more effective for addressing systemic issues. Other features could include “bias bounties” (financial incentives for the discovery of unwanted AI system behaviours) or a charter of principles with provisions for AI-driven feedback triage and ethical standards.
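As a further illustration, a bias bounty could reward validated reports in proportion to their severity. The tiers and amounts below are invented for the example and reuse the Severity enum and ContestReport class from the first sketch; the paper does not specify any payout scheme.

```python
# Hypothetical bias-bounty payouts keyed to report severity.
# Amounts are invented for illustration; a real charter would set these.
BOUNTY_BY_SEVERITY = {
    Severity.MINOR: 100,
    Severity.SIGNIFICANT: 1_000,
    Severity.CRITICAL: 10_000,
}


def bounty_for(report: ContestReport) -> int:
    """Payout for a validated report of unwanted AI system behaviour."""
    return BOUNTY_BY_SEVERITY[report.severity]
```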

Download the paper here