Published 17 Dec 2020

Podcast

Critical Literacy for AI Governance Modeling: Growing Digital Ethics in Practice

As the lead creator of Accenture’s Fairness Tool, one of the first commercially available algorithmic tools to identify & mitigate bias in AI systems, Rumman is perhaps the best person to describe why algorithmic approaches alone aren’t enough. Over the past three years, she and her team have focused increasingly on developing the critical thinking skills needed to interrogate the specifics of AI governance models.

There are many tools and frameworks available for digital ethics governance, so the hard work is not making a new one but choosing which one to apply. This contextual decision requires not only the quantitative skills of data science but also the contextual and critical thinking skills that other disciplines can bring to the table. Rumman recommends Tom Mackey and Trudi Jacobson’s work on ‘Metaliteracy’ as a useful toolkit to help all kinds of specialists navigate a digital age that requires us to be fluent not only in our own areas of expertise but in a variety of digital domains.

Echoing prior guests who have said that agreeing terminology from the start is critical to future success, Rumman unpacks the term ‘governance’. A data scientist might mean model governance, the technical specifications that define how a model works & what to do when it goes wrong, whereas others might be speaking at the level of organizational governance mechanisms such as ethics boards or review processes. There are also regulatory mechanisms for governance operating at the industry and national level, as well as a whole ecosystem of civil society pressure organizations that is successfully influencing both actual legislation and the reputational factors that shape how organizations do business.

Rumman also recommends several organizational mechanisms to support better governance of AI ethics. Most important is awareness of who has the power to make & influence decisions over digital projects. Interdisciplinary approaches are critical to success but often difficult to enact in practice; she points to an emergent mechanism within cybersecurity as a particular success: ‘red teams’ that combine many different specialisms to identify and address problems. Second, having a process for critical conversations to take place is key. This can take the form of an ombudsman office that independently represents the interests of various stakeholders, a review board, or an open-door/town-hall policy. But Rumman says that for real change to work, a no-blame culture is essential: consider medical ethics review boards or flight safety investigations, where the focus is on limiting future damage through open and transparent conversations about what went wrong, rather than on identifying & punishing bad actors.

Rumman’s work on Putting Responsible AI into Practice is available to read in MIT Sloan Management Review. You can learn more about her work on responsible AI at her website.

Subscribe to LEF podcasts on Apple, Spotify, or Google.

 
