Artificial Intelligence (AI) and other frontier technologies ultimately have to address real business problems. That means governance practices should do so as well. In my efforts to support businesses grappling with these issues, I have seen six quick wins that have made a positive difference in tackling governance while solving problems.
Background: Without a doubt, AI offers organisations a compelling opportunity to capture market share, innovate to delight customers, personalise service to realise strong engagement and uplift the accuracy and performance of internal operations. Until now, the focus on responsible use of AI has centred on big tech. But as the opportunity to leverage data-centric technology explodes across broader sections of the economy and governmental services, so too does societal expectation that these tools are used with trust at the centre.
The past five years have witnessed great strides in understanding the promise and the risks inherent in the use of frontier and predictive technologies. That’s had a positive impact as organisations increasingly create principles and frameworks and look to establish new organisational answers to thorny ethics questions. Despite this, leaders are still relatively new to translating high-level goals into nudges and interventions that address the realities of their team members’ experience harnessing this technology. That’s not something a single leadership body can address: the breadth and scope of projects using predictive technology demand that these decisions are delegated away from the executive and board, out into the organisation. It’s not enough to announce a set of general ambitions and assume adoption will take care of itself.
While organisations get a lot right in their work to shift their teams towards a culture of ethical use, focusing on today’s real-world, as-is challenges will deliver a swifter shift to positive outcomes.
Here are my top six quick wins for organisations struggling to move from principles to practice.
Why: When I onboard a client, I always discuss their challenges. The number one insight I get is “these principles we have are all well and good, but how do we apply them?”
In an echo of corporate values statements from the 2000s, principles frameworks and codes of conduct for responsible use of frontier tech can earn a reputation for being undeniably positive but ultimately not particularly instructive or helpful. Applying codes, principles, virtues and other goals to specific situations is the number one challenge for any organisation embracing responsible use. Accountability, transparency, fairness and other principles only happen when the people making decisions on the floor – on appropriate use, safeguards, design choices, data use and so on – are enabled and empowered to make intentional decisions.
What: Pair any activities to create or communicate tech ethics principles and responsible-use frameworks with interventions that get these usefully discussed and applied at key project decision points. That means setting clear red lines at the top, but also providing guidance and an expectation that project teams make fully informed decisions to manage risks and threats in less clear-cut situations. Make teams effective by giving them the ability, bandwidth and authority to take those intentional, accountable decisions. Confirm that decisions actually mitigate those risks by ensuring they reach implementation and that their effectiveness is reviewed.
Example: An organisation’s HR team has the option to use video-interview technology combined with AI to upgrade its recruitment. The team has been trained to understand the risks of negative outcomes related to bias in hiring, from concept through to live operation, and embraces its role in taking considered decisions on appropriate use, constraints and safeguards as the project progresses. They use the company’s Ethics Principles as tools to guide these discussions.
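One concrete safeguard a team like this might put in place (my illustration, not a prescription from this example: the counts are invented and the four-fifths rule is just one common yardstick) is a routine adverse-impact check on interview outcomes by group:

```python
# Illustrative only: a minimal adverse-impact ("four-fifths rule") check an HR
# team could run on interview-stage outcomes. All counts are made up.
pass_counts = {"group_a": (120, 480), "group_b": (60, 360)}  # (advanced, assessed)

rates = {g: advanced / assessed for g, (advanced, assessed) in pass_counts.items()}
reference = max(rates.values())  # highest selection rate as the comparison point

for group, rate in rates.items():
    ratio = rate / reference
    flag = "review for bias" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

The specific metric matters less than the fact that the project team owns a repeatable check it can tie back to the principle it serves.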
Why: Principles and roadmaps can sometimes get stuck in high-level language and lack specificity. Translating them through an Enterprise Risk Management (ERM) lens can then produce untameable risks that defy inspection and evade mitigation.
The complexity of technology projects, especially those using multiple third parties, sets a pace and pressure that can spare little resource for additional activities. The easier it is for a project team and stakeholders to understand the relevance of risks, the better able they will be to identify suitable responses. That prevents continual escalations seeking senior management rulings or a risk-averse approach that avoids trouble but hinders innovation.
What: Ensure and require that risks are actively reviewed for local relevance and impact when cascading from enterprise risk to department, team and project-level risks. Never let projects take on generalised, non-specific risks copy-pasted from principles frameworks. Embed ethics-trained staff or external expertise in risk workshops or bow-tie sessions to help sharpen the language and ensure the cascaded ethics risks speak to the team.
Example: The head of ERM has identified “negative brand damage through perception of discrimination in AI use” as an enterprise risk. The marketing team has held risk workshops and identified strong relevance to its use of technology in loyalty programs. In turn, the Hyper-Personalisation pilot project team takes this further and identifies specific risks arising from the third-party product it is considering, which uses customer purchasing data to predict age, ethnicity and gender.
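To make the cascade tangible, here is a sketch of how that enterprise risk might be restated with increasing local specificity rather than copy-pasted verbatim into every register (the identifiers, wording, owners and mitigations are my own illustration):

```python
# Hypothetical cascade of one enterprise risk down to a specific project risk.
risk_cascade = {
    "enterprise": {
        "id": "ER-14",  # invented identifier
        "statement": "Negative brand damage through perception of discrimination in AI use",
        "owner": "Head of ERM",
    },
    "department": {
        "parent": "ER-14",
        "statement": "Loyalty-programme personalisation perceived as profiling or excluding customer segments",
        "owner": "Marketing",
    },
    "project": {
        "parent": "ER-14",
        "statement": ("Third-party hyper-personalisation product infers age, ethnicity and gender "
                      "from purchase data without customer awareness or a documented basis"),
        "owner": "Hyper-Personalisation pilot lead",
        "mitigations": [
            "vendor due diligence on inferred attributes",
            "suppress inferred protected attributes in targeting",
            "privacy and ethics review before go-live",
        ],
    },
}

for level, risk in risk_cascade.items():
    print(f"{level:>10}: {risk['statement']}")
```

Each level stays traceable to the enterprise risk, but the wording at project level is specific enough for the team to inspect and mitigate.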
Why: It’s tempting to plug gaps for new ethics-centric risks with a highly engineered governance framework that answers all questions: documents packed with the minutiae of governance, detailed RASCIs on all roles, meeting agendas, forums, documents and activities. For many organisations, that’s a level of standardisation that doesn’t exist for most other areas of governance. A lack of flexibility, and an engineer incentivised to build the perfect machine, mean it will end up as shelf-ware. Creating a rigid structure increases the likelihood of resistance and “not invented here” rejection. Even where it is actually used, it encourages a tick-box mentality rather than embracing the goals of responsible use.
What: Pair objective-focused controls with flexible guidance and practical tools for actors in the organisation. This allows teams to easily accommodate new requirements into their existing work whilst meeting the required standards. Providing options and suggestions for addressing risks of negative outcomes, rather than tick-box activities, increases teams’ involvement in designing solutions and reinforces a company-wide culture of Responsible AI. Make clear the standard that needs to be met, but don’t dictate the method to achieve it unless you really must.
Example: An organisation is investigating how best to enable the adoption of the agreed AI Ethics Principles and Code of Conduct throughout the organisation. They provide a set of online/offline collaboration tools that project, data science, procurement and data/privacy risk teams can use to understand the controls’ relevance and identify appropriate actions. These are rolled out with practical hands-on training and a minimum of governance overhead in reporting.
Why: We’re used to the idea of IT as engineering: we specify requirements and the engineers build it to give us what we want. It all feels like pocket calculators: it either works or it’s broken. The problem is that data science behaves more like an experiment: we can change the inputs, but we might get surprising, or disappointing, outcomes. That means when we set out a list of requirements for addressing responsible-use concerns, we need to understand the potential trade-offs.
In most use cases, the required knowledge to navigate and understand the trade-offs and dilemmas of using predictive technology resides across the data-science community, business subject matter experts and voices from impacted groups. Siloed data science expertise that looks inwards means you get a business that doesn’t fully understand the risks or the opportunities. That can lead to negative performance and poor outcomes despite the business’ best effort to do the right thing.
What: Establish a partnership between the data science team and the business for any project using predictive technology, rather than leaving data science expertise siloed. Agree the level of understanding the business team needs at each project stage, and create structures in which trade-offs and dilemmas can be surfaced, discussed with subject matter experts and impacted voices, and prototyped before decisions are locked in.
Example: After a few close calls where business teams implemented third-party AI technology without understanding the impact, the Data Science Team agrees to provide a partnership approach for all relevant projects. They ensure projects with predictive technology always have a named data science resource assigned, set the levels of understanding required at each project stage and provide structures in which the project team can discuss concerns and prototype possible options.
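As a toy illustration of the “experiment, not calculator” point and the trade-offs that follow (the data, scoring rule and thresholds below are entirely synthetic assumptions of mine): the same screening scores, cut two different ways, give different accuracy and different selection-rate parity, and neither setting is self-evidently correct.

```python
# Entirely synthetic sketch: one set of model scores, two decision rules,
# two different accuracy / selection-rate outcomes. Nothing here is "the"
# right answer; it shows why the trade-off needs a joint business decision.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
suitable = rng.random(n) < 0.5                # ground-truth suitability
score = 0.4 * suitable + 0.1 * (group == 0) + rng.normal(0, 0.15, n)  # score leaks group

def summarise(selected):
    accuracy = (selected == suitable).mean()
    rate_a = selected[group == 0].mean()
    rate_b = selected[group == 1].mean()
    return f"accuracy {accuracy:.1%}, selected A {rate_a:.0%} vs B {rate_b:.0%}"

single = score > 0.30                         # one global threshold
print("Single threshold :", summarise(single))

# Per-group threshold for B so both groups are selected at A's rate:
# parity improves, but accuracy and who gets selected both change.
thr_b = np.quantile(score[group == 1], 1 - single[group == 0].mean())
parity = np.where(group == 0, single, score > thr_b)
print("Parity-tuned rule:", summarise(parity))
```

Which of those outcomes is acceptable is exactly the kind of judgement the named data science partner and the business team need to reach together.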
Why: Responsible use stems from decisions on what is appropriate, acceptable, and preferable. Sound, defensible decisions rely on information feeding the assessment. In any decision-making situation the breadth of views providing that information is just as important as its accuracy. Even with the best of intent, a room full of similar people, performing similar roles, and from similar backgrounds are going to struggle to fully capture the breadth of opinions, views and experiences on a use case for technology. Diversity in the voices heard during the decision-making process increases accuracy and the likelihood that the decision will be a positive one.
What: Prepare and pre-skill an informal ‘forum’ of voices to bring diverse viewpoints into decisions. Ensure coverage for business goals, risk, reputation, data science, employee voice, legal/regulatory, ethics, privacy and strategy. Leverage existing channels for understanding impacted stakeholder groups outside the organisation, especially vulnerable and disadvantaged communities. Use these resources appropriately, scale engagement depending on the issue being addressed and leverage the decision tools described earlier to ensure these forums operate effectively and with minimum overhead.
Example: An organisation recognises the need to gather information and perspectives before making judgment calls on how technology is harnessed for individual use cases. Rather than expecting each project team to work things out for itself, they select representatives from across internal teams and draw on existing representation for vulnerable and discriminated-against groups. They create a discussion forum where project leads can table key decision points and receive input and recommendations from the forum. Where requested, members of the forum actively participate in forming the decision on key challenges for a particular use case. Forum members are given training and familiarisation in their role.
Why: Regardless of any efforts to encourage ethical use through training, communications, support, governance, targets and frameworks, adoption occurs only when the majority of the organisation recognises a trusted approach as the path to personal and organisational success. That requires a team that understands what needs doing, is able and empowered to do it, and most importantly believes it is the right thing to do.
What: Communicate, demonstrate, reinforce.
Communicate: Give clear information that explains what you are doing, why it matters and what is expected to change on a personal and organisational level.
Demonstrate: As part of the program, give visibility into adoption at senior levels. Leaders should acknowledge the challenge, and that they will sometimes lack simple answers. Demonstrate that you are taking this seriously. Focus on both the long and the short term: a long-term vision and short-term achievable outcomes.
Reinforce: Make doing the right thing the route to success. Recognise and reward it.
Example: Despite creating a Trusted-Use Code of Conduct and an Ethics Board, a leadership team understands that project teams still view responsible use as a “nice to have.” They prioritise expertise in a transformation approach that effectively embeds the Code of Conduct, over the creation of more artefacts and roles.
Translating high level goals and principles into practical assistance for team-members facing trade-off decisions is the key to success in responsible use. It can be tempting to roll out a highly engineered approach to enablement, but winning strategies tend to focus more on ease of use, practicality, empowerment of the teams and promoting a culture of ethical use through leadership support.
Written by Matthew Newman. Matthew Newman is a Global Expert in the Organizational Implementation of AI Ethics and an Advisor with The Cantellus Group.
These blogs by TCG Advisors express their views and insights. The strength and beauty of our team is that we encompass many opinions and perspectives, some of which will align, and some which may not. These pieces are selected for their thoughtfulness, clarity, and humor. We hope you enjoy them and that they start conversations!