25 Oct 2023

How we’ll press for safe AI

By John Hoeppner

We're engaging with companies on our baseline expectations for what is likely to prove a generation-defining issue.


The following is an extract from our latest CIO Outlook.

You don’t need to have seen The Terminator to recognise that, in addition to having the potential to dramatically alter the way we live, work and play, AI also raises the prospect of cataclysmic risks for humanity.

Within LGIM’s Investment Stewardship team, we are optimists. In our view, AI should drive long-term innovation, productivity and value creation, as outlined elsewhere in this document. To secure these gains, we believe investors must engage with companies and policymakers on baseline expectations for governance, risk management and transparency.

Nascent regulations

AI entails risks around data privacy and security; regulatory compliance; operational and critical infrastructure; workforce transitions; intellectual property; reputation; and trust in the information environment, to name but a few. There is also a host of ethical concerns about its application.

As a result, governments around the globe are considering regulations aimed at facilitating the safe development and deployment of AI.[1]


As AI progresses, policymakers, data providers, corporate advisers, NGOs and civil society will all have important roles to play. We anticipate a cycle of definition, measurement, assessment and risk management.

Where practical, we will take part in public consultations. We will also encourage the companies in which we invest on behalf of our clients to be transparent and to participate in good faith. And we intend to play a key role in helping data providers and other stakeholders acquire comprehensive and actionable data.

Our expectations

AI and the broader trend toward digitisation make up one of our six priority stewardship themes. As with all such themes, we focus our influence on raising minimum standards through targeted action across market leverage points.

To this end, we are tiering our approach, distinguishing between companies that make AI systems and those that use the technology. The former group will have more AI-related liabilities, and so will receive more of our scrutiny.

We outline below our current baseline expectations of companies; we suggest they dedicate resources to meeting them in proportion to their risk exposures and business models.

Governance

  • Name a board member or committee accountable for AI risk oversight and strategy
  • Provide board education on business-specific AI risks at least annually, and consider using external expert groups to keep up to date

Risk management

  • Conduct product safety risk assessments across the business cycle, including on human rights
    • This should include upstream and downstream considerations; for example, over data and clients
    • Companies exposed to high-risk AI systems should consider third-party assessments to supplement internal assessments
  • Ensure AI systems are explainable, meaning the board and relevant business functions can describe inputs, processes and outputs
    • Establishing baseline understanding is critical for ongoing risk assessment and broader trust building
  • Identify high-risk AI systems or inputs and describe current or future mitigation efforts
  • Build trust by soliciting input on high-risk AI systems from third-party groups and civil society
  • Provide reasonable channels to give feedback or seek remediation if AI systems cause harm

Transparency

  • Disclose governance policies and risk processes on a regular basis
  • Make it clear to customers or civil society when AI systems are used in services

These expectations will evolve over time. Initially, it will be difficult for us to assess whether companies are meeting them, as disclosure is limited and data providers are still working through the relevant metrics. So, we hope to spur action by being clear and public about what we are seeking.

Should companies fail to meet our expectations, we will escalate our engagement on behalf of our clients, on what may well prove to be a generation-defining issue.

The above is an extract from our latest CIO Outlook.

 

[1] UK - https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach/white-paper

US - https://www.whitehouse.gov/briefing-room/statements-releases/2023/09/12/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-eight-additional-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/

EU - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

Global - https://www.reuters.com/world/g7-calls-developing-global-technical-standards-ai-2023-05-20/

 

John Hoeppner

Head of US Stewardship and Sustainable Investments

John joined LGIMA in 2018 as Head of US Stewardship and Sustainable Investments. He is the US representative of the Investment Stewardship team. John is charged with shaping the firm's corporate engagements and driving demand for sustainable investing strategies in the US market. He joined from Mission Measurement, where he led the Impact Investing practice and launched an ESG data and consulting business. Prior to that, John held multiple senior product positions in the asset management divisions of UBS and Northern Trust. He has championed a range of corporate and product-related sustainable investment efforts. John started his investment career at Cambridge Associates on the capital markets research team, and earned a Bachelor of Commerce from McGill University in Montreal, Canada.
