We’ve spent considerable time in recent months exploring the use of AI and, in particular, the oversight steps Boards need to consider. While generative AI shows great potential to drive innovation, its adoption also carries legal and reputational risks that warrant the Board’s oversight. This paper summarizes the risks and mitigation strategies we think Boards should consider as they evaluate these new technologies.


What is Generative AI?

Generative AI uses machine learning to autonomously create new content such as images, videos or text based on example data. Popular examples include AI writing assistants such as GitHub Copilot, ChatGPT, Claude and Gemini, as well as image generators.


Key Risks to Consider

  • User Data Privacy: User inputs used to further train models could unintentionally expose confidential information if databases are breached or content is inappropriately reused.
  • Biased or Toxic Outputs: Without careful controls, generative AI could propagate biases, disinformation or harmful content based on what it was exposed to in training. 
  • IP Infringement: The vast amounts of public data used to train models increase the risk of copyright infringement in generated outputs.
  • Regulatory Compliance: Emerging regulations will require transparency, bias mitigation, harm reduction and adequate tracking of generative AI systems to ensure public safety and trustworthiness.


Due Diligence Recommendations

Management should conduct rigorous evaluations of any proposed generative AI applications, considering technical, legal and ethical risk controls. Key diligence steps include:

  • Assess the intended use case and its risks to users, society and your brand; 
  • Evaluate the technical approaches and controls to mitigate risks like bias, inappropriate content and data privacy;
  • Review compliance of the proposed system and data practices with applicable regulations;
  • Understand contractual arrangements and liability with technology partners;
  • Confirm sufficient resourcing and governance for ongoing risk monitoring.


Practical Steps

Use Case Evaluation

  • Clearly define the business objectives and expected benefits of each proposed use case.
  • Quantify the potential impacts, both positive and negative, on key stakeholders like customers, employees, partners etc.  
  • Consider alternative approaches that may achieve the goals with less risk exposure.


Examples of Use Cases

  • Content Creation: Automating the generation of articles, reports, and marketing copy.
  • Design and Development: Assisting in the creation of digital designs, product prototypes, and user interfaces.
  • Customer Experience: Personalizing interactions and improving customer service with chatbots and virtual assistants.
  • Data Analytics: Enhancing data analysis capabilities for better insights and decision-making.


Technical Risk Assessment

  • Evaluate the technical descriptions and safeguards provided by solution partners.
  • Conduct independent reviews or audits of key risk controls such as bias detection, inappropriate content filtering and data privacy.
  • Ensure change management processes are in place as the technology continually improves.


Legal and Regulatory Analysis

  • Map requirements from all applicable laws and regulations in all relevant jurisdictions where you do business.
  • Review contracts for compliance with privacy, IP, and other legal obligations.
  • Stay apprised of regulatory developments through advisory resources.


Governance Framework

  • Establish clear accountabilities for ongoing risk management and compliance. 
  • Implement review and approval processes managed by a cross-functional committee.
  • Regularly report metrics and issues to the Board along with risk response plans.


Addressing Societal Impacts

  • Engage with external stakeholders to understand societal concerns.  
  • Consider opportunities for the technology to have a positive societal impact.
  • Communicate openly about risks and controls to build public understanding and trust.


Continuous Improvement  

  • Use issue tracking and external research to identify new risks and strengthen controls.
  • Conduct periodic independent audits or assessments of the governance practices.
  • Require management to refine processes based on lessons learned from incidents.


This comprehensive, proactive approach will help ensure generative AI is developed and applied responsibly at your company. With these risk management practices in place, generative AI can unlock new opportunities while limiting legal, regulatory and reputational exposure for the company.


You can find more articles on our website at the Phundex Knowledge Hub, on LinkedIn at Phundex LinkedIn, or for other questions, please email us at hello@phundex.com.


To book a demo or start a trial, use the link on our website or email support@phundex.com, and we will be happy to set it up for you.