Artificial Intelligence Governance Series: AI Governance Challenges with External Systems and Data
- samdelucia
- May 8, 2024
- 3 min read
Governing AI when the system's primary (or source) data or outputs are external presents unique challenges, but they can be overcome with a little advance planning. As with any AI environment, your policies, procedures, and frameworks should be developed to help ensure the responsible and ethical use of AI technologies. Here are some strategies for governing AI in such scenarios:
Vendor Selection and Due Diligence: When using external AI tools or systems, it's essential to conduct thorough vendor evaluations to assess their reputation, reliability, and adherence to ethical standards. Carefully examine several years' worth of completed external reviews and audits (such as SOC reports). Evaluate their documented data privacy practices, security measures, and ongoing regulatory compliance. Get full details on past data losses or downtimes, including how they've handled these situations.
Contractual Agreements and Service Level Agreements (SLAs): Establish clear contractual agreements with external AI vendors outlining data usage, ownership rights, confidentiality clauses, and liability responsibilities. SLAs should specify performance metrics, service availability, and escalation procedures to ensure accountability and quality of service.
Data Governance and Privacy: Implement robust data governance policies to manage the flow of data, both internally and externally, ensuring compliance with data privacy regulations such as GDPR, CCPA, or industry-specific standards. Define procedures for data sharing, anonymization, encryption, and access control to protect sensitive information. (Remember: Once a contract is in place, it’s important to periodically verify that those policies, covenants, and SLAs are being carefully followed.)
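As one concrete illustration of the anonymization and access-control step, here is a minimal sketch of pseudonymizing records before they leave your environment. The field names, the `PSEUDONYM_KEY` secret, and the `prepare_for_vendor` helper are hypothetical examples, not part of any specific tool; the idea is simply that direct identifiers are replaced with keyed, irreversible tokens and unneeded sensitive fields are dropped before external transfer.

```python
import hmac
import hashlib

# Hypothetical secret, kept internal and never shared with the vendor.
# In practice this would live in a managed secrets store.
PSEUDONYM_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token.

    HMAC-SHA256 keeps the token stable (so records can still be joined
    across batches) while preventing reversal without the key.
    """
    return hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_for_vendor(record: dict) -> dict:
    """Strip or pseudonymize sensitive fields before external transfer."""
    safe = dict(record)
    safe["email"] = pseudonymize(record["email"])
    safe.pop("ssn", None)  # drop fields the vendor has no need to see
    return safe

record = {"email": "jane@example.com", "ssn": "123-45-6789", "score": 0.87}
shared = prepare_for_vendor(record)
```

Because the tokens are deterministic under the same key, internal teams can still re-identify or join records when legitimately required, which is the usual trade-off between pseudonymization and full anonymization.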
Ethical Guidelines and Standards: Develop ethical guidelines and standards for AI usage, just as you would if the AI tools or information were all internally managed. Establish principles for fairness, transparency, accountability, and bias mitigation to govern the development, deployment, and use of AI technologies.
Risk Assessment and Mitigation: Conduct assessments to identify potential risks associated with external AI tools or generated information, including security vulnerabilities, data breaches, algorithmic biases, and regulatory non-compliance. Implement risk mitigation measures such as cybersecurity protocols and audit trails. In addition, with AI, the ability to consistently trust the output is of core importance and can be a major risk. For this reason, select one or more generally accepted global and local explainability techniques and apply them regularly.
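To make the explainability check concrete, here is a minimal sketch of one generally accepted global technique, permutation importance, applied to an external model you can only query as a black box. The `vendor_model` function is a toy stand-in, not a real vendor API; the point is that shuffling one feature's values and measuring how much the error grows reveals which features actually drive the model's output.

```python
import random

def vendor_model(x):
    # Toy stand-in for an opaque external model: feature 0 drives the
    # prediction, feature 1 is nearly irrelevant (assumption for the demo).
    return 3.0 * x[0] + 0.1 * x[1]

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Global explainability check: average increase in mean squared error
    when one feature's values are shuffled across rows."""
    rng = random.Random(seed)

    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

    base = mse([model(row) for row in X])
    increases = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        increases.append(mse([model(row) for row in X_perm]) - base)
    return sum(increases) / trials

data_rng = random.Random(42)
X = [[data_rng.uniform(-1, 1), data_rng.uniform(-1, 1)] for _ in range(200)]
y = [vendor_model(row) for row in X]
imp0 = permutation_importance(vendor_model, X, y, feature=0)
imp1 = permutation_importance(vendor_model, X, y, feature=1)
```

Running this check on a schedule, and alerting when the importance ranking shifts, is one way to operationalize the "regularly check" guidance above without needing access to the vendor's internals.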
Continuous Monitoring and Evaluation: Continuously monitor the performance and impact of external AI tools or information to detect any deviations from expected outcomes or ethical standards. Establish mechanisms for feedback, auditing, and regular review to ensure alignment with organizational objectives and ethical guidelines.
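One common way to detect the "deviations from expected outcomes" mentioned above is to compare the distribution of recent model scores against a baseline captured at deployment. The sketch below uses the Population Stability Index (PSI), a widely used drift metric; the sample data and bin count are illustrative assumptions, and the usual rules of thumb (below 0.1 stable, 0.1 to 0.25 worth watching, above 0.25 drifted) vary by organization.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a recent
    sample: sums (actual% - expected%) * ln(actual% / expected%) per bin."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        lo_edge = lo + i * width
        hi_edge = lo + (i + 1) * width
        n = sum(1 for v in sample
                if lo_edge <= v < hi_edge or (i == bins - 1 and v == hi))
        return max(n / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [i / 100 for i in range(100)]            # scores at deployment
recent_ok = [i / 100 for i in range(100)]           # same distribution
recent_drift = [0.8 + i / 500 for i in range(100)]  # scores bunched high
```

Scheduling this comparison (daily or per batch) and routing high-PSI alerts into your review process turns continuous monitoring from a policy statement into an operational control.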
Training and Awareness Programs: As the field evolves and matures, new knowledge comes along quickly and it’s easy for people to fall behind. Provide training and awareness programs to educate employees about the responsible use of AI, including the implications of using external AI tools or information. Foster a culture of continuous learning, ethical decision-making, critical thinking, and compliance with governance policies across the organization.
Stakeholder Engagement and Transparency: Engage stakeholders, including employees, customers, regulators, and the broader community, in discussions about AI governance and its implications. Foster transparency by disclosing the use of external AI tools or information, along with the associated risks and safeguards implemented to mitigate them.
Regulatory Compliance: Stay informed about rapidly evolving regulatory requirements related to AI governance, data privacy, and cybersecurity. Ensure compliance with applicable laws and regulations governing AI usage, data protection, intellectual property rights, and industry-specific standards.
By implementing these governance strategies, organizations can effectively manage the risks and opportunities associated with utilizing external AI tools or generated information while upholding ethical standards, regulatory compliance, and stakeholder trust.
Getting started is easy – ask Cobalt Shields how, below…