As Singapore develops its digital economy, a trusted ecosystem is key: one where organisations can benefit from tech innovations while consumers are confident in adopting and using AI. In the global discourse on AI ethics and governance, Singapore believes that its balanced approach can facilitate innovation, safeguard consumer interests, and serve as a common global reference point.
On 23 January 2019, the PDPC released the first edition of the Model AI Governance Framework (Model Framework) for broader consultation, adoption and feedback. The Model Framework provides detailed and readily implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. By explaining how AI systems work, building good data accountability practices, and creating open and transparent communication, the Model Framework aims to promote public understanding and trust in AI technologies.
On 21 January 2020, the PDPC released the second edition of the Model Framework.
The Model Framework is based on two guiding principles:
- Decisions made by AI should be explainable, transparent and fair.
- AI systems should be human-centric.
From Principles to Practice
Internal Governance Structures and Measures
- Clear roles and responsibilities in your organisation
- SOPs to monitor and manage risks
- Staff training
Determining the Level of Human Involvement in AI-augmented Decision-making
- Appropriate degree of human involvement
- Minimise the risk of harm to individuals
Operations Management
- Minimise bias in data and model
- Risk-based approach to measures such as explainability, robustness and regular tuning
Stakeholder Interaction and Communication
- Make AI policies known to users
- Allow users to provide feedback, if possible
- Make communications easy to understand
The second edition includes additional considerations (such as robustness and reproducibility) and refines the original Model Framework for greater relevance and usability. For instance, the section on customer relationship management has been expanded to include considerations on interactions and communications with a broader network of stakeholders. The second edition of the Model Framework continues to take a sector- and technology-agnostic approach that can complement sector-specific requirements and guidelines.
Implementation and Self-Assessment Guide for Organisations (ISAGO)
Intended as a companion guide to the Model Framework, ISAGO aims to help organisations assess the alignment of their AI governance practices with the Model Framework. It also provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework.
ISAGO is the result of a collaboration with the World Economic Forum's Centre for the Fourth Industrial Revolution to drive further AI and data innovation. It was developed in close consultation with the industry, with contributions from over 60 organisations.
Access the ISAGO here.
Compendium of Use Cases
Complementing the Model Framework and ISAGO is a Compendium of Use Cases (Compendium) that demonstrates how local and international organisations across different sectors and sizes implemented or aligned their AI governance practices with all sections of the Model Framework. The Compendium also illustrates how the featured organisations have effectively put in place accountable AI governance practices and benefitted from the use of AI in their line of business.
Access the Compendium here.
Adoption and Feedback
We encourage organisations to use the Model Framework and ISAGO for internal discussion and implementation. Trade associations and chambers, professional bodies, and interest groups are welcome to use this document for their discussions, and adapt it for their own use.
The ways in which businesses employ AI continue to evolve, and this living document will evolve with them through future editions.
To this end, we welcome organisations to share with us:
- Practical examples that would aid in illustrating section(s) of the Model Framework; and/or
- Experiences in using the Model Framework and ISAGO, e.g. how easy the measures are to implement, how the framework can be improved, or a helpful case of implementation that we may publish as a use case.
Please email us at firstname.lastname@example.org.