As Singapore develops its digital economy, a trusted ecosystem is key: one where organisations can benefit from technological innovation and consumers are confident in adopting and using AI. In the global discourse on AI ethics and governance, Singapore believes its balanced approach can facilitate innovation, safeguard consumer interests, and serve as a common global reference point.
On 25 May 2022, IMDA and the PDPC launched A.I. Verify - the world’s first AI Governance Testing Framework and Toolkit for companies that wish to demonstrate responsible AI in an objective and verifiable manner. A.I. Verify, currently a Minimum Viable Product (MVP), aims to promote transparency between companies and their stakeholders.
Developers and owners can verify the claimed performance of their AI systems against a set of principles through standardised tests. A.I. Verify packages a set of open-source testing solutions, together with a set of process checks, into a Toolkit for convenient self-assessment. The Toolkit generates reports for developers, management, and business partners, covering major areas affecting AI performance.
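To make the idea of a standardised, report-generating test concrete, here is a minimal sketch of the kind of fairness check such a toolkit might bundle. The function names, the demographic-parity metric and the pass threshold are illustrative assumptions, not A.I. Verify's actual API.

```python
# Hypothetical sketch of a standardised fairness test of the kind a
# self-assessment toolkit could run; names and threshold are assumptions.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-outcome rates between the best- and
    worst-treated groups (0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

def fairness_report(predictions, groups, threshold=0.1):
    """Produce a small report entry, as a toolkit might, for one metric."""
    gap = demographic_parity_gap(predictions, groups)
    return {
        "metric": "demographic_parity_gap",
        "value": round(gap, 3),
        "passed": gap <= threshold,
    }

# Example: binary loan-approval predictions for two applicant groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_report(preds, groups))
```

A real toolkit would aggregate many such metrics (fairness, robustness, explainability) alongside process checks into a single stakeholder-facing report; this sketch shows only the shape of one metric entry.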
Ten companies from different sectors and of different scales have already tested the MVP and/or provided feedback. They include AWS, DBS Bank, Google, Meta, Microsoft, Singapore Airlines, NCS (part of Singtel Group)/Land Transport Authority, Standard Chartered Bank, UCARE.AI, and X0PA.AI.
We welcome organisations to pilot the MVP. Companies participating in the pilot will have the unique opportunity to:
- Gain early access to the MVP and use it to conduct self-testing on their AI systems/models;
- Use MVP-generated reports to demonstrate transparency and build trust with their stakeholders; and
- Help shape an internationally applicable MVP to reflect industry needs and contribute to international standards development.
For background on the development of the MVP, please click here.
Model AI Governance Framework
On 23 January 2019, the PDPC released its first edition of the Model AI Governance Framework (Model Framework) for broader consultation, adoption and feedback. The Model Framework provides detailed and readily-implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. By explaining how AI systems work, building good data accountability practices, and creating open and transparent communication, the Model Framework aims to promote public understanding and trust in technologies.
On 21 January 2020, the PDPC released the second edition of the Model Framework.
The Model Framework is based on two guiding principles:
- Decisions made by AI should be explainable, transparent and fair.
- AI systems should be human-centric.
From Principles to Practice
Internal Governance Structures and Measures
- Clear roles and responsibilities in your organisation
- SOPs to monitor and manage risks
- Staff training
Determining the Level of Human Involvement in AI-augmented Decision-making
- Appropriate degree of human involvement
- Minimise the risk of harm to individuals
Operations Management
- Minimise bias in data and model
- Risk-based approach to measures such as explainability, robustness and regular tuning
Stakeholder Interaction and Communication
- Make AI policies known to users
- Allow users to provide feedback, if possible
- Make communications easy to understand
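The risk-based approach to human involvement above can be sketched as a simple decision rule: weigh the probability and the severity of harm from a decision, and pick a degree of human oversight accordingly. The 0-1 scales, thresholds and mapping below are illustrative assumptions, not prescriptions from the Model Framework.

```python
# Illustrative sketch of a risk-based choice of human involvement in
# AI-augmented decision-making. Thresholds and scales are assumptions.

def human_involvement(probability_of_harm, severity_of_harm):
    """Map a (probability, severity) risk assessment, each on a 0-1
    scale, to a degree of human involvement in the decision."""
    high_p = probability_of_harm >= 0.5
    high_s = severity_of_harm >= 0.5
    if high_p and high_s:
        return "human-in-the-loop"      # a human makes the final call
    if not high_p and not high_s:
        return "human-out-of-the-loop"  # AI decides; humans audit later
    return "human-over-the-loop"        # AI decides; humans can intervene

# Example: a high-stakes assessment vs. a low-stakes recommendation.
print(human_involvement(0.7, 0.9))
print(human_involvement(0.2, 0.1))
```

In practice an organisation would calibrate the axes and thresholds for its own sector and use case; the point is that the degree of oversight follows from an explicit risk assessment rather than being fixed for all AI systems.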
The second edition includes additional considerations (such as robustness and reproducibility) and refines the original Model Framework for greater relevance and usability. For instance, the section on customer relationship management has been expanded to include considerations on interactions and communications with a broader network of stakeholders. The second edition of the Model Framework continues to take a sector- and technology-agnostic approach that can complement sector-specific requirements and guidelines.
Implementation and Self Assessment Guide for Organisations (ISAGO)
Intended as a companion guide to the Model Framework, ISAGO aims to help organisations assess the alignment of their AI governance practices with the Model Framework. It also provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework.
ISAGO is the result of a collaboration with the World Economic Forum's Centre for the Fourth Industrial Revolution to drive further AI and data innovation. It was developed in close consultation with industry, with contributions from over 60 organisations.
Access the ISAGO here.
Compendium of Use Cases
Complementing the Model Framework and ISAGO is a Compendium of Use Cases (Compendium) that demonstrates how local and international organisations of different sectors and sizes have implemented or aligned their AI governance practices with all sections of the Model Framework. The Compendium also illustrates how the featured organisations have put in place accountable AI governance practices and benefitted from the use of AI in their line of business. We hope these real-world use cases will inspire other companies to do the same.
Volume 1 features use cases from Callsign, DBS Bank, HSBC, MSD, Ngee Ann Polytechnic, Omada Health, UCARE.AI and Visa Asia Pacific. Access it here.
Volume 2 contains use cases from the City of Darwin (Australia), Google, Microsoft and Taiger, as well as a special section on how AI Singapore implemented our Model Framework in its 100 Experiments projects with IBM, RenalTeam, Sompo Asia Holdings and VersaFleet. Access it here.
A Guide to Job Redesign in the Age of AI (Guide)
Under the guidance of the Advisory Council on the Ethical Use of AI and Data, the Infocomm Media Development Authority (IMDA) and the PDPC collaborated with the Lee Kuan Yew Centre for Innovative Cities (LKYCIC) at the Singapore University of Technology and Design to launch Singapore’s first guide to help organisations and employees understand how existing job roles can be redesigned to harness the potential of AI, increasing the value of their work.
Launched on 4 December 2020, the Guide provides an industry-agnostic and practical approach to help companies manage AI's impact on employees, and to help organisations adopting AI prepare for the digital future.
This Guide provides practical guidance in four areas of job redesign:
Assessing the Impact of AI on Tasks
Assess the impact of AI on tasks, including whether each task can be automated or augmented by AI or should remain in human hands, and decide which jobs can be transformed within an appropriate time frame.
Charting Clear Pathways Between Jobs
Chart task pathways between jobs within an organisation and identify the tasks employees would need to learn to transition from one job to another.
Clearing Barriers to Digital Transformation
Suggest ways to address potential challenges and support employees when implementing AI.
Enabling Effective Communication Between Employers and Employees
Build a shared understanding within the organisation on “why”, “what”, and “how” AI will augment human capabilities and empower employees in their career.
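The "chart task pathways" step above is essentially a comparison of task profiles: given the tasks in an employee's current job and those in a target job, the gap is the set of tasks to learn. A minimal sketch, with invented job and task names:

```python
# Hypothetical sketch of charting a task pathway between two jobs:
# the tasks to learn are those in the target job but not the current one.
# All job and task names below are invented examples.

def tasks_to_learn(current_job_tasks, target_job_tasks):
    """Tasks in the target job not already performed in the current job."""
    return sorted(set(target_job_tasks) - set(current_job_tasks))

current = {"data entry", "report drafting", "customer queries"}
target = {"report drafting", "customer queries", "chatbot supervision",
          "exception handling"}
print(tasks_to_learn(current, target))
```

A real job-redesign exercise would weight tasks by time spent and training effort rather than treating them as a flat set, but the set difference captures the core of the pathway-charting step.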
The Guide supports IMDA’s efforts to build a trusted and progressive AI environment that benefits businesses, employees and consumers. For example, the Model Framework guides organisations to deploy AI responsibly and address consumer concerns. Likewise, the Guide encourages organisations to take a human-centric approach to manage the impact of AI adoption by investing in redesigning jobs and reskilling employees.
Access the Guide here and the primer here.
Adoption and Feedback
We encourage organisations to use the Model Framework, ISAGO and the Guide for internal discussion and implementation. Trade associations and chambers, professional bodies, and interest groups are welcome to use these documents in their discussions and adapt them for their own use. The way businesses employ AI continues to evolve, and so will these living documents through future editions.
To this end, we welcome organisations to share with us:
- Practical examples that would aid in illustrating section(s) of the Model Framework and Guide; and/or
- Experiences in using the Model Framework, ISAGO and Guide, e.g. how easy the measures are to implement, how the framework can be improved, or a helpful implementation that we may publish as a use case. Your use cases will continue to inspire more companies to implement AI responsibly.
Please email us at email@example.com.
The Advisory Council on the Ethical Use of AI & Data was first established in 2018 for the purpose of:
(a) advising the Government on ethical, policy and governance issues arising from the use of data-driven technologies in the private sector; and
(b) supporting the Government in providing general guidance to businesses to minimise ethical, governance and sustainability risks, and to mitigate adverse impact on consumers from the use of data-driven technologies.
The members of the Advisory Council are appointed by the Minister for Communications and Information and supported by a Secretariat.
The Advisory Council members for the 2022-2025 term are:
Chairman
Mr V.K. Rajah SC, Duxton Hill Chambers (Singapore Group Practice), former Judge of Appeal and Attorney-General of Singapore
Members (in alphabetical order)
- Mr Blaise Aguera y Arcas, Engineering Fellow, Google Research
- Ms Kathy Baxter, Principal Architect, Ethical AI Practice, Salesforce
- Mr Chia Song Hwee, Deputy CEO, Temasek International
- Mr Andreas Ebert, Worldwide National Technology Officer, Microsoft
- Ms Kay Firth-Butterfield, Head of AI & Machine Learning and Member of the Executive Committee, WEF
- Mr Piyush Gupta, CEO, DBS Group
- Dr Hiroaki Kitano, President & CEO, Sony Computer Science Lab
- Mr Shameek Kundu, Head of Financial Services and Chief Strategy Officer, TruEra Inc
- Mr Li Chun, Group CEO, Lazada and Vice President, Alibaba Group
- Dr Ieva Martinkenaite, Vice President of Telenor Research, Head of AI and Analytics
- Dr Francesca Rossi, Fellow and AI Ethics Global Leader, IBM
- Dr Tan Geok Leng, CEO, AIDA Technologies
- Mr Andrew Wyckoff, Director of the Science, Technology and Innovation Directorate, OECD