As Singapore develops its digital economy, a trusted ecosystem is key: one where organisations can benefit from tech innovations while consumers are confident to adopt and use AI. In the global discourse on AI ethics and governance, Singapore believes that its balanced approach can facilitate innovation, safeguard consumer interests, and serve as a common global reference point.
IMDA developed AI Verify, an AI governance testing framework and software toolkit. The testing framework consists of 11 AI ethics principles* that jurisdictions around the world have coalesced around, and is consistent with internationally recognised AI frameworks such as those from the EU, the OECD and Singapore's Model AI Governance Framework. AI Verify helps organisations validate the performance of their AI systems against these principles through standardised tests.
*The 11 governance principles are: transparency; explainability; repeatability/reproducibility; safety; security; robustness; fairness; data governance; accountability; human agency and oversight; and inclusive growth, societal and environmental well-being.
The testing process comprises technical tests and process checks. The AI Verify toolkit is a single integrated software toolkit that operates within the user's enterprise environment. It enables users to conduct technical tests on their AI models and record process checks, and then generates testing reports for the AI model under test. User companies can be more transparent about their AI by sharing these testing reports with their stakeholders.
AI Verify can currently perform technical tests on common supervised-learning classification and regression models for most tabular and image datasets. It cannot test Generative AI/LLMs. AI Verify does not set ethical standards, nor does it guarantee that AI systems tested will be completely safe or free from risks or biases.
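To make the idea of a "technical test" concrete, the sketch below shows the kind of check such a toolkit automates for a supervised tabular classifier: measuring accuracy alongside a simple fairness metric (a demographic parity gap). This is an illustrative sketch using scikit-learn on synthetic data, not AI Verify's actual toolkit or API; all names here are assumptions for illustration.

```python
# Illustrative sketch only: this does NOT use the AI Verify toolkit.
# It shows the kind of fairness/performance technical test such a
# toolkit runs against a supervised tabular classification model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic tabular data: two numeric features plus a binary "group"
# attribute (a stand-in for a protected attribute, e.g. an age band).
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

features = np.column_stack([X, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    features, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Performance check: plain accuracy on the held-out set.
accuracy = (pred == y_te).mean()

# Fairness check: demographic parity gap, the difference in
# positive-prediction rates between the two groups. A large gap
# flags potential bias for human review.
rate_a = pred[g_te == 0].mean()
rate_b = pred[g_te == 1].mean()
parity_gap = abs(rate_a - rate_b)

print(f"accuracy={accuracy:.3f}  demographic_parity_gap={parity_gap:.3f}")
```

A real testing toolkit would run a battery of such metrics (robustness, explainability, fairness) against the user's own model and data, then compile the results into a standardised report.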
AI Verify was first developed in consultation with companies from different sectors and of different scales. These companies include AWS, DBS, Google, Meta, Microsoft, Singapore Airlines, NCS/LTA, Standard Chartered, UCARE.AI and X0PA. AI Verify was subsequently released in May 2022 for an international pilot, which attracted the interest of over 50 local and multinational companies, including Dell, Hitachi and IBM.
As AI testing technologies are still nascent and growing, there is a need to crowd in the best expertise globally to advance in this area. IMDA has therefore set up the AI Verify Foundation to harness the collective power and contributions of the global open source community to develop AI Verify testing tools for the responsible use of AI. The Foundation will boost AI testing capabilities and assurance to meet the needs of companies and regulators globally.
The not-for-profit Foundation will:
- Foster a community to contribute to the use and development of AI testing frameworks, code base, standards, and best practices
- Create a neutral platform for open collaboration and idea-sharing on testing and governing AI
- Nurture a network of advocates for AI and drive broad adoption of AI testing through education and outreach
The AI Verify Foundation has seven premier members, namely Aicadium, Google, IBM, IMDA, Microsoft, Red Hat and Salesforce, who set the strategic direction and development roadmap of AI Verify. The Foundation also has more than 60 general members. For more information, visit our Foundation website.
Model AI Governance Framework
On 23 January 2019, the PDPC released its first edition of the Model AI Governance Framework (Model Framework) for broader consultation, adoption and feedback. The Model Framework provides detailed and readily-implementable guidance to private sector organisations to address key ethical and governance issues when deploying AI solutions. By explaining how AI systems work, building good data accountability practices, and creating open and transparent communication, the Model Framework aims to promote public understanding and trust in technologies.
On 21 January 2020, the PDPC released the second edition of the Model Framework.
The Model Framework is based on two high-level guiding principles:
- Decisions made by AI should be explainable, transparent and fair
- AI systems should be human-centric
From Principles to Practice
Internal Governance Structures and Measures
- Clear roles and responsibilities in your organisation
- SOPs to monitor and manage risks
- Staff training
Determining the Level of Human Involvement in AI-augmented Decision-making
- Appropriate degree of human involvement
- Minimise the risk of harm to individuals
Operations Management
- Minimise bias in data and model
- Risk-based approach to measures such as explainability, robustness and regular tuning
Stakeholder Interaction and Communication
- Make AI policies known to users
- Allow users to provide feedback, if possible
- Make communications easy to understand
The second edition includes additional considerations (such as robustness and reproducibility) and refines the original Model Framework for greater relevance and usability. For instance, the section on customer relationship management has been expanded to include considerations on interactions and communications with a broader network of stakeholders. The second edition of the Model Framework continues to take a sector- and technology-agnostic approach that can complement sector-specific requirements and guidelines.
Implementation and Self-Assessment Guide for Organisations (ISAGO)
Intended as a companion guide to the Model Framework, ISAGO aims to help organisations assess the alignment of their AI governance practices with the Model Framework. It also provides an extensive list of useful industry examples and practices to help organisations implement the Model Framework.
ISAGO is the result of a collaboration with the World Economic Forum's Centre for the Fourth Industrial Revolution to drive further AI and data innovation. It was developed in close consultation with industry, with contributions from over 60 organisations.
Access the ISAGO.
Compendium of Use Cases
Complementing the Model Framework and ISAGO is a Compendium of Use Cases (Compendium) that demonstrates how local and international organisations across different sectors and sizes implemented or aligned their AI governance practices with all sections of the Model Framework. The Compendium also illustrates how the featured organisations have effectively put in place accountable AI governance practices and benefitted from the use of AI in their line of business. We hope these real-world use cases will inspire other companies to do the same.
Volume 1 features use cases from Callsign, DBS Bank, HSBC, MSD, Ngee Ann Polytechnic, Omada Health, UCARE.AI and Visa Asia Pacific.
Volume 2 contains use cases from the City of Darwin (Australia), Google, Microsoft and Taiger, as well as a special section on how AI Singapore implemented our Model Framework in its 100 Experiments projects with IBM, RenalTeam, Sompo Asia Holdings and VersaFleet.
A Guide to Job Redesign in the Age of AI (Guide)
Under the guidance of the Advisory Council on the Ethical Use of AI and Data, the Infocomm Media Development Authority (IMDA) and the PDPC collaborated with the Lee Kuan Yew Centre for Innovative Cities (LKYCIC) at the Singapore University of Technology and Design to launch Singapore's first guide that helps organisations and employees understand how existing job roles can be redesigned to harness the potential of AI and increase the value of their work.
Launched on 4 December 2020, this Guide provides an industry-agnostic and practical approach to help companies manage AI's impact on employees, and for organisations that are adopting AI to prepare themselves for the digital future.
This Guide provides guidance on practical steps in four areas of job redesign:
Assessing the Impact of AI on Tasks
Assess whether each task can be automated or augmented by AI or should remain in human hands, and decide which jobs can be transformed within an appropriate time frame.
Charting Clear Pathways Between Jobs
Chart task pathways between jobs within an organisation and identify the tasks employees would need to learn to transition from one job to another.
Clearing Barriers to Digital Transformation
Suggest ways to address potential challenges and support employees when implementing AI.
Enabling Effective Communication Between Employers and Employees
Build a shared understanding within the organisation on "why", "what", and "how" AI will augment human capabilities and empower employees in their careers.
The Guide supports IMDA’s efforts to build a trusted and progressive AI environment that benefits businesses, employees and consumers. For example, the Model Framework guides organisations to deploy AI responsibly and address consumer concerns. Likewise, the Guide encourages organisations to take a human-centric approach to manage the impact of AI adoption by investing in redesigning jobs and reskilling employees.
Access the Guide and the primer.
Adoption and Feedback
We encourage organisations to use the Model Framework, ISAGO and Guide for internal discussion and implementation. Trade associations and chambers, professional bodies, and interest groups are welcome to use this document for their discussions, and adapt it for their own use. The way in which businesses employ AI continues to evolve and so will this living document in the form of future editions.
To this end, we welcome organisations to share with us:
- Practical examples that would aid in illustrating section(s) of the Model Framework and Guide; and/or
- Experiences in using the Model Framework, ISAGO and Guide, e.g. how easy it is to implement the measures, how the framework can be better improved, or a helpful case of implementation that we may publish as a use case. Your use cases would continue to inspire more companies to implement AI responsibly.
Please email us at firstname.lastname@example.org.
The Advisory Council on the Ethical Use of AI & Data was first established in 2018 for the purpose of:
(a) advising the Government on ethical, policy and governance issues arising from the use of data-driven technologies in the private sector; and
(b) supporting the Government in providing general guidance to businesses to minimise ethical, governance and sustainability risks, and to mitigate adverse impact on consumers from the use of data-driven technologies.
The members of the Advisory Council are appointed by the Minister for Communications and Information and supported by a Secretariat.
The Advisory Council members for the term 2022-2025 are:
Chairman
Mr V.K. Rajah SC, Duxton Hill Chambers (Singapore Group Practice), former Judge of Appeal and Attorney-General of Singapore
Members (in alphabetical order)
- Mr Blaise Aguera y Arcas, Engineering Fellow, Google Research
- Ms Kathy Baxter, Principal Architect, Ethical AI Practice, Salesforce
- Mr Chia Song Hwee, Deputy CEO, Temasek International
- Mr Andreas Ebert, Worldwide National Technology Officer, Microsoft
- Ms Kay Firth-Butterfield, Head of AI & Machine Learning and Member of the Executive Committee, WEF
- Mr Piyush Gupta, CEO, DBS Group
- Dr Hiroaki Kitano, President & CEO, Sony Computer Science Lab
- Mr Shameek Kundu, Head of Financial Services and Chief Strategy Officer, TruEra Inc
- Mr Li Chun, Group CEO, Lazada and Vice President, Alibaba Group
- Dr Ieva Martinkenaite, Vice President of Telenor Research, Head of AI and Analytics
- Dr Francesca Rossi, Fellow and AI Ethics Global Leader, IBM
- Dr Tan Geok Leng, CEO, AIDA Technologies
- Mr Andrew Wyckoff, Director of the Science, Technology and Innovation Directorate, OECD