Trustworthy AI

Summary

Trustworthy AI is a programme of work of the ITU (the United Nations Specialized Agency for ICT) under its AI for Good programme. The programme advances the standardization of several privacy-enhancing technologies (PETs), including homomorphic encryption, federated learning, secure multi-party computation, differential privacy, and zero-knowledge proofs.[1][2]

Trustworthy AI
Abbreviation: TwAI
Formation: 2020
Type: SDO
Legal status: Active
Parent organization: ITU
Website: itu.int/go/TrustworthyAI

Privacy-enhancing technologies apply complex and sometimes counterintuitive operations to process signals and information while safeguarding privacy. For instance, homomorphic encryption allows computation on encrypted data: the result remains encrypted and unknown to the party performing the computation, but can be decrypted by the original encryptor. These technologies are often developed to enable use of data in jurisdictions other than the one where the data was created (e.g. under the GDPR). As such, the programme, led by two international organizations, develops international standards to operate in this context.[3][4] The PETs are used to protect data in analytics such as artificial intelligence.
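The homomorphic-addition property described above can be sketched with a toy Paillier cryptosystem. The tiny primes and parameter choices here are illustrative assumptions only, not drawn from any ITU standard:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# The tiny hard-coded primes are for illustration only -- not secure.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda(n)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)        # random blinding factor coprime to n
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Multiplying ciphertexts adds the underlying plaintexts:
# the party doing this computation never sees 20, 22, or 42.
c = (encrypt(20) * encrypt(22)) % n2
assert decrypt(c) == 42
```

Multiplying two Paillier ciphertexts yields an encryption of the sum of the plaintexts, which is the property that lets an untrusted party compute on data it cannot read.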

History

The origin of the programme lies with the ITU-WHO Focus Group on Artificial Intelligence for Health, where a strong need for privacy, combined with the need for analytics, created demand for a standard in these technologies.

When AI for Good moved online in 2020, the TrustworthyAI seminar series was initiated to start discussions on such work, which eventually led to the standardization activities.[5]

Standardization

Multi-Party Computation

Secure multi-party computation (MPC) is being standardized under "Question 5" (the incubator) of ITU-T Study Group 17.[6]

Homomorphic Encryption

ITU has collaborated with the HomomorphicEncryption.org standardization meetings since their early stages; these meetings have developed a standard on homomorphic encryption. The 5th homomorphic encryption meeting was hosted at ITU headquarters in Geneva.

Federated Learning

Zero-sum masks, as used by federated learning for privacy preservation, are used extensively in the multimedia standards of ITU-T Study Group 16 (VCEG), such as JPEG, MP3, and H.264/H.265 (aka MPEG).
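A minimal sketch of how zero-sum masks can hide individual contributions in federated aggregation follows; the pairwise-mask setup is a simplified assumption for illustration, not the procedure of any specific standard:

```python
import random

# Zero-sum masking sketch: each pair of clients shares a random mask;
# one adds it and the other subtracts it, so all masks cancel in the
# server's sum while individual updates stay hidden.
def masked_updates(updates, seed=0):
    rng = random.Random(seed)
    masked = list(updates)
    n = len(updates)
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.uniform(-1.0, 1.0)   # pairwise shared mask m_ij
            masked[i] += mask               # client i adds m_ij
            masked[j] -= mask               # client j subtracts m_ij
    return masked

updates = [0.5, -0.2, 0.9]                  # clients' raw model updates
masked = masked_updates(updates)
# The server sees only masked values, yet their sum equals the true sum.
assert abs(sum(masked) - sum(updates)) < 1e-9
```

Because every mask appears once with a plus sign and once with a minus sign, the aggregate is exact while no single client's update is exposed to the server.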

Zero-knowledge proof

Previous pre-standardization work on the topic of zero-knowledge proof has been conducted in the ITU-T Focus Group on Digital Ledger Technologies.

Differential privacy

The application of differential privacy in the preservation of privacy was examined at several of the "Day 0" machine learning workshops at AI for Good Global Summits.
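As a hedged sketch, the Laplace mechanism is the canonical way differential privacy is applied to a numeric query; the sensitivity and epsilon values below are illustrative assumptions, not values prescribed by the workshops:

```python
import random

# Laplace mechanism sketch: add noise drawn from Laplace(sensitivity/epsilon)
# to a query result, so no single record's presence is statistically revealed.
def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    scale = sensitivity / epsilon
    # The difference of two exponentials is Laplace-distributed.
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_value + noise

count = 37  # e.g. a count query over sensitive records (hypothetical)
noisy = laplace_mechanism(count, sensitivity=1, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the sensitivity is the most any one record can change the query's answer (1 for a simple count).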

References

  1. ^ "Advancing Trustworthy AI - US Government". National Artificial Intelligence Initiative. Retrieved 2022-10-24.
  2. ^ "TrustworthyAI". ITU. Archived from the original on 2022-10-24. Retrieved 2022-10-24.
      This article incorporates text from this source, which is by the International Telecommunication Union available under the CC BY 4.0 license.
  3. ^ Ammanath, Beena (2022-03-22). Trustworthy AI: A Business Guide for Navigating Trust and Ethics in AI. John Wiley & Sons. ISBN 978-1-119-86792-0.
  4. ^ Heintz, Fredrik; Milano, Michela; O'Sullivan, Barry (2021-04-13). Trustworthy AI - Integrating Learning, Optimization and Reasoning: First International Workshop, TAILOR 2020, Virtual Event, September 4–5, 2020, Revised Selected Papers. Springer International Publishing. ISBN 978-3-030-73958-4.
  5. ^ "TrustworthyAI Seminar Series". AI for Good. Retrieved 2022-10-24.
  6. ^ Shulman, R.; Greene, R.; Glynne, P. (2006-03-21). "Does implementation of a computerised, decision-supported intensive insulin protocol achieve tight glycaemic control? A prospective observational study". Critical Care. 10 (1): P256. doi:10.1186/cc4603. ISSN 1364-8535. PMC 4092631.