A technical report addressing issues of trust in Artificial Intelligence has recently been published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
ISO/IEC TR 24028:2020, “Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence”, analyses the factors that can impact the trustworthiness of systems providing or using AI.
The new document can be used by any business, regardless of its size or sector, and follows on from work carried out at an International Plenary meeting hosted by NSAI in Dublin in 2019. During the three-day forum, ‘Trustworthiness in AI’ was the focus of intense debate among some of the world’s foremost technological experts, who had gathered to advance work on the world’s first standards in AI.
AI is now part of our daily lives and will become more prevalent in the years to come. E-mail spam filtering, Google’s search predictions and voice-recognition software such as Siri are all examples of AI in everyday life. What these technologies have in common is machine-learning algorithms that enable them to react and respond in real time.
Trustworthy AI systems must be fair, safe, transparent and accountable. They must respect individual privacy and must not discriminate. ISO/IEC TR 24028 examines the existing approaches that can support or improve trustworthiness in technical systems and discusses their potential application to AI. It also discusses possible approaches to mitigating AI system vulnerabilities and ways of improving their trustworthiness.
In addition to providing guidance on trustworthiness and how it is being embedded in IT systems, ISO/IEC TR 24028 will help the standards community to better understand and identify specific standardization gaps in AI and, importantly, how to address them through future standards work.