Curtin software and engineering researchers have developed a way to assess and report on the ethical, social and moral logic of AI and robotic systems as they make decisions, a step that could boost the systems’ reliability and integrity.
The breakthrough could also make artificial intelligence more acceptable and applicable across multiple areas, including the medical, legal and mining sectors.
Dr Masood Mehmood Khan from Curtin’s School of Civil and Mechanical Engineering has not only invented the framework, called the Accountable eXplainable AI (AXAI), but also proposed a software design process to integrate it into AI systems.
Dr Khan said: “Failing to provide measures of comprehensibility and accountability means most AI and machine learning (ML) systems are unable to gain wider public support and approval. At present, these systems cannot explain their decisions, and their intelligence, expertise and capabilities are also limited by built-in biases in the training data and algorithms. For example, if a rescue robot reaches an accident scene and decides to help one person while leaving others unattended, questions will be asked about the ethical, social and moral logic of the robot. Failing to explain its decision, the robot will never be trusted by society. Our system solves that problem.”
“Even the best Explainable AI (XAI) solutions explain their inferences only in terms of accuracy and some textual information. Such explanations usually do not provide a means of measuring system accountability, and so fall short in areas such as the medical sector and mining industry. For example, mining activity in an area begins with mineral exploration and prospecting. AI and ML systems are now being used to determine which areas have the potential for further exploration. These systems use algorithms designed and trained to help identify rock faces and mineral-rich regions. Prospectors combine multiple data sources to build a comprehensive understanding of the area’s geological make-up and decide the most plausible location of high-grade ores. Using our framework, the ML systems would be able to explain their conclusions and results.”
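To make the prospecting example concrete, the sketch below shows what pairing a prediction with a human-readable explanation might look like. This is an illustrative toy only, not the AXAI framework: the feature names, weights and scoring rule are invented assumptions, and it uses a simple linear model whose per-feature contributions double as its explanation.

```python
# Illustrative sketch only -- not the AXAI framework. A toy linear
# "prospectivity" scorer that pairs each prediction with a per-feature
# contribution breakdown. Feature names and weights are hypothetical.

WEIGHTS = {
    "magnetic_anomaly": 0.5,    # hypothetical survey features,
    "geochemical_signal": 0.3,  # each normalised to the 0-1 range
    "fault_proximity": 0.2,
}

def score_with_explanation(features):
    """Return a prospectivity score plus per-feature contributions.

    Reporting each feature's contribution (weight * value) lets a reader
    see why the score came out as it did -- the minimal kind of
    explanation the article argues most ML systems fail to give.
    """
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = sum(contributions.values())
    # List contributions from largest to smallest for readability.
    explanation = [
        f"{name}: {value:.2f} contribution"
        for name, value in sorted(contributions.items(), key=lambda kv: -kv[1])
    ]
    return score, explanation

# One hypothetical map cell from a survey grid.
cell = {"magnetic_anomaly": 0.8, "geochemical_signal": 0.4, "fault_proximity": 0.1}
score, explanation = score_with_explanation(cell)
print(f"prospectivity score: {score:.2f}")
for line in explanation:
    print(" ", line)
```

A real accountable system would go well beyond this, but even this toy shows the difference between a bare score and a decision a reader can interrogate.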