Fujitsu Laboratories and Hokkaido University have announced the development of a new technology based on the principle of "explainable AI."
FREMONT, CA: Fujitsu Laboratories Ltd. and Hokkaido University recently announced the development of a new technology based on the principle of "explainable AI," which automatically presents users with the steps needed to achieve a desired outcome based on AI results about data, such as medical check-up results.
'Explainable AI' is an area of growing interest in machine learning and artificial intelligence. While AI technologies can make decisions from data automatically, explainable AI additionally provides individual reasons for those decisions, helping to avoid the so-called "black box" problem, in which an AI reaches conclusions through opaque and potentially problematic means. Conventional approaches can present hypothetical changes that would lead to a different result when an undesirable outcome occurs, but they do not offer concrete measures for actually achieving that change.
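To make that distinction concrete, here is a minimal Python sketch of a conventional counterfactual explanation, not the announced technology: a black-box model judges a subject unhealthy, and a single-feature search finds a hypothetical change that would flip the decision, without saying how the user should actually achieve it. The model, features, weights, and thresholds are all invented for illustration.

```python
def black_box(subject: dict) -> str:
    """Stand-in for an opaque AI model that scores health risk (hypothetical)."""
    bmi = subject["weight_kg"] / subject["height_m"] ** 2
    risk = 0.6 * max(0.0, bmi - 25) + 0.04 * max(0.0, subject["systolic_bp"] - 130)
    return "unhealthy" if risk > 1.0 else "healthy"

def counterfactual(subject: dict, feature: str, step: float, limit: int = 200):
    """Nudge a single feature until the black box's decision flips, if it ever does."""
    original = black_box(subject)
    candidate = dict(subject)
    for _ in range(limit):
        candidate[feature] += step
        if black_box(candidate) != original:
            return feature, round(candidate[feature], 1)
    return None  # no flip found within the search budget

subject = {"height_m": 1.70, "weight_kg": 82.0, "systolic_bp": 150}
print(black_box(subject))                           # -> unhealthy
print(counterfactual(subject, "weight_kg", -0.5))   # -> ('weight_kg', 73.0)
print(counterfactual(subject, "systolic_bp", -1.0)) # -> None
```

Note that the search over blood pressure alone returns None here: a hypothetical change to a single attribute may not exist, or may be unrealistic to act on, which is precisely the gap between explanation and actionable guidance that the article describes.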
For example, suppose an AI that assesses health status judges a person to be unhealthy. The new technology can first be applied to explain which items in the health check-up data, such as height, weight, and blood pressure, led to that judgment. It can then offer the user personalized guidance on the best way to become healthy, drawing on past data to identify the interactions among the many complicated check-up items and presenting improvement measures ordered by their feasibility and difficulty of implementation.
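In the same illustrative spirit, the sketch below shows how candidate improvement measures might be ranked once feasibility and difficulty are taken into account. The actions and per-unit effort scores are hypothetical placeholders, not Fujitsu's actual model.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    feature: str
    change: float
    effort: float  # assumed difficulty score per unit of change (hypothetical)

candidates = [
    Action("reduce weight", "weight_kg", -9.0, effort=1.0),
    Action("lower blood pressure via medication", "systolic_bp", -20.0, effort=0.4),
    Action("lower blood pressure via exercise", "systolic_bp", -20.0, effort=0.7),
]

def total_cost(action: Action) -> float:
    """Simple feasibility proxy: magnitude of change times per-unit effort."""
    return abs(action.change) * action.effort

# Present the plan easiest-first, so the user sees the most feasible steps on top.
for a in sorted(candidates, key=total_cost):
    print(f"{a.description}: change {a.feature} by {a.change} (cost {total_cost(a):.1f})")
```

A cost model like this is only a proxy; the point is that ordering candidate measures by how hard they are to carry out is what turns a bare counterfactual into advice a user can actually follow.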
The new technology thus offers the potential to improve the transparency and reliability of decisions made by AI, allowing more people to interact with AI-based systems with a sense of trust and peace of mind. Further details will be presented at AAAI-21, the Thirty-Fifth AAAI Conference on Artificial Intelligence, which opens on Tuesday, February 2.
Development Background
Today, deep learning is widely used in AI systems that perform advanced tasks such as face recognition and autonomous driving, automatically making a variety of decisions based on vast amounts of data with a kind of black-box predictive model. Going forward, however, ensuring the transparency and accuracy of AI systems will become a significant issue as AI is used to make important decisions and proposals for society. This requirement has driven growing interest in and research on "explainable AI" technologies.