Through the programme, DARPA aims to establish a suite of machine learning techniques that can produce explainable models without compromising on performance. This will allow human users to better understand and more effectively manage their artificially intelligent counterparts.
To that end, Raytheon BBN’s Explainable Question Answering System, or EQUAS, will enable AI programs to ‘show their work,’ increasing users’ confidence in the machine’s suggestions. “Our goal is to give the user enough information about how the machine’s answer was derived and show that the system considered relevant information so users feel comfortable acting on the system’s recommendation,” said Bill Ferguson, Lead Scientist and EQUAS Principal Investigator at Raytheon BBN.
EQUAS will reveal which data mattered most in the AI decision-making process. Crucially, users will be able to explore these recommendations via an intuitive graphical interface. While the technology is still at a very early stage, its potential applications are wide-ranging.
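DARPA has not published the internals of EQUAS, but a common way to surface “which data mattered most” is occlusion-based attribution: mask parts of the input and measure how much the model’s confidence drops. The sketch below is a minimal, hypothetical illustration of that idea, not EQUAS itself; `model_predict` stands in for any black-box classifier and is an assumption introduced here.

```python
import numpy as np

def occlusion_saliency(model_predict, image, target_class, patch=8):
    """Hypothetical sketch: score each region of a 2D grayscale `image` by
    how much masking it lowers the model's confidence in `target_class`.
    Assumes `model_predict(image)` returns a class-probability vector."""
    baseline = model_predict(image)[target_class]
    h, w = image.shape
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # blank out one patch
            drop = baseline - model_predict(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] = drop  # larger drop = more important
    return saliency  # high values mark regions the decision depended on
```

A heat map of `saliency` overlaid on the input is the kind of visual explanation an interface like the one described here could present.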
“A fully developed system like EQUAS could help with decision-making not only in DoD operations, but in a range of other applications like campus security, industrial operations and the medical field,” said Ferguson. “Say a doctor has an X-ray image of a lung and her AI system says that it’s cancer. She asks why, and the system highlights what it thinks are suspicious shadows, which she had previously disregarded as artefacts of the X-ray process. Now the doctor can make the call – to diagnose, investigate further, or, if she still thinks the system is in error, to let it go.”
Remarkably, as the system is enhanced, EQUAS will be able to monitor itself and share the factors that limit its ability to make reliable recommendations. These self-monitoring capabilities will help developers refine AI systems, allowing them to inject additional data or change how data is processed.
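The programme materials do not spell out how this self-monitoring would work. One simple baseline is for a system to flag recommendations whose output distribution is too uncertain to act on; the sketch below assumes softmax probabilities from an arbitrary classifier, and the threshold shown is purely illustrative.

```python
import numpy as np

def flag_unreliable(probs, entropy_threshold=1.0):
    """Hypothetical self-check: report when the model's own probability
    distribution is too flat to support a confident recommendation.
    `probs` is a softmax probability vector; the threshold is illustrative."""
    entropy = -np.sum(probs * np.log(probs + 1e-12))  # uncertainty of the prediction
    if entropy > entropy_threshold:
        return f"Low reliability: prediction entropy {entropy:.2f} exceeds threshold"
    return None  # confident enough to surface the recommendation
```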