"Airplane Engine" by Pixabay from Pexels

Learning to Trust Black Box Artificial Intelligence

2/8/2021

As artificial intelligence (AI) grows more advanced, the scientific community continues to struggle with how much trust to grant it. Placing too much trust in an unproven algorithm can lead to bias, personal harm, or flat-out incorrect results. In contrast, rigorously tested and monitored AI can do tremendous good, most often when its inner workings can be interpreted by humans. For deeply complex algorithms, however, a standard of trust other than acceptance through understanding needs to be established.

AI comes in two main types – interpretable and explainable. Ideally, all AI would be interpretable, meaning that one can clearly track and understand every decision the algorithm makes from input to output, so that no result is a mystery. Realistically, most systems are explainable at best: their behavior can be characterized and justified after the fact, but without a true account of each internal step. Since increasingly complex problems in the world require complex solutions, developers are relying on explainable, black box models, in which the inputs and outputs are clear but the intermediate steps are unknown.
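To make the distinction concrete, here is a minimal sketch in Python (assuming scikit-learn is available; the data and models are purely illustrative): a linear model whose coefficients fully describe how it turns inputs into outputs, beside a gradient-boosted ensemble whose inputs and outputs are clear but whose intermediate steps are effectively opaque.

```python
# Illustrative sketch of interpretable vs. black box models (not tied to any real system).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                                   # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Interpretable: every prediction is a weighted sum we can read off directly.
linear = LinearRegression().fit(X, y)
print("coefficients:", linear.coef_)                            # the model's full "reasoning"

# Black box: hundreds of stacked trees; the inputs and outputs are clear,
# but no single intermediate step explains a given prediction.
black_box = GradientBoostingRegressor(n_estimators=300).fit(X, y)
print("prediction:", black_box.predict(X[:1]))
```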

Rolls-Royce has been using analytics to monitor and maintain its airplane engines since 1980. The process has undergone quite a few improvements over the years, but in 2015 the company made a drastic shift to what it calls “AI 2.0”. This new method looks at up to 26 dimensions of data, far more than even the most intelligent and specialized humans could comprehend at once. Because of this, the method moves well past being fully interpretable and becomes a black box. Rolls-Royce can point to consistent results, but consistent results alone are not evidence that the system deserves trust.

To close that gap, the company now uses a series of checks and balances to establish ethical integrity within its black box algorithm. The “Rolls-Royce 5 Checks Philosophy” comprises a sense check, continuous test systems, an independent check, a comprehensive check, and a data integrity check, all aimed at catching mutations or issues in the system. By checking against a massive set of known inputs and outputs, Rolls-Royce is able to quickly identify discrepancies, and therefore errors, within the AI. This builds a great deal of trust in the AI and allows the company to keep using such complex analytics with minimal fear of degradation or error.
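Rolls-Royce has not published the code behind these checks, but the core idea of comparing a black box against a library of known inputs and outputs looks roughly like the sketch below. The tolerance, the reference data, and the predict() interface are assumptions made for illustration, not a description of the company's actual system.

```python
# Hypothetical sketch of a "known input, known output" sense check on a black box model.
import numpy as np

def sense_check(model, reference_inputs, reference_outputs, tolerance=0.05):
    """Flag reference cases where the model drifts from its known-good answers."""
    predictions = model.predict(reference_inputs)
    errors = np.abs(predictions - reference_outputs)
    failures = np.flatnonzero(errors > tolerance)   # indices of out-of-tolerance cases
    return failures, errors

# Example usage: rerun the check after every retraining or data-pipeline change,
# and alert if any reference case falls outside tolerance.
# failures, errors = sense_check(black_box, X_reference, y_reference)
# if failures.size:
#     raise RuntimeError(f"{failures.size} reference cases out of tolerance")
```

The point of such a check is not to explain the model's internal steps, only to detect when its behavior diverges from results that have already been verified.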

Although this is a meaningful step forward, is it enough to justify trust in similar applications? Rolls-Royce had a proven record of success in the airplane engine industry even before these measures were put in place. If a similarly complex algorithm were in use at a fledgling company, with dangerous consequences if it failed, would we accept a rigorous series of checks on a black box as enough? There is no clear-cut answer, and it will certainly vary from algorithm to algorithm. Still, ensuring the ethical integrity of previously opaque systems is a great leap for AI ethics.

Clearly, it would be best to have an interpretable solution for every AI application, so that it could be monitored and tested to the fullest extent. Given current technologies, however, that is simply not possible. As long as we continue to use black box AI, creating checks and balances for ethical soundness is essential to improving trust in such systems. Moving forward, advancements in AI problem solving should be met with matching advancements in establishing trust and integrity, so that these groundbreaking systems can continue to be used safely and to their highest potential.