Impact: Explainable Artificial Intelligence (AI) and the Benefits to Industry

Wednesday 4th December 2024

Building on a successful PhD project that began in 2020 between the National Subsea Centre, the University of Stirling and the BT Group, as well as ongoing work as part of the Data for Net Zero (D4NZ) project, the Net Zero Operations team has been investigating and developing novel data-mining techniques for generating user-friendly explanations of commonly used AI methods in industrial optimisation. These methods, known as Evolutionary Algorithms (EAs), are considered “Black Box” approaches because their intractable nature and use of stochastic processes often make their decisions too complex to interpret. It is this issue that this research aims to tackle.

In this article, our Net Zero Operations Research Fellow, Dr Martin Fyvie, provides an overview of the need for explainability methods for EAs, the scientific community’s response to this issue – Explainable Artificial Intelligence (XAI) – and the Net Zero Operations team’s work in this area.

Evolutionary Computing in Industry

A major driver in the adoption of population-based metaheuristics, or Evolutionary Algorithms (EAs), in industry is their ability to solve complex optimisation problems where traditional methods may not be the best choice. These tend to be instances where the problem complexity or associated time constraints prevent a comprehensive search of all possible solutions, making EAs the preferable option. These algorithms function by mimicking biological processes such as reproduction, genetic mutation and natural selection to fine-tune a collection of solutions to a problem over time, leading to optimal (or near-optimal) solutions being presented to the end-user. Popular examples include Genetic Algorithms and Swarm Algorithms. In recent years, population-based metaheuristics such as EAs have seen an increase in applications in areas such as transport and logistics, medical applications and engineering.
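As a rough illustration of this process, the sketch below shows a minimal genetic algorithm on a toy binary problem (maximising the number of 1s in a bit string). All parameter values and problem details here are purely illustrative and are not drawn from the industrial applications discussed in this article.

```python
import random

# A minimal, illustrative genetic algorithm on a toy binary problem
# (maximise the number of 1s). Parameters are illustrative only.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 30, 50, 100, 0.02

def fitness(genome):
    return sum(genome)  # toy objective: count of 1s

def tournament(pop):
    # "natural selection": the fitter of two random individuals survives
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # "reproduction": single-point crossover of two parents
    cut = random.randrange(1, GENOME_LEN)
    return p1[:cut] + p2[cut:]

def mutate(genome):
    # "genetic mutation": flip each bit with a small probability
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best), best)
```

Over repeated generations, selection, crossover and mutation gradually concentrate the population around high-quality solutions, which is the behaviour that makes these methods attractive for large industrial optimisation problems.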

EAs have been used in transportation systems to optimise vehicle operations, such as airport taxi times, through fuel-efficient routes and reduced stops in air travel, improving both efficiency and environmental outcomes. They also excel in staff scheduling, balancing operational demands with employee preferences, highlighting their versatility. In healthcare, EAs support resource allocation and scheduling while ensuring transparency through explainable models. By identifying biases and errors, they can enhance fairness and reliability, making them well suited to high-stakes applications. In engineering and technology, EAs address large-scale optimisation challenges, from urban planning to big data management. Their adaptability enables tasks such as improving housing stock and redesigning layouts, while Internet of Things (IoT) applications, such as mapping pedestrian traffic, help resolve issues like congestion and noise pollution.

As their use in safety-critical industries continues to grow, trust in such AI systems becomes increasingly vital. To build this trust, it is essential to prioritise interpretability in system design, ensuring that both operations and results are transparent and understandable. When stakeholders can clearly comprehend how decisions are made, they are more likely to rely on these systems, particularly in high-stakes environments. This need is especially pressing for Black-Box models, which often require expert-level knowledge to interpret their decision-making processes. Addressing the challenges of non-interpretable systems is crucial to ensuring the responsible use of AI in handling sensitive data.

Explainable Artificial Intelligence

At the same time, the field of Explainable Artificial Intelligence (XAI) has seen a considerable increase in attention. XAI aims to increase understanding of these complex processes and to help build trust between decision-makers and the systems they use. The growth in AI system usage is also reflected in greater legislative scrutiny of how interpretable these systems are, especially when dealing with public data.

Common XAI techniques include feature attribution methods, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), which provide insights into which input features most influenced a model's output, and surrogate models, which approximate complex systems with simpler, interpretable ones, to name just a few approaches. Despite these advancements, much of the focus in XAI has traditionally been on supervised learning models, with less attention given to non-deterministic optimisation algorithms like EAs. Due to their stochastic nature and iterative processes, these algorithms present unique challenges for explainability. While performance metrics, such as convergence rates or solution quality, are well studied, the underlying reasons for critical decisions during the search process remain opaque. Understanding the trajectory of these algorithms, the paths they take through the solution space, offers a promising avenue for generating explanations, and it is this avenue that our work has focused on.
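The surrogate-model idea can be shown with a short, self-contained sketch: a simple decision tree is fitted to the predictions of a more complex model and then read as an approximate explanation of its behaviour. The models and data below are placeholders chosen purely for illustration, not the methods used in this project.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative global surrogate: fit an interpretable tree to the
# *predictions* of a more complex "black-box" model, then read the tree
# as an approximate explanation. Data and models are placeholders.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))  # mimic the black-box outputs

print("Fidelity to black box:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The printed tree gives a human-readable approximation of the black-box model's decision rules, with the fidelity score indicating how faithfully the surrogate reproduces its behaviour.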

Impact - Evolutionary Algorithm Trajectory Mining

To explore the potential of trajectory mining, Martin’s PhD project created a library of experimental results on both benchmark and real-world-inspired problems, collaborating with the BT Group to generate a series of problems mimicking a common staff rostering process. Through this, we identified that search trajectories in Evolutionary Algorithms contain geometrically significant features that can be linked directly to the original attributes of the problem. This finding enabled the development of a statistical analysis framework for assessing the importance of these features. By pinpointing which features had the greatest influence on key decision points during the search process, we were able to highlight their significance in shaping outcomes. In real-world-inspired problems, these features often correspond to practical elements, such as working hour patterns or team configurations.

The experimental results confirmed that these features were detectable using decomposition-based techniques and custom metrics developed as part of this project. These methods enabled the team to create interpretable explanations relevant to the problem and the end-users' requirements, showing how specific features, workers, or patterns influenced solution quality. This interpretability allowed end-users to adjust EA-generated solutions to align better with non-measured KPIs or personal preferences, with minimal impact on overall quality.
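As an indicative sketch of what a decomposition-based approach to trajectory mining can look like, the example below applies Principal Component Analysis (PCA) to a matrix of solutions visited by a search algorithm and ranks the problem variables by their loadings on the leading component. The trajectory data is randomly generated for illustration and does not reproduce the project's exact metrics or rostering data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative decomposition-based trajectory mining: stack the solutions
# visited by an EA (rows = visited solutions, columns = problem variables)
# and apply PCA. The loadings of the leading component indicate which
# variables most shape the population's movement through the search space.
rng = np.random.default_rng(0)
trajectory = rng.integers(0, 2, size=(400, 20)).astype(float)  # placeholder: 400 solutions, 20 variables

pca = PCA(n_components=2)
pca.fit(trajectory)

# Rank variables by their contribution to the first principal component
loadings = np.abs(pca.components_[0])
ranked = np.argsort(loadings)[::-1]
for var in ranked[:5]:
    print(f"variable {var}: loading {loadings[var]:.3f}")
```

In a rostering setting, the highly ranked variables would correspond to concrete problem attributes, such as particular working hour patterns, which is what allows the resulting explanations to be phrased in the end-users' own terms.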

By exploring solutions with explainability in mind, the team identified high-quality working hour allocations for a group of over 140 people, ensuring many received their preferred patterns as part of a one-time reallocation. By minimising the "range", a measure of instability in worker coverage, the team achieved consistent, minimally disruptive solutions, enabling project managers to confidently plan future tasks, including installations, months in advance.
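One illustrative way to read such a "range" measure is as the gap between the busiest and quietest day in a worker-by-day allocation. The sketch below computes this for randomly generated data; both the data and this particular definition are assumptions made for illustration rather than the project's exact formulation.

```python
import numpy as np

# Purely illustrative "range"-style stability measure: for a worker-by-day
# allocation matrix, daily coverage is the number of workers on shift, and
# the range is the gap between the busiest and quietest day.
rng = np.random.default_rng(2)
allocation = rng.integers(0, 2, size=(140, 28))  # placeholder: 140 workers x 28 days, 1 = on shift

daily_coverage = allocation.sum(axis=0)
coverage_range = daily_coverage.max() - daily_coverage.min()
print("Daily coverage per day:", daily_coverage)
print("Range (instability):", coverage_range)
```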

Key explanations included:

Identifying high-impact rota sets to expand to more workers

Highlighting low-impact rota subsets with capacity for change to bridge schedule gaps

Determining rotas that enhance allocation consistency and those critical to meeting other KPIs

Beyond interpretability, the XAI methods developed in this project also hold value for algorithm design. By analysing feature importance and comparing how different algorithms explore the search space, the Net Zero Operations approach allows direct comparisons between EAs. This provides researchers and developers with insights into the key drivers of solution quality for various algorithms, highlighting potential areas for improvement. These insights can guide the refinement of algorithms, offering a clearer understanding of how their performance and decision-making mechanisms differ from other methods.
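As a rough sketch of how such comparisons might be made, the example below derives a variable-importance profile from each of two trajectories and measures how similar the profiles are. The trajectories are randomly generated placeholders, and this is an assumed, simplified comparison rather than the project's exact method.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative comparison of two algorithms via their trajectory
# "variable-importance" profiles (absolute PCA loadings of the first
# component). In practice the trajectories would be the visited solutions
# recorded from two different EAs solving the same problem.
rng = np.random.default_rng(1)

def importance_profile(trajectory):
    pca = PCA(n_components=1).fit(trajectory)
    return np.abs(pca.components_[0])

traj_a = rng.normal(size=(300, 20))  # placeholder: recorded from algorithm A
traj_b = rng.normal(size=(300, 20))  # placeholder: recorded from algorithm B

prof_a, prof_b = importance_profile(traj_a), importance_profile(traj_b)
similarity = np.corrcoef(prof_a, prof_b)[0, 1]
print(f"Correlation between the two algorithms' importance profiles: {similarity:.2f}")
```

A high correlation would suggest the two algorithms are driven by the same problem attributes, while a low one would point to genuinely different search behaviour worth investigating.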

Outputs

Journal Papers:

Zhou, R., Bacardit, J., Brownlee, A.E., Cagnoni, S., Fyvie, M., Iacca, G., McCall, J., van Stein, N., Walker, D., & Hu, T. (2024). Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems. IEEE Transactions on Evolutionary Computation. https://doi.org/10.1109/TEVC.2024.3476443

Fyvie, M., McCall, J.A.W., Christie, L.A., Brownlee, A.E.I., & Singh, M. (2023). Towards explainable metaheuristics: Feature extraction from trajectory mining. Expert Systems, e13494. https://doi.org/10.1111/exsy.13494

Conference Papers:

Fyvie, M., McCall, J.A.W., Christie, L.A., Zăvoianu, A.-C., Brownlee, A.E.I., & Ainslie, R. (2023). Explaining a Staff Rostering Problem by Mining Trajectory Variance Structures. In Artificial Intelligence XL (SGAI 2023), Lecture Notes in Computer Science, vol 14381. Springer, Cham. https://doi.org/10.1007/978-3-031-47994-6_27

Fyvie, M., McCall, J.A.W., Christie, L.A., & Brownlee, A.E.I. (2023). Explaining a Staff Rostering Genetic Algorithm using Sensitivity Analysis and Trajectory Analysis. In Proceedings of the Companion Conference on Genetic and Evolutionary Computation (GECCO '23 Companion). Association for Computing Machinery, New York, NY, USA, 1648–1656. https://doi.org/10.1145/3583133.3596353

Fyvie, M., McCall, J.A.W., & Christie, L.A. (2021). Towards Explainable Meta-heuristics: PCA for Trajectory Mining in Evolutionary Algorithms. In Artificial Intelligence XXXVIII (SGAI-AI 2021), Lecture Notes in Computer Science, vol 13101. Springer, Cham. https://doi.org/10.1007/978-3-030-91100-3_7