Appears in Collections:Computing Science and Mathematics Conference Papers and Proceedings
Author(s): Singh, Manjinder
Brownlee, Alexander E I
Cairns, David
Contact Email:
Title: Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for Importance of Variables
Citation: Singh M, Brownlee AEI & Cairns D (2022) Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for Importance of Variables. In: GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion, Boston, USA, 09.07.2022-13.07.2022. New York: ACM, pp. 1785-1793.
Issue Date: 2022
Date Deposited: 29-Apr-2022
Conference Name: GECCO '22
Conference Dates: 2022-07-09 - 2022-07-13
Conference Location: Boston, USA
Abstract: Metaheuristic search algorithms look for solutions that either maximise or minimise a set of objectives, such as cost or performance. However, most real-world optimisation problems are nonlinear, with complex constraints and conflicting objectives. The process by which a GA arrives at a solution remains largely unexplained to the end-user, and a poorly understood solution will dent the user's confidence in it. We propose that investigating the variables that strongly influence solution quality, and their relationships, would be a step toward explaining the near-optimal solution presented by a metaheuristic. Using four benchmark problems, we take the population data generated by a Genetic Algorithm (GA) to train a surrogate model, and investigate how the surrogate model learns the search space. We compare what the surrogate has learned after being trained on population data from the first generation alone with a surrogate model trained on the population data from all generations. We show that the surrogate model picks out key characteristics of the problem as it is trained on population data from each generation. By mining the surrogate model we can build a picture of the learning process of a GA, and thus an explanation of the solution it presents. The aim is to build trust and confidence in the end-user about the solution presented by the GA, and to encourage adoption of the model.
CCS Concepts: • Theory of computation → Models of learning; Theory of randomized search heuristics.
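The idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact method (the benchmark problems, GA configuration, and surrogate model below are all assumptions made for illustration): a tiny GA runs on a weighted OneMax problem with hidden per-bit weights, every evaluated individual is logged, and a simple linear surrogate (a per-bit main-effect estimate over the logged population data) is then mined to rank the variables by importance.

```python
# Illustrative sketch only: a small GA on weighted OneMax, logging all
# evaluations, then a per-variable main-effect surrogate mined for
# variable importance. Problem and parameters are hypothetical.
import random

random.seed(42)

N = 8                                   # number of binary variables
WEIGHTS = [8, 1, 5, 2, 7, 3, 6, 4]      # hidden ground-truth importances

def fitness(bits):
    """Weighted OneMax: sum of weights where the bit is set."""
    return sum(w * b for w, b in zip(WEIGHTS, bits))

def run_ga(pop_size=40, gens=30, log=None):
    """Minimal generational GA with truncation selection,
    one-point crossover and per-bit mutation, logging every
    (fitness, individual) pair evaluated."""
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = [(fitness(ind), ind) for ind in pop]
        if log is not None:
            log.extend(scored)
        scored.sort(key=lambda fi: fi[0], reverse=True)
        parents = [ind for _, ind in scored[: pop_size // 2]]
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]
            for i in range(N):               # bit-flip mutation, rate 1/N
                if random.random() < 1.0 / N:
                    child[i] = 1 - child[i]
            pop.append(child)
    return max(pop, key=fitness)

def importance(log):
    """Surrogate mining: for each bit, mean fitness when the bit is 1
    minus mean fitness when it is 0, over all logged individuals."""
    effects = []
    for j in range(N):
        ones = [f for f, ind in log if ind[j] == 1]
        zeros = [f for f, ind in log if ind[j] == 0]
        effects.append(sum(ones) / len(ones) - sum(zeros) / len(zeros))
    return effects

log = []
best = run_ga(log=log)
eff = importance(log)
ranking = sorted(range(N), key=lambda j: -eff[j])
print("best solution found:", best)
print("estimated importance ranking (bit indices):", ranking)
```

Because the log spans every generation, the estimated effects reflect both the initial random sampling and the increasingly biased later populations, which mirrors the paper's comparison between a surrogate trained on first-generation data and one trained on data from all generations.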
Status: AM - Accepted Manuscript
Rights: © ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion, July 2022, Pages 1785–1793.

Files in This Item:
File: Singh-etal-ACM-2022.pdf
Description: Fulltext - Accepted Version
Size: 1.35 MB
Format: Adobe PDF

This item is protected by original copyright

Items in the Repository are protected by copyright, with all rights reserved, unless otherwise indicated.

The metadata of the records in the Repository are available under the CC0 public domain dedication: No Rights Reserved

If you believe that any material held in STORRE infringes copyright, please contact us, providing details, and we will remove the Work from public display in STORRE and investigate your claim.