Introducing Sequence-based Hyper-heuristics with Multiple Points of Interpretation

Abstract
Hyper-heuristics are a class of search methods for solving optimization problems. This relatively new field has attracted the attention of researchers because it employs existing heuristics to construct solutions for specific problems. In other words, instead of inventing new techniques, hyper-heuristics combine already available techniques to tackle optimization problems. Two kinds of hyper-heuristic models appear in the literature: "rule-based" models, which rely on a set of rules that guide the solver in deciding which heuristic to apply next, and "sequence-based" models, which rely on a sequence of heuristics to apply. One remarkable characteristic of sequence-based models is that they do not need to identify features that characterize the problem state; instead, they directly represent the actions to take at each decision step. Furthermore, recent works have shown that the sequence length does not need to equal the number of decisions required to find a good solution. Instead, the sequence can be short and repeated under a looping schema until the required number of decisions is fulfilled. Although looping schemas seem to provide suitable solutions, they may be somewhat restrictive due to their fixed nature and other limitations. For instance, current looping schemas require repeating every element in the sequence of actions, which can be very disruptive during the learning stage of the hyper-heuristic because any change to the sequence is replicated in each of its repetitions. In this sense, relaxing the looping schemas could improve the performance of the models. To that end, this work presents two models that learn their looping schemas by interpreting their sequence of actions from different positions and with different approaches: the Bidirectional Point Of Interpretation (BPOI) model and the Partial Bidirectional Point Of Interpretation (PPOI) model. We found that the PPOI model not only produces reasonable solutions but also keeps the sequence length small. Furthermore, we introduce the notion of a length penalization to keep the sequence short, which, according to our experiments, also seems to improve the models' performance.
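
As a minimal sketch of the looping-schema idea, and assuming a generic list of low-level heuristics and a toy problem state (the names and heuristics below are illustrative placeholders, not the ones used in our models), a short sequence of heuristic indices can be cycled until the required number of decisions is reached:

    # Minimal sketch: a short sequence of heuristic indices is cycled
    # (a fixed looping schema) to cover every decision step.
    def apply_sequence(sequence, num_decisions, heuristics, state):
        """Repeat `sequence` cyclically until `num_decisions` actions are made."""
        for step in range(num_decisions):
            action = sequence[step % len(sequence)]   # fixed point of interpretation
            state = heuristics[action](state)         # apply the selected heuristic
        return state

    # Example: two toy "heuristics" acting on an integer state.
    heuristics = [lambda s: s + 1, lambda s: s * 2]
    final_state = apply_sequence(sequence=[0, 1, 0], num_decisions=7,
                                 heuristics=heuristics, state=1)
    print(final_state)

Under this fixed schema, editing one element of the sequence changes every one of its repetitions; the BPOI and PPOI models relax this by learning where and how the sequence is interpreted.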