# Local Search

### Introduction

The local search algorithm works by iteratively selecting an improved solution from the current solution's neighborhood until no improvement can be found. The precise definition of the neighborhood relation depends on the problem. Usually, two solutions are considered neighbours if the difference between them is small in some sense (e.g. a small number of differing elements). Many alternative neighborhood definitions are often possible, and it is important to select one appropriate for the problem, since this choice has a major impact on the quality of the results. There are two general recommendations:

• Any solution should be reachable from any other solution
• The score difference between the neighbours should be small

The neighborhood size is typically polynomial in the problem size. In practice this makes the algorithm runtime polynomial too, but with a higher exponent that also depends on the mean number of iterations. Larger neighborhoods make the search more powerful, but may quickly make it prohibitively slow. Variable neighborhood search is an acceleration technique that works by starting with a small neighborhood and switching to a larger one only when no improvements are found.
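
This scheme can be sketched as follows. A hedged illustration, not Ascension's implementation: `score` and the `neighborhoods` generators are hypothetical problem-specific callbacks, and maximization is assumed.

```python
def variable_neighborhood_search(solution, score, neighborhoods):
    """Local search over a list of neighborhood generators ordered from
    small to large; escalate only when the current level yields no improvement."""
    current, current_score = solution, score(solution)
    level = 0
    while level < len(neighborhoods):
        improved = False
        for candidate in neighborhoods[level](current):
            s = score(candidate)
            if s > current_score:  # maximization assumed
                current, current_score = candidate, s
                improved = True
                break
        # drop back to the smallest neighborhood after any improvement
        level = 0 if improved else level + 1
    return current

# Toy usage: maximize the bit sum; level 0 flips one bit, level 1 flips two.
def flip1(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

def flip2(x):
    return [z for y in flip1(x) for z in flip1(y)]

best = variable_neighborhood_search((0, 0, 1, 0), sum, [flip1, flip2])
```

In this toy run the level 0 neighborhood already suffices, and the search terminates once neither level improves on the all-ones string.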

### Search Strategies

There are two basic strategies of selecting a new solution from a neighborhood:

• First improvement: evaluate the neighbours until an improvement has been found
• Best improvement: evaluate all the neighbours, select the best one
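
In code the two rules differ only in the selection step. A minimal sketch with hypothetical `score` and `neighbors` callbacks (maximization assumed; not Ascension's actual code):

```python
def first_improvement(current, score, neighbors):
    """Return the first neighbour that beats the current score, or None."""
    base = score(current)
    for candidate in neighbors(current):
        if score(candidate) > base:
            return candidate
    return None

def best_improvement(current, score, neighbors):
    """Evaluate every neighbour; return the best one if it is an improvement."""
    base = score(current)
    best = max(neighbors(current), key=score)
    return best if score(best) > base else None

# Single-bit flips of a 3-bit string; both rules find an improving flip.
def flips(x):
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

a = first_improvement((0, 1, 0), sum, flips)  # stops at the first improving flip
b = best_improvement((0, 1, 0), sum, flips)   # scans all three flips
```

The cost difference is visible even here: `first_improvement` stops after one evaluation, while `best_improvement` always evaluates the full neighborhood.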

The best improvement strategy usually gives better results but is much slower. Ascension features a third strategy called chain local search, designed to combine high efficiency and solution quality. Instead of storing the neighbours of a solution explicitly, Ascension relies on move lists. A move specifies the changes that must be applied to a solution to transform it into one of its neighbours. The new strategy first selects the candidate moves that improve the score of the current solution and then tries to apply them sequentially in order of decreasing improvement. Since the solution changes during this process, some moves may no longer result in improvement, in which case they are undone. This strategy is slightly slower than first improvement, but produces better results. However, it is not as universal, since applying a move intended for a certain solution to a different one may not make sense for some problems.
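
The strategy can be sketched as follows. This is an illustrative rendition, not Ascension's implementation: `moves`, `apply_move`, and `undo_move` are hypothetical callbacks, solutions are mutable, and higher scores are better.

```python
def chain_local_search_pass(solution, score, moves, apply_move, undo_move):
    """One pass: evaluate all moves, then apply the improving ones in order
    of decreasing gain, undoing any that the earlier moves invalidated."""
    base = score(solution)
    candidates = []
    for m in moves(solution):              # 1. score every move once
        undo = apply_move(solution, m)
        gain = score(solution) - base
        undo_move(solution, undo)
        if gain > 0:
            candidates.append((gain, m))
    candidates.sort(key=lambda t: -t[0])   # 2. best gains first
    for _, m in candidates:
        undo = apply_move(solution, m)
        new_score = score(solution)
        if new_score > base:
            base = new_score               # keep the move
        else:
            undo_move(solution, undo)      # no longer improving: undo it
    return base

# Toy usage on the bit-sum problem: a move is a bit index; since a flip is
# its own inverse, the same function serves as both apply and undo.
def flip(x, i):
    x[i] = 1 - x[i]
    return i

x = [0, 1, 0, 0]
final = chain_local_search_pass(x, sum, lambda s: range(len(s)), flip, flip)
```

Note that `base` is updated as moves are accepted, so each candidate is judged against the solution as already modified by the chain.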

### Theoretical Results

To understand the behaviour of each strategy, let's consider two toy problems:

1. Sum of bits:

• Solution: binary string xᵢ of length N
• Score: Σ xᵢ
• Moves: single bit inversions, xᵢ → 1 - xᵢ
• Initialization: N/2 bits set to 0, others to 1

Best improvement: N / 2 iterations are necessary, each one evaluating N moves for a total of N² / 2 evaluations.

First improvement: suppose that there are M good moves at some iteration. The probability of finding the first good move on the i-th random evaluation is
pᵢ = M / (N - i + 1) · Π_{j = 0}^{i - 2} (1 - M / (N - j))
The mean number of evaluations per iteration is L = (1 + N) / (1 + M). Summing over the iterations, with M = N / 2 - t good moves remaining at iteration t, the total number of evaluations is
NFE = Σ L ≈ ∫₀^{N/2 - 1} (1 + N) / (1 + N / 2 - t) dt = (N + 1) Ln(N / 4 + 1 / 2) ≈ N Ln(N / 4)
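
The closed form L = (1 + N) / (1 + M) can be verified exactly by summing i · pᵢ over all possible positions of the first good move (a quick numerical check, not part of the original derivation):

```python
from fractions import Fraction

def mean_evaluations(N, M):
    """Exact expected number of evaluations until the first of M good moves
    is found when scanning all N moves in a random order."""
    total = Fraction(0)
    all_bad = Fraction(1)            # P(the first i - 1 moves were all bad)
    for i in range(1, N - M + 2):    # the first good move appears by N - M + 1
        p_i = all_bad * Fraction(M, N - i + 1)
        total += i * p_i
        all_bad *= 1 - Fraction(M, N - i + 1)
    return total
```

For instance, `mean_evaluations(10, 3)` gives 11/4, matching (N + 1) / (M + 1).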

Chain local search: only one iteration is necessary. After evaluating N moves, N / 2 of them are performed for a total of 3 N / 2 evaluations.
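
This count is easy to reproduce with a small check (N even; the score is re-evaluated once for each applied move):

```python
def chain_evaluations_bit_sum(N):
    """Count score evaluations for one chain pass on the bit-sum problem,
    starting from N/2 zeros: N candidate evaluations + N/2 confirmations."""
    x = [0] * (N // 2) + [1] * (N // 2)
    evaluations, base = 0, sum(x)
    improving = []
    for i in range(N):              # evaluate every single-bit flip
        x[i] = 1 - x[i]
        evaluations += 1
        if sum(x) > base:
            improving.append(i)
        x[i] = 1 - x[i]
    for i in improving:             # each applied flip is re-evaluated
        x[i] = 1 - x[i]
        evaluations += 1
    return evaluations, sum(x)
```

`chain_evaluations_bit_sum(10)` returns (15, 10): 3N/2 evaluations and the optimal score N.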

2. Identity permutation:

• Solution: permutation xᵢ of the integers from 1 to N
• Score: distance to the identity permutation, Σ [xᵢ ≠ i], to be minimized
• Moves: element swaps, xᵢ ⇔ xⱼ
• Initialization: random permutation

The total number of moves is N (N - 1) / 2. Each element xᵢ ≠ i can be placed in its correct position by swapping it with the element at position xᵢ, so the number of good moves is equal to the score. The mean distance of a random solution to the optimal one is N - 1.
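
This count can be confirmed by brute force (0-based indices here; the test permutation is a single 4-cycle, avoiding 2-cycles, where one swap fixes two elements at once):

```python
def distance(p):
    """Number of misplaced elements (distance to the identity permutation)."""
    return sum(1 for i, v in enumerate(p) if v != i)

def improving_swaps(p):
    """Count the swaps that strictly decrease the distance."""
    n, base, count = len(p), distance(p), 0
    for i in range(n):
        for j in range(i + 1, n):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            count += distance(q) < base
    return count

# A single 4-cycle: every misplaced element yields exactly one good swap.
p = [1, 2, 3, 0]
```

Here `distance(p)` and `improving_swaps(p)` are both 4.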

Best improvement: N - 1 iterations are necessary, each one evaluating N (N - 1) / 2 moves for a total of approximately N³ / 2 evaluations.

First improvement: using the expression for the mean number of evaluations until improvement, with M = N - 1 - t good moves remaining at iteration t, we obtain
NFE = Σ L ≈ ∫₀^{N - 2} (1 + N (N - 1) / 2) / (N - t) dt = (1 + N (N - 1) / 2) Ln(N / 2) ≈ N² Ln(N / 2) / 2

Chain local search: move interaction must be accounted for. Performing a move that places an element xᵢ in its correct position invalidates the move that would have placed its swap partner in its correct position. Since performing each move invalidates another move from the candidate list, only half of the candidate moves are applied, so each pass roughly halves the score. The number of iterations is therefore 1 + Log₂ N and the total number of evaluations is
(1 + Log₂ N) · N (N - 1) / 2 + N - 1 ≈ (1 + Log₂ N) N² / 2
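
The halving behaviour can be observed directly. The brute-force sketch below (0-based positions, distance minimized, not the Ascension code) runs chain passes until the identity permutation is reached; with N = 40 this typically takes on the order of 1 + Log₂ N passes:

```python
import random

def distance(p):
    """Number of misplaced elements."""
    return sum(v != i for i, v in enumerate(p))

def chain_pass(p):
    """One chain pass: collect all improving swaps, apply them by decreasing
    gain, and undo any swap the earlier ones have invalidated."""
    base, n = distance(p), len(p)
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):
            p[i], p[j] = p[j], p[i]
            gain = base - distance(p)
            p[i], p[j] = p[j], p[i]
            if gain > 0:
                candidates.append((gain, i, j))
    candidates.sort(reverse=True)
    for _, i, j in candidates:
        p[i], p[j] = p[j], p[i]
        d = distance(p)
        if d < base:
            base = d
        else:
            p[i], p[j] = p[j], p[i]   # invalidated by an earlier swap
    return base

random.seed(1)
p = list(range(40))
random.shuffle(p)
passes = 0
while distance(p) > 0:
    chain_pass(p)
    passes += 1
```

Each pass applies at least its top candidate, so the distance strictly decreases until the identity permutation is reached.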

Sum of bits:

| Search strategy | NFE | Iterations | Moves generated |
| --- | --- | --- | --- |
| Best improvement | N² / 2 | N / 2 | N² |
| First improvement | N Ln(N / 4) | N / 2 | N² / 2 |
| Chain local search | 3 N / 2 | 1 | N |

Identity permutation:

| Search strategy | NFE | Iterations | Moves generated |
| --- | --- | --- | --- |
| Best improvement | N³ / 2 | N | N³ / 2 |
| First improvement | N² Ln(N / 2) / 2 | N | N³ / 2 |
| Chain local search | (1 + Log₂ N) N² / 2 | 1 + Log₂ N | (1 + Log₂ N) N² / 2 |

The best improvement strategy is by far the slowest. For the bit sum problem, chain local search requires the fewest evaluations, while for the identity permutation problem it requires approximately 1 / Ln 2 ≈ 1.44 times more evaluations than the first improvement strategy. Chaining has the benefit of reduced move generation overhead thanks to the low number of iterations. In the two toy problems, performing a move and evaluating the resulting solution takes constant time, so move generation has an impact comparable to or greater than move evaluation, making chain local search the fastest overall. Despite originating from oversimplified problems, these results (those for the latter problem in particular) are roughly indicative of actual performance. Tests on real problems show that chain local search usually requires more evaluations than the first improvement strategy (by a factor of up to 2), but produces better results.

### Types

• TMove
• TMoveUndo

### Routines

```pascal
// Make a MoveList for the Local Search. Level determines the neighborhood size,
// with 1 being the smallest.
procedure MakeLSMoveList(
   var   MoveList :  TMoveList;
   const Solution :  TSolution;
         Level    :  Integer);

// Apply a Move to the Solution
procedure PerformMove(
   var   Solution :  TSolution;
   const Move     :  TMove);
   overload;

// Apply a Move to the Solution, save Undo data
procedure PerformMove(
   var   Solution :  TSolution;
   var   Undo     :  TMoveUndo;
   const Move     :  TMove);
   overload;

// Undo the last move applied to the Solution
procedure UndoMove(
   var   Solution :  TSolution;
   const Undo     :  TMoveUndo);
```

Follow me on Twitter to receive notifications about new blog posts. This post was made possible thanks to the patron support. If you appreciate the effort put into creating this page and would like to see more posts like this, you can support me on Patreon. Top patrons of the month:

• Brian Bucklew
• Anton Shepelev
• Adam Hill
• Tomoyuki Naito
• John Metcalf