
Test Run: Solving Sudokus using combinatorial evolution


November 2016

Volume 31, number 11

By James McCaffrey

In a combinatorial optimization problem, the goal is to arrange a set of discrete items in a particular order. A Sudoku puzzle is an example of a combinatorial optimization problem. In this article, I'll show you how to write a program that solves difficult Sudoku problems using a technique I call combinatorial evolution.

The demo sets up a “not easy” Sudoku problem (I'll explain what I mean by “not easy” shortly). A Sudoku solution must satisfy several constraints. Each row of the 9 × 9 solution grid must contain the digits 1 through 9 with no duplicates. Each column must contain the digits 1 through 9. Each 3 × 3 sub-square must contain the digits 1 through 9. And none of the initial values in the problem grid can be changed in the solution grid.

Figure 1 shows the demo problem. Some Sudokus can be solved with a brute force algorithm, in which you examine every empty cell in the grid and check which values can legally be placed there according to the row, column and sub-square constraints. If only one value is possible, that value is placed in the empty cell. The process continues until every empty cell has been assigned a value. Sudoku problems that can be solved this way are the easy type you typically find in newspapers and magazines.


Figure 1: Not an easy Sudoku problem

The demo problem isn't easy because the brute force algorithm doesn't work. For example, suppose you examine the problem grid from left to right, then top to bottom. The blank cell at (0, 0) can contain one of the following values: 1, 3, 5, 9. The blank cell at (0, 1) can contain one of the following values: 1, 3, 9. The next blank cell at (0, 4) can only contain 3, so you place a 3 there and continue. However, with this approach, after placing nine values you get stuck, because every remaining empty cell can hold two or more possible values.

To understand what combinatorial evolution optimization is, take a look at Figure 2. Combinatorial evolution optimization borrows ideas from several bio-inspired algorithms. The algorithm works with a collection of virtual organisms. Each organism represents a possible solution to the problem. Combinatorial evolution is an iterative process. Each iteration is called an "epoch". In every epoch, every organism tries to find a better solution by examining a new possible solution.


Figure 2: Solving a Sudoku using combinatorial evolution

After all organisms have had a chance to improve, two good organism solutions are selected and used to give birth to a new organism that replaces a weak organism. In this way, the population of organisms evolves over time. If no optimal solution has been found after maxEpochs epochs, the algorithm restarts, killing all organisms and creating a new population.

In the demo, 200 virtual organisms are set up with a maximum of 5,000 epochs. These values were found through trial and error. Combinatorial evolution does not guarantee that an optimal solution will be found, which is why a limit of 20 restarts was established to avoid a possible infinite loop.

The demo program found an optimal solution after three restarts, which took about 8 seconds on my desktop computer. While the demo was running, the program displayed the error of the best organism. I explain how the error measure is defined and calculated when I present the relevant code. However, you can see that the algorithm tends to reach a very good solution very quickly (error = 2) but then gets stuck. The restart process is one of the techniques used to combat this behavior and is common in many optimization algorithms.

The demo program

To create the demo program, I started Visual Studio, clicked File > New > Project, and then selected the C# Console Application option. I named the project "SudokuEvo". The demo program has no significant .NET dependencies, so any version of Visual Studio should work. After the template code loaded, I right-clicked on “Program.cs” in the Solution Explorer window, renamed the file to “SudokuEvoProgram.cs” and allowed Visual Studio to automatically rename the “Program” class.

At the top of the editor window, I deleted all unneeded “using” statements, leaving only the references to the “System” and “Collections.Generic” namespaces. The overall structure of the demo program, with a few minor edits, is shown in Figure 3. The demo program is too long to present in its entirety in this article, but the complete source code is included in the accompanying download.

Figure 3: Structure of the demo program

The demo program is not as complicated as it may first appear, because many of the methods are short helper methods. The demo has two classes. All the control logic is implemented as static methods in the main program class. The "Organism" class defines a possible solution to the target Sudoku problem.

Each “Organism” object has a type, which can be 0 for a “worker” organism or 1 for an “explorer” organism. The field named "matrix" is an array-of-arrays style matrix of integers that represents a possible solution. Every possible solution has an error value. An error value of 0 means that no constraints are violated, so the "matrix" field holds an optimal solution. Each “Organism” object also has an “age” field, which is used to determine whether the organism dies in a given epoch.
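Based on that description, a minimal sketch of the class might look like the following; the constructor details are an assumption, not necessarily the exact code from the download.

public class Organism
{
  public int type;       // 0 = worker, 1 = explorer
  public int[][] matrix; // a candidate 9x9 solution, as an array of arrays
  public int error;      // number of constraint violations; 0 means optimal
  public int age;        // used to decide whether the organism dies

  public Organism(int type, int[][] m, int error, int age)
  {
    this.type = type;
    this.matrix = m;     // the demo presumably copies the matrix here
    this.error = error;
    this.age = age;
  }
}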

In the demo program, the Sudoku problem is set up and displayed with these statements:
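The listing below is only a sketch of that setup. The actual cell values come from the Figure 1 puzzle and are not repeated here, and DisplayMatrix is an assumed helper that prints the 9x9 grid.

int[][] problem = new int[9][];
for (int r = 0; r < 9; ++r)
  problem[r] = new int[9];  // all cells start as 0 (empty)

// Each row of the Figure 1 puzzle is then hard-coded, for example:
// problem[0] = new int[] { /* the nine values of row 0, 0 for empty cells */ };

Console.WriteLine("The problem is: ");
DisplayMatrix(problem);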

The value 0 indicates an empty cell. This manual, hard-coded approach is quite tedious; in most realistic combinatorial optimization scenarios you would read the problem data from a text file.

The Sudoku problem is tackled with these statements:
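A sketch of the top-level call, using the parameter values given earlier; the exact signature of Solve is an assumption.

int numOrganisms = 200;
int maxEpochs = 5000;
int maxRestarts = 20;

int[][] solvedMatrix = Solve(problem, numOrganisms, maxEpochs, maxRestarts);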

The "Solve" method is mainly a wrapper around the "SolveEvo" method, which does most of the work. This is a common design pattern in combinatorial optimization: a low-level solver method tries to find an optimal solution, and that method is wrapped in a higher-level solver method that performs restarts.

The combinatorial evolution algorithm is not guaranteed to find an optimal solution (that is, a solution with no constraint violations). The demo program therefore checks whether the best solution found is optimal:
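A sketch of that check, assuming the Error and DisplayMatrix helpers described in this article:

int err = Error(solvedMatrix);
if (err == 0)
  Console.WriteLine("Optimal solution found.");
else
  Console.WriteLine("Did not find an optimal solution.");

DisplayMatrix(solvedMatrix);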

Matrix initialization and errors

In my opinion, the best way to understand combinatorial evolution optimization is to start by studying the helper methods. Once these are understood, the "Solve" method is relatively easy to grasp. Let me start by explaining the “RandomMatrix” method, which initializes the “matrix” field of an “Organism” object with a random possible solution. Somewhat surprisingly, the “RandomMatrix” method is the trickiest part of the entire algorithm.

Figure 4 shows the definition of the "RandomMatrix" method.

Figure 4: Definition of the "RandomMatrix" method
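The following is a minimal sketch of the idea, not necessarily the exact Figure 4 code: each 3x3 block is filled with a random permutation of the digits 1 through 9 that leaves the fixed problem values in place. It assumes the class-scope "rnd" Random object mentioned later, the "Corner" helper described below, and a hypothetical "DuplicateMatrix" copy helper.

public static int[][] RandomMatrix(int[][] problem)
{
  int[][] result = DuplicateMatrix(problem);
  for (int block = 0; block < 9; ++block)
  {
    int[] corner = Corner(block);  // row and column of the block's upper-left cell

    // Collect the digits that are not already fixed in this block.
    List<int> candidates = new List<int>();
    for (int v = 1; v <= 9; ++v) candidates.Add(v);
    for (int r = corner[0]; r < corner[0] + 3; ++r)
      for (int c = corner[1]; c < corner[1] + 3; ++c)
        if (problem[r][c] != 0) candidates.Remove(problem[r][c]);

    // Shuffle the remaining digits (Fisher-Yates).
    for (int i = candidates.Count - 1; i > 0; --i)
    {
      int j = rnd.Next(0, i + 1);
      int tmp = candidates[i]; candidates[i] = candidates[j]; candidates[j] = tmp;
    }

    // Place the shuffled digits into the block's empty cells.
    int k = 0;
    for (int r = corner[0]; r < corner[0] + 3; ++r)
      for (int c = corner[1]; c < corner[1] + 3; ++c)
        if (problem[r][c] == 0) result[r][c] = candidates[k++];
  }
  return result;
}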

The algorithm is designed so that each of the nine 3x3 sub-squares is okay at all times, in the sense that each sub-square contains the digits 1 through 9 with no duplicate values. A condition that is always true is sometimes called an “invariant”. This invariant influences all other methods of the algorithm. In theory, the row or column constraint could have been used as the invariant instead, but in practice, using the sub-square constraint is more effective.

Conceptually, iterating over every 3x3 sub-square in a 9x9 matrix isn't deep, but the implementation is, to put it mildly, tricky. My approach was to define two helper methods: "Block" and "Corner". The “Block” method accepts a row index “r” and a column index “c” and returns the number (0-8) of the block that contains the cell at (r, c). Block numbers are assigned from left to right and then top to bottom. For example, if (r, c) = (3, 8), the “Block” method returns 5.

The “Corner” method accepts a block ID (0-8) and returns the indices of the upper-left corner of that block. For example, if block = 8, the “Corner” method returns (6, 6).
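Minimal sketches of the two helpers, consistent with the examples just given:

public static int Block(int r, int c)
{
  return 3 * (r / 3) + (c / 3);   // e.g. Block(3, 8) = 3 * 1 + 2 = 5
}

public static int[] Corner(int block)
{
  int r = 3 * (block / 3);        // e.g. block = 8 gives (6, 6)
  int c = 3 * (block % 3);
  return new int[] { r, c };
}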

Because all nine 3x3 sub-squares of a matrix are always okay, a relatively simple method that defines the error can be written as follows:
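A sketch of that error method; as described next, it counts, for each row and each column, how many of the digits 1 through 9 are missing.

public static int Error(int[][] matrix)
{
  int err = 0;

  for (int i = 0; i < 9; ++i)           // missing values in each row
  {
    int[] counts = new int[10];
    for (int j = 0; j < 9; ++j) ++counts[matrix[i][j]];
    for (int v = 1; v <= 9; ++v)
      if (counts[v] == 0) ++err;
  }

  for (int j = 0; j < 9; ++j)           // missing values in each column
  {
    int[] counts = new int[10];
    for (int i = 0; i < 9; ++i) ++counts[matrix[i][j]];
    for (int v = 1; v <= 9; ++v)
      if (counts[v] == 0) ++err;
  }

  return err;
}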

The total number of errors for a possible solution is the sum of the number of missing values in the rows plus the number of missing values in the columns. Due to the algorithm invariant, no values are missing in any of the 3x3 sub-squares, so they do not contribute to the error. Note that counting the number of duplicate values in each row and column is the same as counting the number of missing values.

Generating a neighbor matrix

In combinatorial evolution there are two types of "organism" objects. Those with the type “explorer” look for possible solutions at random using the method “RandomMatrix”. “Organism” objects with the type “worker” repeatedly try to find a better solution than the one stored in their “matrix” field. For this purpose, a “neighboring” possible solution is examined.

Because of the 3x3 sub-square invariant, a neighbor solution must be limited to a permutation within a single sub-square. Concretely, this means that the algorithm for generating a neighbor matrix selects a block at random, then selects two cells in that block, neither of which holds a fixed value from the problem definition, and swaps the values in the two cells.

The "NeighborMatrix" method is defined as follows:
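A sketch of the method under the description above; the "problem" parameter is needed so that fixed cells are never swapped, and "DuplicateMatrix" is an assumed copy helper.

public static int[][] NeighborMatrix(int[][] problem, int[][] matrix)
{
  int[][] result = DuplicateMatrix(matrix);

  int block = rnd.Next(0, 9);           // pick a random block, 0-8
  int[] corner = Corner(block);

  // Collect the cells of this block that are not fixed in the problem definition.
  List<int[]> cells = new List<int[]>();
  for (int r = corner[0]; r < corner[0] + 3; ++r)
    for (int c = corner[1]; c < corner[1] + 3; ++c)
      if (problem[r][c] == 0) cells.Add(new int[] { r, c });

  if (cells.Count < 2) return result;   // nothing to swap in this block

  // Pick two distinct non-fixed cells and swap their values.
  int i1 = rnd.Next(0, cells.Count);
  int i2 = rnd.Next(0, cells.Count);
  while (i2 == i1) i2 = rnd.Next(0, cells.Count);

  int tmp = result[cells[i1][0]][cells[i1][1]];
  result[cells[i1][0]][cells[i1][1]] = result[cells[i2][0]][cells[i2][1]];
  result[cells[i2][0]][cells[i2][1]] = tmp;

  return result;
}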

The demo program assumes the existence of a class-scope "Random" object named "rnd". This is common to many optimization algorithms. The idea of generating a neighbor solution appears in various combinatorial optimization algorithms, such as simulated annealing and simulated bee colony optimization.

Merging two matrices

The algorithm for combinatorial evolution optimization implements a form of virtual evolution by selecting two good “Organism” objects (whose “matrix” field has a small number of errors) and then using these good objects to create a new child object. This new, presumably very good child organism replaces a weak organism.

The “MergeMatrices” method accepts the two 9x9 matrices from two “Organism” objects. The method iterates over blocks 0 through 8. For each block, a random value in the range 0.0 to 1.0 is generated. If the random value is less than 0.50 (that is, in about half the cases), the values in the two blocks are exchanged. The code is as follows:
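What follows is one reasonable reading of that description, sketched rather than copied from the demo: for each block, with probability 0.50 the child takes the block from the second parent instead of the first.

public static int[][] MergeMatrices(int[][] m1, int[][] m2)
{
  int[][] result = DuplicateMatrix(m1);    // start from the first parent
  for (int block = 0; block < 9; ++block)
  {
    double pr = rnd.NextDouble();          // random value in [0.0, 1.0)
    if (pr < 0.50)                         // about half of the time ...
    {
      int[] corner = Corner(block);
      for (int r = corner[0]; r < corner[0] + 3; ++r)
        for (int c = corner[1]; c < corner[1] + 3; ++c)
          result[r][c] = m2[r][c];         // ... take this block from the second parent
    }
  }
  return result;
}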

This evolutionary mechanism is roughly comparable to the chromosome crossover mechanism used in genetic algorithms.

The "SolveEvo" method

The primary algorithm is implemented using the “SolveEvo” method. The method is best described using a combination of code and general pseudocode (see Figure 5).

Figure 5: Primary algorithm implemented in the “SolveEvo” method

The method starts by determining the number of “Organism” objects of type “worker” (90% of the total number is used). This value was determined through systematic trial and error. The “Organism” objects are stored in an array called “hive”.

The pseudocode for processing an "Organism" object of type "worker" is as follows:
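In rough C#-style code, and using the acceptance probability explained in the next paragraph, processing one worker might look like this; the age threshold of 1,000 and the rebirth rule are assumptions for illustration only.

for (int i = 0; i < numWorker; ++i)
{
  Organism org = hive[i];
  int[][] neighbor = NeighborMatrix(problem, org.matrix);
  int neighborError = Error(neighbor);

  double p = rnd.NextDouble();
  if (neighborError < org.error || p < 0.001)  // improvement, or rare forced accept
  {
    org.matrix = neighbor;
    org.error = neighborError;
    org.age = 0;
  }
  else
  {
    ++org.age;                                 // no improvement this epoch
    if (org.age > 1000)                        // assumed threshold: the worker dies
    {
      int[][] m = RandomMatrix(problem);
      hive[i] = new Organism(0, m, Error(m), 0);
    }
  }
}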

The algorithm occasionally (probability = 0.001) allows an “Organism” object to accept a neighbor solution that is worse than the object's current solution. This is a common optimization strategy that helps an algorithm break free from a suboptimal solution.

In each epoch in the iteration loop, each “Organism” object of the “explorer” type generates a random solution grid. After all "Organism" objects have been processed, a new "Organism" object is created. In pseudocode it looks like this:
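A sketch of that step; "BestIndex", "SecondBestIndex" and "WorstIndex" are hypothetical helper names used here for illustration, each scanning the hive for the organisms with the lowest or highest error values.

int bestIdx = BestIndex(hive);           // strongest organism
int secondIdx = SecondBestIndex(hive);   // second strongest organism
int worstIdx = WorstIndex(hive);         // weakest organism

int[][] childMatrix = MergeMatrices(hive[bestIdx].matrix, hive[secondIdx].matrix);
Organism child = new Organism(0, childMatrix, Error(childMatrix), 0);  // a new worker
hive[worstIdx] = child;                  // the child replaces the weakest organism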

As it turns out, this evolutionary mechanism is critical to the success of the algorithm. Without evolution, the algorithm can sometimes fail. But with evolution, the algorithm solved every difficult Sudoku problem I gave it, including some fiendishly difficult problems I found on the Internet. However, combinatorial evolution has not yet been subjected to research analysis, so the fact that it is useful for solving Sudokus is no guarantee that it can be used to solve other combinatorial optimization problems.

Summary

The combinatorial evolution algorithm I presented in this article is not really an algorithm; rather, combinatorial evolution is a meta-heuristic. By this I mean that combinatorial evolution provides only general guidelines that can be followed when designing a specific algorithm to solve a particular optimization problem.

You are unlikely to have to solve Sudoku problems in your normal work environment, but combinatorial evolution can also be used to solve real problems. The key ideas of combinatorial evolution are defining a data structure that describes the target problem, defining what a random solution is, what a neighbor solution is, and what the measure of error is. Once these pieces of the puzzle are in place, many problems that traditional algorithms cannot handle can be solved quickly and efficiently using combinatorial evolution.


Dr. James McCaffrey works for Microsoft Research in Redmond, Washington. He has worked on several Microsoft products, including Internet Explorer and Bing. Dr. McCaffrey can be reached at [email protected].

Thanks to the following Microsoft technical experts for reviewing this article: Chris Lee and Kirk Olynyk