Proposed Approach

The texture synthesis problem consists of generating an image visually similar to a provided texture sample. In the case of binary textures, this similarity can be quantified in terms of the frequencies of small pixel patches. Viewed this way, texture synthesis becomes an optimization problem: start with a random image and optimize it until its patch distribution matches that of the sample texture. Ideally we'd want to use larger patches to better capture the structure, but there are 2^(N^2) possible N x N binary patches, which makes 4 x 4 the largest practical size. Moreover, the distributions get increasingly sparse and unreliable as the patch size increases. To ameliorate this issue, the function subject to minimization is defined as the sum of statistical distances over the distributions of patches from 1 x 1 to 4 x 4. Some normalization may be required to ensure that all terms contribute significantly. Defects in the form of patches never occurring in the sample are inevitable; the lower-order statistics act as regularization, making sure that such defects at least look plausible. I've also experimented with another approach: running the optimization in stages with increasing patch size. This approach is trickier: if simulated annealing is used for optimization, the starting temperature must be carefully selected to ensure that the results of the previous stages are not completely destroyed.
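To make this concrete, here is a minimal sketch of such an objective in Python (assuming binary images stored as 2D NumPy arrays of 0s and 1s; the function names are illustrative, and the distance shown, triangular discrimination, is just a stand-in for the measures compared below):

```python
import numpy as np
from collections import Counter

def patch_distribution(image, n):
    # Relative frequencies of all n x n patches, keyed by the raw patch bytes.
    counts = Counter()
    height, width = image.shape
    for y in range(height - n + 1):
        for x in range(width - n + 1):
            counts[image[y:y + n, x:x + n].tobytes()] += 1
    total = sum(counts.values())
    return {patch: c / total for patch, c in counts.items()}

def distance(p, q):
    # Stand-in distance between two distributions (triangular discrimination);
    # patches missing from one distribution count as frequency 0.
    return sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 / (p.get(k, 0.0) + q.get(k, 0.0))
               for k in p.keys() | q.keys())

def objective(image, sample):
    # Sum of distances between the patch distributions of the image and the sample,
    # taken over patch sizes 1 x 1 to 4 x 4.
    return sum(distance(patch_distribution(image, n), patch_distribution(sample, n))
               for n in range(1, 5))
```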

The simplest optimization strategy is hill climbing: modify the pixels and accept the changes that reduce the objective function. I went with simulated annealing - a more powerful option that produces better results at a higher computational cost. It works by always accepting improvements and accepting detrimental changes with probability Exp((f(x) - f(x')) / T), where f is the objective function, x is the current state, x' is the new state and T is a variable called temperature that decreases over time. Varying the temperature allows the algorithm to gradually transition from a "hot" random search phase to a "cold" hill climbing phase. An exponentially decaying schedule, aka geometric cooling, is a good choice. The initial value can be estimated by performing some iterations at T = ∞ and calculating the standard deviation of the objective function. The final value should be small enough that practically all detrimental changes are rejected (something like 10^-6).
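In code, the acceptance rule and geometric cooling might look roughly like this (a sketch rather than the actual framework code; mutate and undo stand for whatever pixel modification strategy is used, and the objective would normally be updated incrementally):

```python
import math
import random

def anneal(state, objective, mutate, undo, t_start, t_end, steps):
    # Geometric cooling: the temperature decays by a constant factor each step.
    decay = (t_end / t_start) ** (1.0 / steps)
    temperature = t_start
    energy = objective(state)
    for _ in range(steps):
        move = mutate(state)               # apply a small random modification
        new_energy = objective(state)      # in practice, updated incrementally
        # Accept improvements always, detrimental changes with probability
        # Exp((energy - new_energy) / temperature).
        if new_energy <= energy or random.random() < math.exp((energy - new_energy) / temperature):
            energy = new_energy
        else:
            undo(state, move)              # revert the modification
        temperature *= decay
    return state
```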

There are several options for modifying the pixels during the optimization. The simplest one is inverting a single random pixel. Higher quality results can be obtained by randomly modifying pixel blocks; in technical terms, this increases the size of the search neighborhood and allows the algorithm to find better local optima. Larger modifications, however, require more optimization steps. The block size can be selected at random to combine speed and quality. Another way to improve efficiency is to copy the new pixel values from a random sample location instead of setting them at random. In any case, the small number of pixels modified at each step makes it possible to update the patch distributions and the objective function incrementally instead of recalculating everything from scratch (see the sketch below). Here are some results generated with different options; the numbers denote the possible modification sizes, "R" stands for random and "S" for sample.

[Image: Comparison of pixel modification methods]

Optimization dynamics at zero temperature, 128 x 128 output, curves averaged over multiple algorithm runs:

[Image: Objective function plots]
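For concreteness, a sample-based block modification might be implemented roughly like this (a sketch assuming 2D NumPy arrays; the names and the undo helper are illustrative):

```python
import random

def sample_block_move(output, sample, sizes=(1, 2, 3, 4)):
    # Copy a randomly chosen block from the sample into a random output location.
    n = random.choice(sizes)
    oy = random.randrange(output.shape[0] - n + 1)
    ox = random.randrange(output.shape[1] - n + 1)
    sy = random.randrange(sample.shape[0] - n + 1)
    sx = random.randrange(sample.shape[1] - n + 1)
    old_block = output[oy:oy + n, ox:ox + n].copy()   # kept so the move can be undone
    output[oy:oy + n, ox:ox + n] = sample[sy:sy + n, sx:sx + n]
    # Only patches overlapping the modified block change, so the patch counts and
    # the objective can be updated incrementally rather than recomputed from scratch.
    return (oy, ox, old_block)

def undo_move(output, move):
    # Restore the pixels overwritten by sample_block_move.
    oy, ox, old_block = move
    n = old_block.shape[0]
    output[oy:oy + n, ox:ox + n] = old_block
```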

The choice of distribution distance measure is important. A large number of such measures exist. I've limited my testing to the following ones:
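Among them are the Hellinger distance, the Jensen-Shannon divergence (JSD) and triangular discrimination, whose standard definitions are (the sums run over all patches i):

Hellinger distance: H(a, b) = Sqrt(1/2 * Σ (Sqrt(a_i) - Sqrt(b_i))^2)
Jensen-Shannon divergence: JSD(a, b) = 1/2 * Σ (a_i * Log(2 * a_i / (a_i + b_i)) + b_i * Log(2 * b_i / (a_i + b_i)))
Triangular discrimination: Δ(a, b) = Σ (a_i - b_i)^2 / (a_i + b_i)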

where a and b denote the patch distributions.

[Image: Comparison of distribution distance measures]

Hellinger distance, JSD and triangular discrimination give the best results, likely because they are more sensitive to frequency deviations of rare patches. JSD is costly to calculate because of the logarithm, though.

Regarding the number of iterations - the more, the better. Here are some results generated with different numbers of iterations per pixel (IPP) and 4 x 4 sample-based modifications:

[Image: Comparison of number of iterations]

Here is the algorithm in action. Exponential smoothing was applied to dampen the noise, so what you see is an average density of states.

Possible Extensions

A fun thing to try is to crossbreed different textures by providing two samples side by side:

[Image: Texture crossover example]

Generating non-binary textures is possible in principle, but would require a more complex and slower implementation (hash tables instead of arrays for storing the distributions), and the increasingly sparse distributions could also be a problem. Because the approach relies on small pixel blocks, it cannot capture long-range dependencies and struggles with textures that have large features or a regular structure. It may be possible to fix this deficiency by using patch distributions at multiple scales. Instead of generating a new texture, the sample can be extended by limiting the pixel modifications to the area outside it. Image stylization is another extension requiring only a minor modification - an additional term for the pixel-wise distance to the image being stylized.
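To sketch the stylization idea: the objective could become something like F(x) = Σ D(p_n(x), p_n(sample)) + λ * Σ |x_i - y_i|, where p_n denotes the n x n patch distribution, D is the chosen distance, y is the image being stylized and λ is a weight balancing texture fidelity against similarity to y (the exact form and weight of the extra term are implementation choices).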

Implementation

Since the simulated annealing algorithm is part of my optimization framework, I only had to implement the problem definition. If you'd like to try generating some textures yourself, you can either download the source code and compile the framework or download the precompiled binary from my Patreon page (patrons only). The options are specified via the Config.ini file: