#### Re: Algorithm of differential evolution

Hello, Smith, you wrote:

> The main thing I could not understand from Wikipedia is how to obtain the next trial point. It says: "An operator A is applied to the point x_i, which randomly modifies that point." What does operator A represent? Is it simply a choice of a random point from the search space?

It is assumed that A generates a random point "close" to the current one. What counts as proximity depends on your task. For example, when solving the problem of placing queens on an NxN board, a close point can be generated by swapping two columns (or rows) of the board. For optimizing a function of one variable, you can take something like X_n = X_{n-1} + k * rand(-1, 1).

> Should the search neighborhood then change over time, for example shrink, making the algorithm more and more local, or should one always search the whole space?

It effectively changes, but not by changing A: as the temperature drops, the probability of moving to neighboring points with a worse value of the objective function falls.

> If A really is just a choice of a random point (why call the procedure an "operator", then?), what distribution should be used? Uniform, or does it make sense to experiment?

It makes sense to experiment.

> Thanks for the code; I just could not tell the language — is it C# or Java?

C#.

#### Re: Algorithm of differential evolution

I implemented the algorithm in C++, and the accuracy is disappointing. I optimized the two-variable Rastrigin function with an initial temperature of 100.0, reduced under the geometric law T_{i+1} = T_i * k, where k was 0.9, 0.99, 0.999 and so on. At k = 0.99999 (five nines) I very frequently got answers near 1.0, while the true minimum is 0.0, and quite often completely huge values fell out, up to 100. That describes the behavior of the variant in which the random-search neighborhood was also narrowed under the same geometric law (the maximum deviation from the current point equal to epsilon multiplied by the ratio of the current temperature to the initial one).

If this narrowing is removed, that is, a random point is generated each time over the whole search space, I stably get an answer near 0.02 at cooling coefficient k = 0.99999. However, in that setup the genetic algorithm sharply outpaces annealing in speed: my genetic implementation found the minimum 0 almost instantly, while annealing ran for 2-3 seconds. Can annealing really be tuned to outrun the genetic algorithm? The fight is over speed, because on the real task the genetic algorithm proved acceptable in answer quality, but I would like to reduce its runtime...

By the way, the second variant, where the candidate point is generated over the whole search space, looks very strange. In it the random process effectively has no memory: its further course is not influenced by previously found points. As a result this situation can arise: we find a near-optimal value, then find some completely non-optimal one and, by chance, move to it; now the information about the previously found "good" value is completely lost. That seems very suspicious to me... Maybe, when moving to a non-improving point, the method should save the former one as a backup? And then, when the method finishes, return the best point found so far; at least that looks very sensible...

Otherwise the algorithm completely ignores its "former achievements" and is no better than simple random search. At least, so it seems to me; I may be completely wrong (this is what I understood from the Wikipedia description, though the animation on Wikipedia behaves much more logically: if the walk gets knocked out of a point close to an optimum, it wanders aside a little and then returns to the optimum, giving the impression that it does somehow take the past into account, but how exactly is not described there).