1

Topic: Differential evolution algorithm

I implemented a differential evolution optimizer and am now trying to debug it. On simple functions like x*x + 1 it works accurately and cleanly, so I decided to try something harder and took the Rosenbrock function. On the whole the program produced the right answer, but I have doubts. I hoped it would be fast enough and accurate enough, but with the following parameters: population size 1000, 500 generations of evolution, F = 0.8, mutation probability 0.6, the optimal value came out around 0.001, while the right answer is 0. It seems to me that with such a huge population and so many generations (the running time, by the way, was not half a day, but 2-3 seconds) the accuracy could be better. Hence the question: how reasonable are such results for a genetic algorithm with these parameters? Maybe the Rosenbrock function is simply so hard that it barely gets optimized, or did I make a mistake in the program (ruling out the latter is why all this was started)?

And a couple more general questions. I have only just started getting acquainted with genetic algorithms, because I need to optimize a function that has a random component, which gives it a huge number of local extrema, so optimizing it with gradient methods is clearly impossible. From Google I gathered that the quality of a genetic algorithm strongly depends on the correct selection of its parameters and, of course, on the correct choice of the specific algorithm. Can anyone suggest how to tune the algorithm's parameters? I understand there is no ready recipe here (one has to know the specific problem; it is not a secret in principle, but its description would be rather long; I can still describe it in a spoiler if it is needed later). Maybe someone can offer some considerations that would help me sort this question out. I would like to get an acceptable result while also saving time, since this optimizer will be attached to a real-time system, and the speed requirements are very serious. Maybe there are good articles or books with recommendations on parameter selection. Maybe there is even a way to select them using machine learning methods. I will be glad of any suggestions; everything will be put to use.

P.S. While choosing an algorithm for the task I also read about simulated annealing; perhaps it would have been better here, but I could not figure it out. If someone posts a link to a place where it is described properly (unlike Wikipedia), I will be very grateful.
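
For concreteness, a minimal sketch of the classic DE/rand/1/bin scheme with the parameters named above (population 1000, 500 generations, F = 0.8, crossover probability 0.6). This is a sketch, not the code under discussion: the Rosenbrock search bounds, the RNG seed, and the tie-breaking details are assumptions for illustration.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    // Rosenbrock function of two variables; global minimum 0 at (1, 1).
    static double rosenbrock(const std::vector<double>& v) {
        return 100.0 * std::pow(v[1] - v[0] * v[0], 2) + std::pow(1.0 - v[0], 2);
    }

    int main() {
        const int    dim  = 2;
        const int    np   = 1000;   // population size, as in the post
        const int    gens = 500;    // generations, as in the post
        const double F    = 0.8;    // differential weight, as in the post
        const double CR   = 0.6;    // crossover probability, as in the post
        const double lo = -5.0, hi = 5.0;  // assumed search bounds

        std::mt19937 rng(12345);    // fixed seed, for reproducibility only
        std::uniform_real_distribution<double> coord(lo, hi), unit(0.0, 1.0);
        std::uniform_int_distribution<int> pick(0, np - 1), pickDim(0, dim - 1);

        // Random initial population and its objective values.
        std::vector<std::vector<double>> pop(np, std::vector<double>(dim));
        std::vector<double> fit(np);
        for (int i = 0; i < np; ++i) {
            for (double& c : pop[i]) c = coord(rng);
            fit[i] = rosenbrock(pop[i]);
        }

        for (int g = 0; g < gens; ++g) {
            for (int i = 0; i < np; ++i) {
                // Three distinct donors, all different from the target i.
                int a, b, c;
                do { a = pick(rng); } while (a == i);
                do { b = pick(rng); } while (b == i || b == a);
                do { c = pick(rng); } while (c == i || c == a || c == b);

                // Mutation + binomial crossover (DE/rand/1/bin); jrand makes
                // sure at least one coordinate comes from the mutant vector.
                std::vector<double> trial = pop[i];
                int jrand = pickDim(rng);
                for (int j = 0; j < dim; ++j)
                    if (j == jrand || unit(rng) < CR)
                        trial[j] = pop[a][j] + F * (pop[b][j] - pop[c][j]);

                // Greedy selection: the trial replaces the target if no worse.
                double tf = rosenbrock(trial);
                if (tf <= fit[i]) { pop[i] = std::move(trial); fit[i] = tf; }
            }
        }

        int best = int(std::min_element(fit.begin(), fit.end()) - fit.begin());
        std::printf("best f = %g at (%g, %g)\n", fit[best], pop[best][0], pop[best][1]);
    }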

2

Re: Differential evolution algorithm

3

Re: Differential evolution algorithm

4

Re: Differential evolution algorithm

Hello, Smith, you wrote:

> The main thing I could not understand from Wikipedia is how to obtain the next trial point. It says: "An operator A, which randomly modifies the corresponding point, is applied to the point x_i." What does the operator A represent? Is it simply the choice of a random point from the search range?

It is assumed that A generates a random point "close" to the current one. What counts as close depends on your task. For example, if you are solving the problem of placing queens on an NxN board, a close point can be generated by swapping two columns (or rows) of the board. For optimizing a function of one variable you can take something like X_n = X_{n-1} + k * rand(-1, 1).

> Should the search range then change over time, for example shrink, making the algorithm more and more local, or should one always search over the whole area?

It effectively does change, but not because A changes: as the temperature drops, the probability of moving to neighboring points with a worse value of the objective function falls.

> If A really is just the choice of a random point (why call the procedure an "operator", then?), what distribution should be used? Uniform, or does it make sense to experiment here?

It makes sense to experiment.

> Thanks for the code, only I did not recognize the language: is it C# or Java?

C#
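
To make this concrete, a minimal sketch of one annealing step under the assumptions above (the standard Metropolis acceptance rule; the step size k, seed, and cooling constants are illustrative):

    #include <cmath>
    #include <cstdio>
    #include <random>

    // One iteration of annealing for a function of one variable, as described
    // above: "operator A" is just x + k * rand(-1, 1), and the locality comes
    // from the acceptance probability shrinking with T, not from changing A.
    template <class F>
    void sa_step(double& x, double& fx, double T, double k, F f, std::mt19937& rng) {
        std::uniform_real_distribution<double> sym(-1.0, 1.0), unit(0.0, 1.0);

        double cand = x + k * sym(rng);   // operator A: a random nearby point
        double fc = f(cand);

        // Always accept an improvement; accept a worse point with probability
        // exp(-(fc - fx) / T), which falls as the temperature T drops.
        if (fc <= fx || unit(rng) < std::exp(-(fc - fx) / T)) {
            x = cand;
            fx = fc;
        }
    }

    int main() {
        auto f = [](double x) { return x * x + 1.0; };  // toy target from the thread
        std::mt19937 rng(1);
        double x = 10.0, fx = f(x);
        for (double T = 100.0; T > 1e-4; T *= 0.999)    // geometric cooling
            sa_step(x, fx, T, /*k=*/1.0, f, rng);
        std::printf("x = %g, f(x) = %g\n", x, fx);
    }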

5

Re: Differential evolution algorithm

I implemented the algorithm in C++, and the accuracy does not please me at all. I optimized the Rastrigin function of two variables, took an initial temperature of 100, and reduced it geometrically: T_{i+1} = T_i * k, where k was 0.9, 0.99, 0.999 and so on. At k = 0.99999 (five nines) the answers are very often around 1.0, when the correct minimum is 0.0, and quite often completely huge values fall out, close to 100.

That describes the behavior of the variant in which the random-search neighborhood was also narrowed by the same geometric law (I took the maximum deviation from the current point equal to epsilon multiplied by the ratio of the current temperature to the initial one). If I remove this narrowing, that is, generate a random point over the whole search range every time, I stably get an answer around 0.02 at cooling coefficient k = 0.99999; but in this situation the genetic algorithm dramatically beats annealing in speed (my GA implementation found the minimum 0 almost instantly, while annealing ran for 2-3 seconds). Can annealing really be tuned so that it beats the GA? The fight is over speed, because on the real task the GA proved acceptable in answer quality, but I would like to cut its running time...

By the way, the second variant, where the point is generated over the whole search range, looks very strange. In it the random process effectively has no memory: its further course is not influenced by previously found points. As a result this situation is possible: we found a near-optimal value, then found some completely non-optimal one and, by chance, moved to it; now the information about the previously found "good" value is completely lost. That seems very suspicious to me... Maybe, when moving to a non-improving point, the method should save the previous one as a backup? And then, when the method finishes, return the best of all points found; at least that looks very sensible... Otherwise the algorithm completely ignores its "past achievements" and is no better than simple random search. At least that is how it seems to me; I may be completely wrong (this is what I understood from the Wikipedia description; the animation on Wikipedia behaves much more logically: there, if the search gets knocked out of a point close to the optimum, the algorithm wanders aside a little and returns to the optimum again, which gives the impression that it does somehow take the past into account, but how exactly is not described).
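
For what it's worth, the "save the best as a backup" idea costs only a couple of extra variables. A minimal sketch under the settings from this post (Rastrigin of two variables, initial temperature 100, geometric cooling with k = 0.99999, candidates drawn from the whole range); the bounds, seed, and stopping temperature are assumptions:

    #include <cmath>
    #include <cstdio>
    #include <random>

    const double PI = 3.14159265358979323846;

    // Rastrigin function of two variables; global minimum 0 at (0, 0).
    static double rastrigin(double x, double y) {
        return 20.0 + x * x - 10.0 * std::cos(2.0 * PI * x)
                    + y * y - 10.0 * std::cos(2.0 * PI * y);
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> coord(-5.12, 5.12), unit(0.0, 1.0);

        double T = 100.0;           // initial temperature, as in the post
        const double k = 0.99999;   // cooling coefficient, as in the post
        const double Tmin = 1e-4;   // assumed stopping temperature

        double x = coord(rng), y = coord(rng), fxy = rastrigin(x, y);
        double bx = x, by = y, bf = fxy;  // best point seen so far

        while (T > Tmin) {
            // Candidate drawn from the whole search range (the "second
            // variant" from the post, without neighborhood narrowing).
            double cx = coord(rng), cy = coord(rng);
            double cf = rastrigin(cx, cy);

            // Metropolis rule: always accept improvements, sometimes accept
            // worse points, with probability shrinking as T falls.
            if (cf <= fxy || unit(rng) < std::exp(-(cf - fxy) / T)) {
                x = cx; y = cy; fxy = cf;
                if (fxy < bf) { bx = x; by = y; bf = fxy; }  // remember the best
            }
            T *= k;  // geometric cooling
        }

        // Report the best point ever visited, not the final one, so a late
        // uphill move cannot throw away earlier progress.
        std::printf("best f = %g at (%g, %g)\n", bf, bx, by);
    }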

6

Re: Differential evolution algorithm

Hello, Smith, you wrote:

> I implemented the algorithm in C++, and the accuracy does not please me at all. I optimized the Rastrigin function of two variables, took an initial temperature of 100, and reduced it geometrically: T_{i+1} = T_i * k, where k was 0.9, 0.99, 0.999 and so on. At k = 0.99999 (five nines) the answers are very often around 1.0, when the correct minimum is 0.0, and quite often completely huge values fall out, close to 100.

For me the best results were, as a rule, obtained with geometric reduction in "steps", that is, the temperature is lowered not every iteration but every K iterations (a sketch of this schedule is at the end of this post). But those were discrete optimization problems.

> That describes the behavior of the variant in which the random-search neighborhood was also narrowed by the same geometric law (I took the maximum deviation from the current point equal to epsilon multiplied by the ratio of the current temperature to the initial one). If I remove this narrowing, that is, generate a random point over the whole search range every time, I stably get an answer around 0.02 at cooling coefficient k = 0.99999; but in this situation the genetic algorithm dramatically beats annealing in speed (my GA implementation found the minimum 0 almost instantly, while annealing ran for 2-3 seconds).

Besides, I never had to change the neighborhood-generation algorithm depending on temperature. In your case you could try any of the variants from here: http://td.chem.msu.ru/uploads/files/cou … ture13.pdf

> Can annealing really be tuned so that it beats the GA? The fight is over speed, because on the real task the GA proved acceptable in answer quality, but I would like to cut its running time...

Here I cannot say anything, since I have not compared them. Plus, there definitely are problems for which annealing works badly (the traveling salesman problem, for example).

> By the way, the second variant, where the point is generated over the whole search range, looks very strange. In it the random process effectively has no memory: its further course is not influenced by previously found points. As a result this situation is possible: we found a near-optimal value, then found some completely non-optimal one and, by chance, moved to it; now the information about the previously found "good" value is completely lost. That seems very suspicious to me...

That is normal.

> Maybe, when moving to a non-improving point, the method should save the previous one as a backup? And then, when the method finishes, return the best of all points found; at least that looks very sensible...

Yes. You should save the best solution found (possibly even the M best). Then you can run the algorithm again, starting from the best point, with different parameters (a lower starting temperature and slower cooling, for example).

> Otherwise the algorithm completely ignores its "past achievements" and is no better than simple random search... At least that is how it seems to me; I may be completely wrong (this is what I understood from the Wikipedia description; the animation on Wikipedia behaves much more logically: there, if the search gets knocked out of a point close to the optimum, the algorithm wanders aside a little and returns to the optimum again, which gives the impression that it does somehow take the past into account, but how exactly is not described).

Within a single run it does not take the past into account.
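
A minimal sketch of the "steps" schedule mentioned above, to make the difference from per-iteration cooling explicit; K = 100 and k = 0.9 are purely illustrative values, not recommendations:

    #include <cstdio>

    int main() {
        // Stepped geometric cooling: hold T constant for K iterations,
        // then multiply it by k once.
        const int    K = 100;
        const double k = 0.9;
        double T = 100.0;
        long   moves = 0;

        for (long iter = 0; T > 1e-4; ++iter) {
            // ... one annealing move would be made here at temperature T ...
            ++moves;
            if ((iter + 1) % K == 0)
                T *= k;  // lowered every K-th iteration, not every iteration
        }
        std::printf("%ld moves over the whole schedule\n", moves);
    }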