<![CDATA[Programmer's Town - Algorithms]]>
http://www.progtown.com/
Tue, 12 Dec 2017 00:04:00 +0000 PunBB <![CDATA[Algorithm for clipping a polygon against a rectangle]]>
http://www.progtown.com/topic2079106-algorithm-of-cutting-off-of-a-polygon-a-rectangle.html
Good afternoon! I'm interested in an algorithm for clipping an arbitrary polygon against a rectangle. I used the Sutherland-Hodgman algorithm, but it has the drawback that it cannot split a polygon into several pieces, and it adds spurious edges. Searching the Internet, I found a family of algorithms whose essence is roughly the following: the edges are processed sequentially and their intersection points with the rectangle are found: Pin ("entry" points into the rectangle) and Pout (exit points), along with the interior points. After all points have been processed, the Pin/Pout pairs are handled: starting from a Pin, we walk along the interior points to a Pout, adding them all to the output contour; from that Pout we then walk along the clipping rectangle in the polygon's winding direction (clockwise or counterclockwise); if we arrive back at the Pin from which we reached this Pout, we close the contour and move on to the next Pin. In principle the algorithm is simple and elegant, but problems arise on self-intersecting polygons, since there is no unambiguous traversal direction.]]>Tue, 12 Dec 2017 00:04:00 +0000http://www.progtown.com/topic2079106-algorithm-of-cutting-off-of-a-polygon-a-rectangle.html<![CDATA[Applying object detectors]]>
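For reference, a minimal sketch of the Sutherland-Hodgman clipping the poster says he tried (and found insufficient for concave polygons that should split into several pieces). The function names and the vertex/rectangle representation are my own, not from the post.

```python
# Sutherland-Hodgman: clip the polygon successively against each of the
# four half-planes of an axis-aligned rectangle. Note the known drawback
# the poster describes: a concave polygon comes out as ONE contour with
# degenerate connecting edges, never as several separate pieces.

def clip_polygon(poly, xmin, ymin, xmax, ymax):
    """Clip polygon (list of (x, y) vertices) against the rectangle."""
    def clip_edge(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]          # wraps to the last vertex at i == 0
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def x_cross(p, q, x):              # intersection with a vertical line
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def y_cross(p, q, y):              # intersection with a horizontal line
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = list(poly)
    for inside, intersect in [
        (lambda p: p[0] >= xmin, lambda p, q: x_cross(p, q, xmin)),
        (lambda p: p[0] <= xmax, lambda p, q: x_cross(p, q, xmax)),
        (lambda p: p[1] >= ymin, lambda p, q: y_cross(p, q, ymin)),
        (lambda p: p[1] <= ymax, lambda p, q: y_cross(p, q, ymax)),
    ]:
        if not pts:
            break
        pts = clip_edge(pts, inside, intersect)
    return pts
```

For the splitting behaviour the poster wants, the entry/exit-point family he describes is essentially Weiler-Atherton / Vatti-style clipping, which does produce multiple output contours.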
http://www.progtown.com/topic2079107-application-of-detectors-of-objects.html
Please advise what to read on the subtleties and tricks of applying object detectors to images, from the point of view of using the model as a black box. Training a model is undoubtedly hard, but beyond that you also need to build an infrastructure for applying it and for post-processing its results. For example, there is a classifier invoked in a sliding window, and it can fire for one object in two adjacent windows. If in the end I have to count the number of objects, I need to somehow merge these adjacent detections, and different methods can be proposed here. But this looks like a widespread problem, and surely there are standard solutions for it. The question is not specifically about this task, but rather about whether there are sites, blogs, books etc. that describe accumulated practical experience of using ML (especially with images), with all the accompanying complexities?]]>Fri, 08 Dec 2017 05:27:00 +0000http://www.progtown.com/topic2079107-application-of-detectors-of-objects.html<![CDATA[A little about ScanDisk]]>
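The "merge adjacent detections" step the post asks about is conventionally handled by non-maximum suppression (NMS). A minimal sketch; the box format (x1, y1, x2, y2, score) and the 0.5 threshold are my own illustrative assumptions.

```python
# Non-maximum suppression: keep the highest-scoring detections and drop
# any detection that overlaps an already-kept one too strongly.

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_threshold=0.5):
    """detections: list of (x1, y1, x2, y2, score).
    Returns the detections that survive suppression, best first."""
    kept = []
    for det in sorted(detections, key=lambda d: d[4], reverse=True):
        if all(iou(det[:4], k[:4]) < iou_threshold for k in kept):
            kept.append(det)
    return kept
```

After NMS, counting objects is just `len(kept)`; the threshold trades duplicate detections against merging genuinely adjacent objects.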
http://www.progtown.com/topic2079108-it-is-a-little-about-scandisk.html
I have had an unsuccessful experience with a program that is supposed to fix errors in the hard disk structure. I wanted to check the storage medium and look for "bad" blocks and "lost" clusters. As it turned out, it also has a function to "fix" files. I'll be brief: a tar container I had created a little earlier, plus a heap of small files, got mangled. I'm wondering whether it is possible to roll back the changes made by the program without a system rollback to a restore point, since my doc files need repair... What is the best way to approach reading them back, and is a rollback even possible?]]>Sun, 03 Dec 2017 20:55:00 +0000http://www.progtown.com/topic2079108-it-is-a-little-about-scandisk.html<![CDATA[What is the essence?]]>
http://www.progtown.com/topic2079109-in-what-an-essence.html
1. How does it essentially differ from a database, or from an algorithm? Do any different principles underlie its operation? 2. What does it mean to "train" it? 3. What is its advantage over an algorithm that performs statistical analysis of a data array?]]>Mon, 27 Nov 2017 16:01:00 +0000http://www.progtown.com/topic2079109-in-what-an-essence.html<![CDATA[Mixing data]]>
http://www.progtown.com/topic2079110-mixing-of-the-data.html
Greetings, everyone! There are several heterogeneous datasets for training a certain classifier. For example, a face classifier and several face databases with different photo characteristics (lighting, noise, photo resolution, ...). There are also several test databases on which the classifier's quality is checked (their characteristics may differ from the training ones). Suppose the training procedure is fixed: a given amount of data is randomly sampled from one specific database (for example 10000 photos), and the classifier is trained with the given algorithm and fixed parameters. It is then tested on each of the test databases, producing a set of accuracy values. The same is then done for a second training database, a third, etc.; the databases are large, 10000+ photos each. Obviously, depending on the character of the training database, the accuracy values on the tests will drift: improving in some places, worsening in others. Now, the question: how should I choose the proportions in which to mix the training data from the different databases so that, trained on the resulting 10000 photos, the classifier performs best (in some sense "averaged" over all the test cases)? That is, how do I use the accuracy information from each of the trained classifiers on all the tests to optimally choose the composition of the training mixture? I suspect the question has been researched. Point me at links, or tell me what you know.]]>Fri, 24 Nov 2017 06:31:00 +0000http://www.progtown.com/topic2079110-mixing-of-the-data.html<![CDATA[Picture analysis]]>
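A naive baseline for the mixture question: grid-search mixture weights over the simplex, scoring each candidate under a crude model in which a test's accuracy under a mixture is the weight-average of the per-source accuracies. The linearity assumption, the maximin objective, and all numbers are mine, purely illustrative; the real dependence of accuracy on mixture composition is of course not linear.

```python
# Exhaustive search over discretized mixture weights w (summing to 1),
# choosing the mixture whose modeled WORST-case test accuracy is highest.
import itertools

def best_mixture(acc, steps=10):
    """acc[i][j] = accuracy on test j of the classifier trained on source i.
    Returns (weights, score): weights over sources maximizing the minimum
    modeled test accuracy under the linear mixing model."""
    n_src, n_test = len(acc), len(acc[0])
    best_w, best_score = None, float("-inf")
    for comb in itertools.product(range(steps + 1), repeat=n_src):
        if sum(comb) != steps:
            continue
        w = [c / steps for c in comb]
        # modeled accuracy on each test under this mixture
        modeled = [sum(w[i] * acc[i][j] for i in range(n_src))
                   for j in range(n_test)]
        score = min(modeled)           # maximin over the test bases
        if score > best_score:
            best_score, best_w = score, w
    return best_w, best_score
```

With more sources this brute force explodes; under the same linear model the problem becomes a small linear program, and without that model one is in multi-task / domain-mixing territory.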
http://www.progtown.com/topic2079111-the-picture-analysis.html
Here are 2 pictures. In the second, a distinct "striped" structure can be seen which is absent in the first. The question: what algorithm can be applied to find these bands?]]>Fri, 24 Nov 2017 05:49:00 +0000http://www.progtown.com/topic2079111-the-picture-analysis.html<![CDATA[Re: Finding the second, third result]]>
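One simple way to detect horizontal bands, sketched here with plain lists since the pictures themselves are not available: average each row into a 1-D profile, then look for a strong peak in the profile's autocorrelation at some nonzero lag. The 0.5 threshold is an arbitrary assumption; a 2-D FFT peak search is the usual heavier-duty alternative.

```python
# Periodic horizontal stripes show up as a strong autocorrelation of the
# per-row mean brightness at the stripe period.

def row_profile(img):
    """img: 2-D list of grayscale values; returns the mean of each row."""
    return [sum(row) / len(row) for row in img]

def autocorr(sig, lag):
    """Normalized autocorrelation of sig at the given lag (0..1-ish)."""
    n = len(sig)
    mean = sum(sig) / n
    d = [s - mean for s in sig]
    var = sum(x * x for x in d)
    if var == 0:
        return 0.0                     # constant signal: no structure
    return sum(d[i] * d[i + lag] for i in range(n - lag)) / var

def stripe_period(img, max_lag=None):
    """Returns the lag (in rows) with the strongest autocorrelation, or
    None if no lag correlates strongly (threshold 0.5, an assumption)."""
    prof = row_profile(img)
    max_lag = max_lag or len(prof) // 2
    best = max(range(1, max_lag + 1), key=lambda k: autocorr(prof, k))
    return best if autocorr(prof, best) > 0.5 else None
```

For stripes at an unknown angle, rotate the profile direction or take the 2-D spectrum instead of a row average.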
http://www.progtown.com/topic2077646-re-the-finding-of-the-second-the-third-result.html
Hello, Helkar, you wrote: "The task: there is an array of arrays of integers. It is necessary to find the n minimum sums obtained by choosing one element from each array. I invented an algorithm, but I have the feeling that someone must have beaten me to it. I ask for help in finding a suitable algorithm." Should the n minimum sums be distinct? We take the minimum element of each array and add them up; that gives the first sum. And where is the description of what you invented?]]>Wed, 15 Nov 2017 03:20:00 +0000http://www.progtown.com/topic2077646-re-the-finding-of-the-second-the-third-result.html<![CDATA[Handling multi-valued features in machine learning]]>
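The standard way to extend "sum of minimums" to the second, third, ... results is a best-first search with a min-heap: sort each array, start from the all-minimums index tuple, and expand one index at a time. This is a sketch of the well-known k-smallest-sums technique, not necessarily the poster's own algorithm.

```python
# Best-first enumeration of the n smallest sums, one element per array.
# Each heap entry is (current sum, tuple of per-array indices); a neighbor
# advances exactly one index, and `seen` prevents re-expanding a tuple.
import heapq

def n_min_sums(arrays, n):
    """Return the n smallest sums formed by picking one element per array."""
    arrs = [sorted(a) for a in arrays]
    start = tuple(0 for _ in arrs)
    heap = [(sum(a[0] for a in arrs), start)]
    seen = {start}
    out = []
    while heap and len(out) < n:
        total, idx = heapq.heappop(heap)
        out.append(total)
        for j, a in enumerate(arrs):
            if idx[j] + 1 < len(a):
                nxt = idx[:j] + (idx[j] + 1,) + idx[j + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(
                        heap, (total - a[idx[j]] + a[idx[j] + 1], nxt))
    return out
```

Equal sums from different choices appear separately here; deduplicate `out` afterwards if the n minimums must be distinct values.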
http://www.progtown.com/topic2077647-handling-of-multisigns-at-machine-training.html
If, as the value of a certain feature, each object has an array of categories, what dimensionality-reduction method should I choose? Normally one selects one to three columns of values (id, cat_1, cat_2), but the remaining categories are then lost. Inflating the dimensionality is also undesirable, since an object may have 20 categories, or 0.]]>Sat, 11 Nov 2017 15:48:00 +0000http://www.progtown.com/topic2077647-handling-of-multisigns-at-machine-training.html<![CDATA[Point-in-polygon test, including edges]]>
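One common compromise for variable-length category lists is the hashing trick: map every category into a fixed number of buckets and build a multi-hot vector, so the dimensionality is capped regardless of whether an object has 0 or 20 categories. The bucket count of 8 below is an arbitrary assumption; real feature hashers use hundreds or thousands of buckets.

```python
# Feature hashing for a multi-valued categorical feature: each category
# sets one bit of a fixed-length multi-hot vector. Collisions are the
# price paid for the fixed dimensionality.
import zlib

def hash_multi_hot(categories, n_buckets=8):
    """Encode a list of category strings as a fixed-length multi-hot vector."""
    vec = [0] * n_buckets
    for cat in categories:
        # crc32 is a stable hash across runs (unlike Python's built-in
        # hash(), which is randomized per process)
        vec[zlib.crc32(cat.encode()) % n_buckets] = 1
    return vec
```

The other standard options are full multi-hot encoding (exact but wide) and learned low-dimensional embeddings averaged over the categories.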
http://www.progtown.com/topic2077648-point-hit-in-a-polygon-including-edges.html
What algorithm should be used to check whether a point lies in a polygon, such that points exactly on the edges are counted as part of the polygon? The standard algorithms, namely counting the number of intersections of a ray with the edges and counting the winding number around the point (which also reduces to searching for edges crossing a ray), do not allow you to "simply" include the boundary in the polygon: by default some of the edges end up inside and some do not. It is possible to handle the case of a point lying exactly on an edge separately, which is what I did, but it looks ugly. A nicer approach is to pre-split the polygon into convex polygons and then test membership in each; there it is very simple to decide how to classify points on edges. But I would rather not do preliminary processing of the polygon. Are there any nicer variants?]]>Wed, 08 Nov 2017 09:27:00 +0000http://www.progtown.com/topic2077648-point-hit-in-a-polygon-including-edges.html<![CDATA[Books on machine learning]]>
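For comparison, here is a sketch of the approach the poster calls ugly but workable: a standard ray-casting test plus an explicit on-segment check, so boundary points are always reported as inside. Written from the textbook algorithms; the function names and epsilon are mine.

```python
# Point-in-polygon with an inclusive boundary: first test "exactly on an
# edge" (collinearity + bounding box), then fall back to ray casting.

def on_segment(p, a, b, eps=1e-9):
    """True if point p lies on segment ab (collinear and within its box)."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if abs(cross) > eps:
        return False
    return (min(a[0], b[0]) - eps <= p[0] <= max(a[0], b[0]) + eps and
            min(a[1], b[1]) - eps <= p[1] <= max(a[1], b[1]) + eps)

def point_in_polygon(p, poly):
    """True if p is strictly inside poly or exactly on its boundary."""
    inside = False
    for i in range(len(poly)):
        a, b = poly[i - 1], poly[i]
        if on_segment(p, a, b):
            return True                # boundary counts as "in"
        # ray casting: does a rightward horizontal ray from p cross ab?
        if (a[1] > p[1]) != (b[1] > p[1]):
            x = a[0] + (p[1] - a[1]) * (b[0] - a[0]) / (b[1] - a[1])
            if x > p[0]:
                inside = not inside
    return inside
```

The half-open comparison `(a[1] > p[1]) != (b[1] > p[1])` is what keeps vertices from being counted twice in the crossing test.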
http://www.progtown.com/topic2076547-books-on-machine-learning.html
Can you recommend a good introduction to machine learning? Nothing in depth, just to get a general idea....]]>Wed, 01 Nov 2017 03:43:00 +0000http://www.progtown.com/topic2076547-books-on-machine-learning.html<![CDATA[Bw-tree: how do we transfer keys to the new page on a split?]]>
http://www.progtown.com/topic2076548-bwtree-as-at-division-we-transfer-keys-to-the-new.html
There is such a thing as the Bw-tree, invented at Microsoft in 2013: https://www.microsoft.com/en-us/researc … ree-for... A question about page splits. There is a page P that is being split. On splitting P, we create a new page Q and merge into it all of P's keys >= kkk (the separator key). Q is built, and that is time moment (1). We then go to P to add a delta record, redirect-kkk-to-Q; that happens at time moment (2). Between moments (1) and (2), another thread managed to insert keys >= kkk into P. After (2), all lookups for keys >= kkk go to Q, yet some of the keys >= kkk are not there. What to do? Check both pages on lookup? Another variant: when installing the delta redirect-kkk-to-Q on P, publish the delta via CAS not simply against the current state of P, but against the state P had at the moment we began copying data into Q. Then, if anyone changes P between (1) and (2), the delta will not be installed and we restart building Q from scratch. But there is a risk: under an active stream of INSERTs into P, the state of P will constantly be new and the Q-building loop will keep iterating. I dug up a couple of student projects attempting a bw-tree; I'll study the sources in more detail, although they still have TODO stubs.]]>Mon, 23 Oct 2017 13:30:00 +0000http://www.progtown.com/topic2076548-bwtree-as-at-division-we-transfer-keys-to-the-new.html<![CDATA[Repeated fast launching of a console application]]>
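A toy model of the "CAS against the snapshot" variant from the post: the split succeeds only if P's state pointer is still the one we copied from, otherwise we retry. This is plain single-threaded Python standing in for an atomic mapping-table CAS (which is how a real Bw-tree publishes deltas); the class, the median separator, and the bounded retry count are all my own illustrative choices.

```python
# Toy split-with-snapshot-CAS. `state` stands in for the head of P's
# delta chain; identity comparison plays the role of pointer CAS.

class Page:
    def __init__(self, keys):
        self.state = tuple(sorted(keys))

    def cas(self, expected, new):
        """Single-threaded stand-in for an atomic compare-and-swap."""
        if self.state is expected:
            self.state = new
            return True
        return False

def split(p, max_retries=10):
    """Split page p at its median key; retry if p changed mid-copy.
    Returns (separator key, new right page Q); p keeps the left keys."""
    for _ in range(max_retries):
        snapshot = p.state                   # moment (1): start copying
        kkk = snapshot[len(snapshot) // 2]   # separator key
        q = Page([k for k in snapshot if k >= kkk])
        left = tuple(k for k in snapshot if k < kkk)
        # moment (2): install the redirect only if p is unchanged
        if p.cas(snapshot, left):
            return kkk, q
    raise RuntimeError("split kept losing the CAS race")
```

This makes the livelock risk from the post concrete: under a hot INSERT stream the `cas` keeps failing and the loop spins, which is why published designs instead install the split delta on P's current state and let lookups traverse the delta chain.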
http://www.progtown.com/topic2072366-repeated-fast-call-of-the-console-application.html
Probably not the right forum, but nevertheless: I need to repeatedly launch a console exe (15) from my program in order to search over command-line options and obtain the results as generated files. But as it turned out, even with a RAM disk everything happens very slowly, at best 10 launches per second; that will not do. Embedding the called exe into my own program is also impossible, since my environment is non-standard: LabVIEW. The only idea so far is to move my search logic into the called exe, but that would mean injecting a complicated algorithm and translating a lot of code. Any advice on how to speed up application launching in Windows 10? Or are my hands crooked and did I overlook something?]]>Thu, 12 Oct 2017 12:45:00 +0000http://www.progtown.com/topic2072366-repeated-fast-call-of-the-console-application.html<![CDATA[Algorithm for generating one-dimensional patterns]]>
http://www.progtown.com/topic2072367-algorithm-of-generation-of-onedimensional-patterns.html
Colleagues, do methods exist for solving the following task? There is some region of space, partitioned into cells (the nearest analogy is an image broken into pixels). There is some curve f(t) = (x(t), y(t)); if it passes through a cell, the cell is considered painted. It is necessary to choose a curve that paints the maximum number of cells while satisfying the following conditions: 1. No self-intersections: the curve must not pass through the same cell twice. 2. Sufficient smoothness: max_t ((dx(t)/dt)^2 + (dy(t)/dt)^2) < T. 3. Uniqueness of the pattern: in any window of size MxN the pattern must be unique (i.e., if you linearize such a window and convert it into a binary number, the numbers over the whole image never repeat).]]>Wed, 11 Oct 2017 03:51:00 +0000http://www.progtown.com/topic2072367-algorithm-of-generation-of-onedimensional-patterns.html<![CDATA[Efficiently solving a large inconsistent linear system]]>
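As a starting point for conditions 1 and 2, the classic curve that visits every cell exactly once with bounded step length is the serpentine (boustrophedon) path; a sketch follows. It deliberately does not address condition 3 (window uniqueness), which points toward De Bruijn-torus-style constructions; this is only an illustration of how the coverage and no-repeat constraints can be met.

```python
# Serpentine path: sweep each row, alternating direction, so every cell
# is visited exactly once and consecutive cells are always adjacent
# (step length 1, i.e. bounded "speed").

def serpentine(width, height):
    """Return the cells (x, y) of a snake path covering the whole grid."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        path.extend((x, y) for x in xs)
    return path
```

Any smooth curve threaded through these cell centers inherits the no-self-intersection property at the cell level; the hard remaining part is perturbing it so every MxN window becomes distinct.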
http://www.progtown.com/topic2072368-the-effective-decision-big-incompatible-slough.html
Good day! There is a certainly inconsistent linear system (on the order of 10000 variables and 40000 equations) for which a solution must be sought in the sense of minimizing the residual. The system is strongly sparse: each equation contains 1 or 2 variables, and the remaining coefficients are exactly zero. During model prototyping in Matlab, this was solved with the function mldivide, which ran in under 0.1 s. After rewriting the model in C++ with Intel MKL, this code section began taking ~10-50 seconds, depending on how I implemented it. With direct use of the QR solver (LAPACKE_dgels) on the dense matrix (40000 x 10000 elements), it took about 50 seconds and a lot of memory to store the whole matrix (much more than Matlab consumed). By converting to the normal equations (A^T A) x = (A^T b) and solving that consistent system (with a dense square 10000 x 10000 matrix), using mkl_dcsrmv and mkl_dcsrmultd for the conversion and LAPACKE_dgesv directly for the solve, I got the time down to the order of 10 seconds. So the difference from Matlab is at least 2 orders of magnitude, which is somehow too much. Moreover, Matlab ran in a virtual machine on a 4-year-old CPU, while Intel MKL ran on a new Xeon E5 v4 server. Please advise: what might I be doing wrong? Are there implemented methods for fast solution of such strongly sparse inconsistent systems that I do not know about and that are not mentioned in the mldivide documentation (which in fact says that for my case a QR solver is used)?]]>Tue, 10 Oct 2017 15:16:00 +0000http://www.progtown.com/topic2072368-the-effective-decision-big-incompatible-slough.html<![CDATA[Distinguishing compressed data from encrypted?]]>
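The gap comes from densifying a matrix with 1-2 nonzeros per row: Matlab's mldivide dispatches to a sparse QR for sparse inputs. For min ||Ax - b|| on such a matrix, an iterative solver like LSQR/CGLS never forms A or A^T A densely at all. Below is a minimal pure-Python CGLS on a row-sparse matrix (list of (col, value) pairs per row), purely to show the access pattern; in practice one would use a library routine (e.g. SciPy's `scipy.sparse.linalg.lsqr`, or Eigen's LeastSquaresConjugateGradient in C++) rather than this sketch.

```python
# CGLS: conjugate gradients applied implicitly to the normal equations
# A^T A x = A^T b, touching A only through sparse mat-vec products.

def cgls(rows, b, n, iters=100, tol=1e-12):
    """rows[i] = list of (j, a_ij); minimizes ||Ax - b|| over x in R^n."""
    def matvec(x):                     # y = A x
        return [sum(v * x[j] for j, v in row) for row in rows]
    def rmatvec(y):                    # z = A^T y
        out = [0.0] * n
        for yi, row in zip(y, rows):
            for j, v in row:
                out[j] += v * yi
        return out

    x = [0.0] * n
    r = list(b)                        # residual b - A x (x starts at 0)
    s = rmatvec(r)                     # gradient direction A^T r
    p = list(s)
    gamma = sum(si * si for si in s)
    for _ in range(iters):
        q = matvec(p)
        alpha = gamma / sum(qi * qi for qi in q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = rmatvec(r)
        gamma_new = sum(si * si for si in s)
        if gamma_new < tol:
            break
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

Each iteration costs O(nnz), so with ~80000 nonzeros even thousands of iterations are far cheaper than a dense 40000 x 10000 QR.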
http://www.progtown.com/topic2072369-to-distinguish-the-oblate-data-from-the-ciphered.html
Clever people, advise on the subject: how can one distinguish compressed data from encrypted data? At least for archivers (and formats with built-in compression), but without analyzing headers etc., purely by the character of the data. Beyond trying to compress the whole data, or randomly selected pieces of it, with an arithmetic coder, nothing else comes to mind.]]>Fri, 06 Oct 2017 05:17:00 +0000http://www.progtown.com/topic2072369-to-distinguish-the-oblate-data-from-the-ciphered.html
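Both compressed and encrypted data look high-entropy, so plain Shannon entropy rarely separates them; a chi-squared test of byte-frequency uniformity is a common heuristic, since a good cipher's output is indistinguishable from uniform bytes while compressor output usually retains small biases (block structure, literal runs). The statistics below are standard; any decision threshold on them would be an assumption to calibrate empirically.

```python
# Two byte-level statistics usable as features for the compressed-vs-
# encrypted question: Shannon entropy and the chi-squared uniformity
# statistic (for truly uniform bytes the latter hovers around 255,
# its degrees of freedom).
import math

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte (max 8)."""
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def chi_squared_uniform(data):
    """Chi-squared statistic of byte counts against uniform; large values
    mean the data is measurably non-uniform (more compressor-like)."""
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    expected = len(data) / 256
    return sum((c - expected) ** 2 / expected for c in counts)
```

In practice one computes these over many random windows of the input, exactly as the poster proposes for the arithmetic-coder test, and looks at the distribution of the statistic rather than a single value.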