#### Topic: Calculation on GPU

Hello! I have zero experience with GPU programming, and my knowledge in this area is close to zero as well, so if the question sounds trivial or silly, please don't be too harsh. I need to figure out whether it makes sense to use a GPU instead of the CPU for the following task. There is a fairly simple iterative algorithm. It performs on the order of a hundred iterations and operates on numbers of type double. It takes a few dozen numbers as input and produces about 10 numbers as output. On a good CPU, a full run of the algorithm for one set of input parameters takes roughly 400 microseconds. The algorithm has to be run for many different sets of input parameters — anywhere from 100 to 5000 sets. All data sets are independent and available simultaneously (in RAM). The goal is to finish recomputing all the data sets as quickly as possible.

Questions:

- Is it feasible to run this computation on a GPU?
- How many data sets could be processed in parallel?
- What speedup can I expect compared to a CPU that processes all the data sets sequentially, one after another?
- Where should I expect bottlenecks and problems?
- Where can I find a code sample that does something similar?

The task is purely mathematical and doesn't involve displaying anything on screen. Thanks!
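For context, the natural mapping for this workload is one GPU thread per data set, since the sets are independent. A minimal CUDA sketch of that structure, where `iterate()`, the array sizes, and the state initialization are all placeholders standing in for the real algorithm (assumptions, not the actual code):

```cuda
#include <cuda_runtime.h>

#define N_IN   32    // "a few dozen" inputs per set (assumption)
#define N_OUT  10    // ~10 outputs per set
#define N_ITER 100   // ~100 iterations

// Hypothetical stand-in for one iteration of the real algorithm.
__device__ void iterate(double *state) {
    for (int i = 0; i < N_OUT; ++i)
        state[i] = 0.5 * (state[i] + 1.0);  // dummy update
}

// One thread per data set: thread k reads in[k*N_IN ...] and
// writes out[k*N_OUT ...]; sets never interact.
__global__ void process_sets(const double *in, double *out, int n_sets) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= n_sets) return;

    double state[N_OUT];
    // How state is derived from the inputs depends on the real algorithm.
    for (int i = 0; i < N_OUT; ++i)
        state[i] = in[k * N_IN + i];

    for (int it = 0; it < N_ITER; ++it)
        iterate(state);

    for (int i = 0; i < N_OUT; ++i)
        out[k * N_OUT + i] = state[i];
}

// Host side: copy all inputs to the device at once, launch one
// thread per set, copy the results back.
void run_on_gpu(const double *h_in, double *h_out, int n_sets) {
    double *d_in, *d_out;
    cudaMalloc(&d_in,  n_sets * N_IN  * sizeof(double));
    cudaMalloc(&d_out, n_sets * N_OUT * sizeof(double));
    cudaMemcpy(d_in, h_in, n_sets * N_IN * sizeof(double),
               cudaMemcpyHostToDevice);

    int threads = 128;
    int blocks  = (n_sets + threads - 1) / threads;
    process_sets<<<blocks, threads>>>(d_in, d_out, n_sets);

    cudaMemcpy(h_out, d_out, n_sets * N_OUT * sizeof(double),
               cudaMemcpyDeviceToHost);
    cudaFree(d_in);
    cudaFree(d_out);
}
```

Two caveats worth noting for this particular size of problem: with only 100–5000 sets there may be too few threads to keep a large GPU fully occupied, and the host-to-device / device-to-host copies plus kernel-launch overhead can eat a large share of the total time when each set only costs ~400 µs on the CPU. Double-precision throughput on consumer cards is also much lower than single-precision, which affects the achievable speedup.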