Hello, Tyomchik, you wrote:

>> I would say this is a philosophical question that applies to any method/model. Can a neural network predict the price? Can linear regression predict the price?

T> So I take it the answer is no, it cannot. This thing combines several classifiers into one more accurate one, right? I.e., in theory, can you feed two of them into gradient boosting and get a more accurate model?

No, it can predict - if such regularities are present in the training sample, and if its capacity is proportional to the complexity of the phenomenon being predicted. You cannot take two neural networks and get a better final algorithm out of them. But you can take one neural network and train each subsequent one so that it compensates the errors of the previous ones. That is the idea, not a simple combination of several classifiers.

>> The general answer: if the phenomenon we want to predict has objective laws of development, and if the data we use for prediction are relevant and describe the key factors that influence the phenomenon, then a model (including gradient boosting) can predict it.

T> Talk is cheap. Can gradient boosting predict the USD/EUR rate for the next day?

Look, I can repeat Jeffrey's words, or answer a question with a question: on what training sample?

>> Yes, under the hood the gradient descent method is used for training. There are many techniques - for example, cross-validation, random subsampling, setting different learning rates, and also using ensembles of models, etc.

T> I.e., out of the box it cannot cope, right?

It does not need those out of the box. The method guarantees that increasing the number of base classifiers increases its accuracy. All those techniques are optimizations intended to build a more accurate model in fewer steps, and partly to compensate for the inaccuracy of the chosen loss function. They are applicable to most machine learning methods in general.
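The "each subsequent one compensates the errors of the previous ones" idea can be sketched in a few lines. This is a toy illustration with squared loss and decision-stump base learners, not any particular library's implementation; all names are made up for the example:

```python
# Toy gradient boosting: each new stump is fit to the residuals
# (errors) left by the ensemble built so far. Illustrative only.

def fit_stump(x, residuals):
    """Pick the 1-D split threshold minimizing squared error of a stump."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, n_rounds=50, lr=0.5):
    """Build the ensemble: every round trains on what is still unexplained."""
    stumps = []

    def predict(xi):
        return sum(lr * s(xi) for s in stumps)

    for _ in range(n_rounds):
        residuals = [yi - predict(xi) for xi, yi in zip(x, y)]
        stumps.append(fit_stump(x, residuals))
    return predict

# A noisy step function: no single stump fits it well,
# but the boosted sum of stumps does.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.1, 0.9, 3.0, 3.1, 2.9]
model = boost(x, y)
errors = [abs(model(xi) - yi) for xi, yi in zip(x, y)]
print(round(max(errors), 3))
```

Note that nothing here "joins" two independently trained models; accuracy comes from the sequential residual fitting, which is exactly the point made above.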
T>>> And can gradient boosting do reinforcement learning?

>> Depends on what you mean by "can". As I understand it, reinforcement learning is essentially adjusting the model to new data. Nothing prevents retraining the model from scratch when fresher data arrives. There may even be methods for updating the model's coefficients with a new portion of data, but retraining the model from scratch seems simpler to me.

T> I am not talking about retraining from scratch, but about the model adapting to a changing environment while striving to minimize its error.

You can keep adding portions of classifiers to the final algorithm to express the new precedents, but the error introduced by the earlier classifiers will accumulate, and the computational complexity will grow. So it is possible, but not practical.
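The trade-off in that last reply - adapt by appending new classifiers rather than retraining, at the cost of a growing ensemble - can be shown with a deliberately trivial toy (a constant base learner; all names are illustrative, and this is my sketch, not a method from the thread):

```python
# Toy "online boosting": old ensemble members are frozen; adaptation to a
# new regime happens only by appending members fit to residuals on the
# NEW data. The ensemble grows on every update - the impracticality
# noted in the reply above.

def fit_mean(x, residuals):
    """Trivial base learner: predicts the mean residual everywhere."""
    m = sum(residuals) / len(residuals)
    return lambda xi: m

def predict(ensemble, xi, lr=0.5):
    return sum(lr * f(xi) for f in ensemble)

def extend(ensemble, x_new, y_new, n_rounds=10, lr=0.5):
    """Append base learners fit on residuals over the new data only."""
    for _ in range(n_rounds):
        residuals = [yi - predict(ensemble, xi, lr)
                     for xi, yi in zip(x_new, y_new)]
        ensemble.append(fit_mean(x_new, residuals))
    return ensemble

# Regime 1: the target is 1.0. Regime 2: it drifts to 2.0.
ens = extend([], [0], [1.0])       # initial training
old_pred = predict(ens, 0)
ens = extend(ens, [0], [2.0])      # adapt without retraining
new_pred = predict(ens, 0)
print(len(ens), round(old_pred, 3), round(new_pred, 3))
```

The prediction does track the drift, but only because ten new members were stacked on top of ten stale ones whose bias had to be cancelled out - every update makes the model bigger and slower, which is why retraining from scratch is usually the saner option.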