#### Topic: Laziness vs. Calculation when the data is accessible

Admirers of Haskell often note that laziness is one of its most interesting features. However, many of its problems also stem from laziness. When data becomes available only at the moment a computation accesses it, the code becomes hard to understand, and it conflicts with the idea of side effects in computations. So, what if instead of the lazy model, in which data availability is driven by demand, we apply a model in which computation is driven by the availability of the input data? We here made a DSL for typing and implemented such a computation model. The result was very interesting. The language behaves like a lazy one, but there are no problems with side effects, since computations run as soon as the data becomes available. The general idea is:

```
A = B.C + D; // A depends on B.C and D, so this branch executes last
B.C = D;     // B.C depends on D, so this branch executes second
D = 42;      // D depends on nothing, so this branch executes first
```

Most properties of such a language resemble a "pure" lazy one:
1. Variables are computed once.
2. The order of computation is determined by dependencies.

However, since computations run as soon as their dependencies become available, there are no problems with side effects, debugging, etc. The only problems are with its implementation: building an efficient machine for executing it, or converting it into efficient code. If anyone has thoughts on an efficient implementation, please share (don't propose a dependence graph: too much of it would have to be kept in memory, and it would have to be mutated on the fly).
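For illustration, the scheduling described above can be sketched in a few lines of Python (a hypothetical sketch, not the poster's actual DSL): each binding lists its dependencies, and a deterministic topological sort fixes the execution order, so D runs first, then B.C, then A.

```python
def topo_order(deps):
    """Deterministic Kahn's algorithm: ready nodes are taken in sorted order."""
    pending = {name: set(d) for name, d in deps.items()}
    order = []
    while pending:
        ready = sorted(n for n, d in pending.items() if not d)
        if not ready:
            raise ValueError("dependency cycle")
        for n in ready:
            order.append(n)
            del pending[n]
        for d in pending.values():
            d.difference_update(ready)
    return order

# The example from the post: A = B.C + D; B.C = D; D = 42
deps  = {"A": ["B.C", "D"], "B.C": ["D"], "D": []}
rules = {"D":   lambda env: 42,
         "B.C": lambda env: env["D"],
         "A":   lambda env: env["B.C"] + env["D"]}

env = {}
for name in topo_order(deps):   # D, then B.C, then A
    env[name] = rules[name](env)
```

Because the algorithm always picks ready nodes in the same (sorted) order, the execution sequence is predictable even though the graph admits several valid topological orders.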

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> We here made a DSL for typing and implemented such a computation model. The result was very interesting. The language behaves like a lazy one, but there are no problems with side effects, since computations run as soon as the data becomes available. The general idea is:

In my opinion, all build scripts work by this principle. The order of computation is not completely determined, since topological sorting is ambiguous in general.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, nikov, you wrote:

N> In my opinion, all build scripts work by this principle.

That much is clear. What is not clear is why it is not used for general-purpose languages.

N> The order of computation is not completely determined, since topological sorting is ambiguous in general.

Why ambiguous? The fact that the graph can be sorted in different ways does not mean it must be sorted in different ways. Using a single algorithm, you get an identical (predictable) execution sequence. And the fact that computations are not deferred makes it possible to execute something with side effects in the right place, and to access the already computed data from imperative code.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> When data becomes available only at the moment a computation accesses it, the code becomes hard to understand, and it conflicts with the idea of side effects in computations.

How so? The idea of side effects in computations is about other aspects. And under the aspect you most likely mean, a stall happens, for example, at a sudden garbage collection.

VD> The general idea is:
VD>
VD> A = B.C + D; // A depends on B.C and D, so this branch executes last
VD> B.C = D;     // B.C depends on D, so this branch executes second
VD> D = 42;      // D depends on nothing, so this branch executes first

Keywords for searching: spreadsheet, reactive programming, future/promise, and, as already said above, topological sorting.

VD> However, since computations run as soon as their dependencies become available, there are no problems with side effects, debugging, etc.

This loses a principal feature of laziness: the reduction of algorithmic complexity, which for example makes it possible to define infinite lists. Or, for example, obtaining several algorithms from one piece of code: a lazy sort consumed only partially turns into a linear selection algorithm.

VD> The only problems are with its implementation: building an efficient machine for executing it, or converting it into efficient code.

And what will there be besides expressions, and how will it look?
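The "several algorithms from one code" point can be illustrated with generators (a rough Python sketch of the classic Haskell example; Python generators are only partially lazy, so this is an approximation): a lazy quicksort sorts fully when fully consumed, but taking only the first element degenerates into expected-linear-time selection of the minimum.

```python
def lazy_qsort(xs):
    # Generator-based quicksort: work is done only as elements are demanded.
    if not xs:
        return
    pivot, rest = xs[0], xs[1:]
    yield from lazy_qsort([x for x in rest if x < pivot])
    yield pivot
    yield from lazy_qsort([x for x in rest if x >= pivot])

data = [5, 3, 8, 1, 9, 2]
smallest = next(lazy_qsort(data))   # only the left spine of the recursion runs
full     = list(lazy_qsort(data))   # consuming everything yields the full sort
```

In a data-driven model that eagerly computes everything whose inputs are available, this trick is lost unless laziness is reintroduced explicitly.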

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

N>> The order of computation is not completely determined, since topological sorting is ambiguous in general.

VD> Why ambiguous? The fact that the graph can be sorted in different ways does not mean it must be sorted in different ways. Using a single algorithm, you get an identical (predictable) execution sequence.

And what is a predictable execution order for? Code optimizers rearrange instructions as they see fit, as long as the semantics do not change. Moreover, in C++, for example, the order of evaluation of function arguments is unspecified: f(foo(), bar()).

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> So, what if instead of the lazy model, in which data availability is driven by demand, we apply a model in which computation is driven by the availability of the input data?

Will the fixed-point combinator still be expressible?

```haskell
fix :: (a -> a) -> a
fix f = let a = f a in a
```

Through laziness, f receives an argument that is unknown in advance, and the result then unwinds through iterations. Recursive computations in Haskell work on this idea, including in the IO monad (which is actually not as imperative as it is usually called). A classy thing! Without it, this will be absolutely uninteresting.
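In a strict evaluation model the definition `fix f = let a = f a in a` would diverge, but the same idea survives if the recursive occurrence is delayed behind a lambda (the Z combinator, roughly what strict languages like Scala and F# do). A hypothetical Python sketch:

```python
def fix(f):
    # Eta-expansion: the recursive call is built only when actually applied,
    # which is the strict-language substitute for lazy self-reference.
    return lambda *args: f(fix(f))(*args)

# Factorial obtained from a non-recursive functional via the combinator.
fact = fix(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

So a data-driven language does not automatically lose fixed points, but it has to pay for them with explicit delaying, exactly as strict functional languages do.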

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> So, what if instead of the lazy model, in which data availability is driven by demand, we apply a model in which computation is driven by the availability of the input data?

VD> We here made a DSL for typing and implemented such a computation model. The result was very interesting. The language behaves like a lazy one, but there are no problems with side effects, since computations run as soon as the data becomes available. The general idea is:
VD>
VD> A = B.C + D; // A depends on B.C and D, so this branch executes last
VD> B.C = D;     // B.C depends on D, so this branch executes second
VD> D = 42;      // D depends on nothing, so this branch executes first

I remember the lambda calculus poorly, but this seems no different from the simple call-by-value reduction strategy applied in practically all languages. In your example you have, as it were, merely dropped the "forward declaration" of values before use. Lazy languages use a different, "lazy" strategy. I don't want to get into the weeds here. Just a view from afar, with the naked eye.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> We here made a DSL for typing and implemented such a computation model. The result was very interesting. The language behaves like a lazy one, but there are no problems with side effects, since computations run as soon as the data becomes available. The general idea is:

With side effects you got carried away. This model does not admit them.

VD> However, since computations run as soon as their dependencies become available, there are no problems with side effects, debugging, etc.

With debugging it really is much better.... <<RSDN@Home 1.2.0 alpha 5 rev. 62>>

#### Re: Laziness vs. Calculation when the data is accessible

Hello, dsorokin, you wrote:

D> Will the fixed-point combinator still be expressible?

We need the ride, not the taxi checkers. And we need to ride quickly. The task here is purely practical.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, WolfHound, you wrote:

WH> With side effects you got carried away.
WH> This model does not admit them.

It very much does admit them. And I have made use of them. The main thing here is that side effects execute at the expected time. In the lazy model a side effect simply will not execute if the computation is not used, whereas in this one it will. Actually, just as in the lazy model, there is a problem with data availability. But here it is very simple to place a side effect exactly where the data is already available. It is also important that after such a model has run, the values end up computed, and the data structure computed this way (in our case, the AST) can be analyzed by ordinary imperative code. In a purely lazy model we could not do that, since the imperative code itself would have to force the necessary computations.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> It very much does admit them. And I have made use of them.

Populating the symbol table is not a side effect. The result of merging symbols does not depend on the order. If you used side effects for anything else, then once the computation strategy changes, the fireworks begin.... <<RSDN@Home 1.2.0 alpha 5 rev. 62>>

#### Re: Laziness vs. Calculation when the data is accessible

Hello, WolfHound, you wrote:

WH> Populating the symbol table is not a side effect.

And what about lists? Changing the state of an object is one. And I do that constantly. Plus you can simply print something to the console, and so on.

WH> The result of merging symbols does not depend on the order.

What difference does it make what depends on what, if I am changing the state of objects?

WH> If you used side effects for anything else, then once the computation strategy changes, the fireworks begin.

Nothing begins anywhere. A side effect executes strictly after the computation of particular values. I can save a computed value to a file or send it to the printer.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

WH>> Populating the symbol table is not a side effect.

VD> And what about lists?

What about them? I said nothing about lists at all.

VD> Changing the state of an object is one. And I do that constantly.

Which objects?

VD> Plus you can simply print something to the console, and so on.

That is possible in Haskell too: there is unsafePerformIO. It is just better not to use it.

WH>> The result of merging symbols does not depend on the order.

VD> What difference does it make what depends on what, if I am changing the state of objects?

A big one. If the change is adding a symbol to the symbol table, there is no problem: our symbol merging is associative and commutative, so the result does not depend on the execution order. If you use operations that are not associative and commutative, expect trouble. Likewise, you must not read this data while the changes are still being made. Hence adding a symbol to the table must happen in one stage, and all reads from it only in the following one.

VD> Nothing begins anywhere. A side effect executes strictly after the computation of particular values. I can save a computed value to a file or send it to the printer.

Our execution order is not defined and may change in the future. The current computation strategy is optimal, or close to optimal, in most cases. But our worst case is sad: O(N^2) in the number of properties in the tree. So a change of algorithm is quite likely.... <<RSDN@Home 1.2.0 alpha 5 rev. 62>>

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

N>> In my opinion, all build scripts work by this principle.

VD> That much is clear. What is not clear is why it is not used for general-purpose languages.

My point was that it is not a new idea. It is used in specialized languages and frameworks, wherever it adequately fits the task: MSBuild, Excel, TPL Dataflow. It seems very applicable to IDEs, where one has to perform complex incremental multiphase analyses over constantly changing input data (the source code in the editor). Are you suggesting making it the default execution strategy for general-purpose languages? It would be interesting to look at example implementations of some algorithms (in pseudocode) and see what advantages it offers over the traditional eager or lazy strategies. Brevity of notation? Execution speed? Compilation speed? Savings in memory consumption? Debugging convenience? Simpler refactoring? Clearer code? Reduced risk of logical errors?

#### Re: Laziness vs. Calculation when the data is accessible

Hello, Kodt, you wrote: What if we broke lazy evaluation by declaring laziness optional (compute it whenever you like, we only hope for laziness), and instead worked properly on the optimizing compiler and a clever runtime environment... Then one would not have to study the difference between foldl and foldl'.

I have long had an idea I would like to try (maybe someone has already tried it?). It seems to me it could give a potential win on multicore machines for some algorithms, though I have heard the opinion that the implementation overhead eats any win.

Imagine a pure functional language where evaluation is lazy by default. The code has branching points, for example a pattern match over some expression, or the if operator (which is the same thing: a pattern match of an expression against the constants True and False). At branching points we are forced to force the evaluation of the controlling expression (whose result determines which branch is chosen). Since the language is pure, the evaluations of the various branches do not depend on each other. Therefore, in parallel with evaluating the controlling expression, we can speculatively start evaluating all of the branches, or a few of them (depending on the free cores available).

If the evaluation of the controlling expression finishes first and the branch choice becomes known, we abort the unfinished evaluations of the branches that have become unnecessary and continue executing the chosen branch. If the evaluation of some branch requires forcing an expression that depends on variables not yet known (ones participating in the still incomplete pattern match), execution of that branch is suspended until the controlling expression is evaluated and the pattern match completes, and the core is released for the speculative evaluation of another branch (from the same branching point, or a different one).
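A toy model of the speculative scheme (a hypothetical Python sketch with invented function names; a real runtime would abort the losing branch's work rather than merely discard its result): while the controlling expression is being forced, both branches start on spare cores, and once the condition is known the unneeded branch is dropped.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_condition():
    time.sleep(0.05)          # stands in for an expensive controlling expression
    return True

def then_branch():
    return "then"

def else_branch():
    return "else"

def speculative_if(cond, then_f, else_f):
    with ThreadPoolExecutor(max_workers=3) as pool:
        c = pool.submit(cond)     # force the controlling expression
        t = pool.submit(then_f)   # speculate on both branches in parallel
        e = pool.submit(else_f)
        chosen = c.result()
        winner, loser = (t, e) if chosen else (e, t)
        loser.cancel()            # abort the unneeded branch if it has not started
        return winner.result()

result = speculative_if(slow_condition, then_branch, else_branch)
```

Purity is what makes this safe: since neither branch can have side effects, evaluating the losing branch and throwing its result away changes nothing observable.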

#### Re: Laziness vs. Calculation when the data is accessible

Hello, nikov, you wrote:

N> I have long had an idea I would like to try (maybe someone has already tried it?). It seems to me it could give a potential win on multicore machines for some algorithms, though I have heard the opinion that the implementation overhead eats any win.

We did something similar in practice (parsing a huge and rather peculiar log file; to avoid wasting time, it paid off to keep several hypotheses in memory at once, periodically discarding them). In my opinion, such precomputation pays off when two conditions are met: 1. The total combinatorial complexity is less than the number of available cores. 2. The cost of precomputing a branch is greater than the cost of a thread switch and, at the same time, small enough that the idea makes sense at all. If these conditions are not met, then one should look instead at the reactive model of computation, all the more so since it maps well onto green threads. That drifts toward multi-agent data mining with all its peculiarities and delights; a dedicated research topic, in other words. On the other hand, for data mining/OLAP of every kind there is nothing new here: brute-force search for correlations has existed from time immemorial. As a rule, it served as the whipping boy for the "clever" approaches, but it existed nonetheless. And sometimes it even struck back, as in the first-ever use of a university cluster for something useful. Something like that.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> We here made a DSL for typing and implemented such a computation model. The result was very interesting. The language behaves like a lazy one, but there are no problems with side effects, since computations run as soon as the data becomes available. The general idea is:
VD>
VD> A = B.C + D; // A depends on B.C and D, so this branch executes last
VD> B.C = D;     // B.C depends on D, so this branch executes second
VD> D = 42;      // D depends on nothing, so this branch executes first

In the rather big project at my current job, everything is permeated by similar computations through IObservable<T>. Roughly, it comes out like this:

```
this.WhenAnyValue(t => t.B.C, t => t.D)
    .Select((c, d) => c + d)
    .BindTo(this, t => t.A);
this.WhenAnyValue(t => t.D)
    .BindTo(this, t => t.B.C);
D = 42;
```

It works perfectly, plus you can add various convenient operators like Throttle, Timeout and so on. If you manage to make a DSL out of this, it will be mighty cool.

#### Re: Laziness vs. Calculation when the data is accessible

Hello, nikov, you wrote:

N> Are you suggesting making it the default execution strategy for general-purpose languages?

Yes. More precisely, I do not understand why it has not been done already. Maybe there are contraindications.

N> It would be interesting to look at example implementations of some algorithms (in pseudocode) and see what advantages it offers over the traditional eager or lazy strategies.

We already have a working prototype, though it is not finished yet. At the moment the language is available for computing values over an AST, and we want it to work over symbols as well. But examples can already be looked at.

N> Brevity of notation?

The brevity is roughly the same as in any other declarative language, but the semantics come out closer to the task. Beyond that, it is more a question of language design. Our language, by the way, has no maps, folds, and other functional heresy. Instead, collections duplicate the properties of their elements, which gives the same functionality.

N> Execution speed? Compilation speed?

Execution speed is the big question. We generate compiled code (effectively machine code), but there is some glue code that slightly increases the volume of computation. In addition, we have a problem caused by extensibility: our AST must support extensibility, so we cannot build a dependence graph; instead we maintain a partial ordering and compute properties by repeatedly traversing all branches of the AST. Still, it turns out quite fast. Compilation speed is the same as for any other compiled language. If you build an interpreter, you can simply construct a dependence graph and topologically sort it; a traversal of the graph then performs all the computations.

N> Savings in memory consumption?

Compared to Haskell, yes. Compared to an ordinary language there will be small overheads, though that is rather a question of the computation model and its implementation.

N> Debugging convenience?

Undoubtedly!
Unlike purely imperative languages, a language based on such a computation model can implement functional purity with controlled imperativeness. And unlike pure functional languages, there are no problems with debugging of any kind (be it logging, a debugger, or visualization of computations, as in Excel). At the same time, it is simple enough to arrange for individual properties to be computed lazily. Introducing an analog of lazy lists into such a language is not especially difficult either. The question is whether that is even needed in most cases. In my opinion, not really. It is better to have easily accessible laziness applied in the right places.

N> Simpler refactoring?

That does not especially depend on the computation model.

N> Clearer code?

Undoubtedly.

N> Reduced risk of logical errors?

That too.

Perhaps it is better to describe our language. Let me say right away that it is declarative. You cannot even create objects explicitly in it, and the types of objects are rigidly restricted to AST branches (symbols will also be supported in the near future). So do not look for anything general-purpose in it. It is a simple language that should drastically simplify computations over an AST (typing). The general idea of the language we ended up with is that we have properties that can be assigned exactly once. Properties fall into three groups:

1. In-properties. They can be set from outside the object.
2. Out-properties. They can be set from within the object.
3. Inout-properties. In essence, they form a pair of properties (an in and an out), but in many places they are interpreted and processed together.

Properties are dependent. If one property depends on the value of another, and that one has not been computed yet, the computation of the dependent property is postponed until all its dependencies have been computed. And dependencies may run between properties of different AST nodes.
The language has collections of AST elements of a given type, which describe the structure of the current node. If the collection elements have properties, the collection has properties with the same names. These allow lists to be processed without loops, folds, and other rigmarole that complicates the code. And such pseudo-properties are dependent as well. Here is how properties are projected onto lists:

1. If the collection element has an in-property, the list has a property with the same name and type. Assigning this property assigns the value to the corresponding property of every object stored in the list.
2. If the collection element has an out-property, the list has a list-typed property with the same name, whose element type corresponds to the type of the stored object's property.
3. If the collection element has an inout-property, the list has an inout-property with the same name and type. When a value is assigned to the in-part of the property, that value is assigned to the in-part of the property of the first object stored in the collection. When the value of the out-part of the first object's property becomes available, it is placed into the in-part of the second object's property, and so on, until the last object of the collection is reached. The out-part of the last collection object's property is returned as the value of the out-part of the whole collection's property. Thus the computation is threaded through all the objects of the collection, which resembles a left-to-right fold in functional languages. We did not need a right-to-left fold, but it would not be hard to make.

Here is binding code for C#, written in our DSL based on dependent properties:

```
ast Reference
{
  in  Scope  : Scope;
  out Symbol : Symbol2 = Scope.TryBind(this);
}

asts QualifiedReference
{
  stage 1:
    in  Arity  : int = 0;
    in  Scope  : Scope;
    out Symbol : Symbol2;

  | Simple
    {
      Name.Scope = Scope;
      Symbol = Utils.TryResolveTypeOverload(Name.Symbol, Arity);
      Name : Reference; // Structural property. Can contain other dependent properties.
    }
  | Aliased
    {
      Name.Scope = Scope;
      Symbol = Utils.TryResolveTypeOverload(Name.Symbol, Arity);
      Alias : Reference;
      Name  : Reference;
    }
  | Qualified
    {
      Qualifier.Arity = 0;
      Qualifier.Scope = Scope;
      Name.Scope = Qualifier.Symbol.Scope;
      Symbol = Utils.TryResolveTypeOverload(Name.Symbol, Arity);
      Qualifier : QualifiedReference;
      Name      : Reference;
    }
  | Generic
    {
      Arguments.Arity = 0; // initializes the Arity property of every object in the list
      Arity = Arguments.Count;
      QualifiedName.Scope = Scope;
      Arguments.Scope = Scope;
      Symbol = QualifiedName.Symbol;
      QualifiedName : QualifiedReference;
      Arguments     : QualifiedReference*;
    }
}
```

Objects in such a language can be created uninitialized and then initialized in the course of the computations. And there is no need to fear that someone will access a not-yet-initialized property: that simply cannot happen, since such calls are postponed until the values of the dependencies have been computed.
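Rule 3 above, threading an inout-property through a collection, is essentially a left fold. A hypothetical Python sketch (the `Node`/`thread_inout` names are invented; the real DSL wires this automatically):

```python
from functools import reduce

class Node:
    # Stands in for an AST element with an inout "scope" property.
    def __init__(self, name):
        self.name = name

    def compute_out(self, scope_in):
        # Each node consumes the in-part and produces an out-part;
        # here it simply extends the incoming scope with its own name.
        return scope_in + [self.name]

def thread_inout(nodes, scope_in):
    """Wire out(i) into in(i+1); return the out-part of the last node."""
    for node in nodes:
        scope_in = node.compute_out(scope_in)
    return scope_in

nodes   = [Node("a"), Node("b"), Node("c")]
chained = thread_inout(nodes, [])
# The same computation written as an explicit left fold:
folded  = reduce(lambda acc, n: n.compute_out(acc), nodes, [])
```

The chained version and the explicit fold produce the same result, which is the sense in which the collection projection replaces a foldl without the user writing one.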

#### Re: Laziness vs. Calculation when the data is accessible

Hello, ionoy, you wrote:

I> If you manage to make a DSL out of this, it will be mighty cool.

What do you mean, "if"? It is already made. It is just still narrowly specialized for now: we use it for computations over ASTs. Naturally, under the hood there are no IObservable, no closures, and none of that memory expenditure. Per object there is only one extra bit field. The computation code looks ordinary:

```
declarations GenericType : Type
{
  out Symbol : GenericTypeSymbol;

  stage 1:
    out BodyScope : Scope;

  Modifiers.FlagsIn = Modifiers.None;
  TypeParameters.IndexIn = 0;
  TypeParameters.PrevTypeParameters = Utils.TryGetPrevTypeParameters(this);
  Members.Parent = Symbol;
  BodyScope = Symbol.MakeBaseTypesScope(this.Scope);
  TypeBase.Scope = BodyScope;
  TypeParameterConstraints.Scope = BodyScope;
  Members.TypeScope = BodyScope;
}
```

#### Re: Laziness vs. Calculation when the data is accessible

Hello, Evgeny.Panasyuk, you wrote:

EP> And what is a predictable execution order for?

At the very least, for that same debugging.

EP> Code optimizers rearrange instructions as they see fit, as long as the semantics do not change.
EP> Moreover, in C++, for example, the order of evaluation of function arguments is unspecified: f(foo(), bar()).

Aha. Then it is unclear why, in its successors (C# and Java), the evaluation order has been precisely defined in the specification?

#### Re: Laziness vs. Calculation when the data is accessible

Hello, WolfHound, you wrote:

WH> Hello, dsorokin, you wrote:
D>> Will the fixed-point combinator still be expressible?
WH> We need the ride, not the taxi checkers. And we need to ride quickly. The task here is purely practical.

Recursive computations are exactly what is very convenient in practice. More specifically, the so-called recursive do notation. I use it myself for concise definitions of ordinary differential equations, for example. At the heart of the implementation of recursive computations lies the idea of the fixed-point combinator; it is a generalization of that idea, and it is implemented through laziness. As usual, Haskell implements the general mechanism, unlike the simplified Scala and F#, where something similar also exists. But these are all particulars.

Now something more fundamental. Does the "new" computation strategy imply that the language must be referentially transparent? If yes, that inevitably leads to the requirement that the language must also be pure. Or is referential transparency left by the wayside?

#### Re: Laziness vs. Calculation when the data is accessible

Hello, VladD2, you wrote:

VD> So, what if instead of the lazy model, in which data availability is driven by demand, we apply a model in which computation is driven by the availability of the input data?

The closest analogue is out-of-order execution in CPUs.

VD> However, since computations run as soon as their dependencies become available, there are no problems with side effects, debugging, etc.

To me it is the other way round. If in a lazy language the order of computation is dictated by the data (computing a*b, we recursively descend into computing a and b), then in such a language, upon computing x, we suddenly start computing hundreds of variables that depend on it, and in ascending order at that. Will that really be clearer to anyone? The same goes for side effects: there are no technical problems with them in lazy languages; they are forbidden for ideological reasons, so as not to mix up the concepts of computation and execution. Finally, the implementation (as far as I can invent one) requires creating, for each variable, a dynamic list of the variables that directly depend on it, plus a dynamically allocated lambda for the computation and an "already computed" flag. It does not look more efficient than laziness on the whole: compared to laziness it comes out less convenient for debugging, it requires more computation (we compute everything that is possible instead of everything that is necessary), the order of computation is just as undefined (one must distinguish the determinism of the model from that of a specific implementation), and the implementation will likely be less efficient. But it is original, on which I congratulate you.
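The implementation described in the last paragraph can be sketched directly (a hypothetical Python sketch with invented names): each cell keeps the list of cells depending on it, a thunk, and an "already computed" flag, and assigning a value pushes computation forward through the dependents.

```python
class Cell:
    def __init__(self, name, thunk=None, deps=()):
        self.name, self.thunk, self.deps = name, thunk, list(deps)
        self.value, self.computed = None, False
        self.dependents = []            # dynamic list of directly dependent cells
        for d in self.deps:
            d.dependents.append(self)

    def set(self, value):
        self.value, self.computed = value, True
        for dep in self.dependents:
            dep.try_compute()           # push: fire anything that is now ready

    def try_compute(self):
        # Run the thunk only once, and only when every dependency is available.
        if not self.computed and all(d.computed for d in self.deps):
            self.set(self.thunk(*[d.value for d in self.deps]))

# The A = B.C + D; B.C = D; D = 42 example from the start of the thread.
d  = Cell("D")
bc = Cell("B.C", thunk=lambda dv: dv, deps=[d])
a  = Cell("A", thunk=lambda cv, dv: cv + dv, deps=[bc, d])
d.set(42)                               # triggers B.C, which in turn triggers A
```

This makes the trade-off visible: every variable carries a dependents list, a closure, and a flag, which is exactly the per-variable overhead the post is pointing at.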