Topic: hashgraph, effective multi-master replication.
https://hashgraph.com/ The author of the algorithm makes a series of rather bold claims. Having studied the algorithm, I believe them. I have not looked at the code.

1) It works reliably as long as 2/3 of the nodes behave correctly and can exchange messages with each other. The remaining nodes can do anything, including outright sabotage. Correctness is formally proven. 2/3 honest nodes is the theoretical minimum; the 1/2 that some proponents claim is either a mistake or deliberate deception.
2) Throughput of hundreds of thousands of messages per second. It is limited only by the network and by the speed of the slowest node; and if that node chokes, the rest of the network does not even notice the loss of the straggler.
3) Latency of a fraction of a second, growing proportionally to log(N), where N is the number of nodes in the network.
4) The volume of control data needed to reach consensus is negligible compared to the payload volume.

The algorithm's interface can be reduced to a black box, a copy of which lives on every node and performs the magic. The box has an outgoing queue: the client puts into it the message it wants to send, together with the timestamp the client believes the message should carry. The network may assign the message a different timestamp. And there is an incoming queue with the following properties:

1) If one client received a message, then every client in the network received that message.
2) If a client did not receive a message, then nobody received it.
3) All clients receive messages in the same order: if one client sees m1, m2, m3, then all the others see the same sequence.
4) Every message carries an identical timestamp on all nodes; the timestamp is agreed upon by the whole network.

Given a message queue with these properties, multi-master replication becomes trivial.

My observations: the author is heavily defended against malicious attackers, which is why everything is covered with a thick layer of cryptography. But that is not always needed.
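To make the black-box interface concrete, here is a toy, single-process sketch of the abstraction described above (an outgoing queue with a proposed timestamp, and incoming queues that satisfy properties 1-4). This is not the author's protocol; the class and method names are hypothetical, and the "consensus" step is a deliberate simplification that just stands in for whatever the real network does:

```python
class TotalOrderBox:
    """Toy model of the 'black box': every replica delivers the same
    messages, in the same order, with the same network-agreed timestamp.
    All names here are hypothetical, not part of the real algorithm."""

    def __init__(self, replica_ids):
        self.replica_ids = replica_ids
        # One incoming queue per replica (property 1-4 target).
        self.inboxes = {r: [] for r in replica_ids}
        # Shared outgoing queue: (proposed_ts, sender, payload).
        self.pending = []

    def submit(self, sender, payload, proposed_ts):
        # The client enqueues a message plus the timestamp it would like.
        self.pending.append((proposed_ts, sender, payload))

    def run_consensus(self):
        # Stand-in for the real protocol: agree on one total order and
        # one timestamp per message.  Here we simply order by the
        # proposed timestamp; the real network may well assign a
        # different agreed timestamp than the client proposed.
        for ts, sender, payload in sorted(self.pending):
            agreed_ts = ts
            for r in self.replica_ids:
                self.inboxes[r].append((agreed_ts, sender, payload))
        self.pending.clear()


box = TotalOrderBox(["A", "B", "C"])
box.submit("A", "x=1", proposed_ts=10)
box.submit("B", "x=2", proposed_ts=7)
box.run_consensus()

# Properties 1-4: every replica sees the same sequence with the
# same timestamps, so applying it to local state gives replication.
assert box.inboxes["A"] == box.inboxes["B"] == box.inboxes["C"]
print(box.inboxes["A"])  # [(7, 'B', 'x=2'), (10, 'A', 'x=1')]
```

Once such a queue exists, multi-master replication really is trivial: every node applies the delivered sequence to its local copy of the state, and all copies stay identical.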
And the algorithm itself does not need it to operate. If the system runs strictly inside a trusted network, the cryptography can be thrown out, which increases throughput and reduces latency.

The quoted latency is for the case when nodes exchange messages at random. If we add pseudo-leaders to the network, i.e. especially popular nodes with which the other nodes communicate especially often (pseudo-leaders have no extra functionality besides their popularity), latency can be reduced to log(log(N)). If there is a single pseudo-leader, every other node forwards each message to it, and it forwards every message to every node, the latency becomes a constant. If the pseudo-leaders die, latency degrades back to log(N). I also suspect that the presence of pseudo-leaders speeds up consensus itself, reducing latency even further, but that needs to be checked separately. Pseudo-leaders have no effect on the other properties of the network. One must be careful, though, that the pseudo-leaders do not die under the load; with random message exchange the load is spread uniformly across the nodes.... <<RSDN@Home 1.0.0 alpha 5 rev. 0>>
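The log(N) figure for random exchange can be checked with a small simulation. The sketch below (my own toy model, not the author's code) counts synchronous rounds of push gossip until one message reaches all nodes; the round count grows logarithmically in N, which is why a single-hub topology, where every message takes exactly two hops (node to pseudo-leader, pseudo-leader to everyone), beats it with a constant:

```python
import math
import random

def push_gossip_rounds(n, seed):
    """Synchronous push gossip: in each round, every node that already
    knows the message pushes it to one uniformly random peer.  Returns
    the number of rounds until all n nodes know it; this grows on the
    order of log(n)."""
    rng = random.Random(seed)
    informed = {0}           # node 0 originates the message
    rounds = 0
    while len(informed) < n:
        rounds += 1
        for node in list(informed):
            informed.add(rng.randrange(n))  # may pick itself; harmless
    return rounds

# Average over a few seeds and compare with log2(n).  With a single
# pseudo-leader hub the latency would instead be 2 hops for any n.
for n in (64, 1024, 16384):
    avg = sum(push_gossip_rounds(n, s) for s in range(20)) / 20
    print(f"n={n:6d}  avg rounds={avg:5.1f}  log2(n)={math.log2(n):4.1f}")
```

The informed set can at most double per round, so the round count is bounded below by log2(N); the simulation shows it staying within a small factor of that as N grows, matching the log(N) latency claim for random exchange.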