26

Re: Dispersion of length of archive?

Dima T;
We are back at the beginning of the topic - at the question of whether there exist compression algorithms that allow one to noticeably raise the level of compression of bit sequences* by performing a large volume of ('hard') computations** over them?
That is, your 'very quickly, and as a result the total time comes out minimal' is the opposite of what I need.
____________________
* = "Bit sequences a la transaction " - I could tell, very feeblly understanding, however, that it for "a la".
** = Including any searches of variants; including, for example, "combinatorial" searches of variants of layout of bit subsequences in this bit sequence (as the unit creator has a lot of freedom in how to arrange transactions in the unit created by him; and even there is freedom in what transactions to include in the unit created by it).

27

Re: Dispersion of length of archive?

FXS wrote:

That is, your 'very quickly, and as a result the total time comes out minimal' is the opposite of what I need.

Decide what you actually need: to compress, or to transfer what is compressed. Doesn't even a modest 1-gigabit link take the edge off the question "do we need to compress at all?"

28

Re: Dispersion of length of archive?

Dima T;
I need to fit into a block of size 2 MB (a size which, for some reason, the community refused - with a scandal! - to move to literally a month ago) substantially more transactions than can be fit into it at present.

29

Re: Dispersion of length of archive?

FXS wrote:

S.G.;
What bothers me is that the computations performed (in Bitcoin) to satisfy the "difficulty requirement" are useless IN THEMSELVES. And I propose replacing them with computations useful IN THEMSELVES, namely computations that strongly compress the block.

And how exactly are compression computations useful? :) In this task they are useful exactly as much as hash computation is.

FXS wrote:

What is proposed to be compressed is the bit representation of the block in which it (the block) is propagated over the Internet.

Though it makes no difference.
Compression will not do here, and here is why:
- The only way to find out what hash given input data has is to compute it. The reverse task - recovering the input data from its hash - is practically impossible, i.e. you would still have to enumerate the entire space of inputs. That is, we cannot simply pick a hash that is 'good' for us: we must also know (and show everyone) the data that produces it. Thus we get the hard task of mining gold - sorry, bitcoins. Why we need a hard task, I have already written.
- Compression, on the other hand, is an easy problem. It is known that structured data (low entropy) compresses well, while unstructured, random data (high entropy) compresses badly. Given the data, one can estimate roughly, without compressing it, whether it is 'good' data or not. Moreover, given any input data, one can construct a recursive archive with a huge compression ratio (true, it emits the original bytes many times over, but it formally satisfies the high-compressibility condition).
So no, compression will not do.
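A minimal sketch of that second point (Python; the helper name and test data are mine, not from the thread): a cheap byte-entropy estimate predicts compressibility without running the compressor, and random data barely compresses at all.

import math
import os
import zlib
from collections import Counter

def byte_entropy(data):
    # Shannon entropy of the byte histogram, in bits per byte (0..8).
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

structured = b"abcabcabc" * 10000   # low entropy: repetitive, compresses well
random_ish = os.urandom(90000)      # high entropy: ~8 bits/byte, barely shrinks

for name, data in [("structured", structured), ("random", random_ish)]:
    ratio = len(data) / len(zlib.compress(data, 9))
    print(name, round(byte_entropy(data), 2), "bits/byte,",
          round(ratio, 1), "x with zlib-9")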

30

Re: Dispersion of length of archive?

FXS wrote:

We are back at the beginning of the topic - at the question of whether there exist compression algorithms that allow one to noticeably raise the level of compression of bit sequences* by performing a large volume of ('hard') computations** over them?

Compression algorithms settled down some 20 years ago, and it seems nothing new has been discovered since then.

31

Re: Dispersion of length of archive?

FXS wrote:

Dima T;
I need to fit into a block of size 2 MB (a size which, for some reason, the community refused - with a scandal! - to move to literally a month ago) substantially more transactions than can be fit into it at present.

That no longer has anything in common with hash computation - with those PoW and PoC schemes that were discussed above.

32

Re: Dispersion of length of archive?

S.G. wrote:

- The only way to find out what hash given input data has is to compute it. The reverse task - recovering the input data from its hash - is practically impossible, i.e. you would still have to enumerate the entire space of inputs. That is, we cannot simply pick a hash that is 'good' for us: we must also know (and show everyone) the data that produces it. Thus we get the hard task of mining gold - sorry, bitcoins. Why we need a hard task, I have already written.

-- I hope you understand (and simply aren't saying it out loud) that this hard task - which is exactly what 'mining' in Bitcoin reduces to - is needed there ONLY to limit the rate at which new blocks arrive in the system, and for nothing else.
(And I am not at all sure that Nakamoto foresaw what tera-hash capacity the greedy Chinese miners would spin up in Bitcoin by the end of its first decade...)
Hash-with-a-large-number-of-leading-zeros is Nakamoto's implementation of this task.
But, generally speaking, other implementations of this task are probably possible too. I propose considering (over-)strong compression as one of them. (Where the prefix "over-" is added as an advance payment - given the greed of the Chinese miners already known to us by now.)
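For what it's worth, a toy sketch of that rate-limiter (Python; the header layout and the difficulty value are illustrative, not Bitcoin's real format): finding the nonce takes many hashes, checking it takes one.

import hashlib

def pow_hash(header, nonce):
    # Double SHA-256 over header plus nonce, read as a big integer.
    raw = header + nonce.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(hashlib.sha256(raw).digest()).digest(), "big")

def mine(header, bits):
    # Grind nonces until the hash has `bits` leading zero bits: the hard part.
    target = 1 << (256 - bits)
    nonce = 0
    while pow_hash(header, nonce) >= target:
        nonce += 1
    return nonce

def verify(header, nonce, bits):
    # One hash suffices to check the work: the easy part.
    return pow_hash(header, nonce) < (1 << (256 - bits))

nonce = mine(b"toy block header", 16)   # ~65,000 attempts on average
print(nonce, verify(b"toy block header", nonce, 16))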

33

Re: Dispersion of length of archive?

Next Friday I will bring up an argument about process optimization.
But I am afraid it is no longer on topic. And besides, it would be unclear which clients are being discussed.
And which currencies.

34

Re: Dispersion of length of archive?

S.G. wrote:

Compression, on the other hand, is an easy problem.

- And yet I open the Zip archiver settings tab in Total Commander, and there the "compression level" is to be selected from a range of 0 to 9, with 1 labeled as "fast"... Can you tell me why?
And if 1 is "fast", then 9, presumably, is "slow"?... And why is it slow - because the processor simply works lazily?

35

Re: Dispersion of length of archive?

By the way (in brainstorming mode): why not consider, as another implementation of this task, for example, the task of training a neural network? After all, training an NN is hard (and training it well, all the more so). And checking that the network is well trained is easy.
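To make the asymmetry concrete, a toy sketch (Python/numpy; the XOR task, architecture, step count, and tolerance are all invented for illustration): training takes thousands of gradient steps, while verifying the published weights is a single forward pass.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def forward(W1, b1, W2, b2, X):
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))   # logistic output unit

# The hard part: thousands of full-batch gradient steps.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    d_out = out - y                    # gradient for logistic output + cross-entropy
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    W1 -= 0.1 * dW1; b1 -= 0.1 * db1
    W2 -= 0.1 * dW2; b2 -= 0.1 * db2

# The easy part: one forward pass verifies the published weights.
def well_trained(W1, b1, W2, b2, tol=0.1):
    return bool(np.all(np.abs(forward(W1, b1, W2, b2, X) - y) < tol))

print("well trained:", well_trained(W1, b1, W2, b2))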

36

Re: Dispersion of length of archive?

FXS wrote:

(snipped...)
- And yet I open the Zip archiver settings tab in Total Commander, and there the "compression level" is to be selected from a range of 0 to 9, with 1 labeled as "fast"... Can you tell me why?
And if 1 is "fast", then 9, presumably, is "slow"?... And why is it slow - because the processor simply works lazily?

What of it?
Surely it is clear that the higher the compression level, the longer (i.e. the slower) the compression process will run...
It is all perfectly logical.
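That much is easy to confirm firsthand - a quick sketch (Python's zlib; the test data is arbitrary and the exact numbers vary by machine):

import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 200000   # ~9 MB of text

for level in (1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print("level", level, ":", len(out), "bytes in", round(dt, 3), "s")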

37

Re: Dispersion of length of archive?

d7i;
So a high level of compression is, after all, not as "easy a problem" as a low level of compression?

38

Re: Dispersion of length of archive?

FXS wrote:

By the way (in brainstorming mode): why not consider, as another implementation of this task, for example, the task of training a neural network? After all, training an NN is hard (and training it well, all the more so). And checking that the network is well trained is easy.

Training is not the hard part. An NN is just a box with inputs and outputs.
What is hard is this:
- Choosing the network architecture (a two-layer NN with logistic activation of the output is the classic, but there are others - Grossberg, etc.) and justifying why that particular one.
- Defining what the inputs are.
- Defining a representative sample for training and validation (this is where most people cheat: they hand-tune the network, "fitting it to the answer").
- Giving meaning to the values at the output. This, too, is essentially tied to all the previous points.

39

Re: Dispersion of length of archive?

FXS wrote:

(snipped...)
- And yet I open the Zip archiver settings tab in Total Commander, and there the "compression level" is to be selected from a range of 0 to 9, with 1 labeled as "fast"... Can you tell me why?
And if 1 is "fast", then 9, presumably, is "slow"?... And why is it slow - because the processor simply works lazily?

1..9 is most likely a set of preset vectors for different algorithms.
There is no linear dependence between the digits. Most likely the authors simply picked them empirically and decided they would serve as rough "depth levels" of the archiving process.
In Zip everything is governed by the size of the dictionary for the LZW algorithm, and possibly other processing phases around the dictionary are involved as well.
If you look into the settings of 7zip and WinRAR, there are whole families of algorithms there with different settings. RAR is perhaps the richest in settings.
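The dictionary-size effect is easy to demonstrate - a sketch (Python's zlib, where wbits sets the LZ77 window, i.e. the "dictionary"; the test data is constructed so that the only matches lie 2 KB apart):

import os
import zlib

chunk = os.urandom(2048)   # incompressible on its own
data = chunk * 128         # but the stream repeats with a 2 KB period

for wbits in (9, 12, 15):
    co = zlib.compressobj(9, zlib.DEFLATED, wbits)
    out = co.compress(data) + co.flush()
    # A 512-byte window (wbits=9) cannot reach the previous copy; 4 KB and 32 KB can.
    print("window", 1 << wbits, "bytes ->", len(out), "compressed bytes")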

40

Re: Dispersion of length of archive?

Are we seriously debating the question of whether strong compression is a harder task than weak compression?!

41

Re: Dispersion of length of archive?

FXS wrote:

Are we seriously debating the question of whether strong compression is a harder task than weak compression?!

They are almost identical in the computational sense :))
The difference, as mayton also noted, is in the size of the buffers (dictionaries) and in other fine details.
Accordingly, the difference in compression ratio and in compression time is a couple of percent, no more.
And, I repeat, there is no magic compression algorithm that would think long and hard and compress a 10 GB film into a couple of kilobytes.
If you are interested in what a "challenging task" is (meaning really hard), read up on public-key cryptography. You know, the task of finding the factors of a number, say, 1000 digits long, when it is the product of two primes.
Compared to that, compression is like assembling a child's 6-piece jigsaw puzzle.

42

Re: Dispersion of length of archive?

S.G.;
1. I am not a professional in data-compression matters.
2. I do not need compression by a factor of 5,000,000 ("10 GB... into a couple of kilobytes"). A factor of 10 will be enough for me; 100 is a fantasy.
3. "The rating leaders - PAQ8PX and WinRK - beat 7-Zip in compression by 28% and 24% respectively, but spend far more time on packing [10]." (https://ru.wikipedia.org/wiki/7-Zip)

43

Re: Dispersion of length of archive?

FXS wrote:

3. "Leaders of a rating - PAQ8PX and WinRK - exceed 7-Zip in compression on 28 % and 24 % accordingly, but spend for package much more time [10]." (https://ru.wikipedia.org/wiki/7-Zip)

I think they are all based on some assumption about the nature of the data being compressed. If the assumption is wrong, these archivers are useless.

44

Re: Dispersion of length of archive?

mayton;
Well, they try the "models" available to them, and whichever one works for the file being compressed is the one they use. What is bad about that?

45

Re: Dispersion of length of archive?

FXS wrote:

S.G.;
1. I am not a professional in data-compression matters.
2. I do not need compression by a factor of 5,000,000 ("10 GB... into a couple of kilobytes"). A factor of 10 will be enough for me; 100 is a fantasy.
3. "The rating leaders - PAQ8PX and WinRK - beat 7-Zip in compression by 28% and 24% respectively, but spend far more time on packing [10]." (https://ru.wikipedia.org/wiki/7-Zip)

It is all simple.
Take a few archivers, see how they compress that very block (the one you have in mind), make a comparison table, and post it here.
Let's have a look :)
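In that spirit, a starting point for such a table (Python's built-in codecs only; "block.bin" is a placeholder path for whatever block representation is meant):

import bz2
import lzma
import time
import zlib

with open("block.bin", "rb") as f:   # placeholder: any test block dump
    block = f.read()

codecs = [("zlib-9", lambda d: zlib.compress(d, 9)),
          ("bz2-9",  lambda d: bz2.compress(d, 9)),
          ("lzma",   lzma.compress)]

print("codec       bytes    ratio    time")
for name, compress in codecs:
    t0 = time.perf_counter()
    out = compress(block)
    dt = time.perf_counter() - t0
    print(f"{name:8} {len(out):>10} {len(block) / len(out):>8.2f} {dt:>7.3f}s")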

46

Re: Dispersion of length of archive?

S.G.;
Does it not matter that all existing archivers are geared toward completing the task quickly (on "weak" home machines), while what I need is compression that is exceedingly hard in terms of the amount of computation?

47

Re: Dispersion of length of archive?

Old man Shannon is laughing out loud at us.

48

Re: Dispersion of length of archive?

On the first page of this topic someone mentioned that a message transmitted over a network can be compressed very strongly if both sides (the sender and the receivers) hold an identical, sufficiently large "dictionary".
Let me point out that in blockchain networks every participant holds exactly such an identical and rather large thing - namely, the blockchain itself. It only remains to harness it as the "dictionary".
Greetings to Shannon!
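This mechanism already exists in miniature: zlib supports a preset dictionary agreed on out-of-band by both sides. A sketch (Python; the "shared" data here is just a stand-in for already-synced chain data):

import zlib

shared = b"alice->bob 5; bob->carol 2; carol->dave 1; " * 700   # ~30 KB both sides hold
message = b"alice->bob 5; bob->carol 2; carol->dave 1"

def send(msg, zdict):
    # Compress against the preset dictionary.
    co = zlib.compressobj(level=9, zdict=zdict)
    return co.compress(msg) + co.flush()

def receive(blob, zdict):
    # The receiver must hold the very same dictionary.
    dec = zlib.decompressobj(zdict=zdict)
    return dec.decompress(blob) + dec.flush()

plain = zlib.compress(message, 9)
with_dict = send(message, shared)
print(len(message), len(plain), len(with_dict))   # dictionary version is smallest
assert receive(with_dict, shared) == message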

49

Re: Dispersion of length of archive?

I am afraid that this dictionary is useless.

50

Re: Dispersion of length of archive?

... Only if one simply does not know how to cook it.