1

Topic: About one paradox of information entropy

Take a file that has been compressed very well (in the limit, ideally) by an archiver. The contents of this compressed file can be regarded as a bit string S of length N. "Good compression" means that the information entropy of the distribution of bits in this string is strongly minimized (ideally, brought down to the minimum value corresponding to the "amount of information" in the source file that was compressed).
Low entropy means, in particular, that if we cut the string S under consideration into consecutive "words" of equal length M and compile a frequency dictionary of these words, then not only will every possible "word" of length M be present in this dictionary, but the variance of the word frequencies will also be minimal. That is, substantially smaller than the variance of frequencies in a similarly prepared random bit string of the same length N.
One could say that the frequencies of the "words" in S are "well aligned".
Now take a pseudo-random number generator, choose an arbitrary starting value (not too long in its bit representation), and from the sequence of values it returns, starting from that seed, construct
a (pseudo-)random string Z of the same length N.
Now add S and Z together. The resulting string R will, generally speaking, be just as (pseudo-)random as the string Z; that is, the frequencies of the "words" occurring in it will not be aligned at all. And this means that, by running it (R) through the same good archiver, we can compress it further. There are also good chances that the "saving" (in bits) from this compression will exceed the bit length of the seed value, and perhaps (especially if N is large) even exceed the bit length of our generator algorithm written down in some "economical" byte-code...
Finally, let us state explicitly that, knowing only the seed, we can always regenerate the string Z and, by subtracting it from R, recover the original S.
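A minimal sketch of this construction, for anyone who wants to try it. Everything concrete here is my own choice: Python's random.Random as the generator, zlib as the "good archiver", bitwise XOR as the "addition" of S and Z, and the file name is a placeholder.

    import random, zlib

    with open("well_compressed.bin", "rb") as f:    # S: some already well-compressed file (placeholder name)
        S = f.read()
    N = len(S)

    seed = 123456                                   # the short starting value
    rng = random.Random(seed)
    Z = bytes(rng.getrandbits(8) for _ in range(N)) # (pseudo-)random string Z of length N

    R = bytes(a ^ b for a, b in zip(S, Z))          # R = S "plus" Z, taken here as XOR

    saving = N - len(zlib.compress(R, 9))           # does R compress to fewer than N bytes?
    print("saving (bytes):", saving, "| seed costs about", (seed.bit_length() + 7) // 8, "bytes")

    # Knowing only the seed, Z can be regenerated and S recovered from R:
    rng2 = random.Random(seed)
    Z2 = bytes(rng2.getrandbits(8) for _ in range(N))
    assert bytes(a ^ b for a, b in zip(R, Z2)) == S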

2

Re: About one paradox of information entropy

Something is off here. A well-compressed sequence is precisely one that looks as much as possible like a random one.

3

Re: About one paradox of information entropy

Let me say right away: I do not see a paradox here - one can only hope for one.
In general, of course, if over a series of experiments there is a joint density F(S and Y), it may be quite different and not tied to the individual densities (speculatively speaking).
On the other hand, the following has existed for a long time:
We build dictionary 1 from the "text", cutting it into slices of bounded length (an analog of the "words" above).
We build a text of references into dictionary 1.
We build dictionary 2 from that text.
We build a text of references into dictionary 2.
......
On the first iterations we get a quasi-combinatorial explosion.
Further on, something like a triangle. Its height and fullness depend on the initial "text": the whiter the "noise", the taller and wider the triangle; the more repetition there is, the lower and thinner it becomes.
The last "text" = a single reference. There is your compression. And whether that is a paradox - who knows.
I put "text" in quotes, since in a computer everything can be reduced to a sequence over {0,1}.

4

Re: About one paradox of information entropy

Barlone;
In a random sequence the frequencies of the "words" follow some Poisson distribution (over the words). Which means there are words that occur (noticeably?) more often, or more rarely, than the average.
And prefix coding, for example, is specifically in the business of aligning the word frequencies (as well as it can). Taking lengths into account, of course: all 8-bit words should ideally have frequency 1/256, and 10-bit words 1/1024...
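A small empirical check of the Poisson claim is easy to do; M = 8 and the sample size below are my choices (strictly speaking, the count of each word in a random string is binomial, which for these parameters is close to Poisson):

    import math, os
    from collections import Counter

    n_words, alphabet = 4096, 256                 # 4096 random 8-bit "words"
    counts = Counter(os.urandom(n_words))         # occurrences of each byte value
    lam = n_words / alphabet                      # expected count per word (16 here)

    spread = Counter(counts[w] for w in range(alphabet))  # how many words occur exactly k times
    for k in sorted(spread):
        poisson = alphabet * math.exp(-lam) * lam ** k / math.factorial(k)
        print(f"count {k:3d}: observed {spread[k]:3d}   Poisson-expected {poisson:5.1f}")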

5

Re: About one paradox of information entropy

FXS wrote:

. "Good compression" means that information entropy of allocation of bits this line is strongly minimized

On the contrary: maximized.
Ivan, you really ought to read something about all this at some point.

6

Re: About one paradox of information entropy

S.G.;
Yes, thanks. Attention all: "entropy is minimized" above should be read as "entropy is maximized".

7

Re: About one paradox of information entropy

S.G.;
I am just a physicist by original training. And there, in thermodynamics, entropy increases spontaneously, and to reduce it you have to put in special effort.

8

Re: About one paradox of information entropy

FXS wrote:

In a random sequence the frequencies of the "words" follow some Poisson distribution (over the words)

Why would they? Uniformly.

9

Re: About one paradox of information entropy

In sense "uniformly" it as?

10

Re: About one paradox of information entropy

https://ru.wikipedia.org/wiki/__

11

Re: About one paradox of information entropy

Barlone;
The words themselves - uniformly. But their frequencies (that is, the counts of identical words in the sample) - yes, Poisson.

12

Re: About one paradox of information entropy

FXS wrote:

Attention all: "entropy is minimized" to read (above) as "entropy is maximized".

After perusal "paradox" absolutely disappeared.

13

Re: About one paradox of information entropy

Sokolinsky Boris;
So one could recode words of length 15 whose frequencies lie in the right tail of the Poisson (their own, 15-bit-alphabet Poisson) into words of length 14 with frequencies from the left tail of the Poisson (their own, 14-bit-alphabet Poisson).
And the other way around too, i.e. mutually?

14

Re: About one paradox of information entropy

How should this be read:

FXS wrote:

(their own, 14-bit-alphabet Poisson) ?

15

Re: About one paradox of information entropy

By the way, I use a mnemonic: an increase of entropy is read as an increase of chaos, since it leads to white noise. Any structuring leads to a reduction of entropy.
I suppose that alignment as an end in itself is needed only for encryption; for compression it is the total that matters. Alignment alone will simply not amount to a compressed code. I do not remember - in Huffman, it seems, just that is used.
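For reference, a minimal Huffman sketch (written from memory, so treat it as an illustration rather than a reference implementation): the more frequent a symbol, the shorter the code it gets, which is the "alignment with lengths taken into account" mentioned earlier.

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Heap entries look like [subtree frequency, tie-breaker, [symbol, code], ...]
        heap = [[f, i, [sym, ""]] for i, (sym, f) in enumerate(Counter(text).items())]
        heapq.heapify(heap)
        tie = len(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[2:]:
                pair[1] = "0" + pair[1]    # extend codes in the lighter subtree with 0
            for pair in hi[2:]:
                pair[1] = "1" + pair[1]    # ... and in the heavier subtree with 1
            tie += 1
            heapq.heappush(heap, [lo[0] + hi[0], tie] + lo[2:] + hi[2:])
        return {sym: code for sym, code in heap[0][2:]}

    print(huffman_codes("abracadabra"))    # 'a', the most frequent symbol, gets the shortest code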

16

Re: About one paradox of information entropy

Well, and now to the paradox itself. It is in people's heads. Because everything was reduced to optimal "transmission" at the level of the communication channel - moreover, probably with the implication that the algorithm need not be transmitted along with the code, and that this algorithm is presumably a static thing.
We have long had global networks + a network stack + the possibility of ... There has long been no need to look at ... as at an icon any more.
I.e. optimality is always ... Rather ...

17

Re: About one paradox of information entropy

By the way, on the subject of maximizing/minimizing entropy: of two messages of identical length, which will be compressed better (by a "good" archiver) - the one containing more information? Or, on the contrary, the one containing less?

18

Re: About one paradox of information entropy

In the information-theoretic sense, I suppose that, on the whole, it is where the specific ... is higher.
If so, how much of it is there in the well-known phrase?
"Do not believe anyone who scares you with bad weather in Switzerland. It is very sunny and warm here."
And in the deformed one?
"... In Limpopo. There it is HooooLoooooooDNooooooo." (a stretched-out "cold")

19

Re: About one paradox of information entropy

exp98, so the message in which the specific ("per unit of length") ... is higher will compress better - did I understand you correctly?
And how is that connected with the question of the comparative compressibility (amenability to compression) of strings (messages) of the kind you gave - "There it is HooooLoooooooDNooooooo" versus "There it is very cold", for example?
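For what it is worth, this particular pair is easy to check empirically; zlib as the "archiver" is my choice, and at such short lengths the format overhead dominates, so only the relative sizes mean anything:

    import zlib

    plain    = "There it is very cold."
    deformed = "There it is HooooLoooooooDNooooooo."

    for s in (plain, deformed):
        raw = s.encode("utf-8")
        print(len(raw), "->", len(zlib.compress(raw, 9)), repr(s))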

20

Re: About one paradox of information entropy

Boris wrote:

Barlone;
The words themselves - uniformly. But their frequencies (that is, the counts of identical words in the sample) - yes, Poisson.

Ah, well, that is actually true.
Only it does not help. No archiver compresses a random sequence.

21

Re: About one paradox of information entropy

FXS wrote:

Now take a pseudo-random number generator, choose an arbitrary starting value (not too long in its bit representation), and from the sequence of values it returns, starting from that seed, construct
a (pseudo-)random string Z of the same length N.
Now add S and Z together. The resulting string R will, generally speaking, be just as (pseudo-)random as the string Z; that is, the frequencies of the "words" occurring in it will not be aligned at all. And this means that, by running it (R) through the same good archiver, we can compress it further. There are also good chances that the "saving" (in bits) from this compression will exceed the bit length of the seed value, and perhaps (especially if N is large) even exceed the bit length of our generator algorithm written down in some "economical" byte-code...

It will not compress. The result will also be close to a random set of bits.
You can check it yourself - unlike your other algorithms, this one is simple to implement:
1. Take any compressed file. i = 0.
2. Initialize the generator with the value i.
3. Combine the file with the generated sequence (bitwise).
4. Zip the result (for example with ZIP).
5. If the result is smaller than the best found so far, remember it as the new best.
6. If i < ... then i++ and go to step 2.
I think there will be solutions slightly smaller than the original, but the reduction will be within 1%.
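A direct implementation of these six steps, for anyone who wants to run the experiment; the generator (random.Random), "ZIP" = zlib, the bitwise XOR in step 3, the seed limit and the file name are all my concretizations:

    import random, zlib

    with open("some_compressed_file.bin", "rb") as f:       # step 1: any compressed file
        S = f.read()

    best_size, best_seed = len(zlib.compress(S, 9)), None   # baseline: just re-zip S itself
    for i in range(100000):                                 # steps 1/6: i = 0 .. limit
        rng = random.Random(i)                              # step 2: seed the generator with i
        Z = bytes(rng.getrandbits(8) for _ in range(len(S)))
        R = bytes(a ^ b for a, b in zip(S, Z))              # step 3: combine the file with the stream
        size = len(zlib.compress(R, 9))                     # step 4: zip it
        if size < best_size:                                # step 5: remember the best result found
            best_size, best_seed = size, i

    print("original:", len(S), "best:", best_size, "at seed", best_seed)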

22

Re: About one paradox of information entropy

FXS wrote:

So one could recode words of length 15 whose frequencies lie in the right tail of the Poisson (their own, 15-bit-alphabet Poisson) into words of length 14 with frequencies from the left tail of the Poisson (their own, 14-bit-alphabet Poisson).

That is the same as archiving repeatedly.

FXS wrote:

By the way, on the subject of maximizing/minimizing entropy: of two messages of identical length, which will be compressed better (by a "good" archiver) - the one containing more information? Or, on the contrary, the one containing less?

"Information" is an ambiguous term.

23

Re: About one paradox of information entropy

Ivan FXS, here is a thought: if you are right, i.e. your idea works, then everything can be compressed indefinitely. Well, at least by a factor of tens. For example, the Windows distribution could be compressed down to 1 MB, or even just to 100 MB; I think for the sake of that MS would run such a search for a couple of months.
Conclusion: if it worked, it would already be in use.

24

Re: About one paradox of information entropy

Dima T;
Therefore also "paradox".

25

Re: About one paradox of information entropy

Boris wrote:

skipped...
That is the same as archiving repeatedly.

The problem is how, with such a recoding, to manage without a "translation dictionary" - or with a dictionary whose length does not eat up the whole compression gain.

Boris wrote:

"Information" is an ambiguous term.

I sense a catch: it turns out that a completely random bit sequence cannot be compressed, even though (and here, personally, my hope still flickers) it contains an extremely large amount of information!