#### Topic: A paradox of information entropy

Take a file compressed very well (in the limit, ideally) by an archiver. The contents of this compressed file can be viewed as a bit string S of length N. "Good compression" means that the information entropy of the distribution of bits in this string is strongly minimized (ideally, reduced to the minimum value corresponding to the "amount of information" in the source file that was compressed).

Low entropy means, in particular, that if we cut the string S into consecutive "words" of equal length M and build a frequency dictionary of these words, then not only will every possible "word" of length M appear in this dictionary, but the variance of the word frequencies will also be minimal. That is, essentially smaller than the variance of frequencies in a similarly prepared random bit string of the same length N.

One can say that the frequencies of the "words" in S are "well aligned".
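As a small sketch of this "alignment" check (the function names and the choice of population variance over word counts are illustrative, not from the original):

```python
from collections import Counter
from statistics import pvariance

def word_frequencies(bits: str, m: int) -> Counter:
    """Cut the bit string into consecutive words of length m and count them."""
    return Counter(bits[i:i + m] for i in range(0, len(bits) - m + 1, m))

def frequency_variance(bits: str, m: int) -> float:
    """Population variance of word counts; words that never occur count as 0."""
    counts = word_frequencies(bits, m)
    all_words = [counts.get(format(w, f"0{m}b"), 0) for w in range(2 ** m)]
    return pvariance(all_words)
```

A highly repetitive string such as `"01" * 64` concentrates all its mass on one word, giving a large variance, while a string whose M-bit words are evenly spread gives a variance near zero.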

Now take a pseudo-random number generator, choose an arbitrary seed (one whose bit representation is not too long), and, from the sequence of values the generator returns starting at that seed, construct a (pseudo-)random string Z of the same length N.
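A minimal sketch of this step, assuming Python's Mersenne Twister (`random.Random`) as a stand-in for the unspecified generator:

```python
import random

def pseudo_random_bits(seed: int, n: int) -> str:
    """Deterministically expand a short seed into an n-bit pseudo-random string.
    random.Random (Mersenne Twister) stands in for the generator assumed here."""
    rng = random.Random(seed)
    return "".join(str(rng.getrandbits(1)) for _ in range(n))
```

The key property used below is determinism: the same seed always reproduces the same string Z.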

Now add S and Z (bitwise). The resulting string R will, generally speaking, be (pseudo-)random, just like Z; that is, the frequencies of the "words" occurring in it will not be aligned at all. This means that, running R through the same good archiver, we can compress it further. And there is good reason to hope that the "gain" (in bits) from this compression will exceed the bit length of the seed, and perhaps (especially when N is large) even exceed the bit length of our generator algorithm, written down in some "economical" byte-code...
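A sketch of this step, under two labeled assumptions: the "addition" is taken to be bitwise XOR (mod-2 addition), and `zlib` at maximum level stands in for the "good archiver":

```python
import zlib

def xor_bits(a: str, b: str) -> str:
    """Bitwise (mod-2) addition of two equal-length bit strings."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

def compressed_size(bits: str) -> int:
    """Bytes after packing the bit string and compressing with zlib level 9.
    zlib is an illustrative stand-in for the archiver in the text."""
    packed = int(bits, 2).to_bytes((len(bits) + 7) // 8, "big")
    return len(zlib.compress(packed, 9))
```

Comparing `compressed_size(r)` against `compressed_size(s)` plus the seed's bit length is exactly the accounting the argument relies on.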

Finally, it should be said explicitly that, knowing only the seed, we can always regenerate the string Z and, by subtracting it from R, recover the original S.
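The recovery step can be sketched as a round trip (toy seed and S; `random.Random` again stands in for the generator; with XOR, "subtracting" Z is the same operation as adding it):

```python
import random

def bits_from_seed(seed: int, n: int) -> str:
    """Regenerate the same n pseudo-random bits from the seed."""
    rng = random.Random(seed)
    return "".join(str(rng.getrandbits(1)) for _ in range(n))

def xor(a: str, b: str) -> str:
    """Mod-2 addition; its own inverse, so it also serves as subtraction."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

seed = 2024                              # hypothetical short seed
s = "1011001110001111" * 8               # toy stand-in for the compressed file S
z = bits_from_seed(seed, len(s))         # regenerated pseudo-random string Z
r = xor(s, z)                            # the "randomized" string R
recovered = xor(r, bits_from_seed(seed, len(s)))  # subtract Z again
assert recovered == s                    # the original S comes back exactly
```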