A file compressor is great for shrinking stored files, but it depresses me whenever I see a file grow instead of shrinking. So what I am looking for is a file compression algorithm that never inflates any file, though some files (not all, of course!) may keep the same length after "compression". Ideally it should work on files of all sizes, but I would be satisfied with a compressor that operates only on files larger than 1MB.
Can you provide such an algorithm? No programming knowledge is required for this problem.
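(Side note, in case it helps to see the "inflation" annoyance in action: real compressors do inflate some inputs. The short Python snippet below feeds 1MB of random bytes to the standard zlib module; the choice of zlib and of random data is only an illustration, not part of the puzzle.)

import os
import zlib

# 1 MB of random bytes: essentially incompressible data.
random_data = os.urandom(1_000_000)

# zlib cannot shrink random data, and its own header/framing adds a few
# bytes, so the "compressed" output comes back slightly LARGER.
compressed = zlib.compress(random_data)

print(len(random_data), "->", len(compressed))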
Yet another proof... Assume that, for every file, the zipper either strictly shrinks it or leaves it alone (it never inflates), and that at least ONE file gets compressed; let's call it X0.
Consider now the zipped version of X0: X1=ZIP(X0). Clearly it must be smaller: |X0|>|X1|. Now, X1 may or may not be compressible. If not, then ZIP(X1)=X1, so both X0 and X1 zip to the very same file X1, and the UNZIP program, when given X1, wouldn't be able to decide whether to leave it alone or to unzip it into X0. Thus, X1 must be compressible.
Consider now the zipped version of X1: X2=ZIP(X1), with |X1|>|X2|. By the same argument, X2 must be compressible, and we'd get X3=ZIP(X2), X4=ZIP(X3), ... with |X0|>|X1|>|X2|>...
The Xn sequence should never stop, but the lengths of the Xn files form a strictly decreasing sequence of non-negative integers, so the sequence MUST stop. As these two facts are contradictory, our initial assumption was wrong.
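To see the "UNZIP can't decide" step concretely, here is a tiny made-up compressor in Python (toy_zip and its specific strings are invented purely for illustration, not a real zipper): it never inflates, and it strictly shrinks exactly one string. Following the chain of zips, as in the proof, ends at a file with two different preimages.

def toy_zip(s: str) -> str:
    # Hypothetical compressor over bit strings: never inflates,
    # and strictly shrinks exactly one string ("1010" -> "101").
    table = {"1010": "101"}
    return table.get(s, s)   # every other string is left alone

# Follow the proof: start from the file that shrinks (X0) and keep
# zipping until the length stops dropping.
x = "1010"
chain = [x]
while len(toy_zip(x)) < len(x):
    x = toy_zip(x)
    chain.append(x)

print("chain of zipped files:", chain)         # ['1010', '101']

y = chain[-1]                                  # the first file left alone
print("toy_zip leaves", repr(y), "unchanged:", toy_zip(y) == y)

# The contradiction: y is the zipped form of BOTH itself and of the
# previous file in the chain, so an unzipper given y cannot know
# which of the two to return.
preimages = sorted({s for s in chain if toy_zip(s) == y})
print("files that zip to", repr(y), ":", preimages)

Running it prints the chain ['1010', '101'] and shows that both '101' and '1010' zip to '101', so UNZIP('101') is ambiguous, exactly as the proof argues.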
Posted by e.g.
on 2006-10-20 08:23:45