A file compressor is great for shrinking stored files, but it depresses me whenever I see a file grow instead of shrink. So what I am looking for is a file compression algorithm that never inflates any file, although some files (not all, of course!) are allowed to keep the same length after "compression". Ideally it should work on files of all sizes, but I would be satisfied with a compressor that operates only on files larger than 1MB.
Can you provide such an algorithm? No programming knowledge is required for this problem.
E.g., I've just looked at Wolfram to try to understand your viewpoint (bijection).
Are you playing "devil's advocate" in full flight here, arguing that no such technology can exist?
I am not currently in a position to understand your premises (in layman's terms), other than to realise that I cannot map files 1:1 and still expect every one of them to shrink.
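To make that 1:1 point concrete, here is a tiny counting sketch (my own illustration in Python, with made-up names; nothing here comes from the puzzle itself):

# A small, hypothetical illustration of the 1:1 (counting) argument.
# There are 2**k distinct bit strings of length exactly k.

def strings_of_length_at_most(n):
    """How many distinct bit strings have length 0..n inclusive."""
    return sum(2 ** k for k in range(n + 1))   # = 2**(n+1) - 1

n = 8
inputs  = strings_of_length_at_most(n)       # 511 inputs of length <= 8
outputs = strings_of_length_at_most(n - 1)   # 255 outputs of length <= 7

# A lossless compressor must be 1:1 (two different files can never
# share the same compressed form), so it cannot send all 511 inputs
# into only 255 strictly shorter outputs: some inputs must keep their
# length, and pushing the same argument further shows that a compressor
# which never inflates cannot actually shrink anything at all.
print(inputs, "inputs vs", outputs, "shorter outputs ->",
      "not every file of length <=", n, "can shrink")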
Thinking this through has clarified a few issues for me. I can create a reference set: list each phrase and point the file back to that reference each time it reoccurs. For example, I've just found "out in the" for the first time, so I add it to my reference list at "xx,yy" in my data and leave "xx,yy" in that part of the sequence; the next time that phrase occurs it is simply replaced with "xx,yy".
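As a rough, hypothetical sketch of that reference-list idea (the "<0>" tag format and the helper names are my own inventions, just to show the mechanism; it also assumes the tags never appear in the original text):

# Minimal sketch of the "reference list" idea described above.
# The first occurrence of a phrase is kept and recorded; every
# later occurrence is replaced by a short reference tag.

def compress_phrases(text, phrases):
    """Replace later occurrences of each phrase with a short tag."""
    references = {}                      # tag like "<0>" -> phrase
    for i, phrase in enumerate(phrases):
        tag = "<%d>" % i
        first = text.find(phrase)
        if first == -1:
            continue
        references[tag] = phrase
        # keep the first occurrence, replace the rest with the tag
        head = text[:first + len(phrase)]
        tail = text[first + len(phrase):].replace(phrase, tag)
        text = head + tail
    return text, references

def decompress_phrases(text, references):
    """Expand every tag back into its original phrase."""
    for tag, phrase in references.items():
        text = text.replace(tag, phrase)
    return text

original = "out in the cold, out in the rain, out in the dark"
packed, refs = compress_phrases(original, ["out in the "])
print(packed)                                        # out in the cold, <0>rain, <0>dark
print(decompress_phrases(packed, refs) == original)  # True

Note that the reference list itself takes space, so on a file with few repeated phrases a scheme like this can still come out longer than the input, which is exactly where the 1:1 argument bites.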
Posted by brianjn on 2006-10-20 09:31:24