[CST-2] Kolmogorov complexity - cost of sorting
Jamie Shotton
jdjs2@cam.ac.uk
Fri, 31 May 2002 13:51:27 +0100
I seem to be missing something: What exactly are we trying to prove?
That we can't sort faster than nlogn for arbitrary data? And in the
argument below, what's to stop you substituting e.g. nloglogn for nlogn?
AFAICS that makes no difference to the argument but gives a different
result...?
Puzzled Jamie
> > Imagine we could create a program (constant Kolmogorov complexity)
> > which sorts in less than nlog(n). Consider the *inverse* of this
> > program (still constant) which takes a sorted list and outputs it
> > in the original random ordering in less than nlog(n).
> >
> > This would allow us to compress the original random permutation as
> > the sorted bits + program (constant). But since it was a *random*
> > permutation, it's incompressible! Contradiction -- so we can't sort
> > in less than nlog(n).
> >
> > This ignores the size of the constant of the program, but it still
> > works if you include that. (Basically, the program would need to be
> > able to sort in << nlog(n) - c.)
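[A quick numerical sketch of the counting behind the quoted argument -- not from the thread, just an illustration. A uniformly random permutation of n items is one of n! equally likely possibilities, so describing it takes about log2(n!) bits, and Stirling's approximation gives log2(n!) ~ n*log2(n). That is where the nlog(n) comes from, and why a smaller function like nloglog(n) would not fit the same counting:]

```python
import math

def perm_description_bits(n):
    """Bits needed to pick out one of the n! permutations: log2(n!).
    Computed via lgamma to avoid huge integers: log2(n!) = ln(Gamma(n+1)) / ln(2)."""
    return math.lgamma(n + 1) / math.log(2)

# Compare the exact bit count with n*log2(n) -- they track each other closely.
for n in [10, 100, 1000, 10**6]:
    exact = perm_description_bits(n)
    approx = n * math.log2(n)
    print(f"n={n:>7}: log2(n!) = {exact:14.1f}   n*log2(n) = {approx:14.1f}")
```

[For n = 10^6 the two columns agree to within a couple of percent, while n*log2(log2(n)) is off by orders of magnitude.]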