[CST-2] Security - Concept of Perfect Secrecy

Mike Pinna mike@tropic.org.uk
Sun, 26 May 2002 09:36:19 +0100 (BST)


On Sun, 26 May 2002, Matt Shreeve wrote:

> Slide 81 of the Security notes defines perfect secrecy in terms of entropy
> (H(P)=H(P|C)), and then proves Shannon's implication (H(K)>=H(P)).  Does
> anyone know (or, better, provide a proof) that the reverse implication is
> true? ie
>
> H(K)>=H(P) => H(P)=H(P|C)

In words, the statement proved in the notes says: for an arbitrary
algorithm, if it maintains perfect secrecy (so knowledge of the
output (ciphertext) doesn't decrease the entropy of the input
(plaintext)), then the information content of the additional data used
in computing the output (the key) must be at least as great as that of
the input -- that's the ">=" in H(K) >= H(P).

This is a claim about an algorithm that has already been proven to have
perfect security properties - a strict requirement I'm sure you'll
agree.

Reversing the implication, you ask whether the following is true: for an
arbitrary algorithm, if the entropy of some additional data it is given
(the key) is at least that of the main input, then the algorithm
maintains perfect secrecy of the plaintext as defined above.

Hopefully it's clear that this is false.  The key word to look at is
`arbitrary'.  As a trivial counterexample, consider the function that
`encrypts' a single bit by taking a 1024-bit key and... just returning
the original bit.  Here H(K) = 1024 bits, comfortably at least H(P)
(at most 1 bit), yet the ciphertext equals the plaintext, so
H(P|C) = 0: no secrecy at all.  If your reversed implication were true,
this scheme would maintain perfect secrecy!  Clearly it does not, so
the implication is false.
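The counterexample is small enough to check by brute force.  The sketch
below (my own illustration, not from the notes; the helper names are
made up, and the ignored key is shrunk to 2 bits since its size is
irrelevant) enumerates all (plaintext, key) pairs uniformly and computes
H(P|C) for the identity "cipher" and for a genuine one-bit one-time pad:

```python
from collections import Counter
from math import log2

def entropy(counts):
    # Shannon entropy in bits of the distribution given by a Counter.
    total = sum(counts.values())
    return -sum(n / total * log2(n / total) for n in counts.values())

def conditional_entropy(pairs):
    # H(P|C) for a uniform distribution over (plaintext, ciphertext) pairs:
    # sum over (p, c) of Pr(p, c) * -log2 Pr(p | c).
    joint = Counter(pairs)
    c_marg = Counter(c for _, c in pairs)
    total = len(pairs)
    return sum(n / total * log2(c_marg[c] / n) for (_, c), n in joint.items())

# The broken scheme: the key is simply ignored, so ciphertext = plaintext.
identity = [(p, p) for p in (0, 1) for k in range(4)]

# A real one-time pad on a single bit: c = p XOR k.
otp = [(p, p ^ k) for p in (0, 1) for k in (0, 1)]

h_p = entropy(Counter(p for p, _ in identity))  # H(P) = 1 bit
print(conditional_entropy(identity))  # 0.0 -- ciphertext reveals everything
print(conditional_entropy(otp))       # 1.0 -- equals H(P): perfect secrecy
```

So the identity scheme has plenty of key entropy but H(P|C) = 0, while
the one-time pad achieves H(P|C) = H(P), exactly the definition on the
slide.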

Hope that helps,

Mike

-- 
"...like the bloated examiner moving in with the madman."
      --Anon