In this note I want to show you how your understanding of groups can give you insight into
encryption algorithms and how multiple applications of an encryption might (or might not!)
enhance the algorithm's security. Much of what I have to say here could be applied to any
encryption algorithm; the theory I will present treats the algorithm like a black box.
Shall we get started?
Most "conventional" (not public key) encryption algorithms process their input in blocks. A
common block size is 64 bits. The algorithm transforms the input block into an encrypted output
block where the exact transformation depends on the key. (View my ASCII art drawings with a
fixed width font).
input (64 bits) ---> E ---> output (64 bits)
                     ^
                     |
                    key
The size of the key is an important quantity. If it is too small, the algorithm can be "brute
forced" by simply trying every possible key until the one that works is found. The DES algorithm
uses a 56 bit key which, by today's standards, is considered inadequate. Notice, however, that
the input and output block sizes are independent of the key size. For example, some algorithms
use 128 bit keys yet they still might process the input in 64 bit blocks.
Okay... an encryption, E, can be considered a mapping from the domain of 64 bit numbers to the
codomain of 64 bit numbers. This mapping is one-to-one and onto. In particular:
+ Every number in the domain has an image in the codomain. If this were not true then E couldn't
encrypt certain input blocks. That's not acceptable for any realistic encryption algorithm.
+ Every number in the codomain has exactly one preimage in the domain. If this were not true
then there would be certain output blocks that couldn't be decrypted unambiguously (there
would be two or more possible decryptions). That's also not acceptable for any realistic
encryption algorithm.
Another way of saying this is: an encryption, E, is a permutation of the set of 64 bit numbers.
For example, here is a fragment of such a permutation. A complete table would contain 2^64
entries.
domain maps to codomain
0x0000000000000000 -> 0x129F3C9273DE9082
0x0000000000000001 -> 0xF822010484AA8312
0x0000000000000002 -> 0x01183D71F90012BC
etc...
The table has every possible 64 bit number on the left side and every possible 64 bit number (in
a different arrangement) on the right side. The precise permutation E provides depends on the
key. If you switch to a new key you get a different permutation.
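To make the "each key selects a permutation" idea concrete, here is a toy sketch (my own illustration, not any real cipher) using 8 bit blocks. It seeds a pseudorandom shuffle with the key, so each key picks out one permutation of the domain, and then checks that the table really is a bijection:

```python
import random

BLOCK_BITS = 8                       # toy block size; real ciphers use 64 or 128
DOMAIN = list(range(2 ** BLOCK_BITS))

def toy_permutation(key):
    """NOT a real cipher: seed a PRNG with the key and shuffle the domain,
    so that each key selects one permutation of the 8 bit blocks."""
    rng = random.Random(key)
    table = DOMAIN[:]
    rng.shuffle(table)
    return table

table_a = toy_permutation(1)
table_b = toy_permutation(2)

# Every input has exactly one image and every output exactly one preimage:
# the table is a permutation of the domain.
assert sorted(table_a) == DOMAIN
# A different key gives a different permutation.
assert table_a != table_b
```

A real block cipher computes the image of a block on the fly, of course; at 64 bits the full table would be far too large to store.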
Now, how many permutations of 2^64 items are there? The answer is (2^64)! (factorial). That is a
large number. Very large. VERY, VERY LARGE. (Did I mention that it was large?) In particular, it
is much larger than the 2^56 possible keys used with, for example, the DES algorithm. Of course
many of the permutations are poor encryptions. For example, one of them (exactly one) is the
identity permutation that maps every value onto itself. Clearly that is a poor encryption. Then
there is the permutation that is just like the identity permutation except that it exchanges 0
and 1. Then there is the permutation that just exchanges 0 and 2, etc. There are 2^64 such
permutations... and that's just getting started! However, the number of possible permutations is
so vast that the permutations that don't significantly "scramble" the order of the numbers make
up only a tiny fraction of the total.
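You can get a feel for the scale of this counting argument even at toy size. Shrinking the block to 8 bits (an assumption purely for illustration), the number of permutations already dwarfs a DES-sized keyspace:

```python
import math

# Toy-scale version of the counting argument: even for 8 bit blocks the
# number of permutations dwarfs a DES-sized keyspace.
n_bits = 8
num_perms = math.factorial(2 ** n_bits)   # (2^8)! permutations of 8 bit blocks
num_keys = 2 ** 56                        # number of 56 bit DES keys

assert num_perms > num_keys
assert len(str(num_perms)) > 500          # (2^8)! has hundreds of decimal digits
assert len(str(num_keys)) == 17           # 2^56 is a mere 17 digit number
```

For the real 64 bit case, (2^64)! is so large that its digit count itself is astronomical.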
Now, let S be the set of all permutations of 2^64 items. It has (2^64)! elements. Let the
operation o (circle) be composition of mappings. That is, if a and b are permutations, then
a o b
is the permutation you get when you apply permutation b to the result of permutation a. It can
be shown that S together with o forms a group. In particular, there is closure... applying
multiple permutations in a row gives the same result as applying some other single permutation.
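Here is a quick check of those group properties at the same 8 bit toy scale (again my own illustration, with a o b meaning "apply a first, then b" as above): closure, plus the identity and inverse elements a group needs.

```python
import random

# Permutations of 8 bit blocks under composition behave like a group:
# closure, identity, and inverses.
DOMAIN = list(range(2 ** 8))

def compose(a, b):
    """a o b in the convention used above: apply a first, then b."""
    return [b[a[x]] for x in DOMAIN]

rng = random.Random(0)
a = DOMAIN[:]
rng.shuffle(a)
b = DOMAIN[:]
rng.shuffle(b)

# Closure: composing two permutations yields another permutation.
c = compose(a, b)
assert sorted(c) == DOMAIN

# The identity permutation and inverses exist as well.
identity = DOMAIN[:]
a_inverse = [0] * len(DOMAIN)
for x in DOMAIN:
    a_inverse[a[x]] = x
assert compose(a, a_inverse) == identity
```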
Now suppose you thought that a 56 bit key was too small. You might try to improve matters by
double encrypting with two different keys. Like so
input ---> E ---> E ---> output
           ^      ^
           |      |
          k1     k2
Here k1 and k2 are two different keys. However, since E is really a permutation, double
encrypting with two keys is just the composition of two permutations. If I let "k1" and "k2" be
the names of the two permutations, then what I'm doing above is the same as
k1 o k2
BUT... the encryption E only supports 2^56 permutations... a tiny subset of all possible
permutations (very tiny). The question that must be asked is this: is E closed under the
composition of mappings?
Suppose E were closed. In that case, my double encryption with k1 and k2 would be EXACTLY the
same as encrypting once with some other key, k3. This is the meaning of closure, after all.
k3 = k1 o k2
In that case, double encrypting would offer no additional security AT ALL. An attacker would
just compute the key k3. In fact, encrypting hundreds, or even thousands, of times wouldn't
help at all. Because of closure even a thousand applications of E (with a thousand different
keys) would yield a result that could have been obtained with a single application of some other
key.
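There is a classic closed "cipher" that makes this danger explicit: E_k(x) = x XOR k. (A deliberately weak toy of my choosing, not DES.) Since (x XOR k1) XOR k2 = x XOR (k1 XOR k2), the set of XOR permutations is closed, and the equivalent single key is trivial to write down:

```python
# A concrete closed "cipher": E_k(x) = x XOR k. Deliberately weak, but it
# makes the closure danger explicit.
def xor_encrypt(block, key):
    return block ^ key

k1, k2 = 0x3A, 0xC5
k3 = k1 ^ k2   # the single key equivalent to the double encryption

for block in range(256):
    assert xor_encrypt(xor_encrypt(block, k1), k2) == xor_encrypt(block, k3)
# Double (or thousand-fold) XOR encryption is no stronger than single.
```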
On the other hand, if E is not closed, then applying E twice might produce a permutation that
did not originally exist in the 2^56 permutations defined by E on its own. In that case multiple
encryptions could increase security by providing a larger effective key size.
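The non-closed case is easy to demonstrate empirically at toy scale. Reusing the seeded-shuffle "cipher" from before (my own construction, chosen so each key picks an essentially random permutation), a 4 bit key selects only 16 permutations out of (2^8)!, and composing two of them almost surely lands outside that family:

```python
import random

# Empirical (non-)closure check: a toy cipher family with 4 bit keys picks
# 16 permutations out of (2^8)!; composing two of them lands outside the family.
DOMAIN = list(range(2 ** 8))

def perm_for_key(key):
    rng = random.Random(key)
    t = DOMAIN[:]
    rng.shuffle(t)
    return tuple(t)

family = {perm_for_key(k) for k in range(16)}   # all 16 single-key permutations

a = perm_for_key(3)
b = perm_for_key(7)
double = tuple(b[a[x]] for x in DOMAIN)         # encrypt with key 3, then key 7

# The composite permutation is not reachable with any single key, so double
# encryption genuinely enlarges the set of reachable permutations here.
assert double not in family
```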
When DES's 56 bit key was getting too small for comfort people began to think about encrypting
with DES multiple times. However, when this was first done it was not known if DES was closed or
not. I hope you can understand why this is a very important question. If DES was closed, any
system that used multiple encryptions would be no more secure than a single encryption system.
Attempts to increase security in that way would be misguided.
After many years of effort it was finally proved (in 1992, I believe) that DES is NOT closed.
Multiple DES encryptions do, in fact, increase the effective key length. However, this is a
nontrivial conclusion and it was not easily derived.
As an aside... using DES twice is not considered worth the trouble. Due to an attack method
called the "meet in the middle" attack, double DES is only twice as hard to break as single DES
in terms of the number of operations required. That is, the effective key length is increased by
only one bit. (The meet in the middle attack against double DES does require the construction of
a table of 2^56 64 bit blocks... a requirement that would be challenging to meet)[1]. However,
the meet in the middle attack doesn't work if you encrypt three times. Thus triple DES... the
application of DES three times in a row (with different keys) does increase the effective key
length to the expected size. Triple DES is widely used.
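The meet in the middle idea itself fits in a few lines at toy scale. Here is a sketch (with made-up 8 bit blocks and 4 bit keys standing in for DES's 64 bit blocks and 56 bit keys): tabulate the "middle" value E_k1(plaintext) for every k1, then decrypt the ciphertext under every k2 and look for a match, instead of searching all (k1, k2) pairs:

```python
import random
from collections import defaultdict

# Meet in the middle against double encryption, at toy scale.
DOMAIN = list(range(2 ** 8))
KEYS = range(2 ** 4)

def table_for_key(key):
    rng = random.Random(key)     # toy cipher: each key selects a random permutation
    t = DOMAIN[:]
    rng.shuffle(t)
    return t

def encrypt(block, key):
    return table_for_key(key)[block]

def decrypt(block, key):
    return table_for_key(key).index(block)

# The attacker knows one plaintext/ciphertext pair under double encryption.
true_k1, true_k2 = 5, 11
plain = 0x42
cipher = encrypt(encrypt(plain, true_k1), true_k2)

# Step 1: tabulate the "middle" value E_k1(plain) for every possible k1.
middle = defaultdict(list)
for k1 in KEYS:
    middle[encrypt(plain, k1)].append(k1)

# Step 2: decrypt the ciphertext under every k2 and look up a matching middle.
candidates = [(k1, k2)
              for k2 in KEYS
              for k1 in middle.get(decrypt(cipher, k2), [])]

assert (true_k1, true_k2) in candidates
# Cost: about 2 * 2^4 cipher operations plus a 2^4 entry table, instead of
# the 2^8 operations a naive search over all key pairs would need.
```

In practice a second known pair is used to weed out the false candidates; the table of middle values is the memory cost mentioned above.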
Note that most other encryption algorithms have not been studied as closely as DES. As far as I
know, it is not known if, for example, CAST or TWOFISH is closed
or not. These other algorithms have long keys to start with so the need for applying them
multiple times hasn't come up. What this means to you is this: don't assume that double
encrypting with, say, TWOFISH is going to make your data more secure. It's entirely possible
that it won't enhance the security at all. Not even a little. Zero.
Something to think about!
Peter
[1] See the Meet-in-the-middle attack article on Wikipedia
http://en.wikipedia.org/wiki/Meet-in-the-middle_attack