Extended Abstract: This talk presents an approach to analyzing encryption protocols using machine-aided formal verification techniques. When considering a secure network that uses encryption to achieve its security (as opposed to using only physical security), one must consider both encryption algorithms and encryption protocols. An encryption algorithm, such as DES or RSA, is used to convert clear text into cipher text or cipher text into clear text. That is, the unencrypted message (clear text) is enciphered using a particular encryption algorithm to produce the unreadable cipher text. Similarly, the same algorithm, or its inverse, is used to recover the original clear text message from the encrypted cipher text. An encryption protocol is a set of rules or procedures for using the encryption algorithm to send and receive messages in a secure manner over a network. The approach presented in this talk does not attempt to prove anything about the strength of the encryption algorithms being used. Instead, it may assume that the obvious desirable properties, such as that no key will coincidentally decrypt text encrypted using a different key, hold for the encryption scheme being employed.

The idea of the approach is to formally specify the components of the cryptographic facility and the associated cryptographic operations. The components are represented as state constants and variables, and the operations are represented as state transitions. The desirable properties that the protocol is to preserve are expressed as state invariants and the theorems that must be proved to guarantee that the system satisfies the invariants are automatically generated by the verification system.
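The state/transition/invariant structure described above can be sketched in ordinary code. The following toy model (not the verification system discussed in the talk; the class and its property are purely illustrative) shows the shape of the idea: state variables, transitions that update them, and a desired property checked as an invariant, where a transition that cannot preserve the invariant corresponds to a failed proof obligation.

```python
# Illustrative sketch only: a miniature "specification" in which the
# invariant is that no key is ever stored in the clear.

class ProtocolModel:
    def __init__(self):
        # State variable: key name -> (value, stored-encrypted?)
        self.stored = {}

    def invariant(self):
        # Desired property, expressed as a state invariant.
        return all(encrypted for _, encrypted in self.stored.values())

    def store_encrypted(self, name, value):
        # A state transition that preserves the invariant.
        self.stored[name] = (value, True)
        assert self.invariant(), "invariant violated"

    def store_clear(self, name, value):
        # A flawed transition: the corresponding "proof obligation" fails.
        self.stored[name] = (value, False)
        assert self.invariant(), "invariant violated"

m = ProtocolModel()
m.store_encrypted("session_key", 0x2A)   # invariant holds
try:
    m.store_clear("terminal_key", 0x17)  # invariant cannot be proved
except AssertionError:
    print("failed obligation points at the flawed transition")
```

In the actual approach the obligations are theorems generated automatically from the specification and discharged by a prover, rather than runtime assertions, but the correspondence between transitions and obligations is the same.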

The results of applying this approach to two different examples are discussed. The first example is a single-domain communication system using dynamically generated primary keys and two secret master keys, as described in [MM 80]. The host manages the communication keys for the system, and the terminals communicate directly with the host system. Whenever a terminal wants to start a new session with the host, the host generates a new session key. If this key were sent to the terminal in the clear a penetrator tapping the line could intercept the key and decipher all of the messages for that session. To prevent this each terminal has a permanent terminal key that is used by the host to distribute the new session key to a terminal when a new session is initiated. That is, a terminal's session key is the primary communication key for that terminal. It is dynamically generated by the host for each session. The static terminal key is the terminal's secondary communication key. It is used by the host to encrypt new session keys for transmission to the terminal.

Both the terminal keys and the current session keys are stored at the host. However, because a penetrator can be an authorized user, it is unsafe to store the terminal and session keys in the clear at the host. Thus, they are stored in encrypted form. There are two data structures of interest in the host: the terminal key table and the session key table. The terminal key table is static; each entry in this table contains the unique terminal key for the corresponding terminal encrypted using a secret master key KMH1. Unlike the terminal key table, the session key table is a dynamic structure. This table is updated each time a new terminal session is started; there is one current session key per terminal. Each entry in the table contains the current session key for the corresponding terminal encrypted using a second secret master key KMHO.

No terminal key, session key, or master key appears in the clear in the host. To store the two master keys, a cryptographic facility is connected to the host. This facility may be accessed only through the limited cryptographic operations that are provided: encipher data (ECPH), decipher data (DCPH), and reencipher from master key (RFMK).
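A minimal executable sketch of this interface is given below. The XOR "cipher" stands in for a real algorithm purely to make the key relationships runnable; the operation names (ECPH, DCPH, RFMK), the two master keys, and the two tables follow the description above, while the concrete key values and table layout are illustrative assumptions.

```python
def E(key, text):
    # Toy encipherment (XOR), its own inverse; a stand-in for a real cipher.
    return key ^ text

KMH0, KMH1 = 0xA5, 0x5C  # the two secret master keys, held only in the facility

def ECPH(enc_session_key, clear):
    """Encipher data: the session key arrives encrypted under KMH0 and is
    decrypted only inside the facility; it never leaves in the clear."""
    ks = E(KMH0, enc_session_key)
    return E(ks, clear)

def DCPH(enc_session_key, cipher):
    """Decipher data that was enciphered under the session key."""
    ks = E(KMH0, enc_session_key)
    return E(ks, cipher)

def RFMK(enc_terminal_key, enc_session_key):
    """Reencipher from master key: return the new session key encrypted
    under the terminal key, ready for transmission to the terminal."""
    kt = E(KMH1, enc_terminal_key)
    ks = E(KMH0, enc_session_key)
    return E(kt, ks)

# Host-side tables hold only encrypted keys:
KT, KS = 0x17, 0x2A                       # terminal key, new session key
terminal_key_table = {"T1": E(KMH1, KT)}  # static, entries under KMH1
session_key_table  = {"T1": E(KMH0, KS)}  # dynamic, entries under KMH0

# Distributing KS to terminal T1, then using it on a message:
to_terminal = RFMK(terminal_key_table["T1"], session_key_table["T1"])
assert E(KT, to_terminal) == KS           # terminal recovers KS with its KT
msg = 0x42
assert DCPH(session_key_table["T1"], ECPH(session_key_table["T1"], msg)) == msg
```

Note that the host code manipulates only the encrypted table entries; the clear session and terminal keys exist only inside the three facility operations, which is exactly the invariant the formal analysis must establish.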

The second example discussed is a protocol due to Tatebayashi, Matsuzaki, and Newman [TMN 91]. The Tatebayashi-Matsuzaki-Newman (TMN) protocol is a key distribution scheme by which a pair of ground stations in a mobile communication system obtain a common session key, through the mediation of a trusted server. Messages to the server are encrypted in the server's public key, which is known to all ground stations, and ground stations generate conventional keys to be used as session keys.

The TMN protocol works as follows. The server possesses a public-private key pair. The public key is known to everyone in the system, while the private key belongs to the server alone.

1. When user A wishes to communicate with user B, it encrypts a random number with the server's public key, and sends the encrypted random number, along with its name and the name of B.

2. When the server receives the request, it decrypts the random number and stores it as the key-encryption key for that conversation; it also notifies B that A wishes to start a session.

3. User B, on receiving the notification from the server, generates a random number to be used as a session key, encrypts it with the server's public key, and sends it to the server.

4. The server decrypts the response, encrypts the key with A's random number using a private-key algorithm, and sends the result to A. In the specification of the protocol, the algorithm used for this step is commutative in the sense that, if A[B] represents the encryption of B under the key A, then A[B] = B[A]. This could be done with a bitwise exclusive-or of the two keys, for example. The commutativity is significant for analysis.

5. User A decrypts the message from the server using the original random number it had generated and assumes that the result is the session key.
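The five steps above can be traced in a few lines of code. In this toy run (the variable names and concrete values are illustrative), XOR plays two roles: as a stand-in for the server's public-key operation, and as the commutative private-key algorithm of step 4, for which it satisfies A[B] = B[A] exactly as noted above.

```python
SERVER_PUB = 0x3C  # stand-in for the server's public key

def pub_enc(x):
    return x ^ SERVER_PUB   # "encrypt with the server's public key"

def priv_dec(x):
    return x ^ SERVER_PUB   # invertible only by the server, by assumption

# Step 1: A picks a random number and sends it under the server's
# public key, along with its name and B's name.
r_A = 0x55
msg1 = ("A", "B", pub_enc(r_A))

# Step 2: the server decrypts the random number, keeps it as the
# key-encryption key for this conversation, and notifies B.
kek = priv_dec(msg1[2])

# Step 3: B generates the session key and sends it under the public key.
k_B = 0x6E
msg3 = pub_enc(k_B)

# Step 4: the server encrypts the session key with A's random number
# using a commutative algorithm (here bitwise XOR, so r_A[k_B] == k_B[r_A]).
msg4 = kek ^ priv_dec(msg3)

# Step 5: A strips off its original random number and takes the result
# as the session key.
session_key_at_A = msg4 ^ r_A
assert session_key_at_A == k_B
```

The commutativity exploited in step 4 is what makes step 5 work without A ever sharing r_A with B, and it is also the algebraic property that matters most in the formal analysis of this protocol.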

A more detailed discussion of the first example can be found in [Kem 89]. The details of the second example along with a discussion of three different approaches to analyzing encryption protocols can be found in [KMM 94].

An advantage of the approach presented in this talk is that the properties of a cryptographic facility can be tested before the facility is built. First, the system is represented using a formal notation. The resulting formal specification is used to generate the proof obligations that ensure the system satisfies certain desired properties. If the generated theorems cannot be proved, the failed proofs often point to weaknesses in the system or to an incompleteness in the specification. That is, they often indicate additional assumptions required about the encryption algorithm, weaknesses in the protocols, or missing constraints in the specification.

Another advantage of this approach is that the cryptographic facility can be analyzed assuming different encryption algorithms by replacing the set of axioms that express the properties assumed about the encryption algorithms with a new set of axioms that express the properties of a different encryption algorithm.

References

[Kem 89] Kemmerer, Richard A., "Analyzing Encryption Protocols Using Formal Verification Techniques," IEEE Journal on Selected Areas in Communications, Vol. 7, No. 4, May 1989.

[KMM 94] Kemmerer, Richard A., C. Meadows, and J. Millen, "Three Systems for Cryptographic Protocol Analysis," Journal of Cryptology, Vol. 7, No. 2, 1994.

[MM 80] Meyer, Carl H., and Stephen M. Matyas, Cryptography, John Wiley, 1980.

[TMN 91] Tatebayashi, M., N. Matsuzaki, D. B. Newman, "Key Distribution Protocol for Digital Mobile Communication Systems," in Advances in Cryptology - CRYPTO'89, LNCS 435, G. Brassard, ed. Springer-Verlag, 1991.