Applied Algebra: Codes, Ciphers, and Discrete Algorithms, Second Edition deals with the mathematics of data communication and storage. It includes hints for using Scientific Notebook®, Maple®, or MuPAD® to do complicated calculations and to make the mathematical ideas more accessible. Two central topics are data security (how to make data visible only to friendly eyes) and data integrity (how to minimize data corruption).
Cryptography is the study of data security: How can a bank be sure that a message to transfer $1,000,000 was sent by an authorized person? Or imagine a political crisis in a remote region of the world. It is vital that sensitive issues be discussed with government leaders back home. The crisis could get out of control if these discussions were intercepted and read by some third party.
The messages are bounced off satellites, so the signals can be captured by anyone with a simple satellite dish. How can the messages be transformed so that a third party cannot read them, yet friends back home can read them easily?
Issues of data integrity are handled by error-control codes. The first pictures transmitted from the far side of the Moon in the late 1960s were black and white and of poor quality; data lost to interference from solar radiation appeared as vertical black streaks in the pictures. More recent pictures from much greater distances, taken by the Voyager planetary probes, were beautiful high-resolution color images with no apparent lost data. This was mostly the result of software that detects and corrects errors caused by interference.
This book discusses mathematically interesting methods for solving these problems—methods that are practical and widely used. The material was designed for a course in applied algebra for juniors and seniors majoring in mathematics and computer science.
The primary mathematical tools come from number theory and the theory of finite fields. All mathematics that will be used is developed as needed, but students who have had a prior course in abstract algebra or linear algebra have found such background to be useful.
Supercomputers perform billions of operations per second, and must store and retrieve vast amounts of data. The probability of a single read/write error is small, but doing billions of read/writes can make the probability of at least one error relatively large. Many computer codes (such as those required to do modern cryptography) will not tolerate even a single error. These fast computers must be designed so that errors—even multiple errors—can be recognized and corrected before causing trouble.
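The error correction described above can be illustrated with the classic Hamming(7,4) code. This is a minimal sketch of the idea, not the specific codes used in any particular machine: four data bits are expanded to seven so that any single flipped bit can be located and corrected.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    # Three parity bits, each covering a different subset of the data bits.
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and fix a single flipped bit, then return the data bits."""
    c = c[:]
    # Recompute each parity check; the pattern of failures spells out
    # the position of the error in binary (the "syndrome").
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4   # 0 means no error
    if pos:
        c[pos - 1] ^= 1              # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                          # simulate interference flipping one bit
assert hamming74_correct(code) == word
```

Whichever of the seven bits is flipped, the three parity checks fail in exactly the combination that names that bit's position, which is what makes correction, and not just detection, possible.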
These examples reflect advances in hardware, but mostly advances in mathematics. Desktop computers can detect single errors, and larger computers can correct multiple errors. The error-correction capabilities of the Voyager project allowed thousands of flawless pictures to be sent back to Earth for analysis.
Improvements in computer hardware since the 1950s have been incredible. In pushing technology to its limits, we are restricted by the physical size of atomic particles and the speed of light. Scientists are now considering the use of clean rooms in orbit to eliminate the few stray particles that contaminate Earth-bound labs.
In spite of these dramatic changes, the increase in speed due to improvements in mathematical algorithms has been even more spectacular. For many problems, the net effect since 1950 on computing speed due to improved algorithms has been greater than that due to improved hardware. (We will see an example of a problem in cryptography that would take more than 10¹⁰ years on the fastest theoretical computer that we could imagine using naive methods, but is computable in a few nanoseconds on a PC using more sophisticated algorithms.) This trend is likely to continue because mathematics itself recognizes no physical bounds.
We will look at several algorithms that arise in the study of cryptography and error-control codes. Many of these algorithms feature common-sense approaches to relatively simple problems such as computing large powers. Other algorithms are based on interesting mathematical ideas.
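Computing large powers is a good first example of such an algorithm. A sketch of the standard repeated-squaring method: the naive approach to b^e needs e - 1 multiplications, while repeated squaring needs only about log2(e), which is why powers with thousand-bit exponents (routine in cryptography) are feasible at all.

```python
def power_mod(b, e, n):
    """Compute b**e mod n by repeated squaring."""
    result = 1
    b %= n
    while e > 0:
        if e & 1:               # current low bit of the exponent is 1
            result = (result * b) % n
        b = (b * b) % n         # square the base for the next bit
        e >>= 1                 # move to the next bit of the exponent
    return result

# Python's built-in three-argument pow uses the same idea.
assert power_mod(7, 2025, 1000003) == pow(7, 2025, 1000003)
```

Reducing modulo n after every multiplication keeps the intermediate numbers small, so each step stays cheap no matter how large the exponent is.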
Those who become hooked on applied algebra will eventually need to learn abstract algebra, and lots of it. This book attempts to show the power of algebra in a relatively simple setting. Instead of a general study of finite groups, we consider only finite groups of permutations. Just enough of the theory of finite fields is developed to allow us to construct the fields used for error-control codes and for the new Advanced Encryption Standard. Almost everything we do will be with integers, or polynomials over the integers, or remainders modulo an integer or a polynomial. Once in a while, we look at rational numbers.
A floating-point number is different from a rational number or a real number. Each floating-point number corresponds to infinitely many rational numbers (and to infinitely many irrational numbers). Computer algebra systems such as Maple or MuPAD deal primarily with integers and rational numbers—not floating-point numbers.
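Python makes this distinction easy to see: its `fractions` module does exact rational arithmetic in the style of a computer algebra system, while ordinary floats round every value to the nearest representable binary fraction.

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding, ever.
exact = Fraction(1, 10) + Fraction(2, 10)
assert exact == Fraction(3, 10)

# Floating-point arithmetic rounds 0.1 and 0.2 to nearby binary
# fractions, so the "same" sum is not exactly 0.3.
assert 0.1 + 0.2 != 0.3

# The rational number that the float 0.1 actually represents:
print(Fraction(0.1))   # a fraction with denominator 2**55, not 1/10
```

The float written as 0.1 is really one particular binary fraction standing in for every rational and irrational number that rounds to it, which is exactly the many-to-one correspondence described above.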
Numerical analysis packages such as MATLAB® and IMSL use floating-point arithmetic. They trade precision for speed. Computer algebra systems are generally much slower than numerical analysis routines using floating-point arithmetic. When high precision is important—and it is essential for many problems in algebra—we have to go with computer algebra systems.