We humans and our meat computers don’t have any trouble recognizing the sign of a number. If there is a minus sign, “-,” in front of a number, that number is negative. If a number is prefixed by a plus sign, “+,” or, the more likely case, has no prefix at all, then the number is positive.
Computers of the silicon kind don’t have it so easy. They don’t have the luxury of pluses and minuses to tell them the sign of a number. All they have are zeros and ones, the alphabet of binary systems. So what is their solution?
Since the introduction of digital computing, researchers, mathematicians, and nerds with nothing better to do have debated the best way to represent negative numbers within the limitations imposed by binary encoding. For signed integers, three representations rose above the din, and have since established themselves as the de facto encodings. None of them is perfect. Each has its own strengths and weaknesses. By examining each representation, we can get a feel for the scenarios in which using one may be preferential to the others. More importantly, understanding how signed numbers are encoded gives us invaluable insight into how microprocessors work.
Signed Magnitude Representation
This is also sometimes known as the “sign and magnitude” representation. It is the simplest of the representations and, no disrespect to those who came up with it, seems like the obvious solution.
Ordinary unsigned integers are represented by a string of ones and zeros. The smallest unit of addressable memory is traditionally the byte, or 8 bits. The unsigned integers a single byte can represent are 00000000 to 11111111.
00000000 is obviously decimal 0, or 0x00 in hexadecimal.
11111111 is decimal 255, or 0xFF in hexadecimal.
So a single byte can encode 256 decimal integers: 0 to 255.
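The unsigned case is easy to verify for yourself. A quick sketch in Python, using its built-in binary formatting:

```python
# The two ends of the unsigned byte range, as 8-bit patterns.
print(format(0, "08b"))    # 00000000
print(format(255, "08b"))  # 11111111

# And back again: the all-ones pattern is decimal 255.
print(int("11111111", 2))  # 255

# 8 bits give 2**8 distinct patterns in total.
print(2 ** 8)              # 256
```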
Figuring that the sign of a number was itself just a binary state, i.e., either positive or negative, the creators of the signed magnitude representation went with what they knew: they chose to use one bit, in this case the leftmost, most significant bit (MSB), to denote the sign of the number, while the remaining bits continued to represent the magnitude of the number as before. Just to shake things up a little, they picked 1 to represent a negative sign and 0 to represent a positive sign.
With this representation, a single byte could still encode the 256 bit patterns 00000000 to 11111111, but with only 7 bits left for the magnitude, the highest magnitude that can be represented is 1111111, or decimal 127. The range of integers covered by a single byte is therefore −127 to +127.
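The scheme is simple enough to sketch in a few lines of Python. This is a minimal illustration, not any standard library API; the helper names `sm_encode` and `sm_decode` are mine:

```python
def sm_encode(n: int) -> int:
    """Encode an integer in [-127, 127] as an 8-bit signed-magnitude byte."""
    if not -127 <= n <= 127:
        raise ValueError("out of range for 8-bit signed magnitude")
    sign = 0x80 if n < 0 else 0x00  # MSB set means negative
    return sign | abs(n)            # low 7 bits carry the magnitude

def sm_decode(b: int) -> int:
    """Decode an 8-bit signed-magnitude byte back to an integer."""
    magnitude = b & 0x7F            # strip the sign bit
    return -magnitude if b & 0x80 else magnitude

print(format(sm_encode(-5), "08b"))  # 10000101
print(sm_decode(0b10000101))         # -5
```

Note that encoding a negative number changes only the sign bit: −5 is just 5 (0000101) with the MSB flipped on.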
A more subtle, and slightly insidious, side effect of this representation is that there are now two ways to encode the number 0: 00000000 and 10000000. We now have both positive and negative zero.
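A short Python sketch makes the quirk concrete: the two zero patterns are distinct bytes, yet decoding either one (strip the sign bit, negate if it was set) yields the same value:

```python
plus_zero, minus_zero = 0b00000000, 0b10000000

# Two different bit patterns...
print(plus_zero == minus_zero)  # False

# ...that both mean zero under signed magnitude.
for b in (plus_zero, minus_zero):
    magnitude = b & 0x7F
    print(-magnitude if b & 0x80 else magnitude)  # 0 both times
```

Any hardware or software comparing signed-magnitude values bit-for-bit has to special-case this, since +0 and −0 are equal as numbers but not as bytes.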
Early computers used this representation for integers, but most have moved on to those that follow. Signed magnitude is still often used for floating point numbers, however.