When I initially wrote the bytes for beginners series of posts, I never got around to discussing how negative numbers are represented in binary, or how we might use Swift to understand this.
So let's begin now with bit casting, which is the act of reinterpreting the bits of one type as a value of another type. Here's an example: we have a UInt8 (an unsigned 8-bit integer), which can store any number from 0 to 255, and we want to cast it to an Int8 (a signed 8-bit integer), which holds numbers from -128 to 127.
unsafeBitCast(UInt8(128), to: Int8.self) // -128
unsafeBitCast(UInt8(255), to: Int8.self) // -1

What is happening here is easiest explained by Dr. Math:
"The numbers 0 to 127 are positive; numbers 128 to 255 represent -128 to -1."

So if we think of this in binary terms
String(128, radix: 2) // 10000000
String(255, radix: 2) // 11111111

then 10000000 in what is known as a two's complement representation (or Int8's language) is -128, whereas in the language of UInt8 it represents 128. Similarly, 11111111 in two's complement is equal to -1, but in an unsigned world it is 255.
But how do we arrive at this conclusion? Well, it involves a process of NOT + 1. To elaborate, 128 can be represented in binary as 10000000, and a NOT version of this would be 01111111 (or 127 in decimal; see earlier bytes for beginners posts for explanation). But NOT + 1 would be 10000000, so 128 and -128 are identical in binary representation. Now let's look at the other end of the scale with -1, which is NOT(1) + 1: 1 == 00000001, so NOT(1) == 11111110, and adding 1 gives -1 == 11111111.
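We can try the NOT + 1 process out in a playground (a rough sketch: the ~ operator is Swift's bitwise NOT, and &+ is the overflow-tolerant add):

let one: UInt8 = 0b00000001
let notOne = ~one                      // 11111110
let minusOne = notOne &+ 1             // 11111111 (wraps around rather than trapping)
String(minusOne, radix: 2)             // "11111111"
unsafeBitCast(minusOne, to: Int8.self) // -1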
The problem is that if we try to represent a negative number as a binary string in Swift, it defaults to using a minus sign rather than two's complement.
String(Int8(-2), radix: 2) // -10
String(~2 + 1, radix: 2)   // -10

But we can get around this by writing our own twosComplement() method:
func twosComplement(_ num: Int8) -> String {
    var numm: UInt8 = 0
    if num < 0 {
        // shift the negative value into the upper half of the UInt8 range
        let a = Int(UInt8.max) + Int(num) + 1
        numm = UInt8(a)
    } else {
        return String(num, radix: 2)
    }
    return String(numm, radix: 2)
}

twosComplement(Int8(-127)) // 10000001
twosComplement(Int8(127))  // 1111111 (the leading 0 is dropped)
And now we can see the two's complement representation of any Int8 number whether it is positive or negative.
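As an aside, newer Swift versions offer a shorter route to the same bit pattern (an alternative sketch, not part of the original approach): the bitPattern: initializer reinterprets an Int8 as a UInt8 in the same way unsafeBitCast does.

String(UInt8(bitPattern: Int8(-127)), radix: 2) // "10000001"
String(UInt8(bitPattern: Int8(-1)), radix: 2)   // "11111111"
String(UInt8(bitPattern: Int8(127)), radix: 2)  // "1111111"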
Int16 and Int32
We could keep going and see that larger integers follow the same basic pattern: the bit pattern of their most negative value is the same as their maximum value + 1 (which is also half of the corresponding UInt's max, plus 1). Like so:
unsafeBitCast(UInt16.max / 2, to: Int16.self)     // 32767
unsafeBitCast(UInt16.max / 2 + 1, to: Int16.self) // -32768

And likewise we can write methods to support their translation:
func twosComplement(_ num: Int16) -> String {
    var numm: UInt16 = 0
    if num < 0 {
        let a = Int(UInt16.max) + Int(num) + 1
        numm = UInt16(a)
    } else {
        return String(num, radix: 2)
    }
    return String(numm, radix: 2)
}

twosComplement(Int16(-10)) // 1111111111110110

func twosComplement(_ num: Int32) -> String {
    var numm: UInt32 = 0
    if num < 0 {
        // note: Int(UInt32.max) only fits because Int is 64 bits wide on modern platforms
        let a = Int(UInt32.max) + Int(num) + 1
        numm = UInt32(a)
    } else {
        return String(num, radix: 2)
    }
    return String(numm, radix: 2)
}

twosComplement(Int32(-10)) // 11111111111111111111111111110110
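Rather than writing one overload per width, we could also collapse these into a single generic function. This is a hedged sketch assuming a Swift 4 or later toolchain (where FixedWidthInteger and init(truncatingIfNeeded:) exist), and the name twosComplementString is just a hypothetical one chosen to avoid clashing with the overloads above:

func twosComplementString<T: FixedWidthInteger & SignedInteger>(_ num: T) -> String {
    // reinterpret the signed value's bits as the matching unsigned type
    let bits = T.Magnitude(truncatingIfNeeded: num)
    return String(bits, radix: 2)
}

twosComplementString(Int8(-127)) // "10000001"
twosComplementString(Int16(-10)) // "1111111111110110"
twosComplementString(Int32(-10)) // "11111111111111111111111111110110"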