Swift by Numbers: The long and the short of it (Xcode 6.0.1)



In everyday life we take numbers for granted: we can write a number of any length we like, simply by putting one digit from 0 to 9 after another. We can do the same in a computer program using a string, but if we want the device to actually process that number then we must stay within the boundaries of what it can represent.

As a practical example:
  1. Open the Calculator app on OS X
  2. Press Command + 2 (or go to the menu bar and select View->Scientific)
  3. Now hold down a single number key (not zero!) and watch what happens after a while
You should see that zeroes start appearing. This is because the number has grown beyond what the computer can handle.
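The same kind of limit exists in Swift's integer types. As a minimal sketch (the values shown assume a 64-bit device):
Int.max     // 9223372036854775807 – the largest value an Int can hold
Int.min     // -9223372036854775808
// let tooBig: Int = 9_223_372_036_854_775_808   // won't compile: the literal overflows Int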

Objective-C vs Swift 

In Objective-C it was possible to use the primitive C data types: short, int, long and long long, along with their unsigned versions. These are now gone. Yes, CShort, CInt, CLong, CLongLong and so on exist, but they are simply type aliases of Int16, Int32, Int, Int64 and their unsigned counterparts: this means they are interchangeable with the Swift types, and it seems little more than nostalgia would keep us using them.
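Because these aliases resolve to the same underlying types, a value of one can be used wherever the other is expected. A quick playground check (a sketch, not an exhaustive list):
let a: CInt = 42
let b: Int32 = a          // no conversion needed: CInt is a type alias of Int32
let c: CLongLong = 42
let d: Int64 = c          // likewise, CLongLong is a type alias of Int64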

Why not use 64-bit numbers all the time?

Most of the time you will simply use Int for all integers in Swift. This is a data type whose size depends on the device: if you are running your app on a 32-bit device, the maximum value of an Int will be equal to Int32.max, and on a 64-bit device it will be equal to Int64.max.
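A minimal playground sketch of how Int tracks the device:
Int.max                     // 2147483647 (32-bit) | 9223372036854775807 (64-bit)
Int.max == Int(Int32.max)   // true on a 32-bit device, false on a 64-bit device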

While it is possible to use 64-bit numbers on a 32-bit device by employing the Int64 type, Apple, in its 64-bit Transition Guide (published when the first 64-bit iOS devices were released), wrote: "Avoid assigning 64-bit long integers to 32-bit integers."

The reason for this warning (against what in C is written long long, and in Swift CLongLong or Int64) is that a 32-bit device can process 4 bytes of data at a time, while a 64-bit integer is twice this size (8 bytes) and so cannot be processed as a single piece of data. Unless an app requires the extra precision or range, a 64-bit integer on a 32-bit device is unnecessary and will slow down the operations being performed.
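A quick way to see the sizes involved, using Swift's sizeof function (a sketch; values are in bytes):
sizeof(Int)       // 4 on a 32-bit device, 8 on a 64-bit device
sizeof(Int64)     // 8 on every device – twice the 32-bit word size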

Don't forget about the science

These words "most of the time" and "avoid" don't mean that you should forget about the science and limitations of a 32-bit integer. The reason for this is that while a 32-bit device won't crash if a number above 32-bits is assigned to an Int, the behaviour will not necessarily be what you expect, as can be seen from this example:
var int:Int = 2_147_483_647+1
// -2,147,483,648 (32-bit device) | 2,147,483,648 (64-bit device)
int = 2_147_483_647*2
// -2 (32-bit device) | 4,294,967,294 (64-bit device)
If we really needed to perform this operation and get the same result back on both 64-bit and 32-bit devices, it would be necessary to use Int64.
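For example, declaring the values as Int64 explicitly (a minimal sketch) gives the same answers on both architectures:
let base: Int64 = 2_147_483_647
base + 1     // 2147483648 on both 32-bit and 64-bit devices
base * 2     // 4294967294 on both 32-bit and 64-bit devices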

Float and Double

It is natural to want the maximum precision for your app, and to think of calculations as needing to be as precise as if they were entered into a calculator, but this is not the reality of computing. Computed values are still restricted by the length of number that can be represented in a given number of bits, and if we force an "expensive" operation to use 64-bit numbers where they are unnecessary, there will be a noticeable slowdown.

Given this information, it might seem strange that while Int is the inferred and recommended type for integers in Swift, there is no equivalent adaptive type for floating-point numbers. Double (64-bit; the equivalent of C's double, for which CDouble is a type alias) is the inferred type, and Float (32-bit; the equivalent of C's float, for which CFloat is a type alias) is the other option. Logic would tell us that if 32-bit devices find it harder to work with 64-bit numbers, then we should look for something akin to the behaviour available to integers.
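The inference is easy to confirm in a playground (a minimal sketch):
let inferred = 3.14159            // inferred as Double
let narrowed: Float = 3.14159     // a Float only when we say so explicitly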

But this logic would ignore the greater complexity of floating-point numbers and the way in which binary is used to represent them.

Note: CGFloat, which is not a native Swift type, behaves much like a Swift Int in that its size adapts to the device (and it is widely used in the "expensive" operation of drawing). But CGFloat is unlikely to suit our requirements outside of contexts where Cocoa insists on its use. (See the note in the section below.)

Floating on Swift

Floating-point numbers are typically used where precision matters, and even 64 bits can struggle to be enough. So unless your floating-point numbers will carry very few decimal digits (6 or fewer), or can tolerate relatively low precision, 32 bits will not be enough and the 64-bit type should be employed. As Apple writes:
Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. (Apple Developer)
No mention is made here of restricting a 32-bit runtime environment to the use of Float, and no such mention is made in Apple's (pre-Swift) 64-bit Transition Guide either. This is because the larger question is whether we need precision (Double) or speed (Float).

Consider this code:
155.000000000000007 > 155 // false
155.000000000000007 < 155 // false
155.000000000000007 == 155 // true
To the human eye the maths here is simply wrong: the first comparison should be true and the others false. But 64 bits do not give us the precision to distinguish between the two numbers, and so they are considered equal. And if we work with 32-bit floating-point numbers, things are even less precise:
Float(155.000007) > 155 // false
Float(155.000007) < 155 // false
Float(155.000007) == 155 // true
For an explanation of the way in which floating-point numbers are represented in binary, and why this relatively low level of precision exists, see the further reading below.

Note: our CoreGraphics friend, CGFloat, should not be thought of as a convenient way of taking the Double/Float decision out of our hands. On a 32-bit device CGFloat becomes a Float, so we are faced with 32-bit precision after only 6 decimal digits, and this can quickly lead to results that differ between 64-bit and 32-bit devices in contexts where that was never intended. (This contrasts with Int and UInt, where the values that must be exceeded are so large that it matters far less that a 64-bit device works with 64-bit numbers while a 32-bit device works with numbers of its own word size.)
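To illustrate the point, here is a small sketch (it assumes CoreGraphics is imported and reuses the value from the Float example above):
import CoreGraphics

let measurement: CGFloat = 155.000007
measurement == 155   // true on a 32-bit device (CGFloat is a Float), false on a 64-bit device (a Double)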

Further reading on transition between integer types

Swift Integer Overflow issue (Marcin Krzyzanowski, Medium)

The Swift Programming Language: The Basics (Apple Developer)

Further reading on floating-point numbers

Computer Representation of Floating Point Numbers (Michael L. Overton, PDF)

Computing with Floating Point Numbers (Michigan Tech)

Weird: Float behaviour.. can someone explain? (Reddit)

Use HUGE numbers in Apple Swift (StackOverflow)

Apple 64-bit Transition Guide (Apple Developer)

Appendix

32-bit device maximum values

CChar = Int8: 127
SignedByte = Int8: 127

CUnsignedChar = UInt8: 255
Byte = UInt8: 255

CShort = Int16: 32767

CUnsignedShort = UInt16: 65535
CChar16 = UInt16: 65535

CInt = Int32: 2147483647

CUnsignedInt = UInt32: 4294967295

CLongLong = Int64: 9223372036854775807

CUnsignedLongLong = UInt64: 18446744073709551615

CLong = Int: 2147483647
Word = Int: 2147483647

CUnsignedLong = UInt: 4294967295

64-bit device maximum values

CChar = Int8: 127
SignedByte = Int8: 127

CUnsignedChar = UInt8: 255
Byte = UInt8: 255

CShort = Int16: 32767

CUnsignedShort = UInt16: 65535
CChar16 = UInt16: 65535

CInt = Int32: 2147483647

CUnsignedInt = UInt32: 4294967295

CLongLong = Int64: 9223372036854775807

CUnsignedLongLong = UInt64: 18446744073709551615

CLong = Int: 9223372036854775807
Word = Int: 9223372036854775807

CUnsignedLong = UInt: 18446744073709551615

