For CS365, our standard computational model will be the Random Access Machine or RAM. This model consists of a CPU with some constant number of registers, which can hold integer values large enough to be used as addresses, together with a large random-access memory. We think of the RAM as being programmed in an abstract version of assembly language. The following operations all take constant time:
- Basic arithmetic operations such as addition, subtraction, multiplication, integer division with quotient and remainder, testing for equality, testing if one number is greater than another, etc.
- Reading or writing a single location in memory whose address is given in one of the registers.
- Control operations like jumps, conditional jumps, etc.
A short way to think about this is that anything that can be done without using a function call or a loop in C is probably a constant time operation.
Operations that involve more than a constant number of registers or memory locations cannot be carried out in constant time.
In practice we will not try to program these devices in any particular abstract assembly language; instead we will write algorithms in English or pseudocode, and use our understanding of what operations take constant time on a RAM to check any claims we might make about the running time of various low-level parts of the algorithms.
A RAM can simulate a TuringMachine and vice versa, but a TuringMachine simulating a RAM in the most natural way may be slowed down by a factor of as much as O(n log n), where n is the size of the RAM's memory. The reason is that a RAM can read any memory location in constant time, while a TuringMachine might have to walk all the way from one end of its tape to the other to read a location far from the last one it visited; since n memory locations of Theta(log n) bits each occupy Theta(n log n) tape cells, such a walk can take Theta(n log n) steps.
Formally, we'll require that each register or memory location hold at most O(log n) bits, where n is the size of the memory (i.e., the number of memory locations); this is enough to address any memory location, but avoids certain kinds of cheating in which arithmetic on absurdly large numbers is used to perform fast parallel computations.