Two different types of computers: analog and digital. In an analog machine, numbers are represented by physical quantities that, when measured, correspond to the number in question. For instance, voltage: push 1 volt through, and it is measured as the number 1. From there, physical components compute basic arithmetic ( + – * / ).
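A toy sketch of my own (not from the book) of what this implies: since an analog value is a physical quantity, every reading carries a little measurement error, and the errors ride along through each operation.

```python
import random

def as_voltage(x, noise=0.01):
    """Represent the number x as a 'voltage' with a small measurement error."""
    return x + random.uniform(-noise, noise)

def analog_add(a, b):
    # Addition is physical: the two voltages are simply summed.
    return as_voltage(a) + as_voltage(b)

result = analog_add(1.0, 2.0)
print(result)  # close to 3.0, but never exactly 3.0
```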
On digital machines, numbers are represented as decimal digits, the same as when we write them. Each decimal digit is represented by a system of ‘markers.’ A marker could be, for example, an electric signal in a wire, or its absence. So a marker can have two states (present, not present).
1 marker = 0, 1 = 2 combinations (not enough)
2 markers = 00, 01, 10, 11 = 4 combinations (not enough)
3 markers = 000, 001, 010, 011, 100, 101, 110, 111 = 8 combinations (not enough)
4 markers = 16 combinations (more than enough!)
So each decimal digit needs a minimum of four markers. The markers could be a group of four wires, each with an electric pulse present or not present. Or they could be electric pulses with positive or negative polarity, or some other system. These pulses would be controlled by various gates. Another option is a ten-valued marker: say, ten wires per digit, where exactly one wire is on and which wire it is indicates the digit. Overkill, but it works.
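The four-marker scheme is what's now called binary-coded decimal. A minimal sketch (function names are my own) of encoding each decimal digit as its own group of four on/off markers:

```python
def digit_to_markers(d):
    """Encode one decimal digit (0-9) as four on/off markers."""
    assert 0 <= d <= 9
    # Read out the four binary places, most significant first.
    return [(d >> i) & 1 for i in (3, 2, 1, 0)]  # e.g. 7 -> [0, 1, 1, 1]

def number_to_markers(n):
    """Encode a whole number digit by digit, four markers per digit."""
    return [digit_to_markers(int(c)) for c in str(n)]

print(number_to_markers(93))  # [[1, 0, 0, 1], [0, 0, 1, 1]]
```

Note that only 10 of the 16 possible four-marker patterns are ever used, which is the "more than enough" slack from the list above.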
Vacuum tubes and transistors are the switching components out of which digital machines are built.
If a number is being passed around a machine, its markers can travel either simultaneously over separate wires (parallel) or one after another over a single wire (serial).
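A small illustration of my own of the difference: the same four markers can occupy four wires in one instant, or one wire across four time steps.

```python
bits = [1, 0, 0, 1]  # the digit 9 as four markers

# Parallel: each marker on its own wire, all present in the same instant.
parallel_wires = {f"wire_{i}": b for i, b in enumerate(bits)}

# Serial: a single wire, one marker per clock tick.
serial_line = [(tick, b) for tick, b in enumerate(bits)]

print(parallel_wires)  # {'wire_0': 1, 'wire_1': 0, 'wire_2': 0, 'wire_3': 1}
print(serial_line)     # [(0, 1), (1, 0), (2, 0), (3, 1)]
```

Parallel trades hardware (more wires) for speed; serial trades time for hardware.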
The author expresses these in binary, and I do not entirely follow what he’s talking about. From what I gather, the point of his discussion is that the basic operations ( + – * / ) are performed through lengthy, repetitive logical processes. We’re so used to doing these calculations in our heads and on paper that the complex nature of the processes is obscured.