Neurons function as basic logical organs, and basically digital organs: if a neuron requires only one incoming pulse (stimulator) to produce a response, then it is an OR organ; if it requires two incoming pulses, then it is an AND organ. These two, along with simulating “no” can be combined in various ways into any complex logical operation.
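A quick sketch of this idea in code (my own illustration, not von Neumann's notation): model a neuron as a threshold unit that fires when enough incoming pulses arrive. Threshold 1 gives OR, threshold 2 (on two inputs) gives AND, and an inhibitory input simulates "no".

```python
def neuron(inputs, threshold):
    """Fire (return 1) if the incoming stimulation meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

def OR(a, b):
    return neuron([a, b], threshold=1)   # one pulse suffices

def AND(a, b):
    return neuron([a, b], threshold=2)   # both pulses required

def NOT(a):
    # Simulating "no": a tonic (always-on) excitatory input plus an
    # inhibitory input that cancels the stimulus when present.
    return neuron([1, -a], threshold=1)

# These compose into any logical operation, e.g. XOR:
def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))
```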
“Natural componentry favors automata with more, but slower, organs, while the artificial one favors the reverse arrangement of fewer, but faster organs.” Thus “the human nervous system will pick up many logical or informational items, and process them simultaneously,” while a computer “will be more likely to do things successively. . . or at any rate not so many things at a time.” The nervous system is parallel, while computers are serial. But the two cannot always be substituted for one another: some calculations must be done serially, each step depending on the one before it, while other calculations that are natural to do in parallel require immense memory if forced into serial form.
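A sketch of that distinction (my own example, not the book's): a recurrence whose steps each need the previous result is inherently serial, while an elementwise operation has no cross-dependencies and could be done in parallel.

```python
def recurrence(x0, n):
    """x_{k+1} = x_k**2 + 1: each step needs the last result, so it is serial."""
    x = x0
    for _ in range(n):
        x = x * x + 1
    return x

def elementwise_square(values):
    """Each output depends on only one input: trivially parallelizable."""
    return [v * v for v in values]

# recurrence(0, 3) -> 5    (0 -> 1 -> 2 -> 5)
```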
The machine comprises “active” and “memory” organs (he’s including “input” and “output” as part of “memory”).
Active organs: perform the basic logical actions (sensing coincidences and anticoincidences, combining stimuli), and regenerate pulses, amplifying signals to maintain pulse shape and timing.
These functions were performed by (in historical succession): relays, tubes, crystal diodes, ferromagnetic cores, transistors, or by combinations of those.
A modern machine will contain 3,000-30,000 active organs, of which 300-2,000 are dedicated to arithmetic and 200-2,000 to memory. Memory organs require further organs to service and administer them—the memory parts amounting to around 50% of the whole machine.
Memory organs are classed by their “access time”—the time to store a number (removing the number previously stored), and the time to ‘repeat’ the number upon ‘questioning’ (that is, the write and read times, respectively). To classify the speed, you could take either the larger of those two times or their average. If the access time doesn’t depend on the memory address, the memory is called “random access” (RAM).
Memory registers can be built of active organs—which, while fastest, are also most expensive (i.e., built out of vacuum tubes). Thus, for large-memory operations, it’s cost-prohibitive. Previously, relays were used as the active organs, and relay registers were used as the main form of memory.
It is possible, however, to reduce the memory required to solve a problem by considering not the total numbers needed in memory, but the minimum needed in memory at any given time. And if that can be determined, numbers can be distributed between faster and slower memory based on when they are needed—that is, perhaps all the numbers can be stored in the slower memory, while the numbers necessary at the moment are stored in the faster memory. I assume this is how computers now function—everything is stored on the hard drive, while the things absolutely necessary to the current operations are stored in the RAM.
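A hedged sketch of that fast/slow split (my analogy, not the book's): a small fast store holds only the values needed at the moment, backed by a large slow store, with the least-recently-used value evicted when the fast store fills.

```python
from collections import OrderedDict

class TwoLevelMemory:
    """Toy memory hierarchy: a small fast store in front of a large slow one."""

    def __init__(self, fast_capacity):
        self.fast = OrderedDict()   # small, fast (e.g. registers / RAM)
        self.slow = {}              # large, slow (e.g. drum / disk)
        self.capacity = fast_capacity

    def store(self, address, value):
        self.slow[address] = value

    def fetch(self, address):
        if address in self.fast:            # fast hit
            self.fast.move_to_end(address)
            return self.fast[address]
        value = self.slow[address]          # slow fetch, then promote
        self.fast[address] = value
        if len(self.fast) > self.capacity:  # evict least recently used
            self.fast.popitem(last=False)
        return value
```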
Magnetic drums and tapes are currently (1950s) in use, while magnetic discs are being explored (and now, 2015, becoming obsolete in favor of SSDs).
Inputs are punched cards or paper tapes, outputs are printed or punched paper—that is, means for the machine to communicate with the outside world.
Words are saved directly to named numerical addresses within the memory of the machine—the address is never ambiguous.
Some machines are combinations of digital and analog parts that may have a common control, or separate controls. In the case of separate controls, the controls must communicate with each other, which requires D/A and A/D organs. D/A means building up a continuous quantity from the digital expression (e.g., digital 10 becomes an analog buildup to 10 V), while A/D means measuring a continuous quantity for digital expression.
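In code, the two conversions are just a scaling and a rounding (my illustration; the 1 V-per-unit scale is invented to match the 10 V example above):

```python
VOLTS_PER_UNIT = 1.0  # assumed scale: digital 10 -> 10 V

def digital_to_analog(n):
    """D/A: build up a continuous quantity from a digital value."""
    return n * VOLTS_PER_UNIT

def analog_to_digital(volts):
    """A/D: measure a continuous quantity into the nearest digital step."""
    return round(volts / VOLTS_PER_UNIT)
```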
Possible description of floating-point resolution using a “pulse density” system, where a number is more or less precise depending on how large it is?
Logical Control is essentially how one determines the fixed order to perform logical functions to produce the desired result. This is programming the machine.
So you have the numerical data that is input, you have the organs that perform basic mathematical functions, you have instructions as to the order in which those functions are performed, and you then have the output data.
Beyond being able to perform basic functions, the machine must also have the capacity to perform a sequence/logical pattern to solve mathematical problems, which is actually the goal in the first place. So in analog machines there must be enough components that can perform basic functions as needed for the calculation. They must be connected to each other in a fixed setting for the duration of solving the problem.
It is possible to have only one component for each basic function, but this then requires that there be another organ in the machine to remember numbers—it must be able to remember a number, replace the number in its memory, and also repeat upon being questioned for it. This is called ‘memory’ and the totality of memory organs is a ‘memory register’ and is the main ‘mode of control’ for digital machines.
There are a number of possibilities for controlling this sequence through what is an iterative approach—I’m assuming through a series of IF/THEN sorts of controls.
In “memory stored control”, rather than having physical controls, the controls are stored in the machine’s memory—essentially, the “program” is no longer a physical thing (i.e., a punchcard), but can be stored in the same way that numbers can be stored—within the machine itself.
What makes this remarkable further is the numbers and programs can begin manipulating themselves from within the machine.
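A toy stored-program machine (my own construction, not the book's) makes this concrete: the orders live in the same memory as the data, so one instruction can rewrite another before it runs.

```python
# Each memory cell holds either data or an order; here, all orders.
memory = [
    ("LOAD", 7),                        # 0: put the constant 7 in the accumulator
    ("STORE_INSTR", 3, ("LOAD", 99)),   # 1: overwrite the order at address 3
    ("ADD", 1),                         # 2: accumulator += 1
    ("LOAD", 0),                        # 3: will be replaced before it executes
    ("HALT",),                          # 4: stop, yielding the accumulator
]

def run(memory):
    acc, pc = 0, 0
    while True:
        op = memory[pc]
        if op[0] == "LOAD":
            acc = op[1]
        elif op[0] == "ADD":
            acc += op[1]
        elif op[0] == "STORE_INSTR":
            memory[op[1]] = op[2]   # the program manipulating itself
        elif op[0] == "HALT":
            return acc
        pc += 1

# run(memory) -> 99, because instruction 1 rewrote instruction 3 in flight.
```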
These types of controls can be combined, so that a memory-stored control could consist of orders that reset plugged (physical) controls, change plugged control setups, and hand control of the machine over to the plugged “regime”. By the same token, the plugged “scheme” should be able to hand control of the machine back over to the memory-stored control scheme at some point in its sequence.
Two different types of computers: analog and digital. In analog, numbers are represented by physical quantities of something, that, when measured, correspond to the number in question. For instance, voltage. You push 1 volt through, and it’s measured as equaling the number 1. From there, components compute basic arithmetic ( + – * / ).
On digital machines, numbers are represented as decimal digits, the same as when we write them. Each decimal digit is represented by a system of ‘markers.’ A marker could be an electric signal in a wire, or its absence, for example. So, this marker can have two states (present, not present).
1 marker = 0, 1 = 2 combinations (not enough)
2 markers = 00, 01, 10, 11 = 4 combinations (not enough)
3 markers = 000, 001, 010, 011, 100, 101, 110, 111 = 8 combinations (not enough)
4 markers = 16 combinations (more than enough!)
So, for each decimal digit, four markers must be used. Markers could be a group of four wires, where an electric pulse is present or not present on each. Or they could be electric pulses with positive or negative polarity, or other systems. So the minimum needed is four markers, and these pulses would be controlled by various gates. Another option is a ten-valued marker represented, say, by ten wires, exactly one of which is live at a time! Overkill, but it works.
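The counting above reduces to one line (my gloss): a group of k two-state markers distinguishes 2**k values, so a base-b digit needs the smallest k with 2**k >= b.

```python
from math import ceil, log2

def markers_needed(base):
    """Minimum number of two-state markers to represent one base-`base` digit."""
    return ceil(log2(base))

# markers_needed(10) -> 4, matching the count worked out above.
```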
Vacuum-tube machines and transistor machines are types of digital machines.
If a number is being moved around a machine, its digits can be transmitted either simultaneously (in parallel) or one after another (in serial).
The author expresses these in binary, and I do not particularly understand what he’s talking about. From what I gather, the point of his discussion is that the basic operations ( + – * / ) are performed through lengthy, repetitive logical processes. We’re so used to doing these calculations in our heads and on paper that the complex nature of the processes is obscured.
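To see the point concretely (my example, not the author's): even addition reduces to a repetitive logical process. Adding two binary numbers is just the half-adder rules, XOR for the sum bit and AND for the carry, applied column by column:

```python
def add_binary(a, b):
    """Add two non-negative integers using only bitwise logic, 8 bits wide."""
    result, carry = 0, 0
    for bit in range(8):                     # 8-bit words, for concreteness
        x = (a >> bit) & 1
        y = (b >> bit) & 1
        s = x ^ y ^ carry                    # sum bit of this column
        carry = (x & y) | (carry & (x ^ y))  # carry into the next column
        result |= s << bit
    return result

# add_binary(5, 3) -> 8: eight rounds of gate logic for one small sum.
```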