
The Control Data CDC 6400:
"The punchcard period"


The Control Data CDC 6400 (64 KWords). The console can be seen in the foreground.
At the right, two CDC 844-21 disk units and in the back CDC 659 magnetic tape units.

A Control Data 6400 system was installed in May 1974 in the computer room of the then Physics Laboratory RVO-TNO. The Control Data 6600 system was developed in 1964 as a "supercomputer" by Seymour R. Cray (architect) and James E. Thornton (detailed design). The Control Data 6400 was introduced by Control Data in 1965 as a simpler version of the 6600, about 2.5 times slower. The Control Data 6400 was leased for its second "life period" to the Physics Laboratory RVO-TNO.

The system was - at that time - very fast. The configuration comprised two tape units, one card reader - capable of reading 600 cards per minute - and one chain printer (thinking about it, I can still see the dancing characters). The architecture of the Control Data 6000 series computers was an interesting model for studies of the newly invented Petri nets as well as computation optimisation techniques.

Practical jokes using punch cards

The users of the system had to hand in their card storage bins to the card reader operator at the I/O counter (see also: Everything about punch cards). The operator took the deck of cards, tamped the deck neatly, put it into the card reader's hopper, started the reader and removed the cards from the stacker. Then the card deck or the bin was returned. The operator also handled the printer and plotter output. All output was put into a "bin" organised by internal telephone number. As many impatient users wanted to wait for their output, the I/O counter became a "social meeting point".

One of the operators was on the watch for an opportunity to play a 'practical joke' on one of the users. A large deck of waste punch cards was collected from the waste baskets near the punching machines. And yes, the user handed in his card deck and started a social talk. His program deck was read in and then sneakily put aside. The prepared deck was put into a punchcard bin and handed over with the words: "Your program is running; these cards are not necessary anymore, are they?". At the same time, the bin was turned over and all cards scattered over the floor. In a short mental movie, the user saw his tedious programming work of many months go to waste...

In the same period, the computer industry had invented a subset of the 80-column punchcard. The standard cards had a tear-off control strip; the small 51-column card was then used to process the data. One application was the Postgiro card. These short punch cards (51 columns) could be read by the card reader when a catch was flipped. Needless to say, reading 80-column cards with the catch set for 51-column cards gave a spectacular crumpling of the 80-column cards at high speed! This phenomenon also gave enough opportunities for a 'practical joke' using a prepared waste-basket card deck.

The system software consisted of the Scope 3.4 operating system and compilers for FTN4 (Fortran), ALGOL 60, BASIC, COBOL, SIMSCRIPT, PERT/TIME, SIMULA, APEX and APT IV Automatic Programmed Tools (mechanical construction or MCAD). The Laboratory tried to develop a compiler for PROSIM, a simulation language being developed at the Technical University Delft (THD).

Architecture of the Control Data 6400 system

The Control Data 6400 comprised a central processing unit (CPU) and ten peripheral processors (PPs). The peripheral processors assisted the central processing unit with all activities that had to do with input and output, scheduling of tasks, presenting information to the operator on the system console (two round 15-inch electron tubes), data transport to/from disk and magnetic tape units, and much more. Up to several hundred PP programs (or overlays thereof) per second were executed in parallel. Thus, parallel processing is a very old expertise at TNO-FEL.

By many, the Control Data 6600 architecture is regarded as the first Reduced Instruction Set Computer (RISC).

The system had 8 address registers (A-registers) that were directly coupled to 8 60-bit data registers (X-registers). Loading a new address value into one of the five A-registers A1-A5 caused a "load" from main memory into the corresponding X-register (X1-X5). Setting a new address value in the A6 or A7 register caused a "store" of the corresponding X-register value into main memory. For indexing operations and simple integer calculations, the system had 8 B-registers. The B0-register was hardwired to zero; this was necessary to avoid problems with certain operations that required no indexing and thus a true zero value. The system had a very limited instruction set of only about 60 instructions.
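The implicit load/store coupling between the A- and X-registers can be sketched in modern code. This is an illustrative model only (the class name, memory size and values are invented, and real 6400 registers were 18 and 60 bits wide in hardware):

```python
# Illustrative sketch (not CDC assembler): the implicit load/store
# coupling between A- and X-registers, modelled in Python.

class Cpu6400:
    def __init__(self, memory):
        self.mem = memory          # central memory, one int per 60-bit word
        self.A = [0] * 8           # address registers A0..A7
        self.X = [0] * 8           # 60-bit data registers X0..X7

    def set_a(self, i, addr):
        """Setting Ai has a side effect on the coupled Xi."""
        self.A[i] = addr
        if 1 <= i <= 5:            # A1..A5: load mem[addr] into Xi
            self.X[i] = self.mem[addr]
        elif i in (6, 7):          # A6, A7: store Xi into mem[addr]
            self.mem[addr] = self.X[i]

mem = [0] * 16
cpu = Cpu6400(mem)
mem[3] = 0o1234          # put a value in memory
cpu.set_a(1, 3)          # "load": X1 <- mem[3]
cpu.X[6] = cpu.X[1] + 1  # compute in the X-registers
cpu.set_a(6, 4)          # "store": mem[4] <- X6
print(oct(mem[4]))       # 0o1235
```

Note how a "load" and a "store" are never explicit instructions here: they are side effects of writing an address register, exactly the property the text describes.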

Every PP had its own memory (4096 words of 12 bits). The PPs had a different, simple instruction set. Data could only be kept in the "accumulator" or in memory; PPs had no additional registers (although the PP-CPU had "internal stages" (P, Q, A) which can be considered a kind of registers). Interlocking of the parallel processing by both the PPs and the CPU was done in software. The system did not have hardware interrupts as other architectures do (although later systems got an exchange jump). A PP program signalled by means of a "completion bit" and waited in a loop until an "interlock bit" in central memory was cleared before proceeding. Looking deeper into the architecture, the 6400 had only one single peripheral processing unit that processed all "PPs" in a cyclic, time-shared ("barrel") fashion. This guaranteed that interlocking was a "unique" occurrence: only one of multiple PP programs trying to lock the same critical system resource could win.
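The interlocking loop can be sketched as follows. All names here are invented; the atomicity of the test-and-set stands in for the barrel scheduler's guarantee that only one PP executes per time slot:

```python
# Hypothetical sketch of the software interlocking described above.
# On the 6400, test-and-set of the interlock bit was effectively atomic
# because only one PP ran per barrel slot; we model that as one step.

INTERLOCK = 0   # one shared "interlock bit" in central memory

def test_and_set():
    """Return the old value of the bit and set it, as one atomic step."""
    global INTERLOCK
    old = INTERLOCK
    INTERLOCK = 1
    return old

def pp_program(work):
    """Spin until the interlock bit is clear, then enter the critical section."""
    global INTERLOCK
    while test_and_set() == 1:
        pass                      # busy-wait loop, as the PP code did
    result = work()               # exclusive access to the shared resource
    INTERLOCK = 0                 # clear the bit so the next PP can win
    return result

print(pp_program(lambda: "tape unit assigned"))
```

This is precisely the spin-lock pattern mentioned further on in the text.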

Nevertheless, bad programming practices could lead to timing problems in which two or more PP programs locked "interlocking channels" in the wrong sequence. Thus, depending on chance, once in a while the first PP requested resource A and then tried to lock resource B at the same split moment that a second PP requested resource B and tried to lock resource A. This led to so-called deadlocks. Even worse, a PP could give up a resource it did not really hold. For the system programmers, this was the moment to take out three or four listings of three to four hundred pages each and start the analysis. Experience often allowed for a quick solution to the problem in one or two hours. By the way, this software interlocking technique was announced by DIGITAL in the 80s as something new (!). Spin-lock, as they called it, provided symmetric multi-processing capabilities.
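The classic cure for this kind of lock-ordering deadlock is to impose one global acquisition order on all resources. A minimal sketch (the PP names, resources and ordering rule are invented for illustration):

```python
# Two "PPs" that would deadlock if PP1 locked A-then-B while PP2 locked
# B-then-A. Acquiring all locks in one fixed global order (here: by id)
# makes the circular wait, and hence the deadlock, impossible.
import threading

lock_a = threading.Lock()   # resource A (say, a disk channel)
lock_b = threading.Lock()   # resource B (say, a tape channel)

def acquire_in_order(*locks):
    """Always lock resources in one fixed global order."""
    for lock in sorted(locks, key=id):
        lock.acquire()

def release(*locks):
    for lock in locks:
        lock.release()

def pp(name, first, second, log):
    acquire_in_order(first, second)   # ordering prevents the deadlock
    log.append(name)
    release(first, second)

log = []
t1 = threading.Thread(target=pp, args=("PP1", lock_a, lock_b, log))
t2 = threading.Thread(target=pp, args=("PP2", lock_b, lock_a, log))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(log))   # ['PP1', 'PP2']
```

With the `acquire_in_order` discipline both threads always take the same lock first, so the "A-then-B versus B-then-A" race described above cannot occur.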

The system's main memory consisted of 65000 words of 60-bit wide core memory. There was no parity or SECDED protection (too expensive). In current terms, the memory size was about 480 kilobytes. To obtain a high transport speed, the memory was divided into 8 "banks". By using only uppercase characters and a limited set of special characters (scientific computations did not require more), characters used "bytes" of only 6 bits (10 per memory word). This was one of the root causes of the year 2000 (Y2K) or millennium problem.
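Packing ten 6-bit characters into one 60-bit word can be illustrated as follows. Note that the character table below is a simplified stand-in, not the real CDC display code:

```python
# Sketch: ten 6-bit characters per 60-bit word, as on the 6000 series.
# The code table is illustrative only (code 0 = blank, then A-Z, 0-9);
# the actual CDC display code assignment differed.

CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def pack(text):
    """Pack up to 10 characters into a single 60-bit integer."""
    word = 0
    for ch in text.ljust(10)[:10]:
        word = (word << 6) | CHARS.index(ch)     # 6 bits per character
    return word

def unpack(word):
    """Extract the 10 6-bit fields of a 60-bit word."""
    chars = []
    for shift in range(54, -1, -6):              # 10 fields of 6 bits
        chars.append(CHARS[(word >> shift) & 0o77])
    return "".join(chars)

w = pack("CDC 6400")
assert w < 2**60                     # fits in one 60-bit memory word
print(repr(unpack(w)))               # 'CDC 6400  '
```

With only 64 codes available there was no room for lowercase, which is why listings and console displays of the era were all uppercase.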

In 1977, expansion of the system memory was required. This needed an additional system bay, which was connected after three days of wire-wrapping. The expansion of the system to 98 Kwords resulted in a doubling of the memory size available to the users for their batch jobs.

The chance of incorrect computation results due to the lack of memory parity was smaller than that due to defects in the CPU or in the system software. A weekly preventive maintenance period of three hours was required. Days with five or six system crashes were not uncommon at that time. Special programs were developed to analyse a "crash" dump tape for "bit drops". A smart comparison was made between the relatively fixed system portion of the memory of a fresh system and the crash dump. As one could trace the dropped (or set) line of bits through absolute memory, the failing bit and memory bank could be computed. This saved a lot of time in figuring out what to repair. To guarantee the correct working of the CPU, a set of batch jobs was continuously present in the background. Idle time was used to verify the correct working order of all CPU instructions and registers.
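A reconstruction of the idea behind that bit-drop analysis: XOR the fixed system area of a fresh image against the crash dump, and a repeating single-bit difference points at one bit position in one bank. The function name, sample data, and the assumption that banks were interleaved on the low address bits are mine:

```python
# Hypothetical sketch of the "bit drop" analysis. Bank interleaving on
# the low address bits (address mod 8) is an assumption for illustration.

def find_bit_drops(fresh, dump, banks=8):
    """Return (address, bit, bank) for every bit that differs."""
    drops = []
    for addr, (good, bad) in enumerate(zip(fresh, dump)):
        diff = good ^ bad            # XOR exposes the differing bits
        bit = 0
        while diff:
            if diff & 1:
                drops.append((addr, bit, addr % banks))
            diff >>= 1
            bit += 1
    return drops

fresh = [0o7777] * 16                 # fixed system area, fresh image
dump = list(fresh)
dump[5] &= ~(1 << 3)                  # simulate bit 3 dropped at address 5
dump[13] &= ~(1 << 3)                 # ...and 8 words later: same bank
print(find_bit_drops(fresh, dump))    # [(5, 3, 5), (13, 3, 5)]
```

The same bit number recurring in the same bank (here bit 3 in bank 5) is exactly the kind of pattern that told the engineers which core plane to repair.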

Disks

In 1974, the Laboratory had two CDC 844-21 disk units on which replaceable disk packs could be mounted. Each disk pack had the enormous capacity of 712 million bits gross, or about 100 MByte. The majority of the disk space was occupied by the operating system, compilers and libraries. The users were not supposed to have much permanent disk space in use. The other technical disk specifications were: 30 to 165 ms positioning time and a transfer speed of 6.8 Mbit/s, or roughly 1 MByte/sec.

Nearly every year, an additional disk unit was installed to keep up with the requirements of the users. Starting in 1975, it became possible to use "double density" disk packs on a new type of unit (CDC 844-41). This provided 200 MB per disk pack. User files that had not been used for a month were automatically removed by 'Operations'.
So-called "attach" jobs were prohibited, but inventive users masked their existence as Fortran programs and so on. As a countermeasure, Operations developed special programs to detect the use of attach jobs.

Once a year or so, a ball bearing of one of the disk units reached the end of its lifetime. When detected early enough, preventive maintenance could replace the ball bearing and nothing bad happened. However, when it was detected too late, a grinding noise could be heard, followed by brown oxide dust coming out of one of the units as the disk head ploughed into the disk pack. Then a tedious cleaning, repair and restore process had to be executed.

As recovering all data in the system was a lengthy 5-6 hour process, every new disk unit and every new disk pack was tested with a full-day acceptance test. As the system CPU did not have to wait for the completion of an I/O activity, multiple PPs could issue disk write and disk read activities. A specially written acceptance test program started by writing half of the disk, read it back sequentially and then started random reading. The ultimate test was to let the disk head jump between the first and the last "cylinder" on the disk pack. In this way, we once got a disk unit into its own resonance frequency so that it started "walking". The next morning, we found the disk unit trying to escape from its channel cables!
The technical engineers of Control Data The Netherlands were not always happy with the "TNO tests"! However, "our" disk units turned out to have a far better mean time between failures (MTBF) than those of other computer centres we knew.
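The phases of that acceptance test can be sketched as below. The `Disk` interface, cylinder count and pattern are invented; the real test drove CDC 844 units through the PPs, not a Python object:

```python
# Sketch of the acceptance-test phases described above: write half the
# disk, read it back sequentially, read randomly, then thrash the head
# between the first and last cylinder. All parameters are illustrative.
import random

class Disk:
    def __init__(self, cylinders, words_per_cylinder):
        self.cylinders = cylinders
        self.data = [[0] * words_per_cylinder for _ in range(cylinders)]
        self.seeks = []                      # head-movement trace

    def write(self, cyl, pattern):
        self.seeks.append(cyl)
        self.data[cyl] = [pattern] * len(self.data[cyl])

    def read(self, cyl):
        self.seeks.append(cyl)
        return self.data[cyl]

def acceptance_test(disk, pattern=0o5252):
    half = disk.cylinders // 2
    for cyl in range(half):                  # 1) write half of the disk
        disk.write(cyl, pattern)
    for cyl in range(half):                  # 2) sequential read-back
        assert all(w == pattern for w in disk.read(cyl))
    for cyl in random.sample(range(half), half // 2):
        disk.read(cyl)                       # 3) random reads
    for _ in range(100):                     # 4) full-stroke seek thrash
        disk.read(0)
        disk.read(disk.cylinders - 1)
    return len(disk.seeks)

print(acceptance_test(Disk(cylinders=404, words_per_cylinder=8)))
```

Phase 4, the full-stroke seek thrash, is the one that once drove the unit into its resonance frequency.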

Half an inch of wire

In the period July to October 1980, the CDC 6400 console locked up between once and a couple of times a day. The result was a "black-out" or blank screen. The system continued to work, but the operators could not "steer" anymore, could not handle requests, and so on. Connecting an oscilloscope to the console output channel resulted either in a spontaneous reappearance of the screen information or in a total system crash. Sometimes other spurious hangs were experienced. Engineers were flown in from the US and from Switzerland. Almost every module was replaced systematically, but the annoying problem did not disappear or move around. Systems programming tried to analyse the crash dumps. They were surprised by the hardware specialist, who detected a coding error by reading the octal dump as if it were a newspaper!

In the end, the earlier described disk acceptance program was used to stimulate input/output to/from a disk unit while the system clock frequency was changed slightly (changed margin settings). It turned out that the timing of some of the input and output channels was skewed a little. Just cutting half an inch of wire off a clock signal line eliminated the problem forever.

This problem caused an enormous loss of CPU hours. At the same time, the Physics Laboratory had two urgent projects to complete that required a lot of system capacity: simulations needed to determine whether the Netherlands Army required new tanks or could upgrade existing ones, and simulations for the Royal Netherlands Navy that calculated design aspects of a new frigate.

A solution was found by Control Data Netherlands. During two weekends, from 8:00 to 20:00 hours, the Laboratory could use the CDC 6600 (machine number 1) at Control Data's office in Rijswijk. The Control Data 6600 was two to 2.5 times faster than the Control Data 6400. The problem was that Rijswijk used the KRONOS operating system, which was incompatible with the NOS/BE operating system at the Physics Laboratory TNO. The solution was the creation of two disk packs with a mini operating system. The smart use of a way to freeze the operating environment resulted in a very fast (2 minutes) startup of the operating system in Rijswijk. The only manual actions were "Equipment Status Table (EST)" changes and the pre-deadstart toggling of the deadstart switches to accommodate different disk unit and channel numbers.

"Design of a computer: The Control Data 6600" (title in 6000 console display lettering!), J. E. Thornton; Scott, Foresman and Company, 1970; Library of Congress Catalog No. 74-96462


