Computer history: The Control Data CDC 6400


The Control Data CDC 6400: “The punchcard period”

A Control Data 6400 system was installed in May 1974 in the computer room of the Physics Laboratory RVO-TNO. The Control Data 6600 had been developed in 1964 as a “supercomputer” by Seymour R. Cray (architect) and James E. Thornton (detailed design). The Control Data 6400 was introduced by Control Data in 1965 as a simpler version of the 6600, about 2.5 times slower. The machine was leased to the Physics Laboratory RVO-TNO for a second life. In December 1974, a Control Data System 17 minicomputer with a CDC 6400 channel interface was added to the configuration.

The Control Data CDC 6400 (64 KWords). The console can be seen in the foreground. At the right, two CDC 844-21 disk units; in the back, the CDC 659 magnetic tape units.


The system was – at that time – very fast. The configuration comprised two tape units, one card reader – able to read 600 cards per minute – and one chain printer (thinking about it, I still remember the dancing characters of a print line). The architecture of the Control Data 6000 series computers was an interesting model and test case for studies of the newly invented Petri nets as well as computation optimisation techniques.

Practical jokes using punch cards

The users of the system had to hand in their 2000-card storage bins to the card reader operator at the I/O counter (see also: Everything about punchcards). The operator took the deck of cards, tamped it neat, put it into the card reader’s input hopper, started the reader and later removed the cards from the output hopper. Then, the card deck or the bin was returned. The operator also handled the printer and plotter output. All printed output was put into a ‘bin’ organised by internal telephone numbers. As many impatient users anxiously wanted their computer output, the I/O counter became a social meeting point.

One of the operators was on the watch for an opportunity to play a ‘practical joke’ on one of the users. Earlier, a large deck of waste punch cards had been collected from the wastebaskets near the punching machines. And yes, the specific user handed in his card deck and started a social talk. His program deck was read in and then sneakily put aside. The prepared deck of waste punchcards, with similar cross-markings on the topside, was put into a punchcard bin and handed over with the words: “Your program is running, so these cards are not necessary anymore, are they?”. At the same moment, the bin was turned over and all cards were scattered over the floor. As in a short movie, the user saw his tedious programming work of many months pass before his eyes …

In the same period, the computer industry invented a special version of the 80-column punchcard. The standard cards had a tear-off control strip. The remaining small 51-column card was then used to process the data (elderly people may remember the Postgiro cards). These short punch cards could be read by the same card reader when a catch was flipped. Needless to say, reading 80-column cards with the catch in the 51-column position gave a spectacular crumpling of the 80-column cards at high speed! This phenomenon also gave enough opportunities for ‘practical jokes’ using a prepared waste card deck.

The system software consisted of the Scope 3.4 operating system and compilers for FTN4 (Fortran), ALGOL 60, BASIC, COBOL, SIMSCRIPT, PERT/TIME, SIMULA, APEX and APT IV (Automatically Programmed Tool, for mechanical construction or MCAD). The Laboratory tried to develop a compiler for PROSIM, a simulation language being developed at the Technical University Delft (THD). Moreover, an assembler for Data General NOVA minicomputers, 16-bit machines with 32K words, ran on the Control Data 6400.

The architecture of the Control Data 6400 system

The Control Data 6400 comprised a central processing unit (CPU) and ten peripheral processors (PPs). The PPs assisted the central processing unit with everything that had to do with input and output: scheduling of tasks, presenting information to the operator on the system console (two round 15″ cathode-ray tubes), data transport to/from disk and magnetic tape units, and much more. Up to several hundred PP programs (or overlays thereof) per second were executed in parallel. Parallel processing is therefore a very old expertise at TNO-FEL. The Control Data 6600 architecture is regarded by many as the first Reduced Instruction Set Computer (RISC).
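In fact, the ten PPs were ten register states rotated through one physical processor, the so-called ‘barrel’: each logical PP got one instruction slot per rotation. A minimal modern sketch of that scheduling idea (in Python, purely illustrative; names and structure are not from CDC software):

```python
class Barrel:
    """Ten (here: any number of) logical PP contexts, stepped one
    instruction per rotation by a single physical processor."""

    def __init__(self, programs):
        self.programs = [list(p) for p in programs]  # each a list of steps
        self.trace = []                              # (pp number, step) pairs

    def run(self):
        # Keep rotating until every logical PP has finished its program.
        while any(self.programs):
            for pp, prog in enumerate(self.programs):  # one slot per PP
                if prog:
                    self.trace.append((pp, prog.pop(0)))
        return self.trace

# Two tiny "PP programs" interleaved by the barrel:
barrel = Barrel([["a1", "a2"], ["b1"]])
trace = barrel.run()
```

Because the barrel advances each PP by exactly one step per rotation, the trace interleaves strictly: `(0, "a1"), (1, "b1"), (0, "a2")`.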

The system had 8 address registers (A-registers) that were directly coupled to 8 60-bit data registers (X-registers). Loading a new address value into one of the registers A1-A5 caused a ‘load’ from main memory into the corresponding X-register (X1-X5). Setting a new address value in the A6 or A7 register caused the corresponding X-register value to be stored in main memory. For indexing operations and simple integer arithmetic, the system had 8 B-registers. The B0 register was hardwired to zero; this avoided problems with operations that required no indexing and thus a true zero value. The system had a very limited instruction set with only 60 instructions.
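The unusual part is that the load and store are side effects of writing an A-register. A small Python model of just that coupling (a sketch, not CDC assembler; the class and method names are my own):

```python
class CDC6400Registers:
    """Toy model of the A/X/B register coupling described above."""

    def __init__(self, memory):
        self.mem = memory        # central memory, one 60-bit word per cell
        self.A = [0] * 8         # address registers
        self.X = [0] * 8         # 60-bit data registers
        self.B = [0] * 8         # index registers; B0 stays zero

    def set_A(self, i, address):
        self.A[i] = address
        if 1 <= i <= 5:          # A1-A5: implicit load from memory
            self.X[i] = self.mem[address]
        elif i in (6, 7):        # A6-A7: implicit store to memory
            self.mem[address] = self.X[i]

    def set_B(self, i, value):
        if i != 0:               # B0 is hardwired to zero
            self.B[i] = value

mem = [0] * 16
mem[3] = 0o1234                  # put a value in memory word 3
regs = CDC6400Registers(mem)
regs.set_A(1, 3)                 # side effect: X1 <- mem[3]
regs.X[6] = regs.X[1] + 1
regs.set_A(6, 4)                 # side effect: mem[4] <- X6
```

After this sequence, X1 holds the loaded word and memory word 4 holds the incremented result; any attempt to set B0 silently leaves it at zero.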

Every PP had its own memory (4096 words of 12 bits). The PPs had a different, simple instruction set. Data could only be kept in the ‘accumulator’ or in memory; PPs had no additional registers (although the PP hardware had internal stages, P, Q and A, which can be considered a kind of registers). The interlocking of the parallel processing by both the PPs and the CPU was done using ‘soft interrupts’. The system did not have hardware interrupts as other architectures had (although later systems got an exchange jump). A PP program signalled the completion of a request using a completion bit and waited in a loop until an ‘interlock bit’ in central memory was cleared before proceeding. Looking deeper into the architecture, the Control Data 6400 had only one single peripheral processing unit that executed all “PPs” in a (barrel) cyclic, time-shared fashion. This guaranteed that each interlocking operation was a unique occurrence: only one of multiple PP programs trying to lock the same critical system resource could win.

Nevertheless, bad programming practices could lead to timing problems in which two or more PP programs locked two ‘interlocking channels’ in the wrong sequence. Thus, depending on timing, once in a while the first PP locked resource A and then tried to lock resource B at the same split second that a second PP locked resource B and tried to lock resource A. This led to so-called deadlocks. Even worse, a PP could give up a resource it did not possess. For the system programmers, that was the moment to take out three or four PP program listings of three to four hundred pages each and start the analysis. Experience often allowed a quick solution to the problem within one or two hours. By the way, this software interlocking technique was announced by DIGITAL in the 1980s as something completely new! ‘Spin-lock’, as they called it, provided symmetric multi-processing capabilities.
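The cure for this class of deadlock is as old as the problem: make every program acquire the shared resources in one fixed global order, so a circular wait cannot arise. A minimal modern illustration with Python threads standing in for PP programs (names are illustrative, not CDC software):

```python
import threading

# Two shared resources, standing in for two 'interlocking channels'.
lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, log, name):
    # Acquire both resources, do the "work", release in reverse order.
    with first:
        with second:
            log.append(name)

def run_with_fixed_order():
    """Both workers lock A before B, so no circular wait can occur.
    (Passing the locks in opposite orders to the two workers is exactly
    the bug described above and can deadlock.)"""
    log = []
    t1 = threading.Thread(target=worker, args=(lock_a, lock_b, log, "PP1"))
    t2 = threading.Thread(target=worker, args=(lock_a, lock_b, log, "PP2"))
    t1.start(); t2.start()
    t1.join(); t2.join()
    return log
```

With the fixed A-before-B order both workers always complete; swapping the lock order for one of them reintroduces the split-second race the system programmers had to hunt through hundreds of listing pages.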

The system’s main memory consisted of 65,536 words of 60-bit wide core memory. There was no (too expensive) parity or SecDed protection (in current terms, the memory size was 480 KByte). To obtain a high transport speed, the memory was divided into eight ‘banks’. By using only uppercase characters and a limited set of special characters (scientific computations did not require more), characters used ‘bytes’ of only 6 bits (10 per memory word). This condensed style of programming was one of the root causes of the year 2000 (Y2K) or millennium problem.
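Packing ten 6-bit characters into one 60-bit word is easy to demonstrate. A hedged sketch with a toy uppercase-only code table (the real CDC display code assignments differ; the table below is illustrative):

```python
# Toy 6-bit character code: A=1 .. Z=26, blank=0o55 (illustrative values only).
CODE = {ch: i + 1 for i, ch in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ")}
CODE[" "] = 0o55

def pack_word(text):
    """Pack exactly 10 characters into one 60-bit integer, 6 bits each."""
    assert len(text) == 10
    word = 0
    for ch in text:
        word = (word << 6) | CODE[ch]
    return word

def unpack_word(word):
    """Recover the 10 characters from a packed 60-bit word."""
    inv = {v: k for k, v in CODE.items()}
    return "".join(inv[(word >> shift) & 0o77]
                   for shift in range(54, -1, -6))

packed = pack_word("HELLO WORL")     # any 10-character line fragment
```

The round trip is lossless, and the packed value always fits in 60 bits; note that two decimal digits of a year would cost only 12 bits, which is exactly the frugality that later fed the Y2K problem.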

In 1977, an expansion of the system memory was required. This needed an additional, third system bay, which was connected after three days of wire-wrapping. The expansion of the system to 98 Kwords resulted in a doubling of the memory size available to the users for their batch jobs.

The chance of incorrect computational results due to the lack of memory parity protection was smaller than that due to defects in the CPU or the system software. A weekly preventive maintenance period of three hours was required for the system. Days with five or six system crashes were not uncommon at that time. Special programs were developed to analyse a ‘crash’ dump tape for ‘bit drops’. A smart comparison was made between the relatively fixed portion of the memory of a fresh, correctly operating system and the same memory part of the crash dump. Visual analysis of the comparison allowed quick recognition of bit drops recurring in every 8th word. One could then compute the dropped (or set) line of bits through absolute memory, resulting in the address of the failing memory block or memory driver logic. This saved a lot of time in figuring out what to repair, especially when the error was intermittent. To guarantee the correct working of the CPU, a set of batch jobs continuously performing a small set of computations was present in the background. In case of a failure, the job hung, flashing a message to the operator.
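The every-8th-word pattern follows directly from the eight-way bank interleave: word address modulo 8 selects the bank, so a single failing driver corrupts one bit position in one residue class. A modern sketch of that analysis (illustrative Python, not the original dump-analysis program):

```python
def find_bank_fault(reference, crash, n_banks=8):
    """Compare a reference dump with a crash dump word by word.
    Return (bank, bit) if every difference is the same bit position in
    the same interleaved bank (addr % n_banks); otherwise None."""
    hits = set()
    for addr, (good, bad) in enumerate(zip(reference, crash)):
        diff = good ^ bad
        while diff:
            bit = (diff & -diff).bit_length() - 1   # lowest differing bit
            hits.add((addr % n_banks, bit))
            diff &= diff - 1                        # clear that bit
    return hits.pop() if len(hits) == 1 else None

# Simulate a dropped bit 7 in every word served by bank 3:
ref = [0o7777777777] * 64
crash = [w & ~(1 << 7) if a % 8 == 3 else w for a, w in enumerate(ref)]
```

Running `find_bank_fault(ref, crash)` pins the fault to bank 3, bit 7; identical dumps yield no verdict.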


In 1974, the Laboratory had two CDC 844-21 disk units on which replaceable disk packs could be mounted. Each disk pack had an “enormous” capacity of 712 million bits gross, or 100 MByte. The majority of the disk space was occupied by the operating system, compilers and libraries; the users were not supposed to have much permanent disk space in use. The other technical disk specifications were: 30 to 165 ms positioning time and a transfer speed of 6.8 Mbit/s, or roughly 1 MByte/s.

Almost every year, an additional disk unit was installed to keep up with the user storage requirements. Starting in 1975, it became possible to use “double density” disks on a new type of unit (CDC 844-41). This provided 200 MB per disk pack. User files that were not used for a month were automatically removed by ‘Operations’. So-called “attach”-jobs were prohibited, but inventive users masked their existence as Fortran programs and so on. Operations developed countermeasures such as special programs to detect the use of such attach-jobs.

About once a year, a ball bearing of one of the disk units reached the end of its lifetime. When detected early enough, preventive maintenance could replace the ball bearing and nothing bad happened. However, when one was too late, a grinding noise could be heard, followed by a cloud of brown oxide dust coming out of one of the disk units as the disk head ploughed into the disk pack. Then a tedious cleaning, repair and restore process had to be executed.

As recovering all data in the system was a lengthy five to six-hour process, every new disk unit and every new disk pack was tested using a full-day acceptance test. As the system CPU did not have to wait for the completion of an I/O activity, several PPs could issue disk writing and reading activities in parallel. A specially written acceptance test program wrote half of the disk, read it back sequentially and then started random reading. The ultimate test was to let the disk head jump between the first and the last cylinder of the disk pack. In this way, we once drove a disk unit into its resonant frequency so that it started ‘walking’. The next morning, we found the disk unit at the other end of the computer room, trying to escape from its channel cables! The technical engineers of Control Data The Netherlands were not always glad about the “TNO tests”! However, our disk units turned out to have a far better mean time between failures (MTBF) than those of other computer centres we knew.
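The access pattern of that acceptance test can be sketched as a simple operation generator (illustrative Python; the cylinder count and phase sizes are assumptions, not the original program’s parameters):

```python
import random

def acceptance_pattern(n_cylinders=411, strokes=6, rng=None):
    """Generate the test phases described above: write half the disk,
    read it back sequentially, read randomly, then full-stroke seeks
    between the first and last cylinder."""
    rng = rng or random.Random(0)
    half = n_cylinders // 2
    ops = [("write", c) for c in range(half)]                    # write half
    ops += [("read", c) for c in range(half)]                    # sequential
    ops += [("read", rng.randrange(half)) for _ in range(half)]  # random
    for i in range(strokes):                                     # full stroke
        ops.append(("seek", 0 if i % 2 == 0 else n_cylinders - 1))
    return ops

ops = acceptance_pattern()
```

The alternating full-stroke seeks at the end are exactly the phase that periodically excited a unit’s resonant frequency.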

Half an inch of wire

In the period July to October 1980, the CDC 6400 console regularly locked up, sometimes after a couple of days but sometimes also a couple of times a day. The result was a black-out or blank screen. The system continued to work, but the operators could not ‘steer’ anymore, could not handle tape requests and so on. Connecting an oscilloscope to the console output channel resulted either in a spontaneous reappearance of the screen information or in a total system crash. Sometimes other spurious hangs were experienced. Engineers were flown in from the US and Switzerland. Almost every module was systematically replaced, but the annoying problem did not disappear or move around. Systems programming tried to analyse the crash dumps and was surprised by the hardware specialist, who detected a coding error by reading the octal dump as if it were a newspaper!
In the end, the earlier described disk acceptance program (fast parallel read) was used to stimulate input/output to/from a disk unit while the system clock frequency was changed slightly (changed margin settings). It turned out that the timing of some of the input and output channels was skewed a little. Just cutting half an inch of wire off a clock signal eliminated the problem forever.
This problem caused an enormous loss of CPU hours. At the same time, the Physics Laboratory had two urgent projects to complete that required a lot of system capacity: simulations that were required to make clear whether or not the Netherlands Army required a new type of tank or could upgrade existing ones, and simulations for the Royal Netherlands Navy that calculated a new frigate design.

A solution was found by Control Data Netherlands. During two weekends, from 8:00 to 20:00, the Laboratory could use the CDC 6600 (machine number 1) at Control Data’s office in Rijswijk. The Control Data 6600 was two to 2.5 times faster than the Control Data 6400. The problem was that Rijswijk ran the KRONOS operating system, which was incompatible with the NOS/BE operating system at the Physics Laboratory TNO. The solution was the creation of two disk packs with a mini operating system. The smart use of a new operating system feature to freeze the operating environment resulted in a very fast (less than 2 minutes) start-up of the operating system in Rijswijk. The only manual actions were “Equipment Status Table (EST)” changes and the pre-deadstart toggling of the deadstart switches to accommodate the different disk units and channel numbers.

Anecdote: miniskirt perils

In 1978, a new printer arrived that was faster, produced fewer ‘dancing’ letters and made less noise. The new 580 printer also had an automatic ‘table’ where the output was neatly folded. As usual, the scale floor plan was adjusted until the right location for each piece of equipment was determined. Holes were cut in the tiles of the raised computer floor and the electrical and signal cables were prepared. However, something had been overlooked: the door of the new printer opened in the opposite direction to that of the old printer.
Because she found computers extremely interesting, one of the secretaries regularly worked as a console operator. She also loved hip miniskirts. The, at that time, mainly male Laboratory population, including many military detachments, loved those miniskirts as well. However, removing the output from the printer was ergonomically a less successful type of work for someone in a miniskirt. To curb the rapidly growing social event at the desk, the decision was made to turn the printer 90 degrees. The view for those behind the console improved a lot.

Both chain printers, with the card reader in front

System software

The Scope 3.4 operating system, later renamed Network Operating System/Batch Environment (NOS/BE), was delivered together with the compilers in the form of source code. For the System Programming department at the TNO Physics Laboratory (PhL), it was a kind of sport to be the first computer centre in the world to have new versions (“levels”) of the operating system in production. Besides the advantage of the new possibilities for the users, there was the disadvantage that system errors were often only found ‘in the field’, in an operational environment. The variety of problems and system errors that often required a fast solution meant that a lot of experience was gained in quickly analysing an error, hypothesising, generating a solution, and testing and fixing it during the ‘happy hour’ for system programming (system time from 17.30 to 18.30).

The Laboratory, like all other computer centres in the Netherlands, had opted for the so-called 63-character set, whereas Control Data only tested systems with the 64-character set in America. Unsatisfactory code, or code from “new” programmers, yielded one or more errors with almost every new release, which we corrected at the TNO Physics Laboratory and, with some misgivings, made public through the Problem Reporting System (PSR) mechanism. Every two weeks, Control Data sent all computer centres a set of microfiches with all complaints and solutions collected worldwide. At every release level, it was exciting to see whether we were the first to report the errors we had found, or whether our colleagues from the University of Arizona had beaten us to it …


[Chart: Software problems reported by the Laboratory (site code PLTN), with code (solved)]

In addition to corrections for system errors, a lot of extra code was developed, much of it to ease the operator’s work at the operator console. Where the standard NOS/BE operating system required the interplay of two or three screens to type in a command, we displayed all the necessary information in condensed form on a single screen. In several operator commands, typing the complete 7-character job name was replaced by typing just the two or three-digit ordinal of the job, or by completion of the remainder of the job name. In this respect, the Laboratory was far ahead of what would later be called the ‘ergonomic workplace’.

Anecdote: The director received an offending letter

The Programming section of the Computer Group wrote and maintained various programs for the Armed Forces. For exercises of the communication centres of the Royal Netherlands Army, a program was written to deliver random trigrams that had to meet several specific requirements. To keep the work of the conscripts who had to make the connections interesting, the ‘random’ parameters were varied in such a way that the largest possible number of offensive three-letter words appeared in the lists supplied to the Army client. The conscripted soldiers valued this highly, until a general arrived to inspect the exercise in the communication bunker. “New trigram?” “K..T” … The general: “On report!” “But General, this list shows ‘K..T‘ as the next trigram, as you can see.” The Director of the Physics Laboratory received a letter from the General with thanks for the timely delivery of the random trigrams and the assignment to create future lists with trigrams in a less random way. In the letter, the Army client listed the Dutch offensive three-letter words that had to be filtered out in the future. This is probably the only specification ever submitted to TNO in which TNO’s customer spelled out a set of offensive three-letter words! Incidentally, it became a sport for the programmer to find other, unlisted ‘offensive’ trigrams for his customers, the conscripts!
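The task the General’s letter actually specified, generating random trigrams while rejecting a client-supplied blocklist, is a few lines of code today. A playful sketch (illustrative Python; the blocklist entry below is a harmless placeholder, not from the letter):

```python
import random
import string

def trigrams(count, blocklist, rng=None):
    """Generate `count` random uppercase three-letter groups,
    skipping any trigram that appears in `blocklist`."""
    rng = rng or random.Random(42)
    out = []
    while len(out) < count:
        t = "".join(rng.choice(string.ascii_uppercase) for _ in range(3))
        if t not in blocklist:
            out.append(t)
    return out

groups = trigrams(50, blocklist={"XXX"})   # placeholder blocklist entry
```

Rejection sampling keeps the remaining trigrams uniformly distributed over the allowed set, which is exactly the “less random” lists the Army asked for, minus the fun.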


“Design of a Computer: The Control Data 6600” (title in 6000 console display letters!), J.E. Thornton; Scott, Foresman and Company, 1970; Library of Congress Catalog No. 74-96462.