Computer history: Control Data Cyber systems (1986 – 1989)
The Control Data CYBER 840A: installation and acceptance
In the spring of 1986, the laboratory negotiated the replacement of its CDC CYBER 170-835 with a CDC CYBER 840A. The installation of the CYBER 840A took place at the beginning of October 1986. The disassembly of the CYBER 170-835 started on Friday 3 October at 8.00 am. Before the power down, an additional full backup dump (DUMPF) of all files on the system was made. At 2.00 pm the assembly of the CYBER 840A started. The weekend was used to power up the system and execute the Customer Engineer’s hardware tests. On Tuesday 7 October, the formal function tests were completed successfully. A month later, the system was formally accepted, having shown no problems with downtime or its mean time between failures (MTBF).



Cyber 840 I/O unit (IOU)

A couple of days after the installation, the laboratory was visited by some technicians of the Royal Netherlands Navy. Their hardware requirements stated that signal cables had to be strapped every 15 cm (6 inches). They were highly surprised to see a CPU with a 10 cm thick bundle of signal and power coax cables hanging free in the CPU bay. These cables connected the various parts of the CPU. They could hardly believe that the system had been transported by plane from Minneapolis, Minnesota, and onwards by truck, and worked flawlessly!
The average CPU utilisation of the CYBER 840A was over 55% in the years 1988 and 1989.

The total CYBER 840A-system configuration consisted of:
- The L-shaped CYBER 840A with 32 Mbyte memory, of which 4 MB was reserved for NOS/BE and the remaining 28 MB for NOS/VE. The I4 input/output unit (IOU) comprised 20 PPs, 24 fast I/O channels and a digital clock. Additionally, the IOU contained a TPM – two-port multiplexer – which connected via two RS-232 lines to the console and – optionally – a dial-in modem for remote maintenance. The cooling system had a secondary water cooling circuit with a 24-ton water chiller; the primary circuit in the system bays was a freon-based cooling system. The chiller pumped 220 litres of freon per minute and extracted 24.66 kW of heat (95% to the water side and 5% to the air). The CPU chips used the 10K CMOS chip technology that had first been applied in the earlier CYBER 205 system. The chips were pressed by clips against one of the small cooling pipes of the secondary water circuit.
- Four 885-1 disk units, each with 1.2 Gbyte split over two disk spindles. Each unit was controlled by two of the three disk controllers using the dual-channel control option, and the I/O load was split over the controllers and channels as evenly as possible.
- One 844-41 replaceable disk unit was used for system maintenance and test purposes only.
- A dual-channel 7165 disk controller with an 895-1 disk unit having 2.1 GByte of storage split over four disk spindles. This unit was used by the NOS/VE operating system. The transfer rate was 24 Mbps (3 MB/s).
- A 585 band printer which had a maximum print speed of 2,000 lines per minute and a paper skip speed of 2.5 m/s.
- Five CDCnet device interfaces:
- One mainframe device interface (MDI) for the interconnection of the CYBER with the Ethernet (FELLAN) and a Unit Record Interface (URI) to control the 585 printer. The user interface of CDCnet had a command structure similar to that of the NOS/VE command language. The interface was highly service-oriented. Some commands: %do help, %create_connection service_name=VAX, %display_connections, %change_working_connection connection_name=&a, %display_terminal_attributes, %change_terminal_attributes;
- A network device interface (NDI) that was used as a gateway with an XNS-stack (end of 1988, the XNS-stack was replaced by a fully compliant ISO/OSI TP4-stack) and a TCP/IP-stack;
- Two, later three, terminal device interfaces (TDIs) with initially 56 asynchronous ports up to 38.4 kbps and four (synchronous) HASP-ports up to 50 kbps.
- NOS/BE operating system environment.
- NOS/VE operating system environment: C, Fortran, Pascal, CYBIL, Programming Environment (PE), Accounting Analysis System; FCON (VAX Fortran conversion program), IMSL, Sciconic/VM, Abaqus (finite elements), PC-Connect, RMF/X-Modem.
The configuration graph of April 1988 depicts this layout.

Under NOS/VE, several job classes were defined, each with its own service definition: e.g., the number of simultaneous jobs in execution, priorities, maximum CPU time, maximum time slice, and so on.
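The idea of per-class service definitions can be sketched in a few lines of modern code. The class names and limit values below are invented for illustration; they do not reproduce the laboratory's actual NOS/VE settings.

```python
# Illustrative sketch of per-class service definitions in the spirit of the
# NOS/VE job classes described above. All names and limits are invented.

from dataclasses import dataclass


@dataclass
class JobClass:
    name: str
    max_concurrent_jobs: int   # simultaneous jobs in execution
    priority: int              # scheduling priority (higher = served sooner)
    max_cpu_seconds: int       # maximum CPU time per job
    time_slice_ms: int         # maximum time slice per dispatch


JOB_CLASSES = [
    JobClass("interactive", max_concurrent_jobs=20, priority=10,
             max_cpu_seconds=60, time_slice_ms=50),
    JobClass("batch_short", max_concurrent_jobs=8, priority=5,
             max_cpu_seconds=600, time_slice_ms=200),
    JobClass("batch_long", max_concurrent_jobs=2, priority=1,
             max_cpu_seconds=36_000, time_slice_ms=500),
]


def admit(job_class: JobClass, running: int) -> bool:
    """Admit a new job only if the class still has a free slot."""
    return running < job_class.max_concurrent_jobs


def pick_class(running_per_class: dict) -> str:
    """Return the highest-priority class that can still accept a job."""
    eligible = [c for c in JOB_CLASSES
                if admit(c, running_per_class.get(c.name, 0))]
    return max(eligible, key=lambda c: c.priority).name
```

A scheduler built this way degrades gracefully: when the interactive class is saturated, new work simply falls through to the next-highest-priority class with a free slot.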

NOS/VE: migration path to the future
Apart from the user requirements for more capacity, there was another reason driving the replacement of the CYBER 170-835 with a CYBER 840A. A working group had earlier stated to the Director: ‘FEL had to move to a virtual memory-based operating system supporting the full ASCII character set’. A smooth conversion trajectory had to be followed. The dual-state capability – running the two operating systems NOS/BE (Batch Environment) and NOS/VE (Virtual Environment) on a single hardware system – made it possible to build further on earlier investments in disk units and communication equipment.
After the installation of the CYBER 840A, the users were pushed to move to the NOS/VE operating system. Each month, more services and dual-state priorities were moved to NOS/VE and taken away from NOS/BE. The aim was to have a free hand at the end of the CYBER 840A contract (end of 1989) to select another system vendor and another line of computer systems.
Magnetic tapes with chewing gum
The Computer Operations group developed a trend analysis program to determine whether magnetic tape errors were specific to a tape itself or were caused by one of the three magnetic tape units. The program read its data from the NOS/BE log file “CERFILE”, which contained all logged hardware errors. The trend information regularly made clear that the performance of the read/write heads of a unit was decreasing. Based upon such trend statistics, we were able to alert the hardware technicians that a tape head needed adjustment or would soon require replacement.
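A trend analysis of this kind can be sketched as follows. The actual CERFILE record layout is not described here, so the sketch assumes an invented stand-in format of one error record per line: day number, unit name, tape identifier.

```python
# Sketch of a tape-error trend analysis in the spirit of the program
# described above. The log-record layout (day, unit, tape id per error)
# is an invented stand-in for the real CERFILE format.

from collections import defaultdict


def parse_log(lines):
    """Yield (day, unit, tape_id) for each logged read/write error."""
    for line in lines:
        day, unit, tape_id = line.split()
        yield int(day), unit, tape_id


def errors_per_unit_per_day(lines):
    """Aggregate the raw error records into (unit, day) -> count."""
    counts = defaultdict(int)
    for day, unit, _tape in parse_log(lines):
        counts[(unit, day)] += 1
    return counts


def rising_trend(counts, unit, days, threshold=2.0):
    """Flag a unit whose error count in the second half of the
    observation window grew by `threshold`x over the first half."""
    half = len(days) // 2
    early = sum(counts.get((unit, d), 0) for d in days[:half])
    late = sum(counts.get((unit, d), 0) for d in days[half:])
    return early > 0 and late / early >= threshold
```

The same aggregation can just as easily be keyed on the tape identifier instead of the unit, which is exactly the pivot that later distinguished bad tapes from bad drives.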
At a given moment, the number of read/write errors increased very fast. Careful examination showed that a ‘chewing gum’-like layer had been deposited on the read/write heads of several units. Trend analysis showed that the problems correlated with units that had processed tapes coming from one specific project/user. These tapes contained measurements made aboard one of the Royal Navy vessels. Certainty about the cause was obtained when the magnetic layer of one of the tapes came loose from the tape carrier; one could look straight through the tape. Cleaning the magnetic tapes – running the tape over a sapphire edge that scrapes dirt from it – did not resolve the problem.
After cleaning a unit very carefully, experiments with unused magnetic tapes that had been on board the Navy vessel made it clear that those tapes caused the read/write errors. Other tapes from the same series, which had not been on board, did not show the sticky gum problem.
The tapes of that series had a new smooth silicone finishing layer. As the summer had been very hot, it was plausible that the tape cartridges had been left in full sunshine or in a car trunk, exposing them to high temperatures. This might have caused the disintegration of the silicone finishing layer. The manufacturer did not react to questions… and was subsequently removed from the list of laboratory suppliers.
We started an experiment: a couple of magnetic tapes were placed for a short period in full sunshine, while a control set from the same batch was kept cool in the central computer room. A short time later, we could prove that the heating by sunshine had caused the sticky problems (a real ‘TNO research project’!).
In the meantime, the “chewing gum” plague had spread as a sticky-layer problem across the entire magnetic tape set. Especially affected was the set of backup tapes, which could cause serious problems if a system calamity ever required a recovery.
It was decided to copy the set of one hundred tapes to a new set from another manufacturer to obtain a clean copy. The dirty tapes were processed on only one (slow) tape unit to keep the other tape units clean; this unit was cleaned a couple of times a day. Simultaneously with the copy action, the operational backup tapes were replaced.
A couple of months later, the trend analysis showed that these rigorous measures had overcome the sticky problem. After half a year, the special strict procedures could be lifted.
The Control Data CYBER 930-11
On July 24, 1989, a CDC CYBER 930-11 was installed. The system had the following purposes:
- to support research projects that required Oracle facilities, especially for the Walrus submarine-related projects;
- to support pre- and postprocessing tasks under NOS/VE for the mini-supercomputer that was to be acquired soon by the laboratory;
- to support long-lasting NOS/VE-bound projects when the CYBER 840A was being de-installed after the installation of the mini-supercomputer.
The CDC CYBER 930-11 had two disks of 414 MB each and a (slow) 25/75 ips streaming tape unit. The CYBER 930 had compilers for Fortran, Pascal, and Ada, ran an Oracle database engine, and had a mathematical library (IMSL). To support the higher load of users once the CDC CYBER 840A was de-installed, several straps in the CPU of the CYBER 930-11 were changed and new microcode was loaded, which made the CPU twice as fast.
This enhanced-CPU period (technically the machine was then a CDC 930-31) lasted from May 1990 until December 1990. Additionally, another tape/disk cabinet with two disks of 414 MB each was added to support the remaining projects that had previously run on the CYBER 840A, either until they were converted to another system or until the project ended. These included projects of the ‘Wetenschappelijke Raad voor het Regeringsbeleid’ (the Netherlands Scientific Council for Government Policy, WRR), a paying customer, and several projects with complex and large models that could not be converted without much effort.
