As we were highly interested in advances in compiling techniques and in mini-supercomputing, especially in vector and parallel processing, we included in the benchmark set a mathematical kernel designed and developed by Prof. van der Vorst at the Delft University of Technology. This kernel measured the Mflops speed of the system for various, increasing vector lengths. At the same time it calculated the so-called n½, the vector length at which half of the peak rate is reached. The kernel measured these values for over fifty different mathematical calculations, consisting of a mixture of vector-vector and vector-scalar operations.
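The relation between vector length and achieved rate is commonly described by Hockney's model, r(n) = r∞ · n / (n½ + n), which follows from a linear timing model t(n) = t0 + n/r∞ with n½ = r∞ · t0. A minimal sketch of how r∞ and n½ could be estimated from measured timings; the function name and the synthetic data are illustrative, not taken from the original kernel:

```python
def fit_hockney(lengths, times):
    """Least-squares fit of t(n) = t0 + n/r_inf; returns (r_inf, n_half)."""
    n = len(lengths)
    mean_x = sum(lengths) / n
    mean_t = sum(times) / n
    sxx = sum((x - mean_x) ** 2 for x in lengths)
    sxt = sum((x - mean_x) * (t - mean_t) for x, t in zip(lengths, times))
    slope = sxt / sxx              # = 1 / r_inf (seconds per element)
    t0 = mean_t - slope * mean_x   # vector startup time
    r_inf = 1.0 / slope            # asymptotic peak rate (flops/s)
    n_half = r_inf * t0            # vector length at half the peak rate
    return r_inf, n_half

# Synthetic timings for r_inf = 100 Mflop/s and t0 = 0.5 microseconds,
# so the fit should recover n_half = 50.
lengths = [10, 50, 100, 500, 1000]
times = [0.5e-6 + x / 100e6 for x in lengths]
r_inf, n_half = fit_hockney(lengths, times)
```

On a machine with a long vector startup time, n½ is large, and short-vector workloads stay far below the advertised peak rate.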
This kernel was also compiled and run with the new and, according to CDC, much faster FTN2/VE compiler. This compiler was designed particularly for the CYBER 990, a system with special vector-based hardware. It turned out that the overall throughput of the kernel job on the CYBER 840A was 30% better than with the "old" FTN/VE compiler. Further analysis, however, surprisingly revealed remarkable differences for a number of basic mathematical functions on this scalar system. Determining the maximum value of an array (vector), for instance, required only five Compass (assembler) instructions with the old compiler, whereas the new compiler generated about thirty. As the CPU stack (cache) could not hold that many instructions and not all functional units could be fired at once, it became obvious why this kernel section was several times slower than before. Based on the detailed analysis of this kernel, we showed Control Data that another 10% speed increase was feasible. The suggested improvements were incorporated in a later version of the compiler.
The preparation of our own benchmark jobs required much more effort than expected. Many of the CPU-intensive programs turned out never to have been compiled with the "optimize" compiler option on (FTN4 OPT=2; FTN5 OPT=HIGH). The users explained that their program would not work in optimised mode! It took considerable effort to remove all the bad programming habits. First of all, we compiled each program with the FTN/VE compiler with OPT=HIGH. When the program produced the expected results, we used the new FTN2/VE compiler to improve the vectorisation opportunities of the source code. To remove possible dependencies on the CDC Fortran dialect, the program was also compiled and run on the DEC VAX 8350. All program improvements were communicated to the owners of the programs. As a side effect, a speed improvement of up to 30% was already achieved on the existing system!
Then we were able to run the benchmark set of programs on the then national supercomputer CYBER 205 and on the CDC CYBER 995 (a NOS/VE system with vector-processing facility) at SARA in Amsterdam, on an ETA-10P system in Minneapolis, and on an Alliant FX/40 system with two CPUs of the DGV-TNO (nowadays TNO-NITG).
In October 1988, after reviewing all benchmark results and weighing the technical and financial aspects of the proposals by the various vendors, the FEL decided to lease an ETA-10P mini-supercomputer.
The ETA-10P, code name "Piper", was designed by Neil Lincoln under the supervision of J.E. Thornton, the detailed designer of the CDC 6600 and at that time the technical director of ETA. The ETA-10P CPU board was built using CPU chips that were working but were too slow to be used in the liquid-nitrogen (LN2) cooled ETA-10E processors. The Piper product was introduced when Carl Ledbetter came from IBM. It was a marketing decision to use all chips and build a low-cost, entry-level supercomputer. Even third-world countries could purchase a Piper, but with certain powerful hardware instructions (scatter-gather) disabled, as these could be used to simulate and design nuclear reactions; this slowed down large 10K×10K determinant and matrix calculations.
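For readers unfamiliar with the disabled instructions: a gather reads array elements through an index vector, and a scatter writes them back through the same indices. On the ETA-10 these were single vector hardware instructions; without them, such indirect addressing falls back to slow scalar loops. A minimal plain-Python illustration of the semantics (the helper names are hypothetical):

```python
def gather(values, index):
    """Gather: result[i] = values[index[i]]."""
    return [values[j] for j in index]

def scatter(dest, index, values):
    """Scatter: dest[index[i]] = values[i], in place."""
    for j, v in zip(index, values):
        dest[j] = v

# Pick elements 3, 0 and 2 out of a vector, then write them back
# into a fresh destination vector at the same positions.
values = [10.0, 20.0, 30.0, 40.0]
index = [3, 0, 2]
picked = gather(values, index)   # [40.0, 10.0, 30.0]
out = [0.0] * 4
scatter(out, index, picked)      # [10.0, 0.0, 30.0, 40.0]
```

Sparse-matrix and permutation-heavy codes are dominated by exactly this access pattern, which is why disabling the instructions noticeably slowed large matrix calculations.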
The CPU chip had 284 "pins". Only under a microscope could one see whether the automatic soldering of the chips had gone well. The ETA-10P processor was placed on an air-cooled 44-layer board with 20,000 microscopic drill holes (1.5 mil). Each main CPU board had a shared memory of 4 Mwords by 64 bits. Apart from this shared memory, there was a common memory of 8 Mwords by 64 bits. The ETA-10P had a clock cycle of 24 nanoseconds. Using "pipelining", the ETA-10P CPU had a theoretical peak speed of 146 Mflops.
As operator console, the system used an Apollo/Domain Unix workstation. The console was coupled via a local area network to the ETA-10P system. The programs on the console system monitored the correct behaviour of the ETA-10P system.
On January 4, 1989, two of the system programmers of TNO-FEL flew to Minneapolis to be trained to operate and maintain the ETA Unix System V operating system. The Laboratory would get the third ETA system in the world running under Unix System V. All other ETA systems in the field ran under the ETA Operating System EOS, an ETA implementation of the CYBER 20x operating system VSOS.
During the 2.5 weeks of courses, the average outside temperature was -20 °C (-4 °F). At a certain moment the temperature approached -40 °C (-40 °F), a very special experience, as the extreme dryness caused unexpected electrostatic discharges whenever some metal was touched.
The inside of the ETA-10P: memory blocks on top; CPU board in the middle.
In the ETA factory, located adjacent to a very busy railway interconnection point with many parallel tracks, we were allowed to see how the CPU boards were built. The earlier-mentioned high-precision drill holes could only be drilled when no train was moving on any of the railway tracks. Seismic wave transceivers recorded the train movements. When the vibrations exceeded certain limits, the electronic signals reached the factory much faster than the slow seismic waves through the ground, which left enough time to abort the drilling process.
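The timing margin this gives can be illustrated with a back-of-the-envelope sketch; the sensor distance and wave speed below are assumptions for illustration, not figures from the text:

```python
# Assumed values: seismic P-waves travel through the ground at roughly
# 6 km/s, while an electrical signal in a cable moves at about 2/3 of
# the speed of light, i.e. effectively instantaneously by comparison.
SEISMIC_SPEED_KM_S = 6.0
SIGNAL_SPEED_KM_S = 200_000.0

def warning_time(distance_km):
    """Seconds between the electrical warning and the vibration arriving."""
    return distance_km / SEISMIC_SPEED_KM_S - distance_km / SIGNAL_SPEED_KM_S

# For a hypothetical sensor 5 km up the track, the factory gets
# well under a second of warning to abort the drilling process.
t = warning_time(5.0)
```

Even a fraction of a second is ample for an electronic abort signal, which is why the scheme worked.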
On April 17, 1989, at 8 o'clock Minneapolis time, the Board of Directors of Control Data announced in a press release their decision to stop all developments at ETA Systems. The decision was caused by problems with the financial situation of Control Data as a whole. As ETA Systems would remain in the red for the foreseeable future, the operation was stopped. A personal story by one of the former managers on the ending of ETA Systems can be found on the web.
An "ETA Yearbook" is also available: a collection of personal reminiscences of former ETA staff, compiled five years after the closure of the company (255K PostScript file) and more personal in content. (mirror).
At TNO-FEL, it was decided to stop all ETA-10P acceptance activities immediately. After a couple of weeks of negotiations, it was decided to terminate the contract and to start looking for a replacement mini-supercomputer that would fit in the lease contract.