File input and output in SCOPE 3.4 and NOS/BE
* for the time being, no translation into Dutch is planned
In SCOPE 3.4 and NOS/BE systems, nearly all input and output from user programs was done through local files. A local file could reside on a disk drive, a tape drive, or an interactive terminal.
Local filenames were 1-7 alphanumeric characters long; the first character had to be alphabetic. Famed CDC systems programmer G. R. Mansfield wrote a brilliant segment of twelve CPU instructions that tested a 60-bit word to see whether it contained a valid local filename. Operating system convention required local filenames to be left-justified and zero-filled. Mansfield's code depended heavily on the 6-bit-byte Display Code character set.
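The test Mansfield coded in twelve instructions can be sketched in C. This is an illustrative re-creation, not his original code; it assumes the usual Display Code values A..Z = 1..26 and 0..9 = 27..36, with the name left-justified in the 6-bit bytes of a 60-bit word:

```c
#include <stdint.h>

/* Illustrative re-creation (not Mansfield's CPU code).
 * Assumes Display Code values A..Z = 1..26 and 0..9 = 27..36.
 * A valid local filename is 1-7 alphanumeric characters, the first
 * alphabetic, left-justified and zero-filled in the high-order 6-bit
 * bytes of a 60-bit word. Returns 1 if valid, 0 if not. */
int valid_lfn(uint64_t w)
{
    int zero_fill = 0;                        /* have we hit the zero fill? */
    for (int i = 0; i < 10; i++) {            /* ten 6-bit bytes in 60 bits */
        unsigned c = (unsigned)((w >> (54 - 6 * i)) & 077);
        if (i == 0) {
            if (c < 1 || c > 26) return 0;    /* first char must be a letter */
        } else if (i >= 7 || zero_fill) {
            if (c != 0) return 0;             /* at most 7 chars, no gaps    */
        } else if (c == 0) {
            zero_fill = 1;                    /* zero fill starts here       */
        } else if (c > 36) {
            return 0;                         /* not alphanumeric            */
        }
    }
    return 1;
}
```

Under these assumptions, a word holding `TAPE1` (bytes 20, 1, 16, 5, 28, then zeros) passes, while a word whose first byte is a digit, or one with a zero byte in the middle of the name, fails.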
Local filenames were unique only within a job; there could be many jobs with identically-named files that were completely unrelated. In fact, it was difficult for jobs to share files; see Permanent Files.
Permanent files could only be accessed through the use of local files. Either the user associated a local file name of up to seven characters with a permanent file name (pfn), or the first seven characters of the permanent file name became the local file name (lfn).
User programs maintained data structures named File Environment Tables (FETs) through which they issued I/O requests to the SCOPE or NOS/BE operating systems.
Most I/O was, of course, to disk files. This included I/O from a card reader or to a printer. In SCOPE or its successor, NOS/BE, only the Operating System could actually do I/O to these devices. When a card deck was read at the card reader, the Operating System created a disk file containing the contents of the cards. The operating system program responsible for controlling and driving the I/O to printers/plotters and from card readers was the JANUS program: the PPs 1IR and 1IQ.
At execution of a batch job, the set of job instructions and subsequent data sets were made available in a local file named INPUT. Special cards signalled the End-of-Record (level) and End-of-File status.
Printer output was by default written to a file named OUTPUT, which was sent to the print queue at job termination. Similarly, the file PUNCH (not used at TNO), if it existed, was sent to a card punch queue. These 'pre-defined' files had a special "disposition" and were always placed on a disc with a queue property.
Other files could be routed as a new batch job, printed, plotted or previewed by giving them the correct "disposition", usually via the DISPOSE or ROUTE control statements. However, these local files had to be assigned beforehand to a queue device, unless all disk packs for user files had the Q-property, in which case the user did not have to bother with pre-assigning the disk selection.
A local file could be created and associated with a reel of tape on a tape drive via the REQUEST statement. (REQUEST,lfn,NT6250,VSN=tapeno,RING.)
Files could be associated with the user's terminal via the CONNECT statement. In that way, the input had to be typed in by the user at the terminal, or output was written onto the terminal printer or the terminal screen. There were different ways of "connecting" a file, depending upon the character set: Display Code in the 63/64-character set (connect mode 0), the ASCII-95 character set (connect mode 1), or byte output (connect mode 2).
Normally, only brand-new files were connected, but a trivial anomaly of the implementation was that an existing permanent disk file could be connected. In that case, the contents of the disk file would be unavailable until the file was disconnected.
All file I/O was accomplished through a CIO (Circular I/O) Peripheral Program request. CIO used circular buffers in which data transfers could wrap from the last word of a buffer to the first word. The user job and the OS together kept track of buffer information through four 18-bit fields in the File Environment Table in the user’s field length:
FIRST  pointed to the first address of the file's buffer (in the user's address space, a.k.a. field length (FL)).
LIMIT  was the last word address + 1 of the buffer. The length of the buffer was not stated explicitly, as it was slightly more efficient to check for the end of the buffer by comparing a pointer to the contents of LIMIT.
IN     was the address of the next buffer location into which data would be placed. For an input request, the operating system placed data there from a file being read from disk, tape or terminal; for an output request, this was the next place the user program put data to be written to a file.
OUT    was the opposite of IN: the address of the next buffer location containing valid data still to be processed. For an input request, the user job retrieved data recently placed there by the OS; for an output request, the OS removed data from the buffer in order to write it to a file.
If IN == OUT the buffer was empty. As a result, the effective size of the buffer was one word less than the number of words in the buffer. Believe it or not, this bothered me: memory was tight in those days!
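The FIRST/LIMIT/IN/OUT discipline can be sketched in C. This is a modern illustration of the pointer arithmetic, not CDC code; each buffer word is modelled as a uint64_t:

```c
#include <stdint.h>

/* Illustrative sketch of a FET-style circular buffer (not CDC code).
 * IN == OUT means empty, so one word of the buffer is always lost. */
typedef struct {
    uint64_t *first, *limit;   /* buffer start, and last word + 1    */
    uint64_t *in, *out;        /* producer and consumer pointers     */
} fet_t;

static void fet_init(fet_t *f, uint64_t *buf, int nwords) {
    f->first = buf;
    f->limit = buf + nwords;
    f->in = f->out = buf;      /* IN == OUT: buffer starts empty     */
}

/* Advance a pointer one word, wrapping from LIMIT back to FIRST. */
static uint64_t *fet_next(const fet_t *f, uint64_t *p) {
    return (p + 1 == f->limit) ? f->first : p + 1;
}

static int fet_put(fet_t *f, uint64_t w) {   /* returns 0 if full   */
    uint64_t *nxt = fet_next(f, f->in);
    if (nxt == f->out) return 0;             /* would make IN == OUT */
    *f->in = w;
    f->in = nxt;
    return 1;
}

static int fet_get(fet_t *f, uint64_t *w) {  /* returns 0 if empty  */
    if (f->in == f->out) return 0;           /* IN == OUT: empty    */
    *w = *f->out;
    f->out = fet_next(f, f->out);
    return 1;
}
```

Filling the last free slot would make IN equal OUT, which reads as empty; so an n-word buffer holds at most n-1 words, exactly the one-word loss lamented above.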
The use of circular I/O allowed a job to issue an I/O request before it had completely finished processing the previous request. It also allowed a single I/O request to transfer more data than the size of the buffer. This was possible because the user job, for instance, could be processing data and updating the OUT pointer while the operating system was placing data into the buffer from a file. As long as neither side caught up to the other, a single I/O request could go on and on for many buffers' worth of data. This so-called 'pointer-chasing' made full use of the CDC systems architecture. Since I/O was performed by PP programs and user jobs executed in the CPU, it was in fact quite feasible for more than one buffer's worth of data to be transferred in a single request.
(with special thanks to Mark Riordan who provided the basis for this page)