Remote Sensing: PHARS and PHARUS (1990 – 1999)
Summary
The PHARS and PHARUS systems (1990–1999) were developed for advanced radar remote sensing of the earth's surface. PHARS was a prototype synthetic aperture radar (SAR) built to gain experience with the technique, while PHARUS was a fully polarimetric C-band airborne SAR with a modular phased-array antenna. These systems provided detailed imaging of land and sea for military and scientific purposes, advancing detection and imaging capabilities.
PHased ARray Universal SAR (PHARUS): a polarimetric C-band airborne SAR
PHARUS (PHased ARray Universal SAR) was a fully polarimetric C-band (5.3 GHz) airborne Synthetic Aperture Radar (SAR) that was used to image the earth's surface. It was designed and built by TNO-FEL in The Hague, NLR in Amsterdam and the Delft University of Technology under program management of the Netherlands Agency for Aerospace Programs (NIVR) in Delft. TNO-FEL was the main contractor and was responsible for project management. Financial support for the project was provided by the Ministry of Defence and by the Netherlands Remote Sensing Board (BCRS).
SAR systems are distinguished by their high azimuth resolution capability, achieved through signal processing of the Doppler shifts generated by the forward motion of the radar and aircraft. The azimuth resolution of such a system is theoretically independent of the operating distance, with the highest obtainable resolution on the order of several meters. SAR systems like PHARUS can generate radar images day and night and in all weather conditions. The principle of SAR and of polarimetry is explained in more detail further down this page.
The PHARUS system was divided into three subsystems:
- the radar in the pod outside the aircraft,
- the onboard data processing and recording inside the aircraft,
- the ground-based SAR processing.
The PHARUS system had a modular architecture, enabling easy adaptation to specific requirements and a user-oriented configuration. The use of a modular phased array allowed the radar to be mounted rigidly to the aircraft, avoiding gimballed systems and making this SAR concept also suitable for small aircraft, which considerably reduced operating costs. The system was capable of operating under turbulent conditions. PHARUS was fully programmable, featuring single and multi-polarisation modes and selectable resolution and range. Even pulse-to-pulse beam steering was supported, enabling advanced features such as spotlight mode, active nulling and multi-target tracking.
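As an illustration of how electronic beam steering in a phased array works in principle (not taken from the PHARUS design documents), the minimal Python sketch below computes the per-element phase shifts needed to point the beam away from broadside; the element count and spacing are illustrative assumptions, not the actual PHARUS antenna layout.

```python
import numpy as np

# Minimal sketch of electronic beam steering in a linear phased array.
# Element count and spacing are illustrative, not the PHARUS layout.
c = 3e8                  # speed of light [m/s]
f = 5.3e9                # C-band centre frequency [Hz]
lam = c / f              # wavelength, about 5.7 cm
n_elements = 16          # assumed number of array elements
d = lam / 2              # assumed element spacing (half a wavelength)

def steering_phases(theta_deg):
    """Per-element phase shifts [rad] that point the beam theta_deg away
    from broadside; reprogramming them for every pulse is what enables
    pulse-to-pulse beam steering."""
    theta = np.radians(theta_deg)
    n = np.arange(n_elements)
    return -2 * np.pi * d * n * np.sin(theta) / lam

# Example: steer the beam 10 degrees off broadside for the next pulse.
print(np.degrees(steering_phases(10.0)) % 360)
```

Because the beam direction is set purely by these phase settings, no mechanical gimballing is needed and a new direction can be programmed for every pulse.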
Some key features of the PHARUS system were:
- Modern solid-state radar technology
- Modular system architecture
- A modular active phased array antenna
- Programmable radar characteristics
- Programmable recording and data-processing
- Internal calibration
- Support for satellite-simulating modes (ASAR)
Key specifications of the PHARUS system were:
- Frequency: 5.3 GHz (C-band)
- Transmit power: 20 W per module
- Resolution: 3.75 m in range, up to 1 m azimuth
- Range up to 30 km
- Swath width up to 20 km
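As a back-of-the-envelope illustration (the chirp bandwidth is not part of the list above), the quoted slant-range resolution can be related to a bandwidth via the usual pulse-compression relation delta_r = c / (2·B):

```python
# Illustrative only: infer the bandwidth implied by a 3.75 m resolution.
c = 3e8            # speed of light [m/s]
delta_r = 3.75     # slant-range resolution [m]
B = c / (2 * delta_r)
print(B / 1e6, "MHz")   # -> 40.0 MHz of chirp bandwidth
```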
Before the construction of PHARUS started, experience in the field of SAR had been gained through the development of a prototype system with limited capabilities, called PHARS. This small but powerful system was successfully tested in November 1990 and provided good SAR imagery. PHARS also successfully participated in the ERS-1 CAL/VAL campaign in Norway.
Flights with PHARS and PHARUS
(The table below was captured from the year 2000 PHARUS project's webpage and extended with the later flights.)
Background: the principle of SAR
PHARUS is a side-looking imaging radar on a moving platform (aircraft, satellite, etc.). The characteristic feature of Synthetic Aperture Radar (SAR) is the high resolution in the direction of motion, obtained by aperture synthesis. The result is an image, consisting of pixels, resembling an aerial photograph. SAR is in the category of coherent pulse radars, i.e., it transmits pulses (as opposed to a continuous wave), and measures both amplitude and phase of the received echo signal.
With its antenna beam, the radar illuminates a patch on the ground to the side of the platform. By the motion of the platform, a continuously illuminated strip is formed, called the swath. After processing, the strip is resolved into resolution cells.
After SAR processing, a SAR image consists of an array of pixels, where each pixel value is a measure of the radar reflectivity of the corresponding area, i.e., a resolution cell, on the ground. The image is therefore basically a reflectivity map. The measured value in each pixel is commonly referred to as the backscattering coefficient. For display purposes, it is common practice to display this map using a black-and-white intensity coding: dark for low backscatter, and bright for high backscatter. This greyscale map constitutes the ‘image’.
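As a small illustration of this display step (the dB window below is an arbitrary choice, not a PHARUS processing parameter), a backscatter map can be converted to an 8-bit greyscale image as follows:

```python
import numpy as np

def to_greyscale(intensity, db_min=-25.0, db_max=0.0):
    """Map backscatter intensities to 8-bit grey values: dark for low
    backscatter, bright for high backscatter. The display window
    (db_min..db_max) is an arbitrary choice."""
    db = 10.0 * np.log10(np.maximum(intensity, 1e-12))   # avoid log(0)
    db = np.clip(db, db_min, db_max)
    return ((db - db_min) / (db_max - db_min) * 255).astype(np.uint8)

# Example: a small synthetic reflectivity map.
rng = np.random.default_rng(0)
print(to_greyscale(rng.exponential(scale=0.1, size=(4, 4))))
```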
To achieve high resolution in the range direction, a short pulse is required. Instead of transmitting a very short pulse with very high peak power, a long time-coded pulse with lower peak power, but equal energy is transmitted. The modulation allows compression of the received pulse, thus gathering the total pulse energy into a short pulse. This process is referred to as pulse compression or range compression. The most widely used form of coding is linear frequency modulation (chirp).
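The sketch below demonstrates the idea numerically: a linear FM chirp is generated and compressed with its matched filter, concentrating the pulse energy into a peak of width roughly 1/B. The waveform parameters are illustrative, not the actual PHARUS settings.

```python
import numpy as np

# Illustrative chirp parameters (not the PHARUS waveform).
B = 40e6                 # chirp bandwidth [Hz]
T = 10e-6                # pulse duration [s]
fs = 2 * B               # complex sampling rate [Hz]
k = B / T                # chirp rate [Hz/s]
t = np.arange(0, T, 1 / fs)

# Long, low-peak-power pulse: linear frequency modulation (chirp).
tx = np.exp(1j * np.pi * k * (t - T / 2) ** 2)

# Matched filter = time-reversed complex conjugate of the transmitted pulse.
h = np.conj(tx[::-1])

# Echo of a single point scatterer (here simply the pulse itself), compressed.
power = np.abs(np.convolve(tx, h)) ** 2

# The energy is now concentrated in a peak of width ~1/B in time,
# corresponding to c/(2B) = 3.75 m in slant range for B = 40 MHz.
width_3db = np.sum(power > power.max() / 2) / fs
print("compressed width ~", width_3db * 1e9, "ns (1/B =", 1e9 / B, "ns)")
```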
To achieve high resolution in the cross-range, or azimuth direction, a very narrow antenna beam would be needed, requiring a very large antenna aperture. The principle of SAR is to extend the small physical antenna aperture to a many times larger ‘synthetic aperture’ by coherent integration of echoes received over a certain distance travelled by the moving platform. In the case of PHARUS, for instance, the real antenna is 1 meter long, while the synthetic aperture may be several hundred metres long.
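A small worked example, using the standard beam-limited relations (synthetic aperture length ≈ wavelength × range / real antenna length, theoretical azimuth resolution ≈ half the real antenna length) and an assumed slant range of 10 km, reproduces these orders of magnitude:

```python
c, f = 3e8, 5.3e9
lam = c / f          # ~0.057 m wavelength at 5.3 GHz
L_ant = 1.0          # real antenna length [m] (from the text)
R = 10e3             # assumed slant range of 10 km (illustrative)

L_syn = lam * R / L_ant     # synthetic aperture ~ azimuth footprint length
res_az = L_ant / 2          # theoretical strip-map azimuth resolution limit

print(round(L_syn), "m synthetic aperture")    # -> several hundred metres
print(res_az, "m azimuth resolution, independent of R")
```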
Coherent integration is mathematically analogous to pulse compression and is called azimuth compression. This equivalence can be understood by considering that the frequency modulation in the transmitted pulse is similar to the Doppler frequency modulation induced by the motion of the platform. Hence, the Doppler modulation that exists in a series of received pulses, due to motion, is used in a way similar to the frequency modulation within a pulse, which is intentionally generated by the radar.
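The sketch below makes the analogy concrete for a single point target: its Doppler phase history over the synthetic aperture is itself a chirp, and correlating it with the expected phase history (azimuth compression) concentrates the energy in a sharp peak. Platform speed, PRF and integration time are illustrative assumptions, not PHARUS processing parameters.

```python
import numpy as np

# Illustrative geometry, not PHARUS processing parameters.
c, f = 3e8, 5.3e9
lam = c / f
v = 100.0            # platform speed [m/s]
R0 = 10e3            # closest-approach range [m]
prf = 400.0          # pulse repetition frequency [Hz]
T = 2.0              # coherent integration time [s]

t = np.arange(-T / 2, T / 2, 1 / prf)       # slow time across the aperture
R = np.sqrt(R0 ** 2 + (v * t) ** 2)         # range history of the point target
echo = np.exp(-1j * 4 * np.pi * R / lam)    # Doppler phase history: a "chirp"

# Azimuth compression = matched filtering with the expected phase history,
# exactly as range compression matches the transmitted chirp.
ref = np.exp(-1j * 4 * np.pi * (R0 + (v * t) ** 2 / (2 * R0)) / lam)
power = np.abs(np.convolve(echo, np.conj(ref[::-1]))) ** 2

n_pulses = len(t)   # 800 pulses integrated coherently
print("peak power ~", int(power.max()), "=~ n_pulses**2 =", n_pulses ** 2)
```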
A characteristic feature of SAR is that azimuth resolution is independent of range. In radars that do not employ the synthetic aperture principle (therefore sometimes called real aperture radars), the cross-range resolution is determined by the antenna beam width and is, therefore, an angular resolution. The resulting geometric resolution gets worse as the distance increases. In SAR, the larger antenna footprint at a longer range allows longer observation of an object (longer synthetic aperture), so that the resulting geometric resolution remains the same in the end. In practice, the range is limited by the amount of transmitting power available. Another basic property of coherent imaging radars, such as SAR, is the phenomenon of ‘speckle’. This is a type of noisiness that can be reduced by an averaging technique called multi-looking.
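Multi-looking itself is easy to illustrate: for fully developed speckle the single-look intensity is exponentially distributed (standard deviation equal to the mean), and averaging N independent looks reduces the relative fluctuation by a factor of the square root of N. The numbers below are simulated, not PHARUS data.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_reflectivity = 1.0

# Single-look speckle: exponentially distributed intensity, std == mean.
single_look = rng.exponential(mean_reflectivity, size=100_000)

# Multi-looking: average N = 4 independent looks per pixel.
N = 4
multi_look = rng.exponential(mean_reflectivity, size=(100_000, N)).mean(axis=1)

print(single_look.std())   # ~1.0
print(multi_look.std())    # ~0.5, i.e. reduced by sqrt(N)
```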
Principle of polarimetry
Early SAR systems used a single-polarisation antenna for transmitting pulses and receiving their echoes. These were therefore called non-polarimetric systems. For instance, if the antenna was linearly horizontally polarised, the system was an HH-polarised system, that is, it used horizontal polarisation for both transmission and reception. Analyses of the SAR images always left questions unanswered, such as: what would the image have been if another system had been used, for example a VV, an HV, or a differently polarised system? Is the polarisation that was used optimal for the application? These questions are answered completely by the use of polarimetric systems. The subject of polarimetry is the interpretation of polarimetric data.
The basic use of a polarimetric image is the synthesis of images with arbitrary transmit and receive polarisations: from the scattering matrix map, images can be created representing arbitrary transmit and receive polarisations, even arbitrary elliptical ones (a small synthesis sketch is given after the list below). The following advantages of polarimetry have been demonstrated:
- the contrast between targets and background can be maximised by choosing the correct transmit and receive polarisations,
- the accuracy of crop type and land-use classification results increases,
- the estimation accuracy of soil and vegetation parameters (like forest biomass) increases.
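As a minimal sketch of the polarisation synthesis mentioned above (the matrix values are arbitrary, and the received-voltage convention V = rᵀ·S·t used here is only one of several found in the literature):

```python
import numpy as np

def synthesize_power(S, t_pol, r_pol):
    """Backscattered power for arbitrary complex transmit (t_pol) and
    receive (r_pol) polarisation unit vectors, using the convention
    V = r^T S t. Conventions differ between references."""
    t = np.asarray(t_pol, dtype=complex)
    r = np.asarray(r_pol, dtype=complex)
    return np.abs(r @ S @ t) ** 2

# Arbitrary example scattering matrix [[S_hh, S_hv], [S_vh, S_vv]].
S = np.array([[0.80 + 0.10j, 0.05 - 0.02j],
              [0.05 - 0.02j, 0.60 + 0.30j]])

H = [1, 0]                                # horizontal linear polarisation
V = [0, 1]                                # vertical linear polarisation
lin45 = [1 / np.sqrt(2), 1 / np.sqrt(2)]  # 45-degree linear polarisation

print(synthesize_power(S, H, H))          # HH channel
print(synthesize_power(S, V, V))          # VV channel
print(synthesize_power(S, H, V))          # cross-polarised channel
print(synthesize_power(S, lin45, lin45))  # a polarisation never transmitted
```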
In a non-polarimetric SAR image, the reflectivity of a single resolution cell is measured as a single number, the backscattering coefficient (usually HH or VV), which can be displayed using intensity coding (black and white). In a fully polarimetric SAR image, such as those generated by PHARUS, the backscatter coefficients of all four polarisation combinations are available and can be displayed, e.g. by using both intensity and colour coding. Furthermore, from these four polarisation channels, any other polarisation can be generated, e.g. for calibration or contrast optimisation.
The polarimetric generalisation of the backscattering coefficient is called the scattering matrix S. The matrix consists of four complex numbers, representing the complex backscattering coefficients for all four polarisation combinations, HH, HV, VH and VV.
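Written out, with the receive polarisation as the first subscript (one common convention; some references use the transpose), the matrix is:

```latex
S = \begin{pmatrix} S_{\mathrm{HH}} & S_{\mathrm{HV}} \\ S_{\mathrm{VH}} & S_{\mathrm{VV}} \end{pmatrix}
```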
PHARUS is capable of measuring the full scattering matrix rather than the backscatter coefficient for a single polarisation setting only. It is measured as follows. In full polarimetric mode, the PHARUS SAR uses a single phased-array antenna that can be electronically switched between horizontal and vertical polarisation. It first transmits a horizontally polarised pulse and records the horizontally and vertically polarised echoes (both amplitude and phase) simultaneously, using two receive channels; the resulting complex numbers correspond to Shh and Svh, respectively. It then repeats this step for a vertically polarised transmitted pulse; the two transmit polarisations are interleaved from pulse to pulse. This completes the 2 × 2 matrix.
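Schematically, the interleaved measurement can be written as below; the function names are hypothetical stand-ins for the radar hardware interface and all timing details are omitted.

```python
import numpy as np

def measure_scattering_matrix(transmit_pulse, receive_two_channels):
    """Schematic of the interleaved full-polarimetric measurement: two
    successive pulses fill one 2 x 2 scattering matrix (rows = receive
    polarisation, columns = transmit polarisation). The two arguments are
    hypothetical stand-ins for the radar hardware interface."""
    S = np.zeros((2, 2), dtype=complex)

    # Pulse 1: transmit H, record H and V echoes on two receive channels.
    transmit_pulse("H")
    S[0, 0], S[1, 0] = receive_two_channels()   # S_hh, S_vh

    # Pulse 2: transmit V, record H and V echoes again.
    transmit_pulse("V")
    S[0, 1], S[1, 1] = receive_two_channels()   # S_hv, S_vv

    return S
```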
Since the scattering matrix contains many independent variables, there are many ways in which a polarimetric image could be displayed. One way of doing this is to assign colours to the matrix elements, and thus create a colour image. However, it is not possible to convey all information contained in the scattering matrices in a single colour image.
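One common choice (a display convention, not something prescribed by the PHARUS processing chain) is to map, for example, the HH, HV and VV power channels to red, green and blue:

```python
import numpy as np

def polarimetric_rgb(hh, hv, vv):
    """Combine three polarisation power channels into one RGB composite
    (values in 0..1). The channel-to-colour assignment and the percentile
    stretch are arbitrary display choices; no single colour image can show
    all the information contained in the scattering matrices."""
    def stretch(channel):
        db = 10 * np.log10(np.maximum(channel, 1e-12))
        lo, hi = np.percentile(db, [2, 98])
        return np.clip((db - lo) / (hi - lo), 0, 1)
    return np.dstack([stretch(hh), stretch(hv), stretch(vv)])

# Example with simulated channels.
rng = np.random.default_rng(3)
hh, hv, vv = (rng.exponential(size=(64, 64)) for _ in range(3))
print(polarimetric_rgb(hh, hv, vv).shape)   # (64, 64, 3)
```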
The polarimetric analogue of multi-looking (for speckle reduction) is not performed by averaging scattering matrices, because the information would be lost by simply adding these complex matrices. An intermediate processing step is necessary: the conversion of the scattering matrices to 4×4 real symmetric Stokes matrices. These are subsequently averaged. The Stokes matrix consists of real numbers only but still contains the information of the complex scattering matrix, even redundantly. When Stokes matrices have been averaged, a transformation back to scattering matrices is generally not possible.
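As an illustration of why the averaging has to be done on a power-type quantity, the sketch below uses the closely related polarimetric covariance matrix (the outer product of the scattering vector) rather than the Stokes matrix described above; it shows the idea, not the actual PHARUS processing chain.

```python
import numpy as np

def covariance_matrix(S):
    """Outer product of the lexicographic scattering vector. Averaging these
    matrices over neighbouring pixels (multi-looking) preserves the
    polarimetric information, whereas directly averaging the complex
    scattering matrices would largely cancel out."""
    k = np.array([S[0, 0], S[0, 1], S[1, 0], S[1, 1]])   # scattering vector
    return np.outer(k, np.conj(k))                       # 4 x 4 Hermitian

# Multi-look average over a small neighbourhood of (simulated) pixels.
rng = np.random.default_rng(2)
pixels = rng.normal(size=(4, 2, 2)) + 1j * rng.normal(size=(4, 2, 2))
C_avg = np.mean([covariance_matrix(S) for S in pixels], axis=0)
print(C_avg.shape)   # (4, 4)
```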