The fifth and final Hubble Space Telescope servicing mission is completed successfully
The historic and successful Hubble Servicing Mission 4 concluded with a trouble-free Space Shuttle landing on May 24. During a series of unprecedented spacewalks, astronauts replaced and repaired a total of four instruments. This servicing mission was an intense, 13-day undertaking that revitalised Hubble, making the telescope more capable than ever. All mission objectives were accomplished during five spacewalks that totalled 36 hours, 56 minutes. During the spacewalks the astronauts delivered two new instruments. The Wide Field Camera 3 (WFC3), which replaced the workhorse WFPC2, is the first single instrument on Hubble able to image across the infrared, visible and ultraviolet wavebands. The Cosmic Origins Spectrograph (COS) will help astronomers to determine the chemical composition and evolution of the Universe. Both instruments use advanced technology to improve Hubble's potential for discovery dramatically and enable observations of the faint light from young stars and galaxies in the Universe. Astronauts were also able to repair the Advanced Camera for Surveys (ACS) and the Space Telescope Imaging Spectrograph (STIS), both of which had been affected by power failures. A nail-biting moment occurred when the astronauts had to conquer a stubborn bolt on a handrail attached to STIS before taking on the tall task of removing 111 screws to access the instrument's power supply card. ACS's powerful imaging capabilities at ultraviolet and optical wavelengths are now both available, although its High Resolution Channel could not be fixed. ACS now perfectly complements the powerful new Wide Field Camera 3, and the duo will be vital for the study of dark energy and dark matter.
(NASA News release & picture)
The View from 1969
By Mike Frost
I was clearing out the loft of my parents’ house, when I came across a box full of magazines which I had stored away in my youth – “All about Science”, “Purnell’s Animal Life” and other worthy journals. I threw almost all of them into the local recycling bin, but one magazine caught my notice and was given a reprieve. It was the May 1969 edition of “Science Journal” (price, five shillings) and it was subtitled “Special Issue – MAN ON THE MOON”. The cover showed an artist’s impression of a lunar module descending to the lunar surface.
It was hardly surprising that Science Journal chose such a special issue. Five months previously, in December 1968, Apollo 8 became the first manned spacecraft to break free of Earth’s grasp and go into orbit around another world. On Christmas Eve 1968 it passed behind the Moon and provided the crew with the unforgettable view of the Earth rising above the Moon’s surface. The Apollo program continued at what now seems like breakneck speed. Apollo 9 tested docking manoeuvres in Earth orbit in early March 1969, Apollo 10 descended to within 42,000 ft of the lunar surface in mid-May, and Apollo 11 became the first manned mission to land on the Moon on July 20th 1969. My seventh birthday was in July 1969, and I remember watching Apollo 11 take off from Cape Kennedy – we watched TV in the school assembly hall. My parents woke me up early in the morning of July 21st, once they had established that the Eagle had landed safely, and we watched the momentous pictures of Neil Armstrong stepping onto the lunar surface.
Forty years on, Science Journal is a fascinating read. In an editorial, Robin Clarke made clear that the journal was not intended to be just a souvenir of the Moon landing program. “The time for purple prose about manned space flight has now ended and in its place must come the rational examination of man’s future role as an inhabitant of globes other than the Earth”. An ambitious aim!
In addition to news items and regular columns (including one called "Playback" by Edward de Bono), there were ten substantial articles, each written by a leading astronomer. I recognised one or two names. Harold Urey, Nobel Prize-winning chemist, contributed to "The Geophysics of the Moon". Ernst J. Öpik of the Armagh Observatory wrote informatively on "The Lunar Environment" – you might not have heard of him, but you probably know of his son, Lembit, Liberal Democrat M.P., spokesman on Earth-crossing asteroids, renowned for his choice of girlfriends, from weather girls to Cheeky Girls. And I remembered Dr. R.A. Lyttleton from an archive edition of "Sky at Night" on the BBC magazine's CD.
Dr. Lyttleton's article expanded on his views on how the Moon could have formed. Could the Earth and the Moon have formed from a single body? Almost certainly not, was his conclusion. Could the Moon, Earth and Mars have formed from a single body? Possibly yes. Could the Moon have been captured by the Earth with the aid of the Sun? Again, possibly yes. But I was most impressed with his fourth possible explanation. Is it possible that the Earth-Moon system was formed by some process not yet appreciated or thought of? Most definitely yes. Such a candid answer is unusual! In the 40 years since 1969, computers have become vastly more powerful and we can now model various theories for the formation of the Moon. The current best bet is a collision between a Mars-sized body and a proto-Earth in the early years of the solar system. But perhaps we have still yet to appreciate or think of the actual version of events.
I had two favourite articles. Zdenek Kopal from Manchester University wrote about “The Moon and man”, summarising the past, present and future of manned lunar exploration. Past exploration was largely fanciful, incorporating Kepler’s Somnium and Cyrano de Bergerac. But Prof Kopal took time to tell us about the most famous fictional journey to the Moon, undertaken by Jules Verne’s intrepid explorers. The launch mode, a gigantic gun, was a complete non-starter, as in reality the g-forces on firing would easily be enough to obliterate anyone within the projectile. But did you know that Jules Verne’s chosen launch site was in Tampa Township, Florida, less than 100 miles from Cape Canaveral? Verne may not have known how to launch a crew safely into orbit, but he did appreciate that launching from close to the equator was a good idea, as it gave a free boost to the launch speed.
Kopal indulged in some flights of fancy of his own, when he envisaged the practical benefits of manned lunar exploration. He correctly pointed out that the immediate benefit would be from the return of lunar material to Earth for study, to find out the composition of the lunar surface, giving clues to lunar formation and also to the material which had fallen on it. For the longer term, however, Prof. Kopal decided that the greatest benefit of lunar travel was the collection and transmission to Earth of solar energy. He envisaged solar energy being beamed to Earth as laser light, for collection and re-transmission as cheap energy. Prof Kopal pointed out that such laser light “could, in unscrupulous hands, become a weapon whose destructive effects could be comparable with those of hydrogen bombs”. Laser beams from space remain a staple of speculative fiction (in at least one James Bond film) and have so far failed to realise their potential for cheap energy.
In my other favourite article, “The lunar colony”, Rodney Wendell Johnson of NASA’s Advanced Manned Missions Program Office, Washington D.C., is optimistic. “A colony on the Moon may have been established by the United States within a decade”. Johnson described clearly and in detail how such a colony could develop, building around the landing craft of the later Apollo Missions.
What shines through the journal is the heady optimism of the sixties. Anything seemed possible in May 1969. The pace of progress was break-neck, and had that pace continued it is possible that Mr. Johnson’s dreams of lunar colonies by 1979 might just have taken place. But, as we know all too well, the enthusiasm for lunar exploration began to dwindle even as the first men on the Moon were taking their historic steps.
The unpalatable truth, unmentioned by virtually all the contributors to Science Journal, was that the race to the Moon was an artefact of the Cold War between the U.S. and U.S.S.R. Once America’s technological superiority had been demonstrated so clearly by winning that race, there was no longer any political will to continue with the Moon landing program. Even as real, valuable science was being done by the later Apollo missions, that program was being wound down.
It’s now 2009. Finally, perhaps we are beginning to see a new race to the Moon. Perhaps this race is also driven by national politics, in this case principally by the Asian tigers. At present there are Japanese, Chinese and Indian space probes in orbit round the Moon. Maybe the next man, or the first woman, on the Moon will be Chinese. If so, I forecast that conspiracy theories about faked moon landings will vanish overnight, as even the most rabid American conspiracy theorist will not want to see China steal America’s thunder by “getting there first”.
And I think the template for future lunar exploration is the example of the South Pole. The race to the Pole concluded in 1911, when Roald Amundsen’s technologically superior expedition beat Captain Scott’s brave but under-equipped team to the Pole. Once the race was over, interest was lost in visiting the South Pole for almost 50 years. Not until 1956 did another man stand at the Pole, but this time he arrived by air and a permanent settlement was established. Let’s not kid ourselves that this too is not a political act - the territorial claims of 7 nations converge at the South Pole, but the base belongs to a non-claimant nation, the USA. But let’s appreciate that the South Pole is now available for non-political scientific research, including world-class astronomy. It’s my hope that the history of the South Pole provides an example of how a future lunar colony could prosper and benefit mankind.
By Paritosh Maulik
The value of the acceleration due to gravity, g, is generally taken as 9.8 m s⁻². But this is not a fixed value. Local geology, such as mountains or large mineral deposits, can alter the magnitude of g. Accurate measurement of the local magnitude of g helps to identify such anomalies, for example a possible mineral deposit. Additionally, if we can define an imaginary surface of constant gravity on the Earth, it can act as an ideal reference surface for the determination of height.
These measurements require very accurate determination of the strength of the gravity field and are best carried out with instruments on satellites orbiting the Earth. We shall look at a European satellite, to be launched soon, that will carry out an accurate gravity mapping of the Earth.
The velocity of an object dropping freely towards the ground increases as it approaches the Earth: the greater the distance of the drop, the higher the velocity. The object accelerates under the Earth's gravity. If the distance of fall is small compared to the radius of the Earth, the acceleration is constant and is independent of the composition or mass of the object. This would be exactly true if there were no air resistance to slow the object down.
This acceleration due to gravity, g, is nearly 9.8 m s⁻².
Over the years, as our measurements have got better, it became apparent that g is not strictly constant. It varies with latitude and land mass. The Earth, due to its rotation, is not a perfect sphere: it is somewhat flattened near the poles and bulges at the equator. So a point on the equator is further from the centre of the Earth than the poles are, and as a consequence g at the equator is 9.78 m s⁻², while at the poles it is 9.83 m s⁻². An object weighs less at the equator than at the poles.
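As a rough sketch (not part of the original article), the smooth latitude dependence of g is usually captured by the international gravity formula. The Python snippet below uses the commonly quoted series coefficients, and reproduces the equator and pole values given above:

```python
import math

def normal_gravity(lat_deg):
    """Approximate sea-level gravity (m/s^2) at a given latitude,
    using the standard international gravity formula series."""
    s = math.sin(math.radians(lat_deg)) ** 2
    s2 = math.sin(math.radians(2 * lat_deg)) ** 2
    return 9.780327 * (1 + 0.0053024 * s - 0.0000058 * s2)

print(round(normal_gravity(0), 4))   # equator: 9.7803
print(round(normal_gravity(90), 4))  # poles:   9.8322
```

This only models the rotation-induced flattening; the local anomalies from geology discussed next sit on top of it.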
The Earth is not homogeneous: mountains and deep ocean trenches cause localised variations in the mass distribution of the Earth, and these also affect the local value of g. Large buildings and mineral, oil and gas deposits also influence the local value of g. All these factors are superimposed on each other and also change with time.
The simplest method of measuring g comes from the law of the pendulum. The time, T, taken to complete one full oscillation (left to right and back to left again) of a pendulum of length l is given by T = 2π√(l/g), so that g = 4π²l/T².
From this simple law, early geologists found that the value of g increases near a mountain. Christiaan Huygens made the first pendulum clock, and French astronomers observed that a pendulum clock loses more time at the equator than in Paris. The implication was that gravity is weaker at lower latitudes. This observation started a great debate on the shape of the Earth.
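The pendulum law can be turned around to measure g directly: time the period, measure the length, and solve for g. A minimal sketch (the 2.006 s period here is just an illustrative value):

```python
import math

def g_from_pendulum(length_m, period_s):
    """Rearranged pendulum law: g = 4*pi^2 * l / T^2."""
    return 4 * math.pi ** 2 * length_m / period_s ** 2

# A 1 m pendulum with a measured period of about 2.006 s:
print(round(g_from_pendulum(1.0, 2.006), 2))  # 9.81
```

In practice one would time many oscillations and divide, since the error in g goes as twice the relative error in T.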
Now we know that the Earth is an ellipsoid and not a sphere. When we measure height, we measure it with respect to a flat surface, and for this purpose we define an imaginary ideal surface called the geoid. The concept of the geoid was introduced by the German mathematician Gauss. Gravitational potential is uniform on this surface, so a ball placed on it would not roll: it is the perfect horizontal surface. Although we talk about sea level, in fact the surface of the sea is not flat. The extra mass of a seamount can attract water and raise the height of the local sea surface, and similarly a deep-sea trench can cause a localised depression of the sea surface.
The surface of the continent (4) is not smooth: it is full of mountains and ocean ridges. On a calm day the ocean (1) appears to be flat. The geoid (5) is the imaginary surface of equal gravitational potential; it is not a smooth surface, but a local plumb line (3) is always at 90° to it. The ellipsoid (2), the average height from the centre of the Earth, is a smooth surface.
1 Ocean; 2 Ellipsoid; 3 Local plumb line; 4 Continent; 5 Geoid
If all the oceans were in equilibrium and connected through narrow canals, the geoid would coincide with their surface. There is another reference surface, the reference ellipsoid: it is the average of all the high and low points on the Earth and is smooth, whereas the geoid is not, varying by about 200 m (from −106 m to +85 m). On an imaginary sea surface, unaffected by weather or waves, ships float on the geoid. However, a GPS receiver on board a ship can detect height variations during a long voyage, even though the ship is always at sea level. This is because the GPS system, whose satellite orbits are about the centre of gravity of the Earth, measures heights relative to the geocentric reference ellipsoid.
Why we need an accurate map of the Earth's gravity field
Although the motion of water in the ocean appears to be random, there are well-defined streams within it. A well-known example is the Gulf Stream, which carries warm surface water from the Gulf of Mexico towards the north-east and keeps the coastal areas of northern Europe about 4°C warmer than places in the North Pacific at the same latitude. These ocean currents ultimately help to control the weather on Earth, so a better understanding of them would help us to build better models of the Earth's weather. To do this we need an accurate geoid map of the Earth to act as a reference. The height of the ocean surface is measured by radar altimetry, so by comparing the radar data with the geoid map, the topography of the ocean surface can be mapped.
Movement on Earth
Mountains rise from the ground and have their roots well below ground level. One example is the rise of the Himalayas: as the Indian plate moves north, it subducts under the Asian plate, and in the crumple zone the Himalayas rise. Molten magma can rise through the lithosphere and erupt as a volcano, or it can move part of the way up and then spread sideways. Phenomena like these can be studied with gravity field maps and seismic data down to a depth of about 200 km.
The thick covering of ice during the last ice age caused the land mass to be depressed, but now that the ice has melted, the land is rising. Scandinavia is rising at a rate of about 1 cm per year. Scotland is also rising, while the south-east of the UK is slowly sinking.
Gravity variation plot in the xx, xy, xz, yy, yz and zz components
Three pairs of proof masses, with a distance of 50 cm between each pair. Each mass experiences a different gravitational acceleration. The three axes allow simultaneous measurement of gravity in six components.
This relates to the measurement of the shape of the Earth. Such information is important for determining the flatness of the ground, and is useful for the construction of large sites and for controlling the flow of water. It is also used to measure coastal heights and to reference tide gauges.
Accurate determination of the sea level change due to the global warming is becoming an important issue.
Measurement of Terrestrial Gravity
Basically two types of instrument are used to measure local gravity: relative and absolute gravimeters.
A gravimeter is essentially a very sensitive accelerometer. Whereas an ordinary accelerometer may measure accelerations of up to about 1000 g, a gravimeter measures tiny variations around 1 g. A relative gravimeter works like a spring balance: a weight is suspended from a calibrated spring and the extension of the spring is measured. The calibration of the spring is carried out in an area of known gravity. A modern version of the instrument can measure gravity to an accuracy of 1 µGal (10 nm s⁻²).
In an absolute gravimeter, a mass is allowed to fall freely in vacuum. The velocity of the mass is accurately determined by a method based on optical measurements. From the velocity, the acceleration or g is calculated. In a more accurate instrument, the mass is allowed to rise and fall and the velocity is measured in both directions. This reduces the instrumental error.
Mapping of the Gravity variation of the Earth
Such accurate measurements of the gravity of the Earth are best carried out by instruments aboard spacecraft. The European Space Agency is to launch a mission called the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE). The instrument aims to measure gravity field variations to an accuracy of 1 mGal (10⁻⁵ m s⁻²), and to determine the geoid to an accuracy of 1–2 cm at a spatial resolution better than 100 km.
In order to pick up a strong gravity signal, the satellite will fly low, at a height of about 250 km. At this altitude the satellite will face drag from the atmosphere; its streamlined shape helps to reduce this drag, and ion propulsion technology will maintain the satellite at a constant height. The 1050 kg satellite is 5 m long with a cross-section of about 1 m. It is symmetrical in shape, with two small wings, one pointing upwards and one downwards, for additional stability. There are solar panels on the body and on the wings. During its flight one side of the satellite will always face the Sun, and the satellite will experience temperatures in the range of +160°C to −170°C. To shield the main measuring instrument from such severe temperature fluctuations, it is protected by its own temperature control unit.
There are communication antennae on each wing, including two on the upward-pointing wing for communication with the GPS system. This arrangement of antennae gives full spherical coverage, and the satellite can be tracked by up to 12 GPS satellites simultaneously.
A global network of laser ranging stations monitors the accurate position of the GOCE satellite. The duration of the mission is 20 months, including a calibration period of 3 months.
Measurement of Gravity
The principle involved is as follows. A set of proof masses in a satellite orbits the Earth, and the change in gravity experienced by each mass is measured over a short distance of travel around the orbit: this is gradiometry. At the heart of the satellite is a gradiometer. In this instrument a proof mass is kept suspended at the centre of a cage by a servo-controlled electrostatic force. Any movement of the proof mass, in translation or rotation, is picked up by a capacitance bridge, and this signal in turn controls the position of the proof mass. The electrostatic suspension mechanism thus monitors the linear and rotational motion of the proof mass.
Three pairs of identical accelerometers form three gradiometer arms. The arms are perpendicular to each other, and the accelerometers in each pair are about 50 cm apart. The average of the two readings in a pair measures the common acceleration of the satellite, such as atmospheric drag, while the difference between them is proportional to the gravity gradient along that arm. One arm points along the satellite's trajectory (the velocity direction), another is at 90° to the velocity direction, while the third points approximately towards the centre of the Earth. The variation in gravity is determined from these three sets of accelerometer readings; the 50 cm separation sets the baseline of the gravity measurement.
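A back-of-envelope sketch (my own illustration, not GOCE's actual processing) shows why the differential reading works. For the Earth-pointing arm, the inner proof mass sits slightly deeper in the Earth's field than the outer one, and dividing the difference by the 0.5 m baseline gives the radial gravity gradient:

```python
# Gradiometry sketch: two proof masses 0.5 m apart on the Earth-pointing arm.
GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
r = 6371e3 + 250e3      # orbital radius at ~250 km altitude, m
baseline = 0.5          # separation of the accelerometer pair, m

def g_at(radius):
    """Newtonian gravitational acceleration at a given radius."""
    return GM / radius ** 2

g_inner = g_at(r - baseline / 2)   # proof mass nearer the Earth
g_outer = g_at(r + baseline / 2)   # proof mass further from the Earth

# Difference over baseline approximates the gradient 2*GM/r^3.
gradient = (g_inner - g_outer) / baseline   # in s^-2; 1 Eotvos = 1e-9 s^-2
print(gradient)  # ~2.7e-6 s^-2, i.e. roughly 2700 Eotvos
```

The anomalies GOCE hunts for are many orders of magnitude smaller than this smooth-Earth gradient, which is why µGal-class accelerometers are needed.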
GOCE was scheduled to be launched by a Russian rocket vehicle – a converted SS-19 Russian Intercontinental Ballistic Missile – from the Plesetsk Cosmodrome in northern Russia in September 2008. But due to a technical difficulty with the launch vehicle the launch has been delayed; the rescheduled date is now 16th March 2009.
Before the GOCE mission there were two similar earlier missions. CHAMP (Challenging Mini-Satellite Payload for Geophysical Research and Application), launched by Germany in July 2000, aimed to map the gravity field and the magnetic field of the Earth while flying at a height of 450 km, decaying to 300 km. It carried a tri-axial electrostatic accelerometer and a magnetometer, and the satellite was tracked by the GPS system.
GRACE (Gravity Recovery and Climate Experiment), a joint USA-Germany mission launched in March 2002, used two satellites, separated by about 200 km on the same mean path, orbiting the Earth at a height of 500 km. Tracking was done by the GPS system, and each satellite carried an accelerometer. The life of the mission was five years, and the aim was to monitor gravity variations over timescales from monthly to biennial with a spatial resolution of 300–5000 km. The gravity profiles determined by GRACE were used to detect changes due to surface and deep ocean currents, run-off and ground-water storage, and exchanges in the ice sheets and glaciers. It determined the changes in the sea-bed following the 2004 Asian tsunami, and has measured the loss of ice mass in Greenland.
Greenland ice mass loss for the years 2002 to 2005
GOCE will provide gravity field mapping of the Earth to a very high accuracy. Such data will be useful in Earth science for understanding both the interior of the Earth and what lies above its surface. It will also provide useful information on the solid Earth's mass distribution, sea-level change, ocean flow and the associated heat flow, and on climate change.
Localised Gravity Mapping, an alternative option
One UK-based company is developing a gradiometer using superconducting magnets. The principle is the Meissner effect: a magnetic field cannot penetrate the interior of a superconductor. In a superconductor a current keeps flowing without any decay, so if there is no disturbance in the system, the current, and hence the magnetic field, remains unchanged. But if there is a movement in the system, for example caused by a change in the local gravity, there will be a change in the magnetic field, and this change is detected to measure the change in gravity.
A set-up based on this principle could be mounted on a rigid structure and carried on board a plane or a ship to scan the area of interest. To eliminate disturbances caused by the motion of the carrier, differential readings from two superconducting proof masses are used. The system operates at 4 K (−269°C) and is kept in vacuum, and it is vital that it points vertically at all times. All these efforts increase the sensitivity of the measurement.
Such an airborne system is a cheaper alternative to the satellite-borne systems discussed earlier and is ideally suited to mineral exploration. With a system like this one can avoid issues such as trespass, or measuring over swamp. A system on board a watercraft would be less sensitive, but may act as a complement to other measurements.
SIDs and Pings
By Mark Edwards
This is the second of my articles (following on from “Radio Astronomy with a Satellite Dish”) (see MIRA 77, Autumn 2006: Ed) showing how it is possible to do radio astronomy at home without particularly specialised equipment. My previous article described passive radio astronomy where the aim was to detect radio waves emitted directly from objects in the universe. However, there is another way of doing astronomy with radio waves and that is to generate them here on Earth, send them out to the object under study and attempt to receive any reflections.
There is an obvious problem with this, in that the huge distances involved restrict any practical observations to nearby objects within the Solar System, such as the Moon and planets. Even with that constraint, the large amount of power required to obtain detectable reflections, even from the Moon, requires equipment that most people would not possess, nor have a licence to operate. There is, however, an object closer to home from which it is much easier to get reflections: the Earth's ionosphere. Not only can we study the ionosphere itself by probing it with radio waves, but we can also study the changes caused in it by external influences.
The major influence on the ionosphere is of course the Sun. The Sun’s radiation, mainly ultra-violet light, causes the Earth’s upper atmosphere to be ionised into a series of layers at heights ranging from 50km to 400km and it is by reflecting off these layers that radio waves can be sent around the Earth. By systematically monitoring the strength of such reflections it is also possible to work backwards and deduce something about the state of the Sun’s radiation.
Normally, when the Sun is quiet there is a regular pattern to the ionisation. During the day, when bathed in sunlight, the amount of ionisation in the layers increases and the layers form closer to the Earth’s surface. During the night the reverse is true, as the ionisation reduces and the layers ascend.
However, when the Sun is more active, a sunspot can generate an intense burst of X-rays which when it hits the Earth can cause the amount of ionisation in the layers to suddenly increase. This can have a dramatic effect on the reception of distant short-wave stations as the increased ionisation of the lower layers of the ionosphere can cause their radio waves to be absorbed completely before they reach the upper reflective layers. Such disturbances are called “Sudden Ionospheric Disturbances” or SIDs.
However, for very low frequency (VLF, roughly 3 kHz to 30 kHz) radio waves, the increased ionisation of the lowest (D) layer can instead enhance their reflection. For this reason, and because propagation at these frequencies is generally very stable during the day, making SIDs more visible, I shall describe a simple way of making a VLF SID monitor.
The first thing that we require is a source of VLF radio waves, preferably placed a few hundred kilometres away, that is constantly transmitting. We could make one ourselves, but conveniently for us, governments around the world provide a number of very powerful VLF transmitters to communicate with their submarine fleets that we can use instead (VLF radio waves being the only ones that can reach any depth in the oceans).
Examples of such transmitters in Europe are:-
Frequency (kHz)  Call sign  Location                   Distance from Coventry (km)
19.6             GBZ        Skelton, England            270
20.27            ICV        Isola di Tavolara, Italy   1530
20.9             FTA        Sainte-Assise, France       520
21.75            HWU        Rosnay, France              660
22.1             GQD        Anthorn, England            300
22.6             HWU        Rosnay, France              660
23.4             DHO38      Rhauderfehn, Germany        620
Secondly, we need a receiver that can receive one or more of these transmissions. Luckily for us most PCs already contain one in the form of a sound card. A sound card is designed to take an electrical input, with a frequency range of 20Hz to 24kHz, from a microphone or other audio device. However, there is no reason why the source of the electrical signal should be a microphone, it could just as easily be an antenna. In this case the sound card behaves as a VLF radio receiver!
Normally, to make an efficient antenna that would be sensitive to the electric field of a radio wave, we would use a length of wire whose length was at least a quarter of the wavelength of the radio waves we were trying to receive. However, as the wavelength at VLF ranges from 10km to 100km, this is impractical.
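To see just how impractical a quarter-wave wire is at VLF, the arithmetic is simply λ = c/f, then divide by four. A quick check for the transmitters listed above (my own illustration):

```python
C = 299_792_458  # speed of light, m/s

def quarter_wave_m(freq_hz):
    """Quarter of the free-space wavelength at a given frequency."""
    return C / freq_hz / 4

# The GBZ transmitter at 19.6 kHz:
print(round(quarter_wave_m(19.6e3)))  # 3824 (metres!)
```

Nearly four kilometres of wire for the lowest-frequency station, which is why the magnetic loop antenna below is used instead.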
Instead, if we make an antenna that is sensitive to the magnetic part of the wave, a small loop antenna is reasonably efficient. Such an antenna can easily be constructed from a length of screened multicore cable of the type sold by Maplin for connecting SCART plugs together. I use a 6 m length of the type that contains 20 cores, with the cores wired in series to make a single loop of 20 turns, about 2 metres in diameter.
One end of this loop is connected to the inner of a screened cable that plugs into the microphone input of the sound-card. The other end of the loop is connected to the outer of the cable and also to the screen of the multicore cable (make sure that you only connect one end of the screen of the multicore cable, or you will short out the loop!), as shown in the diagram Figure 1.
The screening helps to reduce the antenna’s sensitivity to locally generated electrical noise.
Once made, the antenna can be mounted indoors; I hang mine vertically on the wall. Make sure that it is reasonably circular (to maximise its area) and is well away from electronic equipment, eg. PCs, TVs, power supplies, etc. Although the plane of mine points ESE-WNW, you can experiment with its orientation to maximise the received signals.
Recording and Analysis
Now all we need is a method of recording the output of our receiver. Luckily, someone else has done all the hard work for us: a German radio amateur with the call sign DL4YHF has developed a program called SpectrumLab that works as a multi-channel receiver and logger. It can be downloaded for free from his website.
SpectrumLab was designed to take an audio signal fed into the sound card of a PC and to split it up into its component frequencies using a mathematical technique called a fast Fourier transform or FFT. In this respect it does for sound what a prism or a diffraction grating does for light, it produces a spectrum, Figure 2.
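The same FFT trick is easy to demonstrate outside SpectrumLab. This sketch (my own, with a made-up noisy 19.6 kHz tone standing in for a transmitter) samples one second of "audio" at a typical sound-card rate and locates the strongest spectral peak:

```python
import numpy as np

fs = 48000          # sample rate, Hz (typical sound card)
n = 48000           # one second of samples
t = np.arange(n) / fs

# Hypothetical input: a 19.6 kHz "transmitter" tone buried in noise.
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 19600 * t) + 0.5 * rng.standard_normal(n)

# The FFT splits the signal into its component frequencies.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, 1 / fs)

peak = freqs[np.argmax(spectrum)]
print(peak)  # 19600.0
```

With one second of data the bins are 1 Hz wide, comparable to the resolution SpectrumLab offers on a slow waterfall.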
The top half of the display shows the current spectrum and the bottom half shows how that spectrum has varied over the past few minutes in a “waterfall“ display that moves downwards with increasing time.
In the display you can see that there are a number of peaks in the spectrum (represented as vertical lines in the waterfall display), most of these are signals from the transmitters that we are interested in. However, always be aware that they could be from local sources of interference, the most common one being at 15625Hz and caused by the line timebase oscillator of TVs.
Once the real transmitter peaks are identified we need to measure their strengths and to log the resulting measurements for later analysis. SpectrumLab provides this ability in the form of a “watch list and plotter”.
An example of a watch list window is shown below, Figure 3.
Each row in the list represents a transmitter to be monitored, with the "Expression" column defining the frequency range of that transmitter. I use a range of 75Hz either side of the carrier frequency, to cover any modulation that the transmitter might use. The "avrg()" expression means that the signal strength is averaged over that range, which helps to reduce the noise in the measurement.
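The effect of such a watch-list entry can be mimicked in a few lines: pick out the FFT bins within ±75 Hz of the carrier and average their strength. This is my own sketch of the idea, not SpectrumLab's actual code, and the flat −80 dB spectrum with a −40 dB bump is invented test data:

```python
import numpy as np

def band_average(spectrum_db, freqs_hz, carrier_hz, half_width_hz=75.0):
    """Average signal strength in a band either side of a carrier,
    mimicking an avrg()-style watch-list expression."""
    mask = np.abs(freqs_hz - carrier_hz) <= half_width_hz
    return float(np.mean(spectrum_db[mask]))

# Hypothetical spectrum, 1 Hz bins, with a signal around 22.1 kHz (GQD):
freqs = np.arange(0.0, 24000.0, 1.0)
spectrum = np.full(freqs.size, -80.0)            # noise floor, dB
spectrum[(freqs >= 22025) & (freqs <= 22175)] = -40.0

print(band_average(spectrum, freqs, 22100.0))  # -40.0
```

Logging this one number per transmitter every few seconds is all the SID monitor needs.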
The “Result” column gives a real-time display of the resulting amplitude of the signal, which is written to a log file and also displayed in the plotter window, Figure 4.
The plotter window scrolls sideways from the right, showing each monitored frequency range, i.e. transmitter, in a different colour. Any SIDs that occur are then easily seen as a simultaneous change in all the traces. Below is an example, taken from a log file, of such a SID that occurred on 3rd November 2008 and corresponded with a B8 class X-ray flare from the Sun.
Note that the website www.swpc.noaa.gov/rt_plots/xray_1m.html gives a real-time display of the X-ray flux from the Sun that can be used for comparison.
Here the signals from five transmitters have all been affected by the SID, Figure 5, but notice that only one signal has been enhanced, while the other four experienced a drop in signal strength. Which way the signal is affected depends on the phase difference between the part of the signal reflected off the ionosphere (the sky wave) and the part that has followed the surface of the Earth (the ground wave).
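The interference behind this can be illustrated with a rough calculation; the carrier frequency and both path lengths below are invented purely for the example:

```python
import cmath
import math

# Rough sketch of sky-wave / ground-wave interference at VLF.
# All numbers are made up for illustration.
c = 3.0e8                  # speed of light, m/s
f = 23.4e3                 # hypothetical VLF carrier, Hz
wavelength = c / f         # roughly 13 km at VLF

ground_path = 600e3        # direct ground-wave path to the receiver, m
sky_path = 640e3           # longer path via reflection off the ionosphere, m

# Phase difference between the two waves from the extra path length.
phase = 2 * math.pi * (sky_path - ground_path) / wavelength

# Add the two unit-amplitude waves: near 0 (mod 2*pi) they reinforce
# (signal enhanced), near pi they cancel (signal drops).
combined = abs(1 + cmath.exp(1j * phase))    # relative amplitude, 0 to 2
print(f"Relative amplitude: {combined:.2f}")
```

A SID changes the effective reflection height, and hence the sky-wave path length and the phase, which is why some transmitters' signals rise while others fall.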
Another influence on the Earth's ionosphere comes from a source more familiar to us: the ionised trails left by meteors as they burn up in the Earth's upper atmosphere. Although the ionisation might not be intense enough for light to be emitted and seen by the eye, it can still be sufficient to reflect radio waves. This means that radio counts of meteors are generally many times visual counts and, of course, the counts can be made during daylight hours when visual observation is impossible.
Whereas with SIDs we used very low frequency radio waves to obtain a constant reflection from the lower layers of the ionosphere, with pings we want to avoid reflections from the ionosphere completely and get reflections only from meteor trails. The best frequencies for this are found to be in the range of about 30 MHz to 300 MHz, known as very high frequencies (VHF), which normally pass right through the ionospheric layers into space.
As with the VLF transmitters, we are lucky that there are again many high power VHF transmitters around Europe that transmit continuously. This time they are used for broadcasting TV programmes and the ones that best serve our purpose transmit on frequencies around 50MHz.
These frequencies were used in the UK for BBC1 405-line black and white TV until 1984, but are currently (except for the odd analogue cordless phone, baby alarm or radio amateur) largely unused. This is convenient for us, as it leaves us free to listen for reflections of transmitters in the rest of Europe.
The most common frequencies used are:

Channel   Vision carrier (MHz)   Sound carrier (MHz)
E-2       48.25                  53.75
E-2A      49.25                  55.25
E-3       55.25                  60.75
E-4       62.25                  67.75
I use the vision carrier of channel E-2 (48.25MHz) which is transmitted from Portugal.
For the receiver, any communications receiver or scanner will do, so long as it is capable of producing a tone when receiving a carrier without modulation. To do this it must have a receiving mode of SSB (single sideband), USB (upper sideband), LSB (lower sideband) or CW (continuous wave, i.e. Morse code).
Setting the receiver to any one of these modes will cause a "ping" to be heard when a meteor enters the atmosphere and its ionised trail briefly reflects the radio waves emitted by the distant TV transmitter. These pings can then be sent to the sound card of a PC for analysis.
Antennas at VHF are a reasonable length, as the wavelength of a 48.25 MHz radio wave is only just over 6 m. I use an antenna designed for the 6 m amateur band, as shown here attached to the side of my house, Figure 6.
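The "just over 6 m" figure is a one-line calculation, wavelength = speed of light / frequency:

```python
# Why a 6 m amateur-band antenna suits the channel E-2 vision carrier.
c = 299_792_458            # speed of light, m/s
f = 48.25e6                # channel E-2 vision carrier, Hz
wavelength = c / f         # just over 6 m
print(f"Wavelength: {wavelength:.2f} m")
```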
At these frequencies it is important that the antenna is mounted outdoors, to avoid loss of signal and an increase in noise. Also notice that it is a vertical antenna, so it is reasonably sensitive in all directions except directly overhead. This is handy, as it does not need to be pointed in any particular direction.
If your receiver covers the frequency, you can check that the receiver and antenna are sufficiently sensitive by listening for the amateur beacon GB3BAA on 50.016 MHz, located in Tring. As this has a power of only 10 W, if you can hear it you can be certain that meteor reflections will also be audible.
Recording and Analysis
As with SIDs, SpectrumLab is used to monitor and log the output of the receiver. This time, though, we are interested in producing a spectrum of the received audio in the range 300 Hz to 3.3 kHz, which is the usual output range of a communications receiver.
Somewhere in this range you should be able to see the pings as they occur. It is then a matter of adjusting the frequency of the receiver and the range of the spectrum to produce a good display. I adjust the receiver to give a ping frequency of 2.4 kHz and the spectrum to cover 2 kHz to 2.8 kHz, as that gives the best signal-to-noise ratio on my system. An example of a spectrum obtained with this setup during a meteor shower is shown in Figure 7. In the picture individual pings show up as short dashes; the longer trails are produced by fireballs, some of which can last for many minutes.
To log the pings, again a watch list is set up, but this time, instead of averaging over a frequency range, the "Expression" is set to "peak_a()" to record a peak that might occur anywhere in the range 2200 Hz to 2500 Hz. This is to allow for any drift in the transmitter's or receiver's frequency.
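The effect of such a "peak_a()" entry can be sketched as follows (again, this mimics the idea rather than SpectrumLab's actual expression engine, and the ping tone is simulated):

```python
import numpy as np

# Sketch of taking the largest spectrum amplitude anywhere in a band
# (here 2200-2500 Hz), so a ping is caught even if the transmitter
# or receiver drifts in frequency.
fs = 8000
t = np.arange(fs) / fs
ping = np.sin(2 * np.pi * 2400 * t)      # a simulated ping tone at 2.4 kHz

spectrum = np.abs(np.fft.rfft(ping))
freqs = np.fft.rfftfreq(len(ping), d=1 / fs)

band = (freqs >= 2200) & (freqs <= 2500)
peak = spectrum[band].max()              # logged value for this moment
print(f"Peak amplitude in 2200-2500 Hz: {peak:.0f}")
```

Taking the maximum rather than the average means the logged value stays high wherever in the band the ping tone happens to land.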
The “Result” column gives a real-time display of the resulting amplitude of the signal, which is written to a log file and also displayed in the plotter window, Figure 8.
Here the pings are easily visible, Figure 9, as peaks above the noise floor. From the log file the number of pings can easily be counted and plotted against time to give a graph such as that shown below for the Quadrantid shower, Figure 10. The interesting and dramatic dip in the ping rate around hour 8 occurred when the shower's radiant passed directly overhead.
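Counting pings per hour from a log can be sketched in a few lines of Python; note that the log format (a time and an amplitude per line) and the threshold level here are assumptions for illustration, not SpectrumLab's actual file layout:

```python
from collections import Counter

# Hypothetical log lines: "HH:MM:SS amplitude_in_dB".
log_lines = [
    "00:05:12 -52.1",
    "00:05:13 -31.0",   # above threshold: a ping
    "01:12:40 -29.5",   # another ping
    "01:30:02 -55.7",
]

threshold = -40.0       # assumed level separating pings from the noise floor
counts = Counter()
for line in log_lines:
    time_str, level = line.split()
    if float(level) > threshold:
        counts[int(time_str[:2])] += 1   # bucket the ping by hour

print(dict(counts))     # pings per hour, ready to plot against time
```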
In conclusion, I hope that these two projects have shown how easy it is to do some useful radio astronomy without having to leave the comfort of your armchair in front of the PC.
It’s always cloudy outside, anyway!