Radiometric data are generally presented as values equivalent to concentrations of potassium, thorium and uranium. These data are usually acquired from airborne platforms (often alongside magnetic data) using a crystal detector that responds to the energy of gamma radiation produced by the radioactive decay of these elements.
Most users will only encounter them as levelled images of data from multiple surveys, but they require processing to ensure that surveys acquired at different times produce the same results through:
Radiometric signals are only derived from the upper 2-3 m of the earth’s surface, as that is the maximum penetration of the gamma radiation; over 98% of the signal comes from the top 35 cm. The data are therefore best regarded as reflecting surface conditions only, with no depth information provided. Radiometric images are consequently useful in the interpretation of regolith environments, especially as weathering processes can concentrate particular elements relative to others, e.g. production of clays will produce higher potassium readings. Radiometric signatures can, however, arise from a number of sources of changes in the concentrations of the three elements, including bedrock processes such as fractionation of igneous intrusions; these can only be imaged if they are exposed at surface.
Radiometric images are generally presented as a ‘three colour’ additive image: red represents potassium concentration, green thorium concentration and blue uranium. Interpreting these images requires an understanding of how colours mix. Separate to the individual colours representing high values in one element relative to the others, additive mixing means that high potassium and thorium together appear yellow, high potassium and uranium magenta, high thorium and uranium cyan, all three high white, and all three low black.
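As an illustration of how such a composite is assembled, here is a minimal sketch assuming the three levelled element grids are already loaded as NumPy arrays of the same shape (the percentile stretch and the random stand-in grids are illustrative choices, not part of any standard product):

```python
import numpy as np

def ternary_image(k, th, u):
    """Combine levelled K, Th and U grids into an RGB composite.

    Each band is stretched independently to the 0-1 range so that
    red shows relative potassium, green thorium and blue uranium.
    """
    def stretch(band):
        lo, hi = np.nanpercentile(band, [2, 98])  # clip extreme values
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

    return np.dstack([stretch(k), stretch(th), stretch(u)])

# Random stand-in grids; real data would come from levelled survey grids
rng = np.random.default_rng(0)
k, th, u = (rng.random((100, 100)) for _ in range(3))
rgb = ternary_image(k, th, u)   # shape (100, 100, 3), ready for plt.imshow(rgb)
```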
A range of additional data products can be produced from the data, such as:
Their use is generally restricted to particular interpretation problems e.g. element ratios can be a quick method to interpret weathering environment in a given region.
Resistivity and induced polarisation (IP) surveys are types of measurements generally conducted on the ground with the aim of imaging the electrical structure of the earth in a relatively localised area. These measurements are made by placing electrodes into the earth, transmitting electrical energy between some of them and measuring the response at others. Note that resistivity is the reciprocal of electrical conductivity, so something with low resistivity has high conductivity. Polarisation is a measure of how strongly the ground can be electrically ‘charged’ and how that charge then decays.
Different geometries of electrode distributions can be used to produce measurements that vary with depth (depth ‘soundings’), and measurements can be made at multiple locations and electrode spacings using electronic switching to image the subsurface in sections or over areas, producing 2D or 3D images of the subsurface.
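For a flavour of how individual measurements are turned into a resistivity estimate, here is a minimal sketch for the common Wenner array, where the apparent resistivity is 2πa·V/I for electrode spacing a, measured voltage V and injected current I (the numbers below are purely illustrative):

```python
import math

def wenner_apparent_resistivity(a_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) for a Wenner array with electrode spacing a."""
    return 2.0 * math.pi * a_m * voltage_v / current_a

# Illustrative numbers only: 10 m spacing, 50 mV measured for 100 mA injected
rho_a = wenner_apparent_resistivity(10.0, 0.05, 0.1)
print(f"apparent resistivity ~ {rho_a:.1f} ohm-m")   # ~31.4 ohm-m
```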
As large power levels are needed, these surveys are often restricted to measuring along 2D sections and to relatively shallow depths, up to a couple of hundred metres at most.
These techniques are often used to image groundwater, clay or sulfide distribution. Induced polarisation surveys can be particularly useful in areas of disseminated sulfides where the conductive sulfide grains are not directly connected and thus do not produce appreciable electrical conductivity, but they will provide a strong polarisation response.
Pseudosections of resistivity or IP response can be used for qualitative interpretation, e.g. finding conductive areas within a resistive host, but generally either layered-earth models are fitted to soundings or inverse modelling of sections is conducted to produce sections or plan views of survey results.
Electromagnetic (EM) surveys measure responses from the induction of secondary electrical fields within electrically conductive regions of the earth by varying primary electrical fields. These primary fields either vary in frequency (frequency-domain EM) or in time (time-domain EM). Switching an electrical field on or off produces an electromagnetic pulse which decays; this decaying field induces a primary magnetic field, which then induces a secondary electrical field in electrically conductive media, in turn producing a measurable change in a secondary magnetic field. Often the measurement is not the secondary magnetic field directly but the rate of change of the secondary magnetic field; these are proportional except in the presence of highly conductive materials. Although often associated with airborne acquisition, electromagnetic data can be acquired from ground-based as well as airborne surveys.
FEM surveys are conducted by transmitting electrical energy that varies continuously at one or more frequencies. For these surveys the phase shift and amplitude change between the primary and secondary electrical fields are the key data. For moderately conductive materials the out-of-phase component of the response (the ‘quadrature’) is proportional to the conductivity and so can be especially useful for quick near-surface mapping. Lower frequencies of transmitted electrical energy penetrate deeper for a given conductivity and so provide a method for producing depth-varying data.
TEM surveys, in contrast, are conducted using a primary electrical field which is switched on and off for a given duty cycle. After the transmitter switches off, the decaying secondary field is measured in multiple time windows, producing data that respond to progressively deeper sources at later times.
The depth penetration of both styles of EM surveys also depends on the conductivity of the subsurface, with more conductive regions producing shallower depths of investigation.
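A common rule of thumb that captures both the frequency and conductivity dependence is the skin depth, approximately 503·√(ρ/f) metres for resistivity ρ in ohm-m and frequency f in Hz. A minimal sketch, with illustrative values only:

```python
import math

def skin_depth_m(resistivity_ohm_m, frequency_hz):
    """Approximate EM skin depth: depth at which the field decays to 1/e."""
    return 503.0 * math.sqrt(resistivity_ohm_m / frequency_hz)

# Conductive cover (10 ohm-m) versus resistive basement (1000 ohm-m) at 1 kHz
for rho in (10.0, 1000.0):
    print(rho, "ohm-m ->", round(skin_depth_m(rho, 1000.0)), "m")
# ~50 m in the conductive case, ~500 m in the resistive case
```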
Due to the complexity of the data received, interpretation generally is restricted to models of the subsurface that explain the electromagnetic responses. These models are derived using inverse modelling methods, and can vary in the physics that they simulate from simple one-dimensional depth soundings through to full 3D simulation of the electrical and magnetic fields.
For a first-pass view, ‘conductivity-depth’ images or transforms (CDIs/CDTs) are often used to examine the data. These are not subsurface models and can present a distorted view of the subsurface, but they can highlight data-quality issues or provide a simple qualitative assessment of the subsurface. Plan-view maps of the data, based either on frequency (FEM) or on measurement time windows (TEM), can be produced to provide pseudo-depth slices, but these should be interpreted very cautiously as the effective depth for a given time window or measurement frequency will be shallower in areas of higher conductivity.
Given the variable choices of forward simulation of the electrical field, plus a multitude of inverse model solvers, it is difficult to provide general advice on the use of modelled EM data for interpretation.
Gravity and magnetic data are often grouped together as ‘potential field’ data, as both relate to mapping differences in the potential energy of a particular field of the earth. (Many people will be familiar with the concept of gravitational potential energy: carry a rock up a hill and the change in gravitational potential is what provides the kinetic energy when you roll it back down.)
Magnetic data result from the magnetisation induced in rocks by the earth’s planetary magnetic field; gravity data arise from differences in the distribution of mass within the earth.
Due to the physics of these phenomena, magnetic data are more sensitive to shallow sources than gravity data: magnetic signal strengths are reduced by a factor of 8 for a doubling of the distance to a magnetic source, whereas gravity signals are reduced by a factor of 4 for a doubling of the distance to a mass variation. In addition, rocks can only have a magnetic field associated with them while their magnetic minerals remain cooler than the temperature above which they lose their magnetic properties (the Curie or Néel temperature). As the earth’s temperature increases with depth, at some point a rock will be hot enough that it can no longer remain magnetised.
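Those decay rates can be illustrated with a few lines of arithmetic (purely numerical, no survey geometry assumed):

```python
# Relative signal strength versus distance for a point mass (gravity, 1/r^2)
# and a magnetic dipole (magnetics, 1/r^3), normalised to distance r = 1.
for r in (1.0, 2.0, 4.0):
    gravity = 1.0 / r**2
    magnetic = 1.0 / r**3
    print(f"r = {r:>3}: gravity falls to {gravity:.3f}, magnetics to {magnetic:.4f}")
# Doubling the distance cuts gravity signals by 4x and magnetic signals by 8x.
```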
Gravity and magnetic data can both be acquired on the ground or from airborne platforms; typically, however, magnetic data are acquired from aerial surveys and gravity data from ground surveys. Some airborne gravity platforms measure gradients in the field, i.e. the rate of change of the field rather than the field itself; gravity gradient signals decay at the same rate as magnetic signals.
Although magnetic data are generally presented as ‘total magnetic intensity’ (TMI), which is the local strength of the magnetic field, magnetic fields also vary in their orientation. Many people will be familiar with magnetic declination, the horizontal orientation of the field with respect to true north, but magnetic inclination (the angle of the field to the horizontal) also varies. At the magnetic poles of the earth the field points either directly into the earth (north pole, by convention) or directly away from it (south pole). This orientation of the field varies the shape of geophysical responses according to the local inclination. A processing technique called ‘reduction to the pole’ aims to remove the effect of magnetic inclination on the appearance of anomalies, so that they appear as if they had been recorded at the magnetic poles; this has the effect of centring anomaly peaks over their sources. The correction varies with magnetic inclination, so that over regions larger than a few degrees of latitude the correction needs to be varied (‘variable reduction to the pole’, or vRTP).
Magnetic data also need to be interpreted with reference to remanent magnetisation. Rocks that contain magnetic minerals can retain magnetisation acquired under prior orientations of the earth’s magnetic field. This remanent field can be quite strong and, depending on its orientation, can add to or subtract from the field induced by the earth’s present-day magnetic field. Interpretation of vRTP data can highlight areas where remanent magnetisation is dominant: for instance, bodies with a known dip and strong magnetic remanence might produce an anomaly whose shape is inconsistent with that dip given the geometry of the measured magnetic field. In general, however, such bodies are not recognisable from direct interpretation.
Gravity data, in contrast, do not contain such artefacts, although they need to be processed to account for the geometry of the earth before being interpreted and presented as images or modelled using forward or inverse methods. Mass-distribution changes larger than the scale of the individual interpretation also need to be corrected for, e.g. variations in crustal thickness will change the gravity response and this ‘regional’ effect must be allowed for. In areas of high topography, topographic corrections may also need to be calculated, as the mass of a mountain, or the lack of mass in a valley, can vary the local gravity field sufficiently to be observed in the data.
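To give a sense of the size of these elevation-related corrections, here is a minimal sketch of the standard free-air and Bouguer slab terms, assuming a reduction density of 2670 kg/m3 (sign conventions and the exact reduction workflow vary between processing packages):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MGAL_PER_MS2 = 1.0e5   # 1 mGal = 1e-5 m/s^2

def free_air_correction_mgal(height_m):
    """Free-air correction: ~0.3086 mGal per metre of elevation."""
    return 0.3086 * height_m

def bouguer_slab_mgal(height_m, density_kg_m3=2670.0):
    """Attraction of an infinite slab of rock between the station and the datum."""
    return 2.0 * math.pi * G * density_kg_m3 * height_m * MGAL_PER_MS2

h = 350.0   # illustrative station elevation in metres
print(f"free-air ~ {free_air_correction_mgal(h):.1f} mGal, "
      f"Bouguer slab ~ {bouguer_slab_mgal(h):.1f} mGal")
# ~108 mGal and ~39 mGal respectively for this elevation
```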
Potential field data do not have any absolute depth sensitivity: responses can arise from objects at any depth, although there is a relationship between the measured width (the ‘spatial frequency’ or wavelength) of an anomaly and its potential depth. Shallow features are typified by shorter-wavelength, higher-spatial-frequency anomalies. Deeper features are potentially expressed as longer-wavelength features; however, a shallow, wide source can produce the same wavelength response as a sharp, deep source. This ambiguity in depth adds to the non-uniqueness of the interpretation, and thus quantitative interpretation of potential field data generally requires supporting depth information, such as seismic reflection lines, to image subsurface boundaries.
Seismic refraction is a ground-based technique whereby acoustic energy is generated at set locations and measured at other locations, providing a measure of the traveltime between the two points. It measures the first arrival of seismic energy, which travels by the physical phenomenon of refraction along the fastest layers at depth. The receivers are a type of ground microphone called a geophone.
Energy can be generated by impacts, from dropping a weight or slamming a hammer onto a plate on the ground, or through other sources such as explosions. The practical depth of investigation depends upon the available energy, with large distances requiring considerable energy as it disperses through the ground. The distance to the receivers also dictates the maximum depth of investigation. Refraction can also be conducted in conjunction with seismic reflection surveys, using the seismic energy from the reflection source to provide imaging beneath the surface.
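As a simple illustration of how first-arrival traveltimes are turned into depths, here is a minimal sketch of the classic two-layer slope-intercept interpretation (the velocities and intercept time below are made-up illustrative values):

```python
import math

def refractor_depth_m(v1, v2, intercept_time_s):
    """Depth to a flat refractor from the two-layer slope-intercept method.

    v1: velocity of the upper layer (m/s), from the slope of the direct arrival
    v2: velocity of the refractor (m/s), from the slope of the refracted arrival
    intercept_time_s: refracted-arrival line extrapolated back to zero offset
    """
    return intercept_time_s * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))

# Illustrative values: 800 m/s regolith over 3000 m/s bedrock, 0.05 s intercept
print(f"depth to refractor ~ {refractor_depth_m(800.0, 3000.0, 0.05):.1f} m")
```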
As the seismic energy will travel along the fastest layer(s) it encounters, lower-velocity layers beneath a higher-velocity zone can be missed by such surveys. Additional processing of later arrivals can be conducted, producing measurements of the reflections of seismic energy from subsurface layers; this, in general, is how the majority of active-source seismic surveying is conducted.
Seismic reflection surveys, like seismic refraction surveys, impart acoustic energy into the ground and measure the energy received at recording stations. Although energy from refracted events arrives first, subsequent energy arrives as echoes from subsurface layers, termed reflections. Modern regional-scale seismic reflection surveys can image down through the full thickness of the continental crust and typically cover depths of up to 60 km. The maximum depth is dictated by the energy provided by the seismic source and the recording time: signals from deeper reflectors return later and are weaker, so they require both sufficient source energy and a long enough listening time to be recorded.
Reflections arise from subsurface boundaries where there is sufficient contrast in acoustic impedance to echo the seismic energy back. Acoustic impedance is the product of density and seismic velocity, so reflections can arise from more than just velocity variations. Reflectors also need to be of sufficient spatial extent to be imaged. The top and bottom of a layer can produce distinct reflections where the layer is at least ¼ of a wavelength thick: for the 6000 m/s velocity typical of basement rocks and a 60 Hz signal, the seismic wavelength is 100 m and a layer would need to be 25 m thick to be imaged. The length of reflector able to be recovered depends on the ‘Fresnel length’, and generally features need to be larger to be imaged at greater depths, potentially up to a kilometre or two long at the base of the crust.
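The numbers in that example follow directly from wavelength = velocity / frequency; a minimal sketch reproducing them:

```python
def tuning_thickness_m(velocity_m_s, frequency_hz):
    """Approximate minimum resolvable layer thickness (~1/4 of the wavelength)."""
    wavelength = velocity_m_s / frequency_hz
    return wavelength / 4.0

# The worked example from the text: 6000 m/s basement rocks and a 60 Hz signal
print(tuning_thickness_m(6000.0, 60.0))   # 25.0 m (from a 100 m wavelength)
```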
Considerable processing is required to obtain an image from a seismic reflection survey; this processing is a field of research in its own right. In general, data are:
These processes can be conducted in different orders; for example, modern seismic processing commonly migrates the data before the traces are added together (‘pre-stack depth migration’). Many of the processing steps require estimates of the subsurface velocity, often estimated from the reflection data themselves, and the quality of the velocity field used will greatly affect the quality of the processed image.
Seismic reflection data are most sensitive to horizontal structures in the subsurface and can be blind to vertical features. The steeper the dip of a feature, the harder it is to image, as its signal will only be recorded on more distant geophones, with most of the energy departing the recording ‘spread’ of geophones. The apparent orientation of features also changes with processing: for example, migration and conversion of the data from two-way travel time to true depth will steepen a reflection in the image, shorten its spatial length and move it up-dip so that it is correctly placed beneath the recorded seismic section. Use of too high a velocity during migration will over-correct reflections, producing a distinct ‘smile’ pattern where reflections appear to arch upwards; under-correction will produce ‘frowns’, with the ends of reflections sagging downwards.
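At its simplest, the time-to-depth step mentioned above is half the two-way travel time multiplied by the average velocity down to the reflector; a minimal sketch, with purely illustrative values:

```python
def twt_to_depth_m(two_way_time_s, average_velocity_m_s):
    """Convert two-way travel time to depth using an average velocity."""
    return average_velocity_m_s * two_way_time_s / 2.0

# A reflector at 4 s two-way time with an average velocity of 6000 m/s
print(twt_to_depth_m(4.0, 6000.0))   # 12000.0 m, i.e. ~12 km
```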
At a regional scale, seismic data are often acquired only as a single section, and assumptions are made regarding the structure along strike from the seismic line. Reflectors ‘out of plane’ of the seismic section, i.e. those not vertically beneath the seismic reflection line, will still affect the recorded signals and can produce artefacts in the final data. These artefacts can be challenging to filter out and might give rise to spurious interpretations. The effect of geological strike relative to the orientation of a survey also needs to be considered, with the recovered image showing only an apparent dip if the line does not cross geological strike at right angles. At the most extreme, a dipping feature imaged along rather than across strike will appear as a flat plane rather than a dipping body.
In interpretation at a regional scale, the orientation of the lines needs to be considered, as they are rarely straight. Where a seismic line makes a turn, there can be less energy returned from directly beneath the line and more from the sides, affecting the assumption that the image is purely of the region directly beneath the line and complicating the processing.
Seismic tomography is a technique in which images of the subsurface are produced through models of subsurface velocities, potentially presented as depth slices. The name comes from the Greek tomos, slice, and graph, image, i.e. imaging by slices. Generally this is conducted by measuring natural acoustic energy rather than actively producing energy, although active sources can be used.
Natural acoustic energy is produced by effects such as ocean waves arriving at beaches, wind interacting with the surface, anthropogenic noise such as vehicle traffic, and earthquakes. This energy can travel vast distances, and its interactions with subsurface velocity changes affect its transit from source to receiver. This style of survey uses multiple receivers, in the form of seismometers, listening for a set period, and often uses correlations between each site and every other site within a survey to estimate the earth structure between the sites. This requires considerable processing and results in a model of velocity for the earth, plus other parameters such as the thickness of the crust or of the solid portion of the plate (termed the lithosphere) beneath the survey area. There are also techniques that do not require stations to record at the same time as others, so long as there is sufficient overlap between the areas to be modelled.
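A minimal sketch of the cross-correlation idea is given below, with two synthetic noise records standing in for real seismograms; a real ambient-noise workflow would add pre-processing such as spectral whitening and stacking of many time windows:

```python
import numpy as np

def noise_cross_correlation(trace_a, trace_b):
    """Cross-correlate two noise records; after sufficient stacking the result
    approximates the inter-station response (Green's function)."""
    a = (trace_a - trace_a.mean()) / trace_a.std()
    b = (trace_b - trace_b.mean()) / trace_b.std()
    return np.correlate(a, b, mode="full") / len(a)

rng = np.random.default_rng(1)
common = rng.standard_normal(5000)                    # energy seen by both stations
station_a = common + 0.5 * rng.standard_normal(5000)
station_b = np.roll(common, 40) + 0.5 * rng.standard_normal(5000)   # 40-sample offset

ccf = noise_cross_correlation(station_a, station_b)
lags = np.arange(-(len(station_a) - 1), len(station_a))
peak_lag = lags[np.argmax(ccf)]
print("peak correlation at lag", peak_lag, "samples")
# the imposed 40-sample offset appears as the peak lag
# (its sign depends on the correlation convention used)
```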
The deployment timeframe and the methods used to produce the model dictate the maximum depth of investigation of the survey. Small-scale surveys with short deployment times can produce models of the velocity of regolith materials, whereas year-long deployments over large scales can image all the way to the centre of the earth, albeit with different horizontal fidelity at different depths.
The choice of modelling, and then the interpretation of the resulting velocity models, needs to be targeted towards the desired survey outcomes. Increasingly, velocity models are directly interpreted by geologists, although they often retain the most value if they are solved for models of earth parameters (e.g. composition and temperature) rather than interpreted directly in terms of low- and high-velocity zones and the implications for subsurface structure drawn from these images.
Magnetotelluric surveys measure the earth’s naturally varying magnetic and electrical fields and provide information from the near surface to the deep lithosphere and asthenosphere, depending on the style of survey conducted. While it is possible to conduct such surveys using controlled EM sources, most magnetotelluric surveys use only passive sources. These signals arise primarily from the interaction of solar and cosmic particles with the earth’s magnetic field and from thunderstorm activity.
Magnetotelluric surveys are categorised by their recording frequencies. Audio-frequency magnetotelluric (AMT) data capture signals in the ~20,000 to 5 Hz range and respond to conductors in the upper kilometre of the earth. Broadband magnetotelluric (BBMT) frequencies range from ~10,000 Hz to 0.0002 Hz (5,000 s period) and provide data from several hundred metres depth to around the thickness of the crust. Finally, long-period magnetotelluric (LPMT) data cover the range from around 10 Hz to 0.000025 Hz (40,000 s period) and provide information on the earth’s conductivity structure from the upper several kilometres to below the base of the lithosphere. The data are acquired differently: AMT and BBMT magnetic fields are measured using induction coils, which must be calibrated for their magnetic field response, while LPMT magnetic fields are typically measured with fluxgate magnetometers.
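Those frequency bands map onto depth through the same skin-depth relationship used for controlled-source EM, often quoted as a penetration depth of roughly 500·√(ρ·T) metres for resistivity ρ in ohm-m and period T in seconds. A minimal sketch with an illustrative 100 ohm-m earth:

```python
import math

def mt_penetration_depth_m(resistivity_ohm_m, period_s):
    """Approximate magnetotelluric penetration (skin) depth in metres."""
    return 503.0 * math.sqrt(resistivity_ohm_m * period_s)

# Illustrative: a 100 ohm-m earth sampled at AMT, BBMT and LPMT periods
for period in (0.001, 100.0, 10000.0):
    depth_km = mt_penetration_depth_m(100.0, period) / 1000.0
    print(f"T = {period:>8} s -> ~{depth_km:.1f} km")
# roughly 0.2 km, 50 km and 500 km for these three periods
```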
These data can be used in multiple ways. Qualitatively, phase tensors can be mapped out spatially to show the frequency-dependent response relating to the orientation of conductors. Induction arrows can also be calculated for each site to highlight the direction towards conductors. Quantitative analysis of magnetotelluric data, like much electromagnetic data, is generally performed through inverse models, either layered models for near-surface structure or full 3D models.
Fundamentally, all geophysical techniques respond to changes in subsurface physical properties, producing a contrast which results in a detectable signal within a geophysical measurement. Changes in the following properties produce these responses:
Although not a physical property per se, the concentration of radiogenic elements produces measurable gamma radiation, and these concentrations can also be determined through chemical analysis
Measuring physical properties can be challenging and scale-dependent, and needs to account for multiple factors that might affect the measurement. Many physical properties vary with orientation (anisotropy), and in these situations measurements must take the orientation of the sample into account, even if the anisotropy is at a scale much smaller than will produce a response in geophysical surveys.
Physical properties can also be measured in situ using tools such as wireline logging tools. These tools effectively conduct very small-scale geophysical measurements and are calibrated to produce estimates of in-situ physical properties.