Illustrations

Kappas
Geometry
Background calculation
Using the sectored insert
The shadow effect
Tau corrections






Kappas

The concept of 'intrinsic' sensitivities, designated by 'Kappa', was introduced in 1989 by De Jongh for the development of UniQuant. The Kappas are sample-independent, hence they represent 'intrinsic' instrumental sensitivities.
The Kappas not only describe the sensitivity of a measuring channel for its corresponding element, but also the sensitivities of the same channel for other (interfering) elements due to spectral line overlaps.

A measuring channel for the fluorescent line i has a certain sensitivity for the intended element i. This sensitivity is referred to as Kii (Kappa, i, i).

The same measuring channel i may have an (unwanted) finite sensitivity for another element j. This sensitivity is referred to as Kij (Kappa, i, j).
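
To make this concrete, here is a minimal numerical sketch in Python (not UniQuant's actual equations, which also include matrix corrections): the net intensity of each channel is modeled as a sum of Kappas times concentrations, and solving the resulting linear system corrects the line overlaps. All numbers are illustrative.

    import numpy as np

    # Sketch: intensity of channel i = sum over elements j of Kij * cj.
    # Kii is the wanted sensitivity, Kij (i != j) the unwanted overlap
    # sensitivity. The values below are made up for illustration.
    K = np.array([
        [100.0,  4.0],   # channel i: mainly element i, slightly element j
        [  2.0, 80.0],   # channel j: slightly element i, mainly element j
    ])
    c_true = np.array([0.30, 0.70])   # concentrations (mass fractions)
    I = K @ c_true                    # net intensities the channels measure

    # Solving the linear system recovers the concentrations, i.e. the
    # off-diagonal Kappas correct the spectral line overlaps.
    print(np.linalg.solve(K, I))      # -> [0.30, 0.70]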


Geometry

The classical intensity formula assumes that the irradiated volume is cylindrical rather than wedge shaped. Systematic errors will therefore occur when evaluating light matrix samples. We refer to this effect as the Wedge Effect.

The sample is irradiated by a cone of X-rays whose axis makes an angle of 60 degrees (for most spectrometers) with the plane of the sample surface. The measuring channel, on the other hand, looks at parallel X-rays from the sample space at a take-off angle of, say, 45 degrees and through an elliptical mask that projects a 25 mm diameter circle onto the sample surface. The irradiated space that is seen by the measuring channel has the shape of a wedge. The height of the wedge is about 16 mm, depending on the geometry of your spectrometer.

For thick metal samples this whole issue is irrelevant because the measuring depth from the sample surface is only on the order of 10 to 100 micrometers. However, for light matrix samples like oil, water, polymers and beads, the measuring depth may be much larger than the thickness of the sample, especially for the short-wavelength XRF lines, BaKa ... ZrKa. This effect is taken care of by the extended de Jongh Kappa equation, which is used when the sample height is specified in the general data.
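
As an illustration of why this matters, the following sketch estimates the depth from which 90% of the detected fluorescence originates, using the standard exponential attenuation of the incident and emerging beams. The attenuation coefficients, densities and angles are round illustrative numbers, not values taken from UniQuant.

    import math

    def measuring_depth(mu_in, mu_out, rho, psi_in=60.0, psi_out=45.0,
                        fraction=0.90):
        """Depth (cm) above which `fraction` of the fluorescence originates.

        Fluorescence from depth z is attenuated by
        exp(-rho * z * (mu_in/sin(psi_in) + mu_out/sin(psi_out))),
        so the cumulative signal saturates exponentially with depth.
        mu_in, mu_out are mass attenuation coefficients in cm^2/g.
        """
        m = rho * (mu_in / math.sin(math.radians(psi_in))
                   + mu_out / math.sin(math.radians(psi_out)))
        return -math.log(1.0 - fraction) / m

    # Steel-like sample: high absorption -> depth of about 10 micrometers.
    print("steel:", 1e4 * measuring_depth(80.0, 200.0, 7.8), "um")
    # Oil-like sample with a short-wavelength line: depth of centimetres,
    # i.e. often larger than the sample thickness.
    print("oil  :", measuring_depth(0.3, 0.4, 0.9), "cm")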


Improved Background Calculations

Based on our 15 years of experience with UniQuant, we were able to improve the background calculation algorithm in several areas for best performance.
This, together with solving line overlaps once and for all, was much more complicated than deriving the matrix corrections directly from fundamental parameters.

Use of Shape within a Group of channels

Background calculation is done separately for each Group of channels. Several of these channels correspond to XRF lines of elements that are not present in the sample; these automatically serve as background channels. The principle is the drawing of the under-contour of the measured intensities, see Picture 1.

When the distances to the supporting lines are very large (at longer wavelengths), the shape is used to calculate the ratios between the distance and the height of the peaks, rather than using a straight line, see Picture 2.
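
The sketch below illustrates one way to read these two steps (it is not UniQuant's actual algorithm): the under-contour is taken as the lower convex envelope of the measured intensities across wavelength, and between distant supporting points a stored background shape function, here a placeholder, fixes the interpolation instead of a straight line.

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def under_contour(points):
        """Lower convex envelope of (wavelength, intensity) points (Picture 1)."""
        hull = []
        for p in sorted(points):
            # Pop the last point while it lies on or above the chord to p,
            # so that only the under-contour of the intensities remains.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    def background_at(lam, left, right, shape):
        """Background between two distant supporting points (Picture 2).

        `shape` is a callable lam -> relative background height; it stands
        in for the real, instrument-dependent background shape.
        """
        (l1, b1), (l2, b2) = left, right
        t = (shape(lam) - shape(l1)) / (shape(l2) - shape(l1))
        return b1 + t * (b2 - b1)

    # Background channels lie on the contour; the peak at 0.15 nm is left out.
    pts = [(0.05, 120.0), (0.10, 90.0), (0.15, 400.0), (0.20, 80.0)]
    print(under_contour(pts))   # -> [(0.05, 120.0), (0.10, 90.0), (0.20, 80.0)]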

UniQuant includes a Compton table to take care of the absorption edges when calculating the background.
The primary radiation is scattered by the sample, and the scattered intensity at a given wavelength (lambda) is proportional to the primary intensity at that wavelength and inversely proportional to the mass absorption coefficient at that wavelength. Absorption edges are, of course, part of this. Because of the Compton shift, the edges are not sharp, and a Compton table is used to take care of that shift, see Picture 3.
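
A minimal sketch of these two ingredients, with placeholder spectra: the scattered background at a wavelength is taken proportional to the primary intensity divided by the mass absorption coefficient, and the standard Compton formula gives the wavelength shift that smears out the edges.

    import math

    COMPTON_NM = 0.0024263  # h / (m_e * c) in nanometres

    def compton_shift(lam_nm, theta_deg):
        """Compton-shifted wavelength for scattering angle theta (degrees)."""
        return lam_nm + COMPTON_NM * (1.0 - math.cos(math.radians(theta_deg)))

    def scattered_background(lam_nm, primary_intensity, mass_absorption):
        """Scattered background ~ primary intensity / mass absorption.

        `primary_intensity` and `mass_absorption` are callables lam -> value;
        both stand in for real tube- and sample-specific data.
        """
        return primary_intensity(lam_nm) / mass_absorption(lam_nm)

    # Rh Ka (about 0.0613 nm) scattered over 90 degrees shifts by ~0.0024 nm.
    print(compton_shift(0.0613, 90.0))   # -> ~0.0637 nm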


Sectored Insert
Small powder samples should either be well distributed over the whole surface or be contained in a sector such that, by the spinning of the sample, its intensities are averaged and simulate a full surface. A special 'Sectored Insert' is placed in the film cup for the positioning of certain types of small samples. It was invented by the maker of UniQuant to overcome the problem described above. See Drawing F.

A loose powder sample is placed in a sector of the viewed circular area. Because the sample cup is spinning, all parts of the viewed circle contribute in the same way as for a full-area sample.
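
A small numerical check of this claim (a sketch with a made-up radial sensitivity profile): averaged over a full rotation, a sector sample yields exactly the sector's angular fraction of the full-area intensity, with every radius weighted as in a full sample.

    import numpy as np

    def mean_intensity(sector_deg, sensitivity, r_max=12.5, n=400):
        """Rotation-averaged intensity from a sector-shaped sample.

        Spinning makes the azimuthal position irrelevant, so only the
        sector's angular fraction and the radial profile enter.
        `sensitivity` is an arbitrary illustrative profile.
        """
        r = np.linspace(0.0, r_max, n)
        radial = np.trapz(sensitivity(r) * r, r)    # same radial weighting
        return (sector_deg / 360.0) * 2.0 * np.pi * radial

    profile = lambda r: np.exp(-r / 10.0)   # made-up sensitivity vs radius
    full = mean_intensity(360, profile)
    sector = mean_intensity(60, profile)
    print(sector / full)   # -> 1/6, independent of the chosen profile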

Practical hint:
It may at first be a bit difficult to keep the powder from spilling into a neighboring sector. Advice: place the film cup on a flat, clean plate to avoid an air gap between the sector ribs and the film. Then carefully insert the sample and lightly press it with a rod that has a flat end face shaped like the sector.

For a small metal object, this ideal solution cannot be achieved. The best that can be done is to place the object at, say, 1/2 the radius. The Sectored Insert may help to keep the sample in position while spinning.


The Shadow Effect

Granular materials, like metal chips, drillings and powders, suffer from the so-called shadow effect.

The measured radiation makes an angle of roughly 90 degrees with the incident radiation. If the grains are spherical, the illuminated parts are viewed by the measuring channel as half moons. The effective area is then smaller than the flat area of the supporting film. Accordingly, the intensities measured on such granular samples are lower than on a flat polished disk of the same material. And, when UniQuant calculates the concentrations, their sum before normalization to 100% will be lower than 100%, assuming Case Number = 0 (all is known) and that the effective diameter (area) was specified on the basis of a flat sample, 24 mm diameter for example.

Now suppose that all present elements are measured with soft analytical lines (like TiKa, SnLa and AlKa), such that all intensities are reduced by about the same factor due to shadow. Then all concentrations will be calculated too low by about the same factor. If the calculated concentrations are now normalized such that their sum is 100%, the shadow effect has been eliminated. This is roughly what UniQuant does. Since there is no strict proportionality between intensities and concentrations, UniQuant takes this fully into account by normalizing to 100% after each single step of the iterative calculations.
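
In the strictly proportional case, the cancellation can be shown in a few lines (a sketch only; the real calculation iterates full matrix corrections). The sensitivities below are made up.

    import numpy as np

    # Made-up sensitivities (counts per unit concentration) for soft lines.
    k = np.array([50.0, 80.0, 120.0])
    c_true = np.array([0.2, 0.3, 0.5])

    def concentrations(intensities, k):
        c = intensities / k    # naive concentration estimate
        return c / c.sum()     # normalize the sum to 100% (here, to 1.0)

    I = k * c_true
    print(concentrations(I, k))          # -> [0.2, 0.3, 0.5]
    # Shadow effect: all intensities reduced by the same factor...
    print(concentrations(0.6 * I, k))    # -> unchanged; the factor cancels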

If the grain size is fairly constant for a family of samples, one may easily make a copy of the existing Kappa list and call this Kappa list, for example, "UQPowder". With only a few standards, the sensitivity loss for the lighter elements is taken into account, and the sum before normalization is then 100% again.


Tau Corrections

Each time a photon is transformed into an electrical pulse, the detection system needs a short time to recover before it is ready to detect the next photon. This so-called dead time is a fraction of a microsecond.
If two photons are absorbed in the detector within this time interval, there will be just one joint pulse with an amplitude (height) equal to the sum of the photon energies.
If the joint pulse falls outside the Window of the pulse height distribution (exceeding the Upper Level), it will not be counted and we have failed to detect both photons; this is the pile-up effect.
A special measuring sequence for the determination of the Tau coefficients is required, in principle for each element line group.

The corrections for counting loss are made when the intensities produced by the spectrometer are imported into UniQuant.
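
As a sketch of what such a counting-loss correction looks like, here is the classic non-paralyzable dead-time formula; the actual Tau coefficients and formula used by UniQuant are determined by the special measuring sequence and may differ.

    def dead_time_corrected(measured_cps, tau_s):
        """Non-paralyzable dead-time correction.

        The detector is blind for tau seconds after each counted pulse,
        so the true rate is  true = measured / (1 - measured * tau).
        The tau value below is illustrative (a fraction of a microsecond).
        """
        return measured_cps / (1.0 - measured_cps * tau_s)

    print(dead_time_corrected(500_000, 0.3e-6))   # ~588 kcps, not 500 kcps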




info.spectrometry@thermofisher.com

UniQuant™ 1989...2021 is a trademark of Thermo Fisher Scientific Inc.
Copyright © 2021 Thermo Fisher Scientific Inc. All rights reserved.