- There are discontinuities of thermal intensity between consecutive images. How can I fix it?
- Where can I see the temperature?
- Should I input the JPG images or the TIFF?
- Is it possible to process thermal movies?
- The processing takes a very long time! How can I reduce it?
- What are the advantages of merging an RGB and a thermal dataset?
- All the images in the rayCloud are either entirely white or entirely black, and the calibration rate is very low.
- I am using a custom integration of a thermal sensor. How can I integrate the metadata required by Pix4Dmapper?
If the temperature seems to drift over time, this is due to the characteristics of the camera (uncooled cameras typically exhibit this behavior) and it cannot be corrected by software. Most of these cameras provide an automated way to recalibrate the intensity, usually by taking a reference image with the shutter closed (often called flat-field correction, FFC). Check with the camera manufacturer for more details.
To obtain temperatures, a sensor that provides absolute temperature (rather than relative temperature) is needed. The standard FLIR Vue Pro and Zenmuse XT do not provide absolute temperature; however, both are available in a radiometric version that does record it. It is recommended to process the uncompressed TIFF images and create the following index to view the absolute temperature in degrees Celsius:
0.04*thermal_ir - 273.15
The Thermomap camera from senseFly also records absolute temperature. The corresponding index is:
0.01*thermal_ir - 100
This index is already present in the software and is loaded automatically for Thermomap projects.
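The two indices above can be applied directly to the raw 16-bit pixel values. As a minimal sketch (the function names are illustrative, and reading the TIFF files themselves is left out), the conversions look like this:

```python
# Sketch: convert raw 16-bit thermal values to degrees Celsius using
# the indices from this article. Function names are illustrative and
# not part of Pix4Dmapper.
import numpy as np

def radiometric_to_celsius(raw):
    """Radiometric FLIR Vue Pro / Zenmuse XT index: 0.04*thermal_ir - 273.15."""
    return 0.04 * np.asarray(raw, dtype=np.float64) - 273.15

def thermomap_to_celsius(raw):
    """senseFly Thermomap index: 0.01*thermal_ir - 100."""
    return 0.01 * np.asarray(raw, dtype=np.float64) - 100.0

# Example: a raw value of 7404 from a radiometric FLIR sensor
# corresponds to 0.04 * 7404 - 273.15 = 23.01 degrees Celsius.
print(radiometric_to_celsius(7404))
```

The 0.04 factor reflects raw values stored in hundredths of a kelvin, which is why subtracting 273.15 yields degrees Celsius.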
It is possible to process JPG thermal images, but this is not advised. The .jpg images contain color-mapped temperatures, i.e. only a visual representation of the temperature instead of the raw values. In addition, the color-mapped images are usually much harder to process than the original TIFF images. It is recommended to use the grayscale TIFF images.
It is possible but not recommended, for the same reasons that TIFF imagery is preferable to JPG (see the previous question). Moreover, a movie is less likely to contain image geolocation, may suffer from an even higher compression rate, and the extracted frames typically have far too much overlap.
There are two main factors that affect the speed of step 1. Initial Processing.
- Too much overlap: if many images in the project are taken from nearly the same location, the number of image pairs to match grows rapidly and the processing time increases dramatically. It is advised to use a flight planning app (such as Pix4Dcapture) that triggers the camera based on distance instead of time, or to manually remove images if the drone was hovering at the same location for an extended period of time.
- Camera model optimization: if the camera's initial parameters are too different from the optimized ones, processing may slow down. Make sure that the pixel size and focal length are entered correctly (see Pix4D support article 202560169).
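The advice about removing hovering images can be automated with a simple minimum-distance filter on the image geotags. The sketch below is a hypothetical helper, not a Pix4Dmapper feature; in practice the (lat, lon) positions would come from the images' EXIF GPS tags:

```python
# Keep an image only if it was taken at least min_dist meters away
# from the last kept image, dropping near-duplicates from hovering.
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) points."""
    r = 6371000.0  # mean Earth radius in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def filter_hovering(positions, min_dist=5.0):
    """Return indices of images spaced at least min_dist meters apart."""
    kept = []
    for i, pos in enumerate(positions):
        if not kept or haversine_m(positions[kept[-1]], pos) >= min_dist:
            kept.append(i)
    return kept

# Three shots taken while hovering, then one about 11 m further north:
shots = [(46.5200, 6.5660), (46.52000001, 6.5660),
         (46.5200, 6.56600001), (46.5201, 6.5660)]
print(filter_hovering(shots, min_dist=5.0))  # -> [0, 3]
```

The threshold of 5 m is only a placeholder; a sensible value depends on flight height and the desired overlap.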
Thermal cameras usually have a much lower resolution than RGB cameras, so a purely thermal 3D model is of much lower quality. The idea is to use the higher-resolution RGB images to compute a detailed 3D model, and to project the thermal texture onto it. This greatly improves the final thermal 3D model.
This happens with thermal cameras that are not registered in our database. The preferred way to solve it is to send us a sample of the dataset so that we can include the camera in our database. Another way is to open the .p4d file with a text editor and add the line `<pixelValue pixelType="uint16" min="0" max="0"/>` below the `<tangentialT2>` line and above the `<cameraModelSource>` line. This should be done before starting to process.
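As an illustration, the placement inside the camera model block of the .p4d file would look like the fragment below. The numeric values and the `<cameraModelSource>` content are placeholders, not taken from a real project file:

```xml
<!-- Fragment of a .p4d camera model; values shown are placeholders. -->
<tangentialT2>0</tangentialT2>
<pixelValue pixelType="uint16" min="0" max="0"/>  <!-- line to add -->
<cameraModelSource>userDatabase</cameraModelSource>
```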
Make sure to follow the document listing all the EXIF tags read by Pix4Dmapper (see Pix4D support article 205732309).
Moreover, in the case of a custom integration, one should ensure that the response of the camera is linear and that the images are corrected for dark current.