Imagery captured with the Parrot Sequoia camera is automatically recognized by the software, as the camera is included in the Pix4Dmapper camera database.
Processing Sequoia multispectral imagery
1. Create a new project: Step 2. Creating a Project.
2. Import all discrete band imagery (green, red, red edge, and NIR) from the flight, including the images of the calibration target (a sorting sketch follows these steps).
3. Choose the Ag Multispectral processing template: Processing options.
4. The radiometric calibration target images can be automatically detected and used by the software under certain conditions. For more information: Radiometric calibration target.
5. If the radiometric calibration target data is not detected automatically, this process can still be done manually. For more information: Radiometric calibration target.
6. Select the radiometric correction options corresponding to the device configuration you used to fly. For more information: Radiometric corrections.
7. Start processing.
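Before step 2, it can help to verify that the flight folder contains a complete set of images for every band. The following is a minimal sketch (not a Pix4D tool), assuming the usual Sequoia naming in which single-band TIFFs end in _GRE, _RED, _REG (red edge), or _NIR; the folder name "flight_01" is hypothetical.

```python
# Minimal sketch: count the discrete band images per band before importing,
# so incomplete captures are caught early. Assumes the usual Sequoia naming,
# where single-band TIFFs end in _GRE, _RED, _REG (red edge), or _NIR.
from collections import Counter
from pathlib import Path

BAND_SUFFIXES = {"GRE", "RED", "REG", "NIR"}

def count_band_images(flight_dir: str) -> Counter:
    counts = Counter()
    for tif in Path(flight_dir).glob("*.TIF"):
        band = tif.stem.rsplit("_", 1)[-1].upper()
        if band in BAND_SUFFIXES:
            counts[band] += 1
    return counts

counts = count_band_images("flight_01")  # hypothetical folder name
print(counts)
if len(set(counts.values())) > 1:
    print("Warning: band counts differ; some captures may be incomplete.")
```

If the per-band counts differ, some captures may be missing a band and the corresponding images are worth checking before processing.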
Processing Sequoia RGB images
1. Create a new project: Step 2. Creating a Project.
2. Import all the RGB images into the same project (a filtering sketch follows these steps).
3. If the flight plan is linear, ensure the Linear Rolling Shutter model is selected: How to correct for the Rolling Shutter Effect.
4. Choose the Ag RGB processing template: Processing Options Default Templates.
5. Start processing.
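Since the Sequoia stores RGB captures alongside the single-band TIFFs, it can be convenient to filter them out programmatically for step 2. A minimal sketch, assuming the RGB frames are JPEGs carrying an _RGB suffix (adjust if your files are named differently); the folder name "flight_01" is hypothetical.

```python
# Minimal sketch: collect only the RGB captures from a mixed Sequoia flight
# folder, so the single-band TIFFs are not imported into the RGB project by
# mistake. Assumes the RGB frames are JPEGs with an _RGB suffix.
from pathlib import Path

def list_rgb_images(flight_dir: str) -> list:
    return sorted(p for p in Path(flight_dir).glob("*.JPG")
                  if p.stem.upper().endswith("_RGB"))

rgb_images = list_rgb_images("flight_01")  # hypothetical folder name
print(f"{len(rgb_images)} RGB images found")
```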
Processing Sequoia+ with its targetless radiometric calibration
A radiometric calibration target is not necessary with the targetless radiometric calibration of Parrot's Sequoia+. However, in order to use the targetless radiometric calibration, the file sequoia_therm.dat is required. This file is generated automatically for each flight, is unique to each Sequoia+, and can be found, for each set of images, in the folder where the images are stored during the acquisition mission (a quick check script follows the steps below).
To process a project with Sequoia+:
1. Create a new Pix4Dmapper project: Step 2. Creating a Project.
2. Import your multispectral images, i.e. Green, Red, Red Edge, and NIR.
3. Click Next.
4. Select the Ag Multispectral processing template: Processing Options Default Templates.
5. Select the radiometric correction according to the lighting conditions when the images were captured: Menu Process > Processing Options... > 3. DSM, Orthomosaic and Index > Index Calculator.
5.1. Camera, Sun Irradiance, and Sun angle for clear skies.
5.2. Camera and Sun Irradiance for overcast skies.
6. Start processing.
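Because processing with the targetless calibration depends on sequoia_therm.dat being present, it is worth confirming this before step 5. A minimal sketch, assuming a hypothetical layout with one subfolder per image set under "mission_root".

```python
# Minimal sketch: verify that every image folder of the acquisition mission
# contains the sequoia_therm.dat file required for the targetless
# radiometric calibration, before starting processing.
from pathlib import Path

root = Path("mission_root")  # hypothetical: one subfolder per image set
for folder in sorted(p for p in root.iterdir() if p.is_dir()):
    status = "OK" if (folder / "sequoia_therm.dat").exists() \
        else "MISSING sequoia_therm.dat"
    print(f"{folder.name}: {status}")
```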
Hello,
I have the following situation: my project is about riparian forest research along a river using the Sequoia multispectral camera + sunshine sensor (on an eBee Classic). I divided the study area into blocks, and within each block I had several flight missions, mainly as far as the drone's battery lasted. So the steps were: radiometric calibration > starting flight n1 > auto landing once battery low > changing battery > radiometric calibration again > launching flight n2 (resuming the previous mission), and so on. As a result, I have blocks with 3, 4, and even 5 flights (each with radiometric calibration imagery).
Processing the project with Pix4D, I have the following questions:
For the multispectral processing project:
For RGB processing:
3. Can I process all flights of the same day together (as no multispectral calibration happens here)? Or is it better to process flights that are close to each other in time?
Awaiting your reply.
Thank you for your support.
Kind regards,
Giorgi
Hi Giorgi,
1. That would be the correct way of processing. However, if you want very accurate radiometric correction, you can also process each flight dataset separately with the target for that flight and then merge the outputs in QGIS.
2. It all depends on the sun and weather conditions. The sunlight might vary even over a short period of time. However, the DLS sensor is already recording that, so if you see there is no drastic change in sunlight, you can use your workflow.
3. You can process the RGB images together, but the images from each flight might be detected as separate cameras. Sometimes the camera serial number written in the image EXIF tags changes, or exterior conditions (weather, exposure) vary greatly during the flight, and the software will create several camera models accordingly. In any case, we do not expect this to negatively influence the reconstruction of your project (a sketch for inspecting the serial numbers follows this reply).
However, for large multi-flight projects, we recommend processing them separately and then merging them. For merging projects, please follow the instructions here: https://support.pix4d.com/hc/en-us/articles/202558529#gsc.tab=0
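If you prefer to script the merge of the exported rasters rather than doing it in QGIS, a minimal sketch with rasterio (my suggestion, not a Pix4D tool) could look like the following; the file names are hypothetical, and the per-flight rasters are assumed to share the same CRS.

```python
# Minimal sketch: mosaic per-flight index maps (e.g. NDVI GeoTIFFs exported
# by Pix4D) into one raster with rasterio instead of QGIS.
import rasterio
from rasterio.merge import merge

paths = ["flight_01_ndvi.tif", "flight_02_ndvi.tif", "flight_03_ndvi.tif"]
sources = [rasterio.open(p) for p in paths]

# merge() keeps the first valid (non-nodata) pixel by default, so overlaps
# between flights are filled from the first raster that covers them.
mosaic, transform = merge(sources)

profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
               transform=transform)
with rasterio.open("merged_ndvi.tif", "w", **profile) as dst:
    dst.write(mosaic)
for src in sources:
    src.close()
```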
Please also make sure, when designing the different image acquisition plans, that there is enough overlap between the flights.
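Regarding point 3, if you want to check in advance how many camera bodies might be detected, you can read the serial numbers from the image EXIF tags. A minimal sketch with Pillow, assuming the serial is stored in the standard Exif BodySerialNumber tag (0xA431) and that your Pillow version provides Exif.get_ifd; the folder name "flight_01" is hypothetical.

```python
# Minimal sketch: count distinct camera serial numbers across the RGB
# images, which hints at how many camera models Pix4D may create.
from collections import Counter
from pathlib import Path

from PIL import Image

serials = Counter()
for jpg in Path("flight_01").glob("*.JPG"):  # hypothetical folder name
    exif = Image.open(jpg).getexif()
    exif_ifd = exif.get_ifd(0x8769)  # tags stored in the Exif sub-IFD
    # 0xA431 is the standard BodySerialNumber tag.
    serials[str(exif_ifd.get(0xA431, "unknown"))] += 1
print(serials)
```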
Hi Momtanu,
Thank you for your reply.
I processed flights of the same block (and close to each other) together. But, as I want more precision, I am going to process each flight separately.
However, as the report (after processing step 1) shows, the edges of the processed area have less image overlap, and such areas are shown in red in the report. I suppose this means the edge area is less reliable and invalid for further calculation (for that reason I plan flights so that they cover a somewhat larger area than I actually need). Processing flights separately results in a separate area for each flight, surrounded by a red (low image overlap) border. Compared with the area resulting from processing all flights together, I suppose the red (i.e. invalid data) area is larger.
I am attaching an illustration to make clear what I mentioned above. The first image shows the situation when the 3 flights were processed together, and the remaining images are the separately processed flights (n 1, 2, 3). When all flights were processed together, the areas where 1 borders 2 and 2 borders 3 have enough image overlap (i.e. good quality data). However, the same areas are shown in red when these flights are processed separately (where they are "edges").
Is this aspect worth any concern? Does processing flights separately result in less valid data for further processing?
Thank you.
Giorgi
Giorgi,
Actually, they are not invalid data; they will have a value in each pixel. It just means there was less overlap in those areas, so a matching keypoint was found in only a few images. I would say the reconstruction will not be as accurate as in the middle, but it will still be usable. If there are NaN values, you will see them as holes in the map and also on this overlap graph (a quick check is sketched below).
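If you want to quantify those holes, for example to compare a merged mosaic against separately processed flights, here is a minimal sketch with rasterio (my suggestion; the file name is hypothetical).

```python
# Minimal sketch: count the nodata "holes" in an exported index map.
import numpy as np
import rasterio

with rasterio.open("flight_01_ndvi.tif") as src:  # hypothetical file name
    band = src.read(1, masked=True)  # nodata pixels come back masked
holes = int(np.ma.count_masked(band))
print(f"nodata pixels: {holes} / {band.size} "
      f"({100 * holes / band.size:.1f}%)")
```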
Next time, if you want more radiometric accuracy and want to process the flights separately, I would recommend adding extra flight lines in all directions.
Also, if the light conditions did not change much, you can use only one panel (the middle one) for the three flights together, as the DLS sensor already measures the small changes accurately.
You will have to check for yourself which method gives more accurate data in your case.