How to process Sequoia imagery - PIX4Dmapper


Imagery captured with the Parrot Sequoia camera is automatically recognized by the software, since the camera is in the Pix4Dmapper camera database.

 
Important: Multispectral and RGB images from the Sequoia should be processed in two separate projects in Pix4Dmapper (a small file-sorting sketch follows this note).
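
If the two image sets are mixed in one acquisition folder, the split can be scripted. Below is a minimal sketch, assuming the usual Sequoia convention of single-band captures stored as TIFF files and RGB captures stored as JPEG files; all folder paths are hypothetical:

```python
from pathlib import Path
import shutil

SOURCE = Path("flight_0001")                      # hypothetical acquisition folder
MULTISPEC = Path("project_multispectral/images")  # images for the multispectral project
RGB = Path("project_rgb/images")                  # images for the RGB project

MULTISPEC.mkdir(parents=True, exist_ok=True)
RGB.mkdir(parents=True, exist_ok=True)

for image in SOURCE.iterdir():
    suffix = image.suffix.lower()
    if suffix in (".tif", ".tiff"):    # single-band captures (green, red, red edge, NIR)
        shutil.copy2(image, MULTISPEC / image.name)
    elif suffix in (".jpg", ".jpeg"):  # RGB captures
        shutil.copy2(image, RGB / image.name)
```

Each Pix4Dmapper project then imports from its own folder.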
   

Processing Sequoia multispectral imagery

1. Create a new project: Step 2. Creating a Project.
2. Import all the discrete band imagery (green, red, red edge, and NIR) from the flight. This includes the images of the calibration target (a band-completeness check sketch follows these steps).
3. Choose the Ag Multispectral processing template: Processing options.
4. The radiometric calibration target images can be automatically detected and used by the software under certain conditions. For more information: Radiometric calibration target.
5. If the radiometric calibration target is not detected automatically, the calibration can still be done manually. For more information: Radiometric calibration target.
6. Select the radiometric correction options that correspond to the device configuration used during the flight. For more information: Radiometric corrections.
7. Start processing.
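
As referenced in step 2, a quick pre-import check can confirm that every capture has all four bands. This is a minimal sketch; the folder path and the band-tag naming pattern (_GRE, _RED, _REG, _NIR) are assumptions, so adjust them to your data:

```python
from collections import defaultdict
from pathlib import Path

BANDS = {"GRE", "RED", "REG", "NIR"}  # green, red, red edge, NIR tags (assumed naming)
captures = defaultdict(set)

for image in Path("project_multispectral/images").glob("*.TIF"):
    # Assumed pattern: <capture_id>_<BAND>.TIF, e.g. IMG_180101_120000_0001_GRE.TIF
    capture_id, _, band = image.stem.rpartition("_")
    if band in BANDS:
        captures[capture_id].add(band)

for capture_id, bands in sorted(captures.items()):
    missing = BANDS - bands
    if missing:
        print(f"{capture_id}: missing {sorted(missing)}")
```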

Processing Sequoia RGB images

1. Create a new project: Step 2. Creating a Project.
2. Import all the RGB images in the same project.
3. If the flight plan is linear, ensure the Linear Rolling Shutter model is selected: How to correct for the Rolling Shutter Effect.
4. Choose the Ag RGB processing template: Processing Options Default Templates.
5. Start processing.

Processing Sequoia+ with its targetless radiometric calibration

 
Note: Targetless radiometric calibration with the Parrot Sequoia+ is available starting from Pix4Dmapper 4.2.25.

A radiometric calibration target is not necessary when using the targetless radiometric calibration of the Parrot Sequoia+. However, the file sequoia_therm.dat is required. This file is generated automatically for each flight, is unique to each Sequoia+, and is found with each set of images in the folder where the images were stored during the acquisition mission.

 
Important: The file sequoia_therm.dat is generated by the Sequoia+ and must be stored in the same folder as the original images.
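
Before importing, it can be worth verifying that each acquisition folder actually contains the file. A minimal sketch (the root folder name is hypothetical):

```python
from pathlib import Path

ROOT = Path("sequoia_plus_flights")  # hypothetical root holding one folder per flight

for folder in sorted(ROOT.iterdir()):
    if folder.is_dir():
        therm = folder / "sequoia_therm.dat"
        status = "ok" if therm.exists() else "MISSING sequoia_therm.dat"
        print(f"{folder.name}: {status}")
```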

To process a project with Sequoia+:

1. Create a new Pix4Dmapper project: Step 2. Creating a Project.
2. Import your multispectral images, i.e., green, red, red edge, and NIR.
3. Click Next.
4. Select the Ag Multispectral processing template: Processing Options Default Templates.
5. Select the radiometric correction according to the lighting conditions at the time the images were captured: Menu Process > Processing Options... > 3. DSM, Orthomosaic and Index > Index Calculator (a toy sketch of the irradiance-normalization principle follows these steps).

5.1. Camera, Sun Irradiance and Sun angle for clear skies.
5.2. Camera and Sun Irradiance for overcast skies.

6. Start processing.
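
As mentioned in step 5, the sunshine-sensor (irradiance) data is what makes frames taken under changing light comparable. The toy function below illustrates only the general principle of irradiance normalization; it is not Pix4Dmapper's actual calibration model, and all numbers are invented:

```python
def reflectance_proxy(pixel_value, exposure_time, iso, irradiance):
    """Normalize a raw band value by the exposure settings and by the
    irradiance measured by the sunshine sensor, so frames captured under
    different light become comparable. Illustrative only: the real model
    also uses per-camera calibration coefficients."""
    sensor_signal = pixel_value / (exposure_time * iso)
    return sensor_signal / irradiance

# The same ground reflectance seen under bright and dimmer sun yields a
# similar proxy value once irradiance is divided out (invented numbers):
bright = reflectance_proxy(20000, exposure_time=0.001, iso=100, irradiance=1.00)
dim = reflectance_proxy(12000, exposure_time=0.001, iso=100, irradiance=0.60)
print(bright, dim)  # both 200000.0
```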

 
Information: Pix4D Cloud does not support targetless radiometric calibration at this time.

4 comments

  • Giorgi Lebanidze

    Hello,
    I have the following situation: my project is about riparian forest research along a river using the Sequoia multispectral camera + sunshine sensor (on an eBee Classic). I divided the study area into blocks, and within each block I had several flight missions, mainly as long as the drone's battery lasted. So the steps were: radiometric calibration > starting flight n1 > auto landing once the battery is low > changing the battery > radiometric calibration again > launching flight n2 (resuming the previous mission), and so on. As a result, I have blocks with 3, 4, and even 5 flights (each with radiometric calibration imagery).
    Processing the project with Pix4D, I have the following questions:

    For multi-spectral processing project:

    1. If I decide to process all the flights within the same block together, would that give me reliable data? To account for sunshine and the amount of light, I check the time difference between taking the first photo of the first flight of a block and the last photo of the last flight within the same block (i.e., the time it took to complete all flights of the given block), and if it is 1.5-2 hours, I process them together; for calibration (in step 3) I choose images from the middle flight (so that the calibration images are more or less the same distance [timewise] from the first and the last images of the block, which is about 1 hour). Would this be the correct way of processing?
    2. What is, so to speak, the timewise validity of the calibration images? I.e., when I calibrate the camera and launch the drone, for what period afterwards can the calibration image still be applied to the pictures in processing? To put it another way, how far apart (in time) can flights be, at most, in order to process them jointly using the calibration pictures of one flight? I hope I put it clearly.

    For RGB processing:

    3. Can I process all flights of the same day together (as no multispectral calibration happens here)? Or is it better to process flights that are close to each other (in terms of time)?

    Awaiting your reply.

    Thank you for your support.

    kind regards,
    Giorgi   

  • Momtanu (Pix4D)

    Hi Giorgi,

    1. That would be the correct way of processing. However, if you want very accurate radiometric correction, you can also process each flight dataset separately with the target for that flight and then merge in QGIS.

    2. It all depends on the sun and weather conditions. The sunlight might vary even over a short amount of time. However, the DLS sensor is already recording that, so if you see there is no drastic change in sunlight, you can use your workflow.

    3. You can process the RGB images together, but the images from each flight might be detected as separate cameras. Sometimes the camera serial number written in the image EXIF tags changes, and the software will create several camera models according to that, or when exterior conditions (weather, exposure) vary greatly during the flight. In any case, we do not expect this to negatively influence the reconstruction of your project.

    However, for large multiple-flight projects, we recommend processing them separately and then merging. For merging projects, please follow the instructions here: https://support.pix4d.com/hc/en-us/articles/202558529#gsc.tab=0 (a small merge sketch follows at the end of this comment).

    Please make sure, when designing the different image acquisition plans, that:

    • Each plan captures the images with enough overlap.
    • There is enough overlap between two image acquisition plans (Figures 5 and 6 in the link above).
    • The different plans are taken as much as possible under the same conditions (sun direction, weather conditions).
    • The flight height is not too different between the flights, as different heights lead to different spatial resolutions.
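
    As referenced above, the per-flight reflectance or index rasters can also be merged with a script instead of QGIS. A minimal sketch using the rasterio library, assuming each flight produced a georeferenced GeoTIFF in the same CRS (file names are hypothetical):

```python
import rasterio
from rasterio.merge import merge

# Hypothetical per-flight outputs, all in the same CRS.
paths = ["flight1_ndvi.tif", "flight2_ndvi.tif", "flight3_ndvi.tif"]
sources = [rasterio.open(p) for p in paths]

# Overlapping areas keep the first dataset's pixels by default.
mosaic, transform = merge(sources)

profile = sources[0].profile
profile.update(height=mosaic.shape[1], width=mosaic.shape[2], transform=transform)

with rasterio.open("merged_ndvi.tif", "w", **profile) as dst:
    dst.write(mosaic)

for src in sources:
    src.close()
```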
  • Giorgi Lebanidze

    Hi Momtanu,

    Thank you for your reply.

    I processed the flights of the same block (and close to each other in time) together. But, as I want more precision, I am going to process each flight separately.

    However, as the report (after processing step 1) shows, the edges of the processed area have less image overlap, and such areas are shown in red in the report. I suppose this means the edge area is less reliable and invalid for further calculation (and for that reason I plan flights so that they cover a somewhat larger area than I actually need). Processing flights separately results in a separate area for each flight, surrounded by a red (less image overlap) border. Compared with the area obtained after processing all flights together, I suppose the red (i.e., invalid data) area is larger.

    I am attaching an illustration to make clear what I mentioned above. The first image shows the situation when 3 flights were processed together, and the remaining images are the separately processed flights (n 1, 2, 3). When all flights are processed together, the areas where 1 borders 2 and 2 borders 3 have enough image overlap (i.e., good-quality data). However, the same areas are shown in red when these flights are processed separately (where they are "edges").
    Is this aspect worth any concern? Does this mean that processing flights separately results in less valid data for further processing?


    Thank you.
    Giorgi

     

  • Momtanu (Pix4D)

    Giorgi,

    Actually, they are not invalid data; they will have a value in each pixel. It just means there was less overlap in those areas, so a matching keypoint was found in only very few images. I would say the reconstruction will not be as accurate as in the middle, but it will still be usable. If there are NaN values, you will see them as holes in the map and also on the overlap graph (a quick check sketch follows at the end of this comment).

    Next time, if you want more radiometric accuracy and want to process the flights separately, I would recommend keeping extra flight lines in all directions.

    Also, if the light conditions did not change much, you can use only one panel (the middle one) for the three flights together, as the DLS sensor already measures small changes accurately.

    You will have to check for yourself which method gives more accurate data in your case.
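
    As a quick check for such NaN holes, the fraction of empty pixels in an output raster can be measured. A minimal sketch with rasterio and NumPy (the file name is hypothetical, and it assumes a float-valued index map where missing pixels are NaN):

```python
import numpy as np
import rasterio

# Hypothetical per-flight index map; missing pixels are NaN in float rasters.
with rasterio.open("flight1_ndvi.tif") as src:
    band = src.read(1)

holes = np.isnan(band).sum()
print(f"{holes / band.size:.1%} of pixels have no value")
```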
