Menu Process > Processing Options... > 1. Initial Processing > General

Index > Interface > Menu Process

 

 
Access: On the Menu bar, click Process > Processing Options.... The Processing Options pop-up appears. Click 1. Initial Processing. By default, only the General tab is displayed. In the bottom left corner, select the Advanced box to display all the tabs.

 


1. Initial Processing

2. Point Cloud and Mesh

3. DSM, Orthomosaic and Index

Resources and Notifications

 

Tabs: General | Matching | Calibration

Allows the user to change the processing options and to select what the Quality Report displays. It contains two sections:

 

 

Keypoints Image Scale

Allows the user to define the image size at which the keypoints are extracted, relative to the original size of the images. It is possible to select:

 
Information: The keypoints are computed on multiple image scales, starting with the scale chosen from the Keypoints Image Scale drop-down list and going down to the 1/8 scale. For example, if 1/2 (half image size) is selected, the keypoints are computed on images at half, quarter, and eighth image size.
  • Full: Sets the full Image Scale for precise results.
  • Rapid: Sets a lower Image Scale for fast results.
  • Custom: Allows the user to select the Image Scale. The following options are available:
    • Image Scale:
      • 1 (Original image size): This is the recommended Image Scale.
      • 2 (Double image size): For small images (e.g. 640x320 pixels), a scale of 2 (double image size) should be used. More features will be extracted and this will have a positive impact on the accuracy of the results.
      • 1/2 (Half image size): For large projects with high overlap, a scale of 1/2 (half image size) can be used to speed up processing. This will, usually, result in a slightly reduced accuracy because fewer features will be extracted. This scale is also recommended for blurry or low textured images, as it usually results in better outputs than the full scale for such images.
      • 1/4 (Quarter image size): For very large projects with high overlap, a scale of 1/4 (quarter image size) can be used to speed up processing. This will, usually, result in a slightly reduced accuracy because fewer features will be extracted. This scale is also recommended for very blurry or very low textured images, as it usually results in better outputs than the full scale for such images.
      • 1/8 (Eighth image size): For very large projects with high overlap, a scale of 1/8 (eighth image size) can be used to speed up processing. This will, usually, result in a slightly reduced accuracy because fewer features will be extracted.
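The multiscale rule from the Information note above, and its effect on the pixel count, can be sketched as follows. This is an illustration of the stated rule, not Pix4D's implementation; the function names are hypothetical. Note that a scale of 1/2 halves both the width and the height, so the total pixel count shrinks by the square of the scale (e.g. a 12 MP image at 1/2 scale yields a 3 MP image).

```python
from fractions import Fraction

def keypoint_scales(chosen: Fraction) -> list[Fraction]:
    """Scales at which keypoints are computed: the chosen scale
    and every halving of it, down to the 1/8 scale."""
    scales = []
    s = chosen
    while s >= Fraction(1, 8):
        scales.append(s)
        s /= 2
    return scales

def effective_megapixels(image_mp: float, scale: Fraction) -> float:
    """Pixel count at a given scale: both dimensions shrink by the
    scale factor, so the area shrinks by scale squared."""
    return image_mp * float(scale) ** 2

print(keypoint_scales(Fraction(1, 2)))            # scales used: 1/2, 1/4, 1/8
print(effective_megapixels(12, Fraction(1, 2)))   # 3.0
```

For a 20 MP camera at the 1/2 scale, keypoints are therefore extracted on a 5 MP version of each image first, then on the 1/4 and 1/8 pyramid levels.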
 
Important: The default keypoints image scale can vary between projects, as it depends on the image resolution.
For full processing:
  • If the image resolution is < 2 MP, the keypoints image scale is 2.
  • If the image resolution is > 40 MP, the keypoints image scale is 1/2.
  • In all other cases, the keypoints image scale is 1.

For rapid processing:
  • If the image resolution is < 2 MP, the keypoints image scale is 1.
  • If the image resolution is > 40 MP, the keypoints image scale is 1/8.
  • In all other cases, the keypoints image scale is 1/4.
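The default rules above amount to a simple lookup on the image resolution. The sketch below restates them in Python for clarity; it is not Pix4D code, and the `rapid` flag and function name are hypothetical.

```python
def default_keypoint_scale(megapixels: float, rapid: bool = False) -> str:
    """Default Keypoints Image Scale from image resolution,
    per the Important note above (illustrative sketch only)."""
    if rapid:
        if megapixels < 2:
            return "1"
        if megapixels > 40:
            return "1/8"
        return "1/4"
    # full processing
    if megapixels < 2:
        return "2"
    if megapixels > 40:
        return "1/2"
    return "1"

# e.g. a 20 MP camera falls in the "all other cases" bucket:
print(default_keypoint_scale(20))              # 1
print(default_keypoint_scale(20, rapid=True))  # 1/4
```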
 
Tip: When processing datasets of flat and homogeneous areas, or of repetitive and complex areas (trees, forests, fields), some images might not get calibrated. Processing with a lower Keypoints Image Scale can lead to a higher number of calibrated images than the default keypoints image scale.

 

Quality Report

Allows the user to select what the Quality Report will display.

  • Generate Orthomosaic Preview in Quality Report: The Quality Report displays a low-resolution DSM and Orthomosaic. Displaying these elements makes the Quality Report generation take longer. Disabling this option makes the Quality Report generation faster.
 
Important: The low resolution DSM is generated using only the Automatic Tie Points. The low resolution Orthomosaic is generated based on this DSM. Both outputs are expected to be of low quality and should not be used for further analysis.
 
Note: When processing images that belong to different groups, all images are processed together, generating only one DSM for the whole project, but generating one Orthomosaic per group using the images associated with that group. For more information about image groups, see: Menu Project > Image Properties Editor... > Images Table.

 


6 comments

  • Paulo Neto

    Hello, how are you?

I'd like to ask a question about the Image Scale option. I have a flight with a GSD of about 3.8 px/cm using an Anafi Work drone (~20 MP). After processing step 1, the GSD may change a little.

    My question is how does Pix4D Mapper perform this Keypoint extraction and then reproject them?

    If I have a 20MP Drone what would the Keypoint extraction be:

    1) 2 (Double image size): 40MP?
    2) 1 (Original image size): 20MP?
    3) 1/2 (Half image size): 10MP?
    4) 1/4 (Quarter image size): 5MP?
    5) 1/8 (Eighth image size): 2.5MP?

How does it reproject Tie Points using, for example, the 1/2 scale? Will it look at each pixel of the 10 MP image for points of interest, then check for matches with other images and reproject them? Does the image scale work the same way in the Point Cloud step?

Could you explain this to me in more detail? I've checked the video Processing options - Step 1 General Tab in the Pix4D Online Courses and didn't understand the corrections in the video.

  • Beata (Pix4D)

    Hi Paulo ;-)

    Thank you for your questions.

    Unfortunately, the example you shared with us is not correct. 

    Let me explain that to you.

Let's say you have a 12 MP image and you apply the 1/2 image scale.

The feature extraction algorithm will work on each pixel of the 3 MP image, not 6 MP, since both the vertical and the horizontal resolution are reduced by a factor of 2.

Step 1 and Step 2 are two different processes, with different objectives and substeps, so they work differently. However, the image scale has the same role in both but a different purpose. In Step 2, the image scale defines the scale of the images at which additional 3D points are computed.

What is worth knowing is that when the Multiscale option is enabled and the point cloud densification image scale is set to 1/2, the point cloud is densified using the 1/2, 1/4, and 1/8 image scales.

    I hope my reply helped you understand the process. Let me know if it is otherwise.

    Best! 

     

  • Paulo Neto

    Many thanks for the didactic answer! Now I get it!!

What about the 3D reprojections in Step 1 (Tie Points / ATP): it will search for homologous pixels in image pairs and reproject them, won't it?

    And on Step 2 (additional 3D points) it will do the same thing but we must select the density of this reprojection (4 / image scale). Could you exemplify this densification using the example 12MP camera?

  • Beata (Pix4D)

    Hi Paulo,

    Step 1. 

For the sake of simplicity, let's say that first the software extracts features, going through all the images at the applied image scale. The software computes those keypoints and writes them to a file. After this substep comes a second one, where the software generates image pairs and computes matches from the keypoints found during the first substep. Later, the software starts the camera calibration, during which those keypoints are rematched as well. Just after the camera calibration substep is finished, we colour the ATPs and load them in the rayCloud for display. If you would like to review the whole of Step 1, I encourage you to analyse the .log file of your project. If you would like to know more about this process, I can put you in contact with our training team. Let me know if you are interested.

    Step 2.

Here, honestly, I'm afraid I don't understand your question. I can't give you an example, as the number of points extracted during point cloud densification depends on the processing options you apply and on the content of the images. What I can add is that the minimum number of matches is the minimum number of images from which a 3D point is reprojected. In other words, it's the minimum number of images on which the very same point must be visible.

    I hope it helps :-)

    Best

     

    Edited by Beata (Pix4D)
  • Paulo Neto

I'm so thankful for that! Now it's much clearer to me!! ;)

  • Beata (Pix4D)

    Oh, great! I'm happy that I could help you! :) 

    Regards
