Quality Report Specifications

 

 
Important: 
  • For a detailed description of how to analyze the Quality Report: 202558689.
  • For a description of how to analyze the Quality Report: 202557339.
  • An example of a Quality Report is available at the following link: Quality Report.

The Quality Report is automatically displayed after each step of processing. To prevent it from being displayed automatically, clear the Display Automatically after Processing box at the bottom of the Quality Report.

After Step 1. Initial Processing has failed:

After Step 1. Initial Processing is completed:

After Step 2. Point Cloud and Mesh is completed:

After Step 3. DSM, Orthomosaic and Index is completed:

Processing Failed
Error: Description of the error that made processing fail.
Substep: The substep of Initial Processing at which the processing fails.
Cause: Description of the possible causes of the failure.
Solutions: Description of the possible solutions with a link to step by step instructions.

 

Summary
Project: Name of the project.
Processed: Date and time of processing.
Camera Model Name(s): The name of the camera model(s) used to capture the images.
Rig Name(s): The name of the rig(s) used to capture the images. If a rig is detected, all the cameras of the rig appear in the Camera Model Name(s) above.
Average Ground Sampling Distance (GSD): The average GSD of the initial images. For more information about the GSD: 202559809.
Area Covered: The 2D area covered by the project. This area is not affected if a smaller Processing Area has been drawn.
Time for Initial Processing (without report): The time for Initial Processing without taking into account the time needed for the generation of the Quality Report.
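As a rough illustration of how the average GSD relates to the flight parameters, the commonly used nadir-image approximation can be sketched as follows (the function name and example values are hypothetical; the report computes the GSD from the actual reconstruction):

```python
def gsd_cm_per_px(sensor_width_mm, focal_length_mm, flight_height_m, image_width_px):
    """Approximate Ground Sampling Distance (cm/pixel) for a nadir image
    over flat terrain: ground footprint width divided by image width."""
    ground_width_cm = sensor_width_mm * flight_height_m * 100.0 / focal_length_mm
    return ground_width_cm / image_width_px

# Hypothetical camera: 13.2 mm sensor width, 8.8 mm focal length,
# 100 m flight height, 5472 px image width
print(round(gsd_cm_per_px(13.2, 8.8, 100.0, 5472), 2))  # → 2.74
```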

 

Quality Check
Images: The median of keypoints per image. Keypoints are characteristic points that can be detected on the images.
  • Green check:
    • Keypoints Image Scale > 1/4: More than 10'000 keypoints have been extracted per image.
    • Keypoints Image Scale ≤ 1/4: More than 1'000 keypoints have been extracted per image.
  • Warning:
    • Keypoints Image Scale > 1/4: Between 500 and 10'000 keypoints have been extracted per image.
    • Keypoints Image Scale ≤ 1/4: Between 200 and 1'000 keypoints have been extracted per image.
  • Error:
    • Keypoints Image Scale > 1/4: Less than 500 keypoints have been extracted per image.
    • Keypoints Image Scale ≤ 1/4: Less than 200 keypoints have been extracted per image.
Failed Processing Report: Displayed if the information is not available.

Dataset: Number of enabled images that have been calibrated, i.e. the number of images that have been used for the reconstruction of the model. If the reconstruction results in more than one block, the number of blocks is displayed. This section also shows the number of images that have been disabled by the user.

If processing fails, the number of enabled images is displayed.

  • Green check: More than 95% of enabled images are calibrated in one block.
  • Warning: Between 60% and 95% of enabled images are calibrated, or more than 95% of enabled images are calibrated in multiple blocks.
  • Error: Less than 60% of enabled images are calibrated.
Failed Processing Report: Always displayed, as the information is not available.

Camera Optimization: Perspective lens: The percentage of difference between initial and optimized focal length.
Fisheye lens: The percentage of difference between the initial and optimized affine transformation parameters C and F.
  • Green check:
    • Perspective lens: The difference between the initial and optimized focal length is less than 5%.
    • Fisheye lens: The difference between the initial and optimized affine transformation parameters C and F is less than 5%.
  • Warning:
    • Perspective lens: The difference between the initial and optimized focal length is between 5% and 20%.
    • Fisheye lens: The difference between the initial and optimized affine transformation parameters C and F is between 5% and 20%.
  • Error:
    • Perspective lens: The difference between the initial and optimized focal length is more than 20%.
    • Fisheye lens: The difference between the initial and optimized affine transformation parameters C and F is more than 20%.
Failed Processing Report: Always displayed, as the information is not available.
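The relative difference driving this check can be sketched as follows (a minimal illustration; the parameter values are hypothetical, and the 5%/20% thresholds are those listed above):

```python
def optimization_diff_percent(initial, optimized):
    """Relative difference between an initial and an optimized camera
    parameter (e.g. focal length in pixels), in percent."""
    return abs(optimized - initial) / initial * 100.0

diff = optimization_diff_percent(3438.0, 3501.0)  # hypothetical focal lengths
status = "ok" if diff < 5 else ("warning" if diff <= 20 else "error")
print(round(diff, 2), status)  # → 1.83 ok
```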

Matching: The median of matches per calibrated image.
  • Green check:
    • Keypoints Image Scale > 1/4: More than 1'000 matches have been computed per calibrated image.
    • Keypoints Image Scale ≤ 1/4: More than 100 matches have been computed per calibrated image.
  • Warning:
    • Keypoints Image Scale > 1/4: Between 100 and 1'000 matches have been computed per calibrated image.
    • Keypoints Image Scale ≤ 1/4: Between 50 and 100 matches have been computed per calibrated image.
  • Error:
    • Keypoints Image Scale > 1/4: Less than 100 matches have been computed per calibrated image.
    • Keypoints Image Scale ≤ 1/4: Less than 50 matches have been computed per calibrated image.
Failed Processing Report: Displayed if the information is not available.

Georeferencing: Displays if the project is georeferenced or not.

If it is georeferenced, it displays what has been used to georeference the project:
  • If site calibration transformation has been used, site calibration is displayed.
  • If only the image geolocation has been used, no GCP is displayed.
  • If GCPs are used, the number, the type, and the mean of the RMS error in (X,Y,Z) are displayed.

If processing fails, the number of GCPs defined in the project is displayed.

  • Green check: The project is georeferenced:
    • using a site calibration, or
    • using GCPs with a GCP error of less than 2 times the average GSD.
  • Warning: GCPs are used and the GCP error is between 2 and 4 times the average GSD, or no GCPs are used.
  • Error: GCPs are used and the GCP error is more than 4 times the average GSD.
Failed Processing Report: Always displayed, whether GCPs are used or not.

 

Preview
Figure 1: Orthomosaic and the corresponding sparse Digital Surface Model (DSM) before densification.

 

Calibration Details
Number of Calibrated Images: Number of images that have been calibrated, i.e. the number of images used for the reconstruction, with respect to the total number of images in the project (enabled and disabled).
Number of Geolocated Images: Number of images that are geolocated.

 

Initial Image Positions
Figure 2: Top view of the initial image positions. The green line follows the position of the images in time, starting from the large blue dot.

 

Computed Image/GCP/Manual Tie Points Positions

Figure 3: Offset between initial (blue dots) and computed (green dots) image positions as well as the offset between the GCPs initial positions (blue crosses) and their computed positions (green crosses) in the top-view (XY plane), front-view (XZ plane), and side-view (YZ plane). Dark green ellipses indicate the absolute position uncertainty (Nx magnified) of the bundle block adjustment result. 

 

Absolute Camera Position and Orientation Uncertainties
Mean X/Y/Z: Mean uncertainty in the X/Y/Z direction of the absolute camera positions.
Mean Omega/Phi/Kappa: Mean uncertainty in the omega/phi/kappa orientation angle of the absolute camera positions.
Mean Camera Displacement X/Y/Z: Only available for projects processed with Linear Rolling Shutter. Mean uncertainty in the camera displacement in the X/Y/Z direction of the absolute camera positions.
Sigma X/Y/Z: Sigma of the uncertainties in the X/Y/Z direction of the absolute camera positions.
Sigma Omega/Phi/Kappa: Sigma of the uncertainties in the omega/phi/kappa angle of the absolute camera positions.
Sigma Camera Displacement X/Y/Z: Only available for projects processed with Linear Rolling Shutter. Sigma of the uncertainties in the camera displacement in the X/Y/Z direction of the absolute camera positions.

 

Overlap
Figure 4: Number of overlapping images computed for each pixel of the orthomosaic.
Red and yellow areas indicate low overlap for which poor results may be generated. Green areas indicate an overlap over 5 images for every pixel. Good quality results will be generated as long as the number of keypoint matches is also sufficient for these areas (see Figure 5 for keypoint matches).

 

Bundle Block Adjustment Details
Number of 2D Keypoint Observations for Bundle Block Adjustment: The number of automatic tie points on all images that are used for the AAT/BBA. It corresponds to the number of all keypoints (characteristic points) that could be matched on at least two images.
Number of 3D Points for Bundle Block Adjustment: The number of all 3D points that have been generated by matching 2D points on the images.
Mean Reprojection Error [pixels]: The average of the reprojection error in pixels.

Each computed 3D point has initially been detected on the images (2D keypoint). On each image, the detected 2D keypoint has a specific position. When the computed 3D point is projected back onto the images, it has a reprojected position. The distance between the initial position and the reprojected one gives the reprojection error. For more information: 202559369.
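The mean reprojection error described above can be sketched as follows (hypothetical pixel coordinates; the real computation averages over all 2D observations of all 3D points):

```python
import math

def mean_reprojection_error(observed, reprojected):
    """Mean pixel distance between detected 2D keypoint positions and
    the back-projected positions of the corresponding 3D points."""
    dists = [math.dist(o, r) for o, r in zip(observed, reprojected)]
    return sum(dists) / len(dists)

# Hypothetical observations of one 3D point in three images
obs = [(101.0, 250.0), (480.5, 33.0), (12.0, 700.0)]
rep = [(101.3, 250.4), (480.0, 33.0), (12.0, 699.0)]
print(round(mean_reprojection_error(obs, rep), 3))  # → 0.667
```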

 

Internal Camera Parameters for Perspective Lens
Icon + camera model name + sensor dimensions: The icon shows the source of the camera model (database icon: software database; pencil icon: edited camera model from the software database; user icon: user database; document icon: project file; camera icon: EXIF data).

The camera model name is also displayed as well as the sensor dimensions.

EXIF ID: The EXIF ID to which the camera model is associated.
 
Initial Values: The initial values of the camera model.
Optimized Values: The optimized values that are computed from the camera calibration and that are used for processing.
Uncertainties (sigma): The sigma of the uncertainties of the focal length, the Principal Point X, the Principal Point Y, the Radial Distortions R1, R2 and the Tangential Distortions T1, T2.
Focal Length: The focal length of the camera in pixels and in millimeters. If the sensor size is the real one, then the focal length should be the real one. If the sensor size is given as 36 x 24 mm, then the focal length should be the 35mm equivalent focal length.
Principal Point x: The x image coordinate of the principal point in pixels and in millimeters. The principal point is located around the center of the image. The coordinate system has its origin as displayed here:

 

Principal Point y: The y image coordinate of the principal point in pixels and in millimeters. The principal point is located around the center of the image. The coordinate system has its origin as displayed here:

R1: Radial distortion of the lens R1. 
R2: Radial distortion of the lens R2.
R3: Radial distortion of the lens R3.
T1: Tangential distortion of the lens T1.
T2: Tangential distortion of the lens T2.
Residual Lens Error: This figure displays the residual lens error. The number of Automatic Tie Points (ATPs) per pixel, averaged over all images of the camera model, is color-coded between black and white. White indicates that, on average, more than 16 ATPs are extracted at this pixel location. Black indicates that, on average, 0 ATPs are extracted at this pixel location. Click on the image to see the average direction and magnitude of the reprojection error for each pixel. Note that the vectors are scaled for better visualization.

 

Internal Camera Parameters for Fisheye lens
Icon + camera model name + sensor dimensions: The icon shows the source of the camera model (database icon: software database; pencil icon: edited camera model from the software database; user icon: user database; document icon: project file; camera icon: EXIF data).

The camera model name is also displayed as well as the sensor dimensions.

EXIF ID: The EXIF ID to which the camera model is associated.
 
Initial Values: The initial values of the camera model.
Optimized Values: The optimized values that are computed from the camera calibration and that are used for processing.
Uncertainties (Sigma): The sigma of the uncertainties of the Polynomial Coefficient 1,2,3,4 and the Affine Transformation parameters C,D,E,F.
Poly[0]: Polynomial coefficient 1
Poly[1]: Polynomial coefficient 2
Poly[2]: Polynomial coefficient 3
Poly[3]: Polynomial coefficient 4
c: Affine transformation C
d: Affine transformation D
e: Affine transformation E
f: Affine transformation F
Principal Point x: The x image coordinate of the principal point in pixels. The principal point is located around the center of the image. The coordinate system has its origin as displayed here:

 

Principal Point y: The y image coordinate of the principal point in pixels. The principal point is located around the center of the image. The coordinate system has its origin as displayed here:

Residual Lens Error: This figure displays the residual lens error. The number of Automatic Tie Points (ATPs) per pixel, averaged over all images of the camera model, is color-coded between black and white. White indicates that, on average, more than 16 ATPs are extracted at this pixel location. Black indicates that, on average, 0 ATPs are extracted at this pixel location. Click on the image to see the average direction and magnitude of the reprojection error for each pixel. Note that the vectors are scaled for better visualization.

 

Internal Camera Parameters Correlation
The correlation between camera internal parameters determined by the bundle adjustment. The correlation matrix displays how much the internal parameters compensate for each other.
White indicates a full correlation between the parameters, i.e. any change in one can be fully compensated by the other. Black indicates that the parameter is completely independent, and is not affected by other parameters.
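The matrix entries follow the standard definition of correlation derived from a covariance matrix; a minimal sketch (the 2x2 covariance values are hypothetical):

```python
import math

def correlation(cov):
    """Correlation matrix from a covariance matrix:
    corr[i][j] = cov[i][j] / sqrt(cov[i][i] * cov[j][j])."""
    n = len(cov)
    return [[cov[i][j] / math.sqrt(cov[i][i] * cov[j][j]) for j in range(n)]
            for i in range(n)]

# Hypothetical covariance for two internal camera parameters
cov = [[4.0, 1.0],
       [1.0, 1.0]]
print(correlation(cov))  # → [[1.0, 0.5], [0.5, 1.0]]
```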

 

2D Keypoints Table
Number of 2D Keypoints per Image: Number of 2D keypoints (characteristic points) per image.
Number of Matched 2D Keypoints per Image: Number of matched 2D keypoints per image. A matched point is a characteristic point that has initially been detected on at least two images (a 2D keypoint on these images) and has been identified to be the same characteristic point.
Median:  The median number of the above mentioned keypoints per image.
Min:  The minimum number of the above mentioned keypoints per image.
Max:  The maximum number of the above mentioned keypoints per image.
Mean:  The mean / average number of the above mentioned keypoints per image.

 

2D Keypoints Table for Camera
Camera model name: If more than one camera model is used, the number of 2D keypoints found on the images associated to a given camera model is displayed.
Number of 2D Keypoints per Image: Number of 2D keypoints (characteristic points) per image.
Number of Matched 2D Keypoints per Image: Number of matched 2D keypoints per image. A matched point is a characteristic point that has initially been detected on at least two images (a 2D keypoint on these images) and has been identified to be the same characteristic point.
Median:  The median number of the above mentioned keypoints per image.
Min:  The minimum number of the above mentioned keypoints per image.
Max:  The maximum number of the above mentioned keypoints per image.
Mean:  The mean / average number of the above mentioned keypoints per image.

 

Median / 75% / Maximal Number of Matches Between Camera Models
 
Median / 75% / Maximum: The median, 75% (upper quartile), and maximum number of matches between two camera models. If a cell is empty, no matches have been computed between the corresponding camera models.

 

3D Points from 2D Keypoint Matches
Number of 3D Points Observed in N Images:

Each 3D point is generated from keypoints that have been observed on at least two images. Each row of this table displays the number of 3D points that have been observed in N images. The more images a 3D point is visible in, the higher its accuracy.

 

2D Keypoint Matches

Figure 5: Top view of the computed image positions with a link between matching images. The darkness of the links indicates the number of matched 2D keypoints between the images. Bright links indicate weak matches and require Manual Tie Points or more images. Dark green ellipses indicate the relative camera position uncertainty (Nx magnified) of the bundle block adjustment result.

The 2D Keypoint Matches graph displays each block with a different color (green and yellow in the following example): 

 

 

Relative Camera Position and Orientation Uncertainties
Mean X/Y/Z: Mean uncertainty in the X/Y/Z direction of the relative camera positions.
Mean Omega/Phi/Kappa: Mean uncertainty in the omega/phi/kappa orientation angle of the relative camera positions.
Mean Camera Displacement X/Y/Z: Only available for projects processed with Linear Rolling Shutter. Mean uncertainty in the camera displacement in the X/Y/Z direction of the relative camera positions.
Sigma X/Y/Z: Sigma of the uncertainties in the X/Y/Z direction of the relative camera positions.
Sigma Omega/Phi/Kappa: Sigma of the uncertainties in the omega/phi/kappa angle of the relative camera positions.
Sigma Camera Displacement X/Y/Z: Only available for projects processed with Linear Rolling Shutter. Sigma of the uncertainties in the camera displacement in the X/Y/Z direction of the relative camera positions.

 

Manual Tie Points
 
MTP Name: The name of the Manual Tie Point.
Projection Error [pixel]: Average distance, in pixels, between the positions where the Manual Tie Point has been marked in the images and the positions where it is reprojected.
Verified/Marked:
Verified: The number of images on which the Manual Tie Point has been marked and that are taken into account for the reconstruction.
Marked: The number of images on which the Manual Tie Point has been marked.

 


Ground Control Points
GCP Name: The name of the GCP together with the GCP type. The type can be:
  • 3D GCP
  • 2D GCP
Check Point Name: The name of the Check Point.
Accuracy XY / Z [m]:

Accuracy XY / Z [ft]:

The accuracy of the GCP/Check Point, as given by the user, in the XY direction / in the Z direction. It indicates how accurate the GCP/Check Point is in each direction.

Error X [m]:

Error X [ft]:

The difference between the computed GCP/Check Point 3D point and the original position in X direction (original position - computed position).

Error Y [m]:

Error Y [ft]:

The difference between the computed GCP/Check Point 3D point and the original position in Y direction (original position - computed position).

Error Z [m]:

Error Z [ft]:

The difference between the computed GCP/Check Point 3D point and the original position in Z direction (original position - computed position).
Projection Error [pixel]: Average distance, in pixels, between the positions where the GCP/Check Point has been marked in the images and the positions where it is reprojected.
Verified/Marked:
Verified: The number of images on which the GCP/Check Point has been marked and that are taken into account for the reconstruction.
Marked: The number of images on which the GCP/Check Point has been marked.

Mean [m]:

Mean [ft]:

The mean / average error in each direction (X,Y,Z). For more information: 203604125.

Sigma [m]:

Sigma [ft]:

The standard deviation of the error in each direction (X,Y,Z). For more information: 203604125.

RMS Error [m]:

RMS Error [ft]

The Root Mean Square error in each direction (X,Y,Z). For more information: 203604125.
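The Mean, Sigma, and RMS rows can be sketched for one direction as follows (the four error values are hypothetical; Sigma here is the population standard deviation, so RMS² = Mean² + Sigma²):

```python
import math

def error_stats(errors):
    """Mean, sigma (population standard deviation), and RMS of per-GCP
    errors in one direction (original position - computed position)."""
    n = len(errors)
    mean = sum(errors) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    rms = math.sqrt(sum(e * e for e in errors) / n)
    return mean, sigma, rms

errors_x = [0.02, -0.01, 0.03, 0.00]  # hypothetical X errors [m] of four GCPs
mean, sigma, rms = error_stats(errors_x)
print(round(mean, 3), round(sigma, 4), round(rms, 4))  # → 0.01 0.0158 0.0187
```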

 

Scale Constraints
Scale Name: Name of the Scale Constraint.
Initial Length [m]:

Initial Length [ft]:
Length measured in the field representing the real length of the scale constraint.
Initial Length Accuracy [m]:

Initial Length Accuracy [ft]:
Accuracy of the measurements in the field.
Computed Length [m]:

Computed Length [ft]:
Length measured in the 3D model. 
Computed Length Error [m]:

Computed Length Error [ft]:
The Computed Length Error is given by the difference between the Computed Length and the Initial Length.
GCP/MTP Label 1: Label of the first Manual Tie Point associated to the Scale Constraint.
GCP/MTP Label 2: Label of the second Manual Tie Point associated to the Scale Constraint.
Mean [m]:

Mean [ft]:
The mean / average Computed Length Error.
Sigma [m]:

Sigma [ft]:
The standard deviation of the Computed Length Error.

 

Orientation Constraints
Orientation Name: Name of the Orientation Constraint.
Axis: Name of the axis that the Orientation Constraint represents.
Angular Accuracy [degree]: Angular accuracy of the measurements in the field.
Computed Angular Error [degree]: Angular difference between the computed axis and the axis that was drawn.
GCP/MTP Label 1: Label of the first Manual Tie Point associated to the Orientation Constraint.
GCP/MTP Label 2: Label of the second Manual Tie Point associated to the Orientation Constraint.
Mean [degree]: The mean / average Computed Angular Error.
Sigma [degree]: The standard deviation of the Computed Angular Error.

 

Absolute Geolocation Variance

Number of geolocated and calibrated images that have been labeled as inaccurate. The input coordinates of these images are considered inaccurate. Pix4Dmapper managed to find their correct optimized positions, but they are not taken into account for the following Geolocation Variance tables.

Min Error [m] / Max Error [m]:

Min Error [ft] / Max Error [ft]:

The minimum and maximum error represent the geolocation error intervals between -1.5 and 1.5 times the maximum accuracy (of all X,Y,Z directions) of all the images.
Geolocation Error X [%]:

The percentage of images with geolocation errors in the X direction within the predefined error intervals. The geolocation error is the difference between the initial and the computed image positions.

Geolocation Error Y [%]: The percentage of images with geolocation errors in the Y direction within the predefined error intervals. The geolocation error is the difference between the initial and the computed image positions.
Geolocation Error Z [%]: The percentage of images with geolocation errors in the Z direction within the predefined error intervals. The geolocation error is the difference between the initial and the computed image positions.
Mean: The mean / average error in each direction (X,Y,Z).
Sigma: The standard deviation of the error in each direction (X,Y,Z).
RMS error: The Root Mean Square error in each direction (X,Y,Z).

 

Geolocation Bias

This table is displayed only if GCPs are used in the project. It defines the bias between image initial and computed geolocation given in the output coordinate system.

Translation [m]:

Translation [ft]:
Translation between initial and computed image position in the output coordinate system.
Rotation [degree]: Rotation between initial and computed image position in the output coordinate system.
Displayed only if the output coordinate system is an arbitrary coordinate system.
Scale: Scale between initial and computed image position in the output coordinate system.
Displayed only if the output coordinate system is an arbitrary coordinate system.

 

Image Orientation Variance
 
Geolocation orientation variance: The Root Mean Square (RMS) error of the image orientation angles, i.e. the difference between the initial and the computed image orientation angles.
Omega: The RMS error in Omega angle in degrees.
Phi: The RMS error in Phi angle in degrees.
Kappa: The RMS error in Kappa angle in degrees.

 

Geolocation Coordinate System Transformation
Transformation of the Geolocation:

This table is displayed only if a Site Calibration Transformation is defined and enabled and if the output coordinate system is an arbitrary coordinate system. It defines the transformation from the projection of the site calibration to the output coordinate system. It can be used in projects where the images are in a known coordinate system and no GCPs are used in order to define the transformation to an arbitrary output coordinate system.

Translation [m]:

Translation [ft]

Translation from the projection of the site calibration to the arbitrary output coordinate system. The X, Y, Z axes are defined in the output coordinate system.
Rotation [degree]: Rotation from the projection of the site calibration to the arbitrary output coordinate system. The X, Y, Z axes are defined in the output coordinate system.
Scale: Scale ratio between the projection of the site calibration and the arbitrary output coordinate system. The X, Y, Z axes are defined in the output coordinate system.
Site Calibration Projection:

Displayed in the table's caption. The transformation goes from the projection system of the site calibration to the arbitrary output coordinate system.

 

Relative Geolocation Variance
Relative Geolocation Error:

The relative geolocation error for each direction is computed as follows:

  • Rx = (Xi - Xc)/Ax
  • Ry = (Yi - Yc)/Ay
  • Rz = (Zi - Zc)/Az


Where

  • Rx, Ry, Rz = relative geolocation error in X, Y, Z
  • Xi, Yi, Zi = initial image position in X, Y, Z (GPS position)
  • Xc, Yc, Zc = computed image position in X, Y, Z
  • Ax, Ay, Az = image geolocation accuracy (set by the user or taken from RTK accuracy) in X, Y, Z

The goal is to verify if the relative geolocation error follows a Gaussian distribution.

If it does:

  • 68.2% of the geolocated and calibrated images should have a relative geolocation error in X, Y, Z between -1 and 1.
  • 95.4% of the geolocated and calibrated images should have a relative geolocation error in X, Y, Z between -2 and 2.
  • 99.6% of the geolocated and calibrated images should have a relative geolocation error in X, Y, Z between -3 and 3.
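The formulas above can be sketched as follows (one hypothetical image; the report aggregates these ratios over all geolocated and calibrated images):

```python
def relative_errors(initial, computed, accuracy):
    """Per-axis relative geolocation error R = (initial - computed) / accuracy."""
    return [(i - c) / a for i, c, a in zip(initial, computed, accuracy)]

def fraction_within(errors, k):
    """Fraction of relative errors with |R| <= k; for Gaussian errors this
    should approach 68.2% for k=1, 95.4% for k=2, 99.6% for k=3."""
    return sum(abs(e) <= k for e in errors) / len(errors)

# Hypothetical image: initial GPS position, computed position, accuracy [m]
rx, ry, rz = relative_errors([10.0, 20.0, 50.0], [10.5, 19.0, 50.2], [1.0, 2.0, 1.0])
print(rx, ry)                            # → -0.5 0.5
print(fraction_within([rx, ry, rz], 1))  # → 1.0
```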

 

Images X [%]: The percentage of geolocated and calibrated images with a relative geolocation error in X of one time, two times, and three times the image geolocation accuracy.
Images Y [%]: The percentage of geolocated and calibrated images with a relative geolocation error in Y of one time, two times, and three times the image geolocation accuracy.
Images Z [%]: The percentage of geolocated and calibrated images with a relative geolocation error in Z of one time, two times, and three times the image geolocation accuracy.
Mean of Geolocation Accuracy [m]:

Mean of Geolocation Accuracy [ft]:
The mean / average accuracy in each direction (X,Y,Z).
Sigma of Geolocation Accuracy [m]:

Sigma of Geolocation Accuracy [ft]:
The standard deviation of the accuracy in each direction (X,Y,Z).

 

Rolling Shutter Statistics
Figure 6: Movement estimated by the rolling shutter camera model. The green line follows the computed image positions. The blue dots represent the camera position at the start of the exposure. The blue lines represent the camera motion during the rolling shutter readout, re-scaled by a project dependent scaling factor for better visibility.

Median camera speed: The median speed of the drone while taking the images.
Median rolling shutter displacement (during sensor readout): The median rolling shutter displacement of the camera while taking the image (readout).
Median rolling shutter time: The median sensor readout time per image.
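The relationship between these three values can be sketched as a simple product (hypothetical values; the report computes medians over all images):

```python
def rolling_shutter_displacement_m(speed_m_s, readout_time_s):
    """Camera displacement during the sensor readout: speed x readout time."""
    return speed_m_s * readout_time_s

# Hypothetical medians: 8 m/s camera speed, 30 ms rolling shutter time
print(rolling_shutter_displacement_m(8.0, 0.030))  # → 0.24
```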

 

Initial Processing Details

 

System Information
Hardware: CPU, RAM and GPU for processing.
Operating System: Operating System used for processing.

 

Coordinate Systems
Image Coordinate System: Coordinate system of the image geolocation.
Ground Control Point (GCP) Coordinate System: Coordinate system of the GCPs, if GCPs are used.
Output Coordinate System: Output coordinate system of the project.

 

Processing Options
Detected Template: Processing Option Template, if a template has been used.
Keypoints Image Scale: The image scale at which keypoints are computed. The scale can be chosen in 3 different ways:
  • Full: Automatically adjusts the Keypoints Image Scale for optimal results.
  • Rapid: Automatically adjusts the Keypoints Image Scale for fast results.
  • Custom: User selected Keypoints Image Scale.

The following image scales can be selected:

  • Image Scale: 1: Original image size.
  • Image Scale: 2: Double image size.
  • Image Scale: 0.5: Half image size.
  • Image Scale: 0.25: Quarter image size.
  • Image Scale: 0.125: Eighth image size.
Advanced: Matching Image Pairs: Defines how to select which image pairs to match. There are 3 different ways to select them:
  • Aerial Grid or Corridor: Optimizes the pairs matching for Aerial Grid or Corridor flight paths. 
  • Free Flight or Terrestrial: Optimizes the pairs matching for Free Flight paths or Terrestrial images.
  • Custom: The pairs matching parameters are selected by the user. Useful in specific projects and for advanced users only. Suggested if one of the options above does not provide the desired results.
    • Use Capture Time: Matches images considering the time on which they were taken.
      • Number of Neighboring Images: How many images (before and after in time) are used for the pairs matching.
    • Use Triangulation of Image Geolocation: Only available if the images have geolocation. Only useful for aerial flights. The geolocation position of the images is triangulated. Each image is then matched with images with which it is connected by a triangle.
    • Use Distance: Only available if the images have geolocation. Useful for oblique or terrestrial projects. Each image is matched with images within a relative distance.
      • Relative Distance Between Consecutive Images: All the images within the mentioned distance will be used in the pairs matching. Using as one unit distance the average distance between images.
    • Use Image Similarity: Uses the image content for pairs matching. Matches the n images with most similar content.
      • Maximum Number of Pairs for Each Image Based on Similarity: Maximum number of image pairs with similar image content.
    • Use MTPs: Images connected via a shared Manual Tie Point will be matched.
      • Maximum Number of Image Pairs per MTP: Maximum number of image pairs connected by a given MTP.
    • Use Time for Multiple Cameras: When multiple flights without geolocation use the same flight plan over the same area with a different camera model per flight, images from one flight are matched with images from the other flights using the time information.
Advanced: Matching Strategy: Displays whether Geometrically Verified Matching is used to match the images.
Advanced: Keypoint Extraction:  Target number of keypoints to extract. The target number can be:
  • Automatic: The target number of keypoints is defined by the software.
  • Custom: Number of Keypoints: User defined number of keypoints to extract.
Advanced: Calibration:  Calibration parameters used:
  • Calibration Method: Calibration method used.
    • Standard: for the majority of the projects.
    • Alternative: Optimized for aerial nadir images with accurate geolocation and low texture content, for example, fields.
    • Accurate Geolocation and Orientation: Optimized for projects with very accurate image geolocation and orientation.
  • Internal Parameters Optimization:
    • All: Optimizes all the internal camera parameters.
    • None: Does not optimize any of the internal camera parameters.
    • Leading: Optimizes the most important internal camera parameters.
    • All Prior: Forces the optimal internal parameters to be close to the initial values.
  • External Parameters Optimization:
    • All: Optimizes all the external camera parameters.
    • None: Does not optimize any of the external camera parameters.
    • Rotation: Optimizes only the orientation of the camera.
Advanced: Automatic Sky Masking: Only available for Bebop 2 projects.
Rig Processing

Only available for rig projects.

 

 

Point Cloud Densification Details

Processing Options
Image Scale:

Image scale used for the point cloud densification:

  • 1 (Original image size, Slow)
  • 1/2 (Half image size, Default)
  • 1/4 (Quarter image size, Fast)
  • 1/8 (Eighth image size, Tolerant)

It also displays whether Multiscale is used.

Point Density: Point density of the densified point cloud. It can be:
  • High
  • Optimal
  • Low
Minimum Number of Matches: The minimum number of matches per 3D point represents the minimum number of valid re-projections of this 3D point in the images. It can be between 2 and 6.
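The effect of this threshold can be pictured as a simple filter: a densified 3D point is kept only if it reprojects validly into at least the chosen number of images. The sketch below is purely illustrative; the data structure is an assumption, not Pix4D's internal representation.

```python
# Illustrative sketch: keep only densified 3D points that have at least
# `min_matches` valid re-projections (the report allows values 2-6).
def filter_points(points, min_matches=3):
    """points: list of (xyz, n_valid_reprojections) tuples (hypothetical format)."""
    return [xyz for xyz, n in points if n >= min_matches]

points = [((0.0, 0.0, 1.0), 2), ((1.0, 2.0, 0.5), 4), ((3.0, 1.0, 2.0), 6)]
print(len(filter_points(points, min_matches=3)))  # only points seen in >= 3 images remain
```

Raising the threshold discards noisier points reconstructed from few views, at the cost of a sparser cloud.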
3D Textured Mesh Generation: Displays if the 3D Textured Mesh has been generated or not.
3D Textured Mesh Settings:

Displays the Processing Settings for the 3D Textured Mesh generation.

Resolution: The resolution selected for the 3D Textured Mesh generation. It can be:

  • High Resolution
  • Medium Resolution
  • Low Resolution
  • Custom: If the custom option is selected, it displays: 
    • Resolution: Custom
    • Maximum Octree Depth: It can be between 5-20
    • Texture Size. It can be:
      • 256x256
      • 512x512
      • 1024x1024
      • 2048x2048
      • 4096x4096
      • 8192x8192
      • 16384x16384
      • 32768x32768
      • 65536x65536
      • 131072x131072
    • Decimation Criteria: It can be:
      • Quantitative:
        • Maximum Number of Triangles: The number depends on the geometry and the size of the project.
      • Qualitative: It can be:
        • Sensitive
        • Aggressive
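The Maximum Octree Depth setting bounds how finely the mesh can subdivide space. As a general octree property (not a documented Pix4D formula), each extra depth level halves the cell size, so detail grows exponentially with depth:

```python
# General octree property (not Pix4D-specific): each additional depth
# level halves the cell size, so cell size = extent / 2**depth.
def octree_cell_size(extent_m, depth):
    """Smallest cell size for a cube of side `extent_m` at a given octree depth."""
    return extent_m / (2 ** depth)

print(octree_cell_size(1024.0, 5))   # depth 5  -> 32.0 m cells
print(octree_cell_size(1024.0, 10))  # depth 10 -> 1.0 m cells
```

This is why the allowed range (5 to 20) spans coarse preview meshes to very detailed ones, and why the texture sizes in the list above are all powers of two.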

Color Balancing: It appears when the Color Balancing algorithm is selected for the generation of the texture of the 3D Textured Mesh.

LOD

Generated: It can be yes or no.

Advanced: 3D Textured Mesh Settings:

Sample Density Divider: It can be between 1 and 5.

Advanced: Matching Window Size: Size of the grid used to match the densified points in the original images.
Advanced: Image Groups: Image groups for which a densified point cloud has been generated. One densified point cloud is generated per group of images.
Advanced: Use Processing Area: Displays if the Processing Area is taken into account or not.
Advanced: Use Annotations: Displays whether annotations are taken into account, as selected in the processing options for step 2. Point Cloud and Mesh.
Advanced: Limit Camera Depth Automatically: Displays if the camera depth is automatically limited or not.
Time for Point Cloud Densification: Time spent to generate the densified point cloud.
Time for Point Cloud Classification: Time spent to generate the classified point cloud.
Time for 3D Textured Mesh Generation: Time spent to generate the 3D Textured Mesh. Displays NA if no 3D Textured Mesh has been generated.

 

Results
Number of Processed Clusters: Displays the number of clusters generated, if more than 1 cluster has been generated.
Number of Generated Tiles: Displays the number of tiles generated for the densified point cloud.
Number of 3D Densified Points: Total number of 3D densified points obtained for the project.
Average Density (per m3): Average number of 3D densified points obtained for the project per cubic meter.
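The average density figure follows from simple division: total densified points over the occupied volume. The exact volume Pix4D uses is not specified in this article; a known volume is assumed purely for illustration.

```python
# Sketch of how an average density figure is derived: total number of
# densified 3D points divided by the occupied volume in cubic meters.
# The volume definition is an assumption made for this example.
def average_density(num_points, volume_m3):
    """Average number of densified points per cubic meter."""
    return num_points / volume_m3

print(average_density(12_000_000, 250_000.0))  # -> 48.0 points per m3
```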

 

DSM, Orthomosaic and Index Details

Processing Options
DSM and Orthomosaic Resolution: Resolution used to generate the DSM and Orthomosaic. If the mean GSD computed at step 1. Initial Processing is used, its value is displayed.
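For context, the GSD that step 1 computes can be approximated for nadir images with the standard photogrammetric relation: flight height times physical pixel size divided by focal length. This formula and the sample values are a general-purpose sketch, not taken from the report itself.

```python
# Standard nadir-image approximation (general photogrammetry, not a
# documented Pix4D formula): GSD = height * pixel size / focal length.
def gsd_cm(height_m, pixel_size_um, focal_mm):
    """Ground sampling distance in cm/pixel; inputs in m, um, and mm."""
    pixel_size_mm = pixel_size_um / 1000.0
    gsd_m = height_m * pixel_size_mm / focal_mm
    return gsd_m * 100.0  # meters -> centimeters

# Hypothetical camera: 4.8 um pixels, 8.8 mm focal length, flown at 100 m.
print(round(gsd_cm(100.0, 4.8, 8.8), 2))  # ~5.45 cm/pixel
```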
DSM Filters:

Displays whether Noise Filtering and Surface Smoothing are used. If Surface Smoothing is used, its type is displayed as well. It can be:

  • Smooth
  • Medium
  • Sharp
Raster DSM:

Displayed if the DSM is generated. Displays which Method has been used to generate the DSM. It can be:

  • Inverse Distance Weighting
  • Triangulation

Displays if the DSM tiles have been merged into one file.
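Inverse Distance Weighting estimates each DSM cell's elevation from nearby points, weighting each point by the inverse of its distance. The sketch below uses a power of 2 and a flat point list as assumptions; Pix4D's exact neighborhood handling is not documented in this article.

```python
# Minimal Inverse Distance Weighting sketch (power = 2 assumed).
def idw(x, y, samples, power=2.0):
    """samples: list of (sx, sy, elevation) tuples (illustrative format)."""
    num = den = 0.0
    for sx, sy, z in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return z  # query point coincides with a sample
        w = 1.0 / d2 ** (power / 2.0)  # weight = 1 / distance**power
        num += w * z
        den += w
    return num / den

samples = [(0.0, 0.0, 10.0), (2.0, 0.0, 20.0)]
print(idw(1.0, 0.0, samples))  # midpoint, equal weights -> 15.0
```

Triangulation, by contrast, interpolates linearly over a triangulated network of the points, which preserves sharp breaks better but can be noisier.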

Orthomosaic: 

Displays if the Orthomosaic is generated. Displays if the Orthomosaic tiles have been merged into one file. Displays if the GeoTIFF without Transparency and the Google Maps Tiles and KML are generated.

Grid DSM: Displays if the Grid DSM is generated. Displays which Grid Spacing has been used. 
Raster DTM: Displayed if the DTM is generated. Displays if the Tiles are merged.
DTM Resolution: Displays the resolution used to generate the DTM.
Contour Lines Generation:

Displays if the contour lines are generated. Displays the values of the following parameters that have been used:

  • Contour Base
  • Elevation Interval
  • Resolution [cm]
  • Minimum Line Size [vertices]
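How the Contour Base and Elevation Interval interact can be sketched as follows: contour elevations are the values base + k * interval that fall inside the terrain's elevation range. The terrain range values below are made up for illustration.

```python
import math

# Sketch: contour elevations are multiples of the Elevation Interval
# offset by the Contour Base, clipped to the terrain's elevation range.
def contour_levels(base, interval, z_min, z_max):
    """Return the contour elevations between z_min and z_max (illustrative)."""
    k = math.ceil((z_min - base) / interval)  # first level at or above z_min
    levels = []
    while base + k * interval <= z_max:
        levels.append(base + k * interval)
        k += 1
    return levels

# Hypothetical terrain between 12.3 m and 31.8 m, 5 m interval, base 0:
print(contour_levels(base=0.0, interval=5.0, z_min=12.3, z_max=31.8))
# [15.0, 20.0, 25.0, 30.0]
```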
Index Calculator: Radiometric Calibration: Displayed if the Radiometric Calibration has been used.
Index Calculator: Reflectance Map: Displayed if the Reflectance Map has been generated. Displays the Resolution at which it has been generated as well as if the Reflectance Map Tiles have been merged into one file.
Index Calculator: Indices: Displayed if Indices have been generated. Displays the list of generated Indices.
Index Calculator: Index Values: Displayed if the Indices have been exported as a Point Shapefile or as a Polygon Shapefile. Displays the grid size used for the generated outputs.
Time for DSM Generation: Time spent to generate the DSM.
Time for Orthomosaic Generation: Time spent to generate the Orthomosaic.
Time for DTM Generation: Time spent to generate the DTM.
Time for Contour Lines Generation: Time spent to generate the Contour Lines.
Time for Reflectance Map Generation: Time spent to generate the Reflectance Map.
Time for Index Map Generation: Time spent to generate the Index Map.

 

Camera Radiometric Correction
Camera Name

Displays the name of the camera.

Band

Displays the bands of the camera to which a Radiometric Correction was applied.

Radiometric Correction Type

Displays the type of Radiometric Correction applied to the images of a band: Camera Only, Camera and Sun Irradiance, or Camera, Sun Irradiance and Sun Angle.


  
