Modeling small objects can be challenging because a good image acquisition plan is difficult to achieve. To facilitate the process, Pix4D has developed a system that makes image acquisition easier and ensures that the dataset has the required overlap. It also improves the accuracy of the camera calibration for objects with little texture and completes the surface of the model by combining multiple datasets of the same object taken from different points of view.
In this approach, the camera does not move: it is fixed on a tripod while the object rotates on a turntable with visual markers.
1. Download the Pix4D visual markers from here.
2. Print the Pix4D visual markers on non-glossy paper. The diameter of the circle enclosing the markers should be around 30 cm.
3. Wrap a turntable with the printed paper.
4. Place a roll of soft white cardboard behind the turntable to hide the background.
5. Mount the camera on a tripod to ensure that it will be steady during the acquisition.
1. Orient the turntable so that the small arrow in the center of the turntable points away from the camera, as in the image below.
2. Place the object on the turntable.
3. Capture the first image.
4. Rotate the turntable by approximately 15 degrees and take a picture. The next marker of the exterior ring should be in front of the camera.
5. Repeat step 4 until one full rotation of the object is completed.
6. Turn the object and place it on the turntable again.
7. Repeat steps 1 to 6 to capture a second dataset from another point of view. Capture as many datasets as needed to fully reconstruct the object.
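The acquisition plan above is easy to budget for in advance. As a quick sketch (not part of the Pix4D workflow itself), the number of images per full rotation follows directly from the rotation step, and the total grows with the number of datasets:

```python
# Estimate how many images the turntable acquisition yields.
# A full rotation at a 15-degree step gives 360 / 15 = 24 images,
# one per marker of the exterior ring.
def capture_angles(step_deg=15):
    """Return the turntable angles (in degrees) at which images are taken."""
    return list(range(0, 360, step_deg))

angles = capture_angles(15)
print(len(angles))      # 24 images per full rotation
print(len(angles) * 2)  # 48 images when two datasets (two object orientations) are captured
```

The 15-degree step matches the markers of the exterior ring, so each shot can be checked visually: the next marker should face the camera.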
Processing the captured datasets involves:
- Using Pix4Dtagger to generate the .p4d project file
- Using Pix4Dmapper to process the subprojects
- (optional) Using Pix4Dmapper to merge the subprojects
1. Open Pix4Dtagger. Its executable is located in the same folder as the Pix4Dmapper executable; the exact location depends on where Pix4Dmapper is installed, for example: C:\Program Files\Pix4Dmapper.
2. In Image directory, click Browse... and select the directory where the images are stored. Use Pix4Dtagger for one subproject at a time.
3. (optional but recommended) In Tags coordinates file, click Browse... and select the *.csv file in which the tags that were used in the project and their corners are stored. The file can be downloaded from here.
4. (optional but recommended) In Camera models file, click Browse... and select the camera database in: C:\Users\USER_NAME\
5. In Output file, click Browse... and select the name of the output file and where it should be stored.
6. In Format, select Pix4D project file (*.p4d) and make sure that Export image geolocation and orientation is selected.
7. Click Start to start exporting the project file.
8. Repeat steps 1-7 for each dataset captured.
9. Click Close to close Pix4Dtagger.
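Since Pix4Dtagger is run for one subproject at a time, it helps to verify that each dataset directory is complete before tagging. The sketch below assumes one directory per dataset and a full rotation at a 15-degree step (24 images); the folder layout and expected count are assumptions, not a Pix4D requirement:

```python
# Sanity-check a dataset directory before running Pix4Dtagger:
# a full turntable rotation at 15-degree steps should yield 24 images.
from pathlib import Path

EXPECTED_IMAGES = 24  # 360 degrees / 15-degree step (assumed acquisition plan)

def check_dataset(directory, extensions=(".jpg", ".jpeg", ".tif", ".tiff")):
    """Count the images in a dataset directory and flag incomplete rotations."""
    images = [p for p in Path(directory).iterdir()
              if p.suffix.lower() in extensions]
    return len(images), len(images) == EXPECTED_IMAGES
```

Running the check on each subproject directory in turn makes it obvious when a rotation step was skipped during acquisition.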
1. Open each of the generated subprojects in Pix4Dmapper by double-clicking its .p4d file.
2. In the Processing Options of step 1. Initial Processing, in the Matching Image Pair section of the Matching tab, select Free Flight or Terrestrial (see article 205433155).
3. In the Processing Options of step 1. Initial Processing, in the Calibration tab, select the Accurate Geolocation and Orientation calibration method (see article 205327965).
4. In the Image Properties Editor window, edit the Accuracy Horz and Accuracy Vert values to 0.10 m (for more information, see article 202557949).
5. Process step 1. Initial Processing.
6. Repeat steps 1-5 for each subproject generated.
1. Clear the image geolocation of the images by clicking Clear in the Image Geolocation section of the Image Properties Editor (see article 202557639).
2. Remove all GCPs of the visual markers from the subproject (see article 202557919).
3. Mark at least 3 Manual Tie Points in the common area between the subprojects. The common MTPs should share the same name in all the subprojects in which they appear.
4. Repeat steps 1-3 for all subprojects.
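The MTP naming rule in step 3 can be expressed simply: the subprojects are only mergeable if at least three Manual Tie Point names appear in every one of them. The sketch below uses illustrative name lists, not data exported from Pix4D:

```python
# Check the MTP naming rule for merging: at least 3 Manual Tie Points
# must share the same name across all subprojects.
def common_mtps(*subproject_mtps):
    """Return the set of MTP names present in every subproject."""
    names = set(subproject_mtps[0])
    for mtps in subproject_mtps[1:]:
        names &= set(mtps)
    return names

shared = common_mtps(["mtp1", "mtp2", "mtp3", "mtp4"],
                     ["mtp1", "mtp2", "mtp3", "mtp5"])
print(sorted(shared))    # ['mtp1', 'mtp2', 'mtp3']
print(len(shared) >= 3)  # True: these subprojects satisfy the merge requirement
```

If the intersection has fewer than three names, go back and mark additional MTPs (or rename the mismatched ones consistently) before merging.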
5. Merge the subprojects (see article 202558529).
6. Process step 2. Point Cloud and Mesh for the merged project.
7. (optional) Edit the point cloud to remove the reconstructed points of the visual markers (see article 202560499).
8. (optional) Regenerate the 3D Textured Mesh without the visual markers (see article 202560669, section: After processing step 2. Point Cloud and Mesh).