ArcGIS Drone2Map

Understanding the Drone2Map Processing Report

ArcGIS Drone2Map allows you to reconstruct your drone imagery into usable imagery products. However, the quality of the output products matters, and that’s where the processing report comes into play. The processing report in Drone2Map gives you insights into the quality and accuracy of your project as well as a summary of the options you defined for processing. The processing report is provided after the initial adjustment has been run and after products are generated. It is important to understand how to interpret the report to achieve the best quality products.

This blog post describes the sections of the processing report and how to quickly understand the information being presented.

Project Summary

The Project Summary section of the processing report.

The Project Summary section shows high-level information about your project as well as total processing time. One of the most useful fields to reference in this section is the Images field. This field shows how many images are in the project and how many images were calibrated. If you see that a high number of images were not calibrated, there may be issues with how the project is configured or how the flight was flown. You can narrow down the potential cause by reviewing other sections of the report.
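To make this concrete, here is a minimal sketch of the kind of calibration-rate check this field supports. The image counts and the 95 percent threshold are hypothetical placeholders, not values or rules that come from Drone2Map itself.

```python
# Minimal sketch: gauge the calibration rate reported in the Project Summary.
# The counts below are hypothetical; read the real values from your report.
total_images = 250
calibrated_images = 231

calibration_rate = calibrated_images / total_images
print(f"Calibrated {calibrated_images}/{total_images} images ({calibration_rate:.0%})")

# An assumed rule of thumb: a low rate suggests reviewing the Adjust Images
# section for overlap, image scale, or matching neighborhood issues.
if calibration_rate < 0.95:
    print("Warning: many uncalibrated images; check the Adjust Images section.")
```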

Adjust Images

The Adjust Images section of the processing report.

The Adjust Images section provides you with the most pertinent information as to why images may be dropped from processing. Much in the way that you would put together a jigsaw puzzle by matching pieces, Drone2Map uses photogrammetry to do the same with your imagery. The software looks for neighboring images that contain the same overlapping features and creates tie points between those images. This process is typically done thousands of times throughout your project and provides a web of connections (solution points) that Drone2Map uses to align the images. Simply put, the more tie points and solution points that your project has, the more likely your project is to reconstruct properly. However, this alone still doesn’t guarantee a quality reconstruction.

The processing options that you initially select have a substantial bearing on the level of detail that the photogrammetry process can pull from your imagery. These options tend to be a trade-off between speed and quality. For example, setting Initial Image Scale to 1 (Original Image Size) results in the most points and matched images, but the process takes much longer since the software is matching images at their native resolution. You may also only end up with a slight increase in tie points or solution points versus setting the image scale to ½ (Half image size). In the end, the extra time spent on processing may not actually increase quality.

Since the image adjustment is a time-consuming process, most project templates in Drone2Map have the Refine Adjustment box checked by default. This saves time by setting Initial Image Scale to a lower image size to provide a rough image adjustment. Then, once the images are in their rough locations, another pass is done at a higher image resolution for a more detailed tie-point extraction.

Though it is often overlooked, one of the biggest contributors to uncalibrated images is the size of the matching neighborhood being used. When Drone2Map is looking for matching images, this setting limits how far out it can search. If imagery is not calibrating properly but your overlap between images is high, increasing the neighborhood to the next level (for example, Small to Medium) will likely provide more matches and subsequently more calibrated images. Much like increasing the Initial Image Scale setting, there are diminishing returns on increasing the neighborhood. If you set the neighborhood too large, you increase your processing time with little gained in terms of improved quality.

Camera information is also included in this section. Before the adjustment step is run, Drone2Map provides only the camera values necessary for processing, drawn from an internal database. Therefore, the report does not show a comparison between initial and optimized camera values; you will only see the optimized camera values that Drone2Map calculated for image orientation. On an RTK or PPK camera, you can enable a check box in the project's processing options to override the default Drone2Map values in favor of your drone's camera parameters.

The Tie Points Per Image section of the processing report.

The first graph in the report is the Tie Points Per Image chart. It displays every image in the project and a scale bar showing how many tie points were derived for that image. This can be useful if you are experiencing adjustment problems and have no idea which images might be causing the issues. Images with a low number of tie points are the first areas to investigate. The key takeaway is that the higher the number of tie points per image, the better the adjustment quality.
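If you transcribe the per-image counts from the chart, a quick sort makes the weakest images easy to spot. The file names, counts, and the 1,000-point cutoff below are hypothetical illustrations, not values produced by Drone2Map.

```python
# Minimal sketch: rank images by tie point count to find likely problem images.
# The counts are hypothetical placeholders for values read from the report.
tie_points_per_image = {
    "DJI_0001.JPG": 4120,
    "DJI_0002.JPG": 3890,
    "DJI_0003.JPG": 310,   # suspiciously low
    "DJI_0004.JPG": 4270,
}

# Sort ascending so the images with the fewest tie points come first.
for name, count in sorted(tie_points_per_image.items(), key=lambda kv: kv[1]):
    flag = "  <-- investigate" if count < 1000 else ""
    print(f"{name}: {count} tie points{flag}")
```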

The Tie Point Reprojection Error section of the processing report.

The Tie Point Reprojection Error chart helps to visualize how well the project parameters were optimized during the adjustment step to fit the observed tie points. These parameters include image locations, image attitude angles, camera settings, and solution points. When multiple tie points correspond to the same ground point, a single solution point is created. After the adjustment step, every solution point is projected back onto the images using the attitude angles and camera settings. These projected image points usually deviate from the observed tie points, and the Tie Point Reprojection Error shows the magnitude of that deviation. A small reprojection error confirms that the image network connected by the tie points is in good shape and will be a solid base for generating output products.

In most cases, an optimal adjustment has a Tie Point Reprojection Error that is less than 1 pixel. The graph above illustrates a good quality adjustment where most of the tie points are under 1 pixel in error. Drone2Map allows you to define a tie point residual error threshold in pixels to exclude any tie points above the error value from the adjustment.
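Conceptually, the reprojection error for each tie point is just the pixel distance between the observed tie point and the solution point projected back into the image. The sketch below computes that distance and applies a residual threshold in the same spirit as the option described above; the coordinates and the 1-pixel threshold are illustrative assumptions, not data from Drone2Map.

```python
import math

# Minimal sketch: reprojection error = pixel distance between an observed
# tie point and the corresponding solution point projected back into the image.
# Each entry is a hypothetical (observed_px, projected_px) pair.
observations = [
    ((1024.3, 768.9), (1024.6, 769.1)),
    ((2310.0, 455.2), (2309.1, 456.0)),
    ((512.7, 1999.4), (515.9, 1996.2)),  # large residual
]

threshold_px = 1.0  # example residual threshold, in pixels

kept = []
for observed, projected in observations:
    error = math.dist(observed, projected)  # Euclidean distance in pixels
    status = "kept" if error <= threshold_px else "excluded"
    print(f"reprojection error = {error:.2f} px -> {status}")
    if error <= threshold_px:
        kept.append((observed, projected))
```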

The Standard Deviation of Exterior Orientation section of the processing report.

The Standard Deviation of Exterior Orientation can be best described as the uncertainties of the optimized Exterior Orientation parameters, which include image location and attitude angles. The smaller the Standard Deviation of Exterior Orientation, the more reliable the Exterior Orientation.

Usually, images close to the border of a project have a larger Standard Deviation than images close to the project's physical center, because they have fewer connections to other images in their neighborhood. Additionally, images with fewer tie points generally have a larger Standard Deviation of Exterior Orientation than images with more tie points, and images with more evenly distributed tie points usually have a smaller one. In general, less accurately adjusted images have a larger Standard Deviation of Exterior Orientation. A mean Standard Deviation of less than 2 times the ground resolution indicates a good adjustment.
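The rule of thumb above is easy to check by hand: compare the mean positional standard deviation against twice the ground resolution. The standard deviations and ground sample distance below are hypothetical values standing in for numbers read from the report.

```python
# Minimal sketch: compare the mean positional standard deviation of the
# exterior orientation against 2x the ground resolution (GSD).
# Values are hypothetical and share the same units (metres here).
per_image_position_std = [0.021, 0.018, 0.035, 0.027, 0.044]  # metres
ground_resolution = 0.015  # metres per pixel (GSD)

mean_std = sum(per_image_position_std) / len(per_image_position_std)
limit = 2 * ground_resolution

print(f"Mean std dev: {mean_std:.3f} m, limit (2 x GSD): {limit:.3f} m")
print("Adjustment looks good" if mean_std <= limit else "Adjustment may need review")
```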

The Adjusted Image Positions graphic of the processing report.

The Adjusted Image Positions graphic visualizes the shifts that can occur to the center point of your images after the adjustment has been run. The blue points indicate where your imagery was initially positioned, and the green points show where they were reprojected. If you see significant shifts between the points, it may indicate poor GPS collection.
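To put a number on those shifts, you can compute the horizontal distance between each initial GPS position and its adjusted position. The coordinates below are hypothetical easting/northing pairs in a projected coordinate system, used purely for illustration.

```python
import math

# Minimal sketch: horizontal shift between initial GPS image centers (blue)
# and adjusted positions (green). Coordinates are hypothetical easting/northing
# pairs in metres.
initial_positions = {
    "DJI_0001.JPG": (498210.4, 3762011.7),
    "DJI_0002.JPG": (498245.9, 3762013.2),
}
adjusted_positions = {
    "DJI_0001.JPG": (498210.9, 3762012.1),
    "DJI_0002.JPG": (498247.6, 3762011.0),
}

for name, initial in initial_positions.items():
    shift = math.dist(initial, adjusted_positions[name])
    print(f"{name}: shifted {shift:.2f} m")
```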

The Image Overlap graphic of the processing report.

The Image Overlap graphic displays a scale of where the highest or lowest areas of overlap are within your project. Dropped images can be a sign of poor overlap within a flight area. Ideally, the areas within your project should have significant overlap to achieve the best quality adjustment and output products. For more on how to configure projects for the best results, see the Tips for Collecting Drone Data for Drone2Map blog post.

The Cross Matches graphic of the processing report.

The final graphic in the report shows you the cross matches between your images. This is a quick way to see which areas of your project have the highest concentrations of tie points. Areas toward the purple side of the scale have a higher number of tie points and will likely reconstruct better, since the software has more information to work with. If you see areas that are heavily yellow or low in tie point count, you may need to increase the image scale at which tie points are generated or expand the matching neighborhood to get more matches from neighboring images. This graphic can also reveal features in your imagery that are consistently hard to reconstruct, so if you fly the same area or objects in the future, you can adjust the flight settings accordingly.

The Solution Points section of the processing report.

Solution points are derived from the tie points that are collected throughout the dataset. A single solution point can consist of multiple tie points. These are the points that the software uses to adjust the images. The general idea is that the more solution points with a higher number of image matches, the more accurate your project will be.
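The relationship between tie points and solution points can be sketched as a simple grouping: every observation of the same ground feature in an image contributes one tie point, and the group of observations becomes a single solution point. The data structure below is a hypothetical illustration, not Drone2Map's internal representation.

```python
from collections import defaultdict

# Minimal sketch: group tie point observations by the ground feature they
# belong to; each group is one solution point seen in several images.
# The (feature_id, image_name) pairs are hypothetical.
tie_point_observations = [
    ("pt_001", "DJI_0001.JPG"),
    ("pt_001", "DJI_0002.JPG"),
    ("pt_001", "DJI_0003.JPG"),
    ("pt_002", "DJI_0002.JPG"),
    ("pt_002", "DJI_0003.JPG"),
]

solution_points = defaultdict(set)
for feature_id, image in tie_point_observations:
    solution_points[feature_id].add(image)

for feature_id, images in solution_points.items():
    print(f"Solution point {feature_id}: matched in {len(images)} images")
```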

Geolocation Details

The Geolocation Details section of the processing report.

When incorporating ground control points or check points into your project, you will see the Geolocation Details table. This table summarizes the accuracy of each point in the X (dX), Y (dY), and altitude (dZ) directions. This accuracy reflects how far the point shifted in each of those directions during the adjustment, compared to the original location where the point was imported. Once the Adjust Images step is run, Drone2Map adjusts the points to an optimal location using a best fit method for all the points. A projection error value shows how far the point had to shift from its initial location to fit the adjusted location. The status column displays the number of links that were used to adjust the point. This is provided as a ratio; for example, 3/5 means three out of the five links were used for the point's adjustment.
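If you want to reproduce the per-point residuals outside the report, the sketch below computes dX, dY, and dZ as the difference between each point's adjusted and imported coordinates, plus a per-axis RMSE summary. The point names and coordinates are hypothetical, and the calculation is a generic residual check rather than Drone2Map's exact method.

```python
import math

# Minimal sketch: per-point residuals (dX, dY, dZ) between the adjusted
# control point position and its imported (surveyed) position, plus RMSE.
# Coordinates are hypothetical (easting, northing, altitude) triples in metres.
points = {
    "GCP_1": {"imported": (498200.00, 3762000.00, 112.40),
              "adjusted": (498200.03, 3761999.97, 112.35)},
    "GCP_2": {"imported": (498300.00, 3762100.00, 115.10),
              "adjusted": (498299.95, 3762100.06, 115.18)},
}

residuals = []
for name, p in points.items():
    dx, dy, dz = (a - i for a, i in zip(p["adjusted"], p["imported"]))
    residuals.append((dx, dy, dz))
    print(f"{name}: dX={dx:+.3f} m, dY={dy:+.3f} m, dZ={dz:+.3f} m")

# Root mean square error per axis across all points.
for axis, label in enumerate(("dX", "dY", "dZ")):
    rmse = math.sqrt(sum(r[axis] ** 2 for r in residuals) / len(residuals))
    print(f"RMSE {label}: {rmse:.3f} m")
```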

Dense Matching

The Dense Matching section of the processing report.

The Dense Matching section summarizes the Dense processing step. The processing splits the dataset into tiles to work more efficiently, and you will see a count of how many tiles it needed to create. Expect a higher processing time when increasing the point cloud density.

Project Settings

The Project Settings section of the processing report.

The Project Settings section summarizes the hardware, software version, coordinate systems, and resolution used on the project. If you are on an Advanced license and using preprocessing layers, you will also see if those layers were enabled. While the section may seem like basic information, it can be useful when troubleshooting issues, especially when this report is shared with Esri Technical Support.

2D Product

The 2D Product section of the processing report.

The 2D Product section summarizes how long it took for each enabled product to complete processing. Elevation layers usually process more quickly than an RGB True Ortho image.

3D Product

The 3D Product section of the processing report.

The final section, 3D Product, shows the processing time for enabled products and confirms which 3D products were enabled with a yes or no answer.

About the author

Mark Barker

Mark is a Product Engineer for the Drone2Map team, with interests in remote sensing, technical writing, and graphic design. He is a California native and started his Esri journey in 2016.

