Ardenna Blog

Geolocation: why it's hard to know where you've been

April 18, 2018

by Kevin McDonough

What is geolocation?

Geolocation is the estimation of the real-world geographic location of an object. It can be done by different means, including GPS or position relative to known, pre-located objects. Essentially, it is the answer to the question, "Where are you?"

While this may sound relatively straightforward, it can be very hard under certain circumstances. As an example, surveyors use very precise, stationary equipment to make repeated GPS and relative-distance measurements of a single location. Because the equipment is stationary, the repeated measurements can be averaged together to produce an accurate result. However, if you attach your measurement devices to a moving object, you lose the benefit of repeated measurements, and this is the condition under which the UAS inspection industry operates.

Why is it hard to use geolocation during UAS inspection?

As we use UAS to perform routine inspections, we want the results of those inspections to be correlated with previous, and future, inspections. This is essential for estimating component life expectancy. However, to do this, you need to know that you are inspecting the same specific object or component each time.

In the case of inspecting discrete infrastructure, such as wind turbines in a wind farm, this problem is solved in a straightforward manner. Before drones ever inspect it, each turbine has been pre-located, since its position is surveyed when the wind farm is built. Each turbine, whose size and shape are known very well, can then be inspected through measurements made relative to its known base location. Also, because each turbine is located far enough away from the other turbines, there is little chance of any individual turbine being confused with another.

This, however, is not the case for inspecting continuous infrastructure like railroads, power lines, or pipelines. The components of this linear infrastructure are not discretely separated from each other. Accurate measurements of the relative distance between components, and of the actual location of a single component, are typically very useful in the inspection process. This brings us back to our primary question: what happens when your location sensor is not stationary, but moving?

To start, let's examine how well a UAS knows its position. Mid- to high-end commercial GPS units can have errors in the tens of feet. This isn't inherently bad; it is a consequence of the physics governing GPS signals. Using ground-based augmentations, such as Differential GPS (DGPS), the error can be reduced to inches. But these systems are expensive and, unless a network already exists (e.g. the Nationwide DGPS), would require building an array of known-location transmitting stations along the infrastructure to be inspected. The inertial measurement unit (IMU) of the UAS can also be used to augment and enhance the GPS solution, but precision IMUs are very costly and still suffer accuracy issues from drift that accumulates when sensor biases are integrated.

When surveyors use GPS, they repeatedly measure the same location, and by averaging those repeated measurements they are able to reduce the GPS error significantly. UAS, especially the fixed-wing UAS often used for linear infrastructure inspection, typically cannot use this technique and are subject to the inherent error in the GPS signal. So, our first source of error is the GPS location of the inspecting UAS itself.
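To see why averaging helps a stationary surveyor but not a moving UAS, here is a minimal sketch. The 15 ft per-fix error is an assumed, illustrative number; the point is only that the error of the mean of n fixes shrinks roughly as 1/sqrt(n):

```python
import random

def averaged_fix(true_pos_ft, sigma_ft, n_fixes):
    """Average n noisy GPS fixes of a single stationary point.

    Each fix is the true position plus Gaussian noise; the error of
    the mean shrinks roughly as sigma / sqrt(n), which is why a
    stationary surveyor can beat the raw per-fix error while a
    moving UAS, which gets only one fix per spot, cannot.
    """
    fixes = [true_pos_ft + random.gauss(0.0, sigma_ft) for _ in range(n_fixes)]
    return sum(fixes) / n_fixes

random.seed(42)
print(abs(averaged_fix(0.0, 15.0, 1)))    # single fix: error on the order of 15 ft
print(abs(averaged_fix(0.0, 15.0, 400)))  # 400 fixes: error on the order of 0.75 ft
```

A moving UAS is effectively stuck at n = 1 for any given point along the track, which is why it inherits the full per-fix error.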

 

Next, let's examine the errors in where the camera is pointing. Consider a camera affixed to a movable gimbal, which is attached to the UAS. Mid- to high-end commercial gimbal systems can be very accurate, to the tenth or hundredth of a degree. That is, the gimbal knows quite well how it is pointed relative to its base, and its base is affixed to the UAS in a known way. However, small angular errors can still produce large translational errors on the ground. A 0.1° total pointing error at a 100 ft altitude produces only about 2 inches of error when looking straight down, but the error grows quickly at oblique viewing angles: the same 0.1° error at a 70° off-nadir angle yields roughly 1.5 feet, and higher altitudes scale it further. Some of this error could be eliminated by rigidly fixing the camera to the UAS, but this would greatly limit the variety of images the UAS can capture during an inspection.
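That geometry is easy to check with a back-of-the-envelope function; the altitude and angles here are just illustrative numbers:

```python
import math

def ground_error_ft(altitude_ft, off_nadir_deg, pointing_err_deg):
    """Ground-plane position error caused by a pointing-angle error.

    Computed as the difference between the ground intercepts of the
    intended line of sight and the erroneous one, for a camera at
    altitude_ft looking off_nadir_deg away from straight down.
    """
    theta = math.radians(off_nadir_deg)
    err = math.radians(pointing_err_deg)
    return altitude_ft * (math.tan(theta + err) - math.tan(theta))

# 0.1 deg at 100 ft, looking straight down: about 2 inches
print(round(ground_error_ft(100.0, 0.0, 0.1), 2))
# the same 0.1 deg error at a 70 deg off-nadir view: about 1.5 ft
print(round(ground_error_ft(100.0, 70.0, 0.1), 2))
```

The 1/cos² growth with off-nadir angle is why oblique inspection shots are so much more sensitive to gimbal error than nadir shots.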

Finally, let's examine how the motion of the UAS can impact the image being captured. Imagine you are in a car driving down the highway. You see a pasture full of cows and you want to take a picture of one of them. So, you get your camera ready, you point it in the direction you want, and when the time comes, snap; the perfect picture is yours. But when you look at the picture, you only got half of the cow: your timing was off. If you had pulled the car over, or slowed way down, you might have been able to get a better picture, but alas, you were in a hurry and on a schedule. This is an issue for every moving vehicle trying to take a picture: the shutter has to fire at the right moment relative to your speed. Mid- to high-end camera/gimbal packages account for this error and do their best to tune it out.
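The size of this timing error is simply speed multiplied by latency. As a sketch, with an assumed cruise speed and trigger latency (neither is a measured value for any particular system):

```python
def along_track_error_ft(speed_fps, trigger_latency_s):
    """Along-track position error from shutter/trigger latency:
    the distance the vehicle travels between 'fire' and 'capture'."""
    return speed_fps * trigger_latency_s

# e.g. a fixed-wing UAS cruising at 60 ft/s with 50 ms of latency
print(along_track_error_ft(60.0, 0.050))  # 3.0 ft of along-track error
```

Note that this error scales linearly with speed, which is one reason slow-moving multirotors have an easier time of it than fixed-wing aircraft.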

 

Now, imagine this scenario for a UAS doing linear infrastructure inspection. We want to take a picture of a specific component of the linear infrastructure, say a specific railroad tie that has a known location. Our GPS unit tells us that we are approaching this location, we point our camera using the gimbal, and we snap the picture. Did we get it? Maybe. If our system is well tuned and the railroad tie in question has some distinguishing characteristics (e.g. a label, an observed defect), then we might be able to easily tell whether the correct railroad tie is in the captured image.

However, if the railroad tie in question looks like every other railroad tie near it, then how do we know whether we captured it? In short, we don't. If we only have one image that is subject to typical GPS, gimbal, and timing errors, we can only assess the likelihood that it was captured.
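That likelihood can be sketched with a small Monte Carlo model. All of the error magnitudes and the image footprint below are assumed, illustrative values, not measurements of any real system:

```python
import random

def capture_probability(gps_sigma_ft=15.0,       # assumed 1-sigma GPS error
                        pointing_sigma_ft=0.2,   # assumed gimbal error, projected to ground
                        timing_sigma_ft=3.0,     # assumed speed-times-latency error
                        footprint_half_ft=10.0,  # assumed half-width of image on ground
                        trials=100_000):
    """Monte Carlo estimate of the chance the target tie lands in frame.

    Each trial draws independent Gaussian errors along the track and
    checks whether their sum stays inside the image footprint.
    """
    hits = 0
    for _ in range(trials):
        offset = (random.gauss(0.0, gps_sigma_ft)
                  + random.gauss(0.0, pointing_sigma_ft)
                  + random.gauss(0.0, timing_sigma_ft))
        if abs(offset) <= footprint_half_ft:
            hits += 1
    return hits / trials

random.seed(0)
print(capture_probability())  # with these assumed errors, roughly a coin flip
```

In this toy model the GPS term dominates: shrinking the gimbal error barely moves the result, while halving the GPS sigma raises the capture probability dramatically.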

 

So why use UAS-based inspection at all?

Simply put, it is efficient and independent of the infrastructure itself. Railroads currently use a special rail car that drives along the tracks to snap images of the rail infrastructure. However, it can only capture images along a single track at a time and requires a team of specialized technicians to operate. With UAS-based inspection, multiple parallel tracks can be inspected at the same time. Also, the UAS can be deployed to areas of critically damaged track (e.g. after a washout or mudslide) that the special rail car cannot traverse.

The errors discussed above can be estimated a priori, and mathematical models can be used to estimate the impact they are likely to have on the captured images. In addition, images that contain structures with known locations can be used as anchor points. These anchor points, when combined with image stitching, can greatly improve the geolocation accuracy of systems like this. Also, as UAS inspection becomes the norm, which it will, infrastructure will be built so that these anchor points occur more frequently along it; imagine a QR code attached to different components, visible to the inspecting UAS.
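As a simple illustration of the anchor-point idea, here is a minimal sketch under a strong simplifying assumption: that nearby images share one constant geotag offset. A real pipeline would refine the correction per image via stitching, but the core idea is the same:

```python
def correct_geotags(image_positions, anchor_observed, anchor_true):
    """Shift a batch of image geotags by the offset measured at an anchor.

    The offset between where a known structure was observed and where
    it truly sits is applied to every nearby image (constant-offset
    assumption; real systems would estimate a per-image correction).
    """
    dx = anchor_true[0] - anchor_observed[0]
    dy = anchor_true[1] - anchor_observed[1]
    return [(x + dx, y + dy) for (x, y) in image_positions]

# the anchor was geotagged 12 ft east and 4 ft north of its true spot,
# so every nearby image geotag is shifted back by that amount
fixed = correct_geotags([(100.0, 50.0), (130.0, 50.0)],
                        anchor_observed=(112.0, 54.0),
                        anchor_true=(100.0, 50.0))
print(fixed)  # [(88.0, 46.0), (118.0, 46.0)]
```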

 

In addition to all of these reasons, there is one more important reason, maybe the most important: UAS inspection is safer and quicker for the inspectors. Bridge inspectors, power line inspectors, road surveyors, even building inspectors all have to contend with enormous risk to do their jobs. They are required to climb on the infrastructure they inspect, brave the terrain the infrastructure sits in, or interact with the traffic along it. With a UAS, they can be safely situated on the ground, away from harm, performing their vital job.


Ardenna

81 Research Drive | Hampton, VA 23666

www.ardenna.com

info@ardenna.com

© 2018. Ardenna. All Rights Reserved.