SwiftNav experiment: Improvements to the SNR

In my previous couple of posts, I evaluated the performance of a pair of dual frequency SwiftNav Piksi Multi receivers in a moving rover with local base scenario.  I used a pair of single frequency u-blox M8T receivers fed with the same antenna signals as a baseline reference.

It was pointed out to me that the signal to noise ratio (SNR) measurements of the rovers were noticeably lower than those of the bases, especially for the L2 measurements, and that this might be affecting the validity of the comparison.  This seemed to be a valid concern, so I spent some time digging into this discrepancy and did indeed find some issues.  I will describe the issues as well as the process of tracking them down, since I think it could be a useful exercise for any RTK/PPK user looking to improve their signal quality.

Previously, in another post, I described a somewhat similar exercise tracking down some signal quality issues caused by EMI from the motor controllers on a drone.  In that case, though, the degradation was more severe and I was able to track it down by monitoring cycle slips.  In this case, the degradation is more subtle and does not directly show up in the cycle slips.

Every raw observation from the receiver generally includes a signal strength measurement as well as the pseudorange and carrier phase measurements.  The SwiftNav and u-blox receivers both actually report carrier to noise density ratio (C/N0) rather than signal to noise ratio (SNR), but both are measures of signal strength.  They are labelled as SNR in the RTKLIB output, so to avoid confusion I will refer to them as SNR as well.  I will only be using them to compare relative values, so the difference isn’t important for this exercise, but for anyone interested, there is a good explanation of the difference between them here.  Both are logarithmic values measured in dB or dB-Hz, so a 6 dB difference corresponds to a factor of two in signal amplitude.
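
As a quick sanity check on the dB arithmetic, here is a minimal sketch (written for this writeup, not part of my processing tools) that converts a dB difference to a linear ratio.  Whether a given dB delta corresponds to a doubling depends on whether it is interpreted as an amplitude ratio (20·log10) or a power ratio (10·log10); the factor-of-two figure above uses the amplitude convention.

    def db_to_amplitude_ratio(delta_db):
        # amplitude (voltage) convention: ratio = 10^(dB/20)
        return 10 ** (delta_db / 20.0)

    def db_to_power_ratio(delta_db):
        # power convention: ratio = 10^(dB/10)
        return 10 ** (delta_db / 10.0)

    print(db_to_amplitude_ratio(6))   # ~2.0, a factor of two in amplitude
    print(db_to_power_ratio(6))       # ~4.0, a factor of four in power
    print(db_to_power_ratio(10))      # ~10x in power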

Since the base and rover have very similar configurations we would expect similar SNR numbers between the two, at least when the rover antenna is not obstructed by trees or other objects.  I selected an interval of a few minutes when the rover was on the open highway and plotted SNR by receiver and frequency for base and rover.  Here are the results, base on the left and rover on the right.  The Swift L1 is on the top, L2 in the middle, and the u-blox L1 on the bottom.  To avoid too much clutter on the plots, I show only the GLONASS SNR values, but the other constellations look similar.

snr1

Notice that the L1 SNR for both rovers is at least 6 dB (a factor of two) lower than for the bases, and the Swift L2 SNR is more like 10 dB lower.  These losses in the rover are significant enough to possibly affect the quality of the measurements.

The next step was to try and isolate where the losses were coming from.  I set up the receiver configurations as before and used the “Obs Data” selection in the “RTK Monitor” window in RTKNAVI to monitor the SNR values in real time for both base and rover as well as the C/NO tracking window in the Swift console app.  I then started changing the configuration to see if the SNR values changed.  The base and rover antennas were similar but not identical so I first swapped out the rover antenna but this did not make a difference.  I then moved the rover antenna off of the car roof and onto a nearby tripod to see if the large ground plane (car roof) was affecting the antenna but this also did not make a difference.  I then removed the antenna splitter, but again no change.

Next, I started modifying the cable configuration between the receivers and my laptop.  To be able to both collect solution data and collect and run a real-time solution on the raw Swift observations, I have been connecting both a USB serial cable and an ethernet cable between the Swift board and my laptop.  My laptop is an ultra-slim model and uses an ethernet->USB adapter cable to avoid the need for a high profile ethernet connector.  So, with two receivers and my wireless mouse, I ended up with more USB cables than USB ports on my computer and had to plug some into a USB hub that was then plugged into my laptop.

The first change in SNR occurred when I unplugged the ethernet cable from the laptop and plugged it into the USB hub.  This didn’t affect the L1 measurements much but caused the Swift L2 SNR to drop another 10 dB!  Wrong direction, but at least I had a clue here.

By moving all of the data streams between Swift receiver and laptop (base data to Swift, raw data to laptop, internal solution to laptop) over to the ethernet connection I was able to eliminate one USB serial port cable.  This was enough to eliminate the USB hub entirely and plug both the USB serial cable from the u-blox receiver and the ethernet->USB cable from the Swift receiver directly into the laptop.  I also plugged the two cables into opposite sides of the laptop and wrapped the ethernet->USB adapter with aluminum foil which may have improved things slightly more.

Here is the same plot as above after the changes to the cabling from a drive around the neighborhood.

snr2

I wasn’t able to eliminate the differences entirely, but the results are much closer now.  The biggest difference now between the base configuration and the rover configuration is that I am using a USB serial cable for the base and an ethernet->USB adapter cable for the rover, so I suspect that cable is still generating some interference and causing the remaining signal loss in the rover.  Unfortunately I cannot run all three streams I need for this experiment over the serial cable, so I am not able to get rid of the ethernet cable.

I did two driving tests with the new configuration, similar to the ones I described in the previous posts.   One was through the city of Boulder and again included going underneath underpasses and a parking garage.  The second run was through the older and more challenging residential neighborhood.  Both runs looked pretty good, a little better than the previous runs but it is not really fair to compare run to run since the satellite geometry and atmospheric conditions will be different between runs.  The relative solutions between Swift and u-blox didn’t change much though, which is probably expected since the cable changes improved both rovers by fairly similar amounts.

Here’s a quick summary of the fix rates for the two runs.  The fix rates for the residential neighborhood look a little low relative to last time, but this run included only the most difficult neighborhood, so it was more challenging than before.

Fix rates

                       City/highway    Residential
Swift internal RTK     93.60%          67.50%
Swift RTKLIB PPK       93.70%          87.90%
u-blox RTKLIB RTK      95.70%          92.80%
u-blox RTKLIB PPK      96.10%          91.10%

Here are the city/highway runs, real-time on the top, post-processed on the bottom, with Swift on the left and u-blox on the right.  For the most part all solutions had near 100% fix except when recovering from going underneath the overpasses and parking garage.

snr4

Here is the same sequence of solutions for the older residential neighborhood.  This was more challenging than the city driving because of the overhanging trees and caused some loss of fix in all solutions.

snr5

Here are the same images of the recovery after driving under an underpass and underneath a parking garage that I showed in the previous post.  Again, the relative differences between Swift and u-blox didn’t change much, although the Swift may have improved a little.

snr1

Overall, the improvements from better SNR were incremental rather than dramatic, but still important for maximizing the robustness of the solutions.  Comparing base SNR to rover SNR and tracking down any discrepancies could be a useful exercise for anyone trying to improve their RTK or PPK results.


Underpasses and urban canyons

[Update: 4/17/18:  Although I don’t think it changes the results of this experiment significantly, there was an issue with apparent interference from a USB hub and ethernet cable on the rover setup during this testing.  See the next post for more details. ]

In my last post I demonstrated fairly similar fix rates and accuracies between an M8T single-frequency four-constellation solution and a SwiftNav Piksi dual-frequency two-constellation solution.

One advantage often mentioned for dual frequency solutions for moving rovers is that their faster acquisition times should help when fix is lost due to a complete outage of satellite view caused by an underpass or other obstruction.  This makes sense since the dual frequency measurements should allow the ambiguities to be resolved again more quickly.
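
One common way to see why dual frequency re-acquisition can be faster is the wide-lane combination: differencing the L1 and L2 carriers produces an effective wavelength of roughly 86 cm, which makes the integer ambiguities much easier to resolve than with the ~19 cm L1 wavelength alone.  (Not every solution compared here necessarily uses wide-laning; this is just the usual textbook argument.)  A minimal sketch of the arithmetic:

    C = 299792458.0      # speed of light, m/s
    F_L1 = 1575.42e6     # GPS L1 frequency, Hz
    F_L2 = 1227.60e6     # GPS L2 frequency, Hz

    lambda_l1 = C / F_L1              # ~0.190 m
    lambda_wide = C / (F_L1 - F_L2)   # ~0.862 m, wide-lane wavelength

    print(f"L1 wavelength:        {lambda_l1:.3f} m")
    print(f"Wide-lane wavelength: {lambda_wide:.3f} m")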

Since my last data set included several of these types of obstructions I thought it would be interesting to compare performance specifically for these cases.

To create the Google Earth images below I used the RTKLIB application POS2KML to translate the solution files to KML format files and then opened them with Google Earth.
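
For anyone who wants to reproduce this step, here is a minimal sketch of automating the conversion.  It assumes the pos2kml executable is on the path and that, when called with just an input file, it writes a .kml file alongside the input; check the RTKLIB manual for the exact options in your version.

    import glob
    import subprocess

    # Convert every RTKLIB solution file in the current directory to KML
    # so the tracks can be opened directly in Google Earth.
    for pos_file in glob.glob("*.pos"):
        subprocess.run(["pos2kml", pos_file], check=True)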

Here are the raw observations for the first underpass I went under, Swift rover on the left, M8T rover on the right.  In this case there was an overhead sign just before the underpass which caused a momentary outage on all satellites, followed by about a two-second outage from the underpass, followed by a period of half-cycle ambiguity as the receivers re-locked to the carrier phases.

upass2

Here’s the internal Swift solution for the sign/underpass combo above at the top of the photo and a second underpass at the bottom of the photo.  For the first underpass, the solution is lost at the sign, achieves a float solution (yellow) after about 9 seconds, then re-fixes (green) after 35 seconds.

upass5

Here’s the RTKLIB post-processed solution (forward only) for the Swift receivers with fix-and-hold low tracking gain enabled as described in my previous post.  It looks like a small improvement for both underpasses.  The solution loses fix at the sign but in this case maintains a float solution until the underpass.

upass6

Here’s the RTKLIB post-processed solution (same config) for the M8T receivers.  Notice the no-solution gaps after the underpasses are shorter.  In this case, for the upper underpass, a solid fix was re-achieved after about 21 sec.

upass7

Here’s a zoom in of the M8T solution (yellow dots) for the lower underpass.  If the position were being used for lane management it looks like the float solution would probably be accurate enough for this.  The other yellow line with no dots is the gap in the Swift solution.

upass8

Here’s a little further down the road.  At this point the Swift solution achieves a float position at about the same time the M8T solution switches to fix.  Lane management would clearly be more difficult with the initial Swift float solution.

upass9

Next, I’ll show a few images from another underpass.  In this case I drove under the underpass from the left, turned around, then drove under the underpass again from the right.  The Swift internal solution is on the left, the Swift RTKLIB solution in the middle, and the M8T RTKLIB solution on the right.  Notice that the time to re-acquire a fix is fairly similar in all three cases.

upass1

Here is a zoom-in of the two Swift solutions; they are quite similar.

upass3

Here is a zoom-in of the M8T RTKLIB solution.  Again, the float solution is achieved very quickly, and appears to be accurate enough for lane management.

upass4

My last test case was a combination urban canyon and parking structure.  In the photo below, I drove off the main street to the back of the parking garage, underneath the pedestrian walkway, into the back corner, then underneath the back end of the garage and then back to the main street.  I would consider this a quite challenging case for any receiver.

ucanyon1

Here are the raw observations.

ucanyon0

Here are the three solutions, again the Swift internal is on the left, the Swift RTKLIB in the center, and the M8T RTKLIB on the right.

ucanyon1

Here is an image of the Swift internal solution.

ucanyon4

Here is an image of the Swift RTKLIB solution.

ucanyon3

And here is an image of the M8T RTKLIB solution.

ucanyon2

In this case, the M8T RTKLIB solution appears to be the best.

So, this experiment seems to show that a dual frequency solution will not always handle satellite outages better than single frequency solutions.  In this case, the extra Galileo and SBAS satellites in the M8T solution seem to have helped a fair bit, and the M8T solution is, at least to me, surprisingly good.

If anyone is interested in analyzing this data further, I have uploaded the raw data, real-time solutions, and config files for the post-processed solutions to the sample data sets on my website, available here.  I should mention that there is an unexplained outage in the Swift base station data near the end of the data set.  This could have been caused by many things, most of them unrelated to the Swift receiver, so all the analysis in both this post and the previous post was done only for the data before the outage.

Improved results with the new SwiftNav 1.4 firmware

I last took a look at the SwiftNav Piksi Multi low-cost dual-frequency receiver back in November last year when they introduced the 1.2 version of the firmware.  They are now up to firmware version 1.4, so I thought it was time to take another look.  The most significant improvement in this release is the addition of GLONASS ambiguity resolution to the internal RTK solution, but they have also made some improvements in the quality of the raw observations.

I started with a quick spin around the neighborhood on my usual test route.  The initial results looked quite good, so for the next test I expanded my route to include a drive to and around Boulder, Colorado, a small nearby city of just over 100,000.  The route added some new challenges, including underpasses, urban canyons, higher velocities, and even a pass underneath a parking structure.  This is the first time I have expanded the driving test outside my local neighborhood.

My test configuration was similar to previous tests.  I used a ComNav AT330 antenna on my house roof for the base station, and a SwiftNav GPS-500 antenna on top of my car for the rover.  I split the antenna signals and, in both cases, fed one side to a Piksi receiver and the other side to a u-blox M8T single frequency receiver.  I ran an internal real-time RTK solution on the Piksi rover and an RTKNAVI real-time RTK solution on the M8T rover.  The M8T receivers ran a four constellation single frequency solution (GPS/GLONASS/Galileo/SBAS) to act as a baseline, while the Piksi receivers ran a two constellation (GPS/GLONASS) dual frequency solution.  Both rovers were running at a 5 Hz sample rate and both bases at a 1 Hz sample rate.  The distance between rover and base varied from 0 to just over 13 km.  The photos below show different parts of the route.

niwot_boulder

Here are the real-time solutions for the two receiver pairs, internal Swift on the left, and RTKNAVI M8T on the right.

swift14_3

Both solutions had similar fix rates (79.9% for Swift, 82.6% for M8T), and in both cases the float sections occurred for the most part either in the older neighborhood with larger trees (top middle) or after underpasses (bottom left).  The higher velocity (100 km/hour) on the highway (center) did not cause any trouble for either solution.

Based on a comparison of the two solutions, accuracy was relatively good for the fixed sections of both.  Below on the left is the difference between the two solutions for points where both had a fix.  In the center and right are plots of both solutions (Swift internal = green, M8T RTKNAVI = blue) for the two locations with the longest-lasting discrepancies.  Both look like false fixes by the Swift internal solution, based on the discontinuities.  Overall, though, the errors between the two were reasonably small and of short duration.

swift14_2

Post-processing the Swift data with RTKLIB produced the solution on the left below, with an 85.5% fix rate and a good match to the M8T solution.  The difference between the two solutions for the fixed points is shown on the right.  This solution was run with continuous ambiguity resolution.

swift14_4

For more challenging environments like this I often add some tracking gain to the ambiguities by enabling “fix-and-hold” for the ambiguity resolution mode but setting the variance of the feedback (input parameter pos2-varholdamb) to a fairly large number (0.1 or 1.0) to effectively de-weight the feedback and keep the tracking gain low.  For comparison, the default variance for fix-and-hold feedback is 0.001, which results in quite a high tracking gain.  I find that with the low tracking gain, I generally do not have an issue with fix-and-hold locking on to false fixes.
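
For reference, in a demo5 RTKLIB config file this combination looks something like the excerpt below (the pos2-varholdamb name is as given above; exact formatting and option strings vary a bit between config files and versions):

    pos2-armode        =fix-and-hold   # ambiguity resolution mode (off / continuous / instantaneous / fix-and-hold)
    pos2-varholdamb    =1.0            # variance of fix-and-hold feedback; default 0.001, larger = lower tracking gain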

Running RTKLIB solutions for Swift and M8T with this change (fix-and-hold AR enabled, pos2-varholdamb=1.0) improved the fix ratio for the Swift RTKLIB solution from 85.5% to 91.1% and the M8T RTKLIB solution from 82.6% to 92.6% with no apparent degradation in accuracy.

Using a combined solution instead of a forward solution (only a choice for post-processing) improved the fix ratios even further, again with no apparent degradation in accuracy.  The Swift RTKLIB solution increased to a 96.2% fix rate and the M8T RTKLIB solution increased to 94.1%.
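
For post-processing, the forward/combined choice is also just a config (or GUI) setting, something like the following line; again, the value strings may differ slightly between RTKLIB versions:

    pos1-soltype       =combined       # filter direction: forward / backward / combined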

Overall, the Swift RTKLIB solutions were noticeably better and more consistent than in my previous test.  Considering the difficulty of the environment, I consider all of these solutions to be very good.

In my next post, I will look specifically at how the two receivers handled going through a narrow urban canyon, underneath three underpasses and underneath a parking structure.

Measuring a survey marker with the DataGNSS D302-RTK

In my last post I reviewed the D302-RTK receiver from DataGNSS.

datagnss1

Since the unit is designed for surveying, I thought it would be fun to use it to try to measure an actual survey marker, something I’ve never done before.

So, to find an official marker I went to the NGSDataExplorer website from the U.S. National Geodetic Survey and brought up an easy-to-use interactive map of the local area showing the different types of survey markers.  Here’s a zoom-in of the map showing a nearby marker I found with easy access and a decent sky view.

survey3

Clicking on “Datasheet” brings up the following information about the marker.

survey2

About halfway down, you can see that the NAD83 (2011) coordinates for this site are:

latitude               =   40 05 14.86880 N
longitude           = 105 09 01.68689 W
ellipsoidal height = 1613.737 meters

So let’s see how close we get to these numbers with the D302-RTK.  The D302-RTK uses a u-blox M8T receiver and a version of the demo5 RTKLIB code, so any results from this experiment should be valid not just for the D302-RTK, but for any M8T/RTKLIB based solution.

The survey marker is just over 1 km from my house and so I will use the antenna mounted on my roof connected to another M8T receiver as the base station.  The base observations are then broadcast to the internet with an RTK2GO.com NTRIP caster as I described in this post.  Using an M8T receiver for the base station allows me to enable GLONASS ambiguity resolution in the solution since both receivers are using identical hardware.

Since I am doing a real-time solution, I need internet service at the site of the measurement to get the base observations.  I got this by enabling a hot spot on my cell phone and connecting the D302-RTK to that.  I have surveyed the base station antenna with both RTK solutions from nearby CORS stations as well as online PPP solutions, so I have a fairly good idea of its location, maybe within a cm or so.

The clamp and surveying pole in the top photo look nice but I don’t own anything like that, so instead I used a $20 camera tripod, a piece of wood, two rubber bands, a wood clamp, a piece of string, and a carriage bolt.  Here is the resulting setup aligned over the survey marker.  Note that I am using the optional helix antenna which was included with the receiver that was sent to me rather than an external antenna.

survey1

Here’s a close-up of the marker itself with the carriage bolt and string aligned over the top of the marker pin.   The other end of the string is fastened directly underneath the D302-RTK, making it easy to align the receiver directly over the marker and measure the vertical distance between the two.    If you’re wondering about the film canister at the bottom of the well, it seems that the marker is also serving as a local geocache location.

survey4

At this point everything is ready to go.  I powered on the receiver, enabled the RTK service for a static solution, waited five minutes, and then recorded the position.  I did this three times to get three different measurements.  Here are some not-so-good photos of the screen of the D302-RTK for the three runs.  It is possible to store these values on the unit using an additional Android app and then upload them later with a USB cable, but in my case I just copied the values manually to my computer.

datagnss_screen123

The numbers are a little hard to read but are very consistent between results, so I’ll just use the middle values.  The indicated height is the height of the base of the antenna, not the survey marker, so I need to subtract the difference, which I got by adding the length of the string and bolt to the height of the receiver, in this case 1.49 meters.

latitude              =   40 05 14.8870 N
longitude           = 105 09 01.7374 W
ellipsoidal height = 1614.393 – 1.49 = 1612.903 meters

Comparing to the published survey numbers, we are not very close.  It’s a little less intuitive to compare degrees of latitude and longitude but the height is obviously off by almost a meter, so something is wrong.

Usually this kind of large error suggests there may be a mismatch between coordinate systems.  Sure enough, when I double-checked the base location that I had entered into the D302-RTK, I realized that I had used WGS84 coordinates, not NAD83.  As I mentioned earlier, I had computed the base position using both RTK solutions from nearby CORS stations as well as online PPP solutions.  RTK solutions are always relative to the base station so will be in the same coordinate system as the base.  CORS station locations in the U.S. are normally specified in NAD83, so using those coordinates would have been fine, but instead I had used coordinates from the online PPP solution, which were in WGS84.  Since the RTK solution is in the coordinate system of the base station, my results are in WGS84 coordinates, while the published survey coordinates are in NAD83.

Fortunately there is another easy-to-use tool from the U.S. National Geodetic Survey that will translate from one coordinate system to another.  Entering the WGS84 coordinates from the measurement and translating to NAD83 gave the following:

survey3

You can see the WGS-84 coordinates on the left and the NAD83 translations on the right.  The translated coordinates from above are:

latitude                =   40 05 14.86828 N
longitude             = 105 09 01.68627 W
ellipsoidal height = 1613.759 meters

Compare these to the marker coordinates we got from the NGS website earlier:

latitude              =   40 05 14.86880 N
longitude           = 105 09 01.68689 W
ellipsoidal height = 1613.737 meters

That’s looking much better.  The heights now differ by only about 2 cm, well within the expected vertical accuracy given the fact that my base station location was not exact, both antennas were uncalibrated, and my tripod setup was a little imprecise.

How about latitude and longitude?  The error in latitude is 0.00052 arc seconds and in longitude is 0.00062 arc seconds.  Those seem small, but unless you are a professional surveyor these numbers are probably not very meaningful to you; they certainly are not to me.

Meters would be much easier to interpret.  I usually use a free Matlab geodetic toolbox, available on the Matlab File Exchange, to do these sorts of conversions, but this time let’s do it by hand.

At the equator, one arc second of longitude or latitude is approximately equal to the circumference of the earth divided by 360 degrees, then by 60 minutes/degree, then by 60 seconds/minute.  A quick Google search finds that the equatorial circumference of the earth is about 40.07 million meters.  Dividing that by 360*60*60 gives 30.92 meters per arc second.  This is not exact but is good enough for our exercise.  Arc seconds of latitude remain nearly constant with location, but arc seconds of longitude shrink by the cosine of the latitude as we approach the poles.  I am located at approximately 40 degrees latitude, so 30.92*cos(40 degrees) = 23.68 meters per arc second of longitude.

The error in latitude in our measurement from above was 0.00052 arc seconds.  Multiplying this by 30.92 meters/arc second gives an error of 0.016 meters, or 1.6 cm.  For longitude, multiplying 0.00062 by 23.68 meters/arc second gives 0.015 meters, or 1.5 cm.
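
Here is the same arithmetic as a short script (written for this writeup, not part of the original measurement), using the rough 30.92 meters-per-arc-second figure from above:

    import math

    M_PER_ARCSEC_EQUATOR = 40.07e6 / (360 * 60 * 60)   # ~30.92 m per arc second

    lat_deg = 40.0                # approximate site latitude
    lat_err_arcsec = 0.00052      # latitude difference from the datasheet comparison
    lon_err_arcsec = 0.00062      # longitude difference from the datasheet comparison

    m_per_arcsec_lat = M_PER_ARCSEC_EQUATOR
    m_per_arcsec_lon = M_PER_ARCSEC_EQUATOR * math.cos(math.radians(lat_deg))

    print(f"latitude error:  {lat_err_arcsec * m_per_arcsec_lat:.3f} m")   # ~0.016 m
    print(f"longitude error: {lon_err_arcsec * m_per_arcsec_lon:.3f} m")   # ~0.015 m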

So the total error in this exercise was 2.2 cm of vertical error and about 2.2 cm of combined horizontal error.  Not too bad for two rubber bands, a piece of string, a bolt, and a wood clamp (as well as a nice low-cost receiver)!

Although I used the D302-RTK for this experiment, I believe the results would be very similar for any solution using a pair of M8T receivers and RTKLIB.

Post-processing ComNav receiver data with RTKLIB – a more in-depth look

In a couple of my recent posts,  I showed that with the latest firmware from ComNav, the internal RTK solution was very good even for a quite challenging moving rover test, significantly better than a post-processed solution using RTKLIB.   To recap, here are the results side by side, ComNav internal RTK solution on the left, demo5 RTKLIB post-processed solution on the right.  The internal ComNav solution had a 96% fix rate while the RTKLIB solution had only a 68% fix rate.  In the plots below of a drive around the residential streets near my house, green represents a fixed solution and yellow is a float solution.

comnav2_1

The RTKLIB solution shown is a forward solution to make it a more direct comparison to the internal solution.  Re-running it as a combined solution helps a little, but still only increases the fix rate to 71%.  Fix rates are only meaningful if the number of false fixes is small, but as I showed previously, the two solutions match quite closely where both are fixed.

Not only is the ComNav RTKLIB solution inferior to the internal ComNav solution, it is also inferior to an RTKLIB solution from a pair of much lower cost single frequency u-blox M8T receivers fed with the same antenna signals as the ComNav receivers.  As I showed in my last post, the RTKLIB M8T solution had an 88% fix rate and again matched the ComNav internal solution very closely when both were fixed.

So why does RTKLIB struggle so much with the ComNav data?  My suspicion is that there is nothing inherently lower quality about the data; it’s just that its flaws are different from the flaws of the u-blox data, and that RTKLIB, particularly the demo5 version, has evolved to handle the flaws of the u-blox data better than it has the ComNav data.  Specifically, what I see is that the u-blox receiver is much better at flagging lower quality observations than ComNav (or other receivers) are.  This puts the burden on the solution code to appropriately handle the lower quality observations, something RTKLIB is not particularly good at.

To test this theory, I ran an experiment with a modified version of RTKLIB.  Usually I like to use the demo5 code for my experiments since it’s available to everyone, but for this exercise I used some experimental code I have been working on for a while now that helps RTKLIB handle a wider range of measurement quality.  This code doesn’t do anything fundamentally different from the current demo5 version of RTKLIB: no wide-lane or ionospheric-free linear combinations, or any other tricks that are only available with dual frequency measurements.  All it does differently is a better job of rejecting or de-weighting lower quality measurements and a more comprehensive search of the integer ambiguity space to find a clean set of ambiguities to use for a fix.
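
Purely as an illustration of what “de-weighting lower quality measurements” means (this is not the experimental code itself), one common approach is to inflate the measurement variance for low-elevation or low-SNR observations before they enter the Kalman filter, along these lines:

    import math

    def obs_variance(elev_deg, snr_dbhz, base_std=0.003,
                     snr_threshold=35.0, snr_penalty=0.3):
        """Illustrative variance model for a carrier-phase observation (m^2).

        Low elevation angles and weak SNR both inflate the variance, so the
        filter trusts those observations less instead of rejecting them outright.
        """
        std = base_std / max(math.sin(math.radians(elev_deg)), 0.1)
        if snr_dbhz < snr_threshold:
            # add a penalty that grows as the SNR drops below the threshold
            std *= 1.0 + snr_penalty * (snr_threshold - snr_dbhz)
        return std ** 2

    print(obs_variance(60.0, 45.0))   # high elevation, strong signal -> small variance
    print(obs_variance(15.0, 30.0))   # low elevation, weak signal -> heavily de-weighted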

My thinking is that if RTKLIB code with just these changes can match the internal ComNav solution then that would give me confidence that the core mathematical algorithms in RTKLIB are fundamentally sound and capable of handling dual frequency solutions.   If not, then maybe RTKLIB requires some more significant changes to the solution algorithms to take better advantage of the dual frequency measurements.

When I started working with RTKLIB a couple of years ago I found similar performance issues when processing u-blox data, and found that relatively minor code changes could make a big difference.  RTKLIB is a fantastic resource but sometimes it is more of an academic toolbox than a true engineering solution.  This is not surprising or unexpected, given that the developers are in fact using it for academic research.

So how did the modified code work?  Here’s a forward solution with the modified RTKLIB code on the left and the difference between this solution and the ComNav internal solution on the right.  The 16.27 meter difference in the vertical axis is due to different handling of the offset between geoid and ellipsoid as I described in my last post.

comnav2_2

You can see a big improvement.  The fix rate has increased from 68% to 99% and the vast majority of the differences between the two solutions are still less than 2 cm in the horizontal axes and 4 cm in the vertical axis.  Again, these are combined errors of both solutions, so I consider these numbers quite good.

Of course, the code improvements will affect the M8T single frequency solution too, so I also re-ran that solution to complete the comparison.  And just to make things a little more interesting, I also re-ran the ComNav solution with the modified code using only the L1 observations.  I did this to try and separate how much benefit is coming from the higher cost receiver in general and how much is coming from the L2 measurements specifically.

Here’s the M8T solution on the left with a 94.4% fix rate, and the ComNav L1-only solution on the right with a 98.9% fix rate.  The most challenging measurement environment was in the older neighborhood with larger trees, which shows up as the small squares at the far left.  You can see that the ComNav L1 solution is noticeably better than the u-blox M8T solution in this area.

comnav2_3

The two receivers did not start collecting data at exactly the same time, so I did not include the time to first fix in this comparison.  Removing that from the ComNav L1/L2 solution above increases its fix rate from 99.0% to 99.1%, slightly better than the 98.9% achieved by the L1-only solution.  The differences in both solutions relative to the internal ComNav solution appeared similar to the errors plotted above for the RTKLIB L1/L2 solution.

So, switching from the u-blox receiver to the ComNav receiver but using only L1 for both solutions improved the fix rate from 94.4% to 98.9% .  Adding L2 to the ComNav solution then increased the fix rate from 98.9% to 99.1%.

This data set is the most challenging I’ve run to date, and so I consider all of these results as quite good.  To provide some level of calibration, here are the observations for the ComNav rover.  You can see there are a significant number of cycle slips and missing data.  There are even more in the M8T observations, but the M8T data does include the Galileo and SBAS satellites as well.

comnav2_4

I’ve covered several different results very quickly, so here is a quick summary of all the experiments.

ComNav internal solution:                     Fix rate = 96%
ComNav demo5 RTKLIB L1/L2 solution:           Fix rate = 68%
M8T demo5 RTKLIB L1 solution:                 Fix rate = 88%

ComNav modified RTKLIB L1/L2 solution:        Fix rate = 99.1%
ComNav modified RTKLIB L1 solution:           Fix rate = 98.9%
M8T modified RTKLIB L1 solution:              Fix rate = 94.4%

So what can you conclude from this experiment?  This is what I get from it:

1) The existing core RTKLIB algorithms are capable of high quality dual frequency solutions if the flaws in the observations are properly handled.

2) The current demo5 RTKLIB code is better matched to the M8T observations, so the opportunity for improvement is smaller than it is for the ComNav observations but there is still some opportunity even with the M8T.

3) A significant fraction of the improvement in the ability to maintain a fix on a moving rover between M8T and ComNav is likely not because of the additional L2 measurements, but simply because the overall quality of the more expensive receiver is higher.  The dual frequency measurements likely have a more significant advantage when it comes to faster first fixes and longer baselines.

The code is experimental only at this point and needs more work before it is ready for release but I do hope to make some form of it available eventually.

Further comparison between ComNav and u-blox RTK solutions

In my last post I showed that the new 3.6.5 ComNav firmware greatly improved the internal ComNav real-time RTK solution.  However, there was still a significant discrepancy in the U-D axis results between the ComNav solution and an RTKLIB/M8T solution.  Here are the differences between the ComNav internal solution and an RTKLIB solution for two u-blox M8T receivers connected to the same antennas as the ComNav receivers.  In this experiment, the base station was an antenna mounted on my roof and the rover was an antenna mounted on top of my car while driving around a residential neighborhood.

k708_6

As you can see, the vast majority of the measurements in the horizontal axes (E-W and N-S) differ by less than two centimeters.  This represents the combined error of both solutions, the error in either solution by itself is smaller, so I consider this a good match.  In the U-D axis though, the two solutions often differ by over 10 cm.  The vertical errors will generally be roughly double the horizontal errors, but they should not be this large, so this warrants further investigation.

In the above plot, the errors are plotted as a function of time, and in this perspective, the errors appear to be a random drift.  I noticed however that the largest errors seemed to occur when the rover was most distant from the base, so I plotted the U-D error as a function of distance to base instead of time.  This is what it looks like plotted this way.

geoid1

Clearly, there is a very strong linear relationship between baseline and error.   This immediately made me suspect an issue with coordinate systems.

I had previously seen a somewhat similar problem in the vertical axis when I collected data from a sailboat and found that one side of the lake was nearly a meter higher than the other side!  In that case the issue was simply that the plots were in ENU coordinates, which represent a plane tangential to the surface of the earth rather than a surface that follows the curvature of the earth.  The surface of the lake of course follows the curvature of the earth, so it will not appear as equal height when plotted on a flat plane.  With a local base station, the differences between the two tend to be small, but in that case I was using a fairly distant base, which magnified the difference.
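
To put a rough number on that effect: the drop of a curved earth below a local tangent plane grows approximately as d²/(2R), where d is the horizontal distance and R is the earth’s radius.  A quick back-of-the-envelope sketch (my own illustration, not from the original analysis):

    EARTH_RADIUS_M = 6371000.0

    def curvature_drop_m(distance_m):
        # approximate height of the tangent plane above the curved surface
        return distance_m ** 2 / (2.0 * EARTH_RADIUS_M)

    for d_km in (1, 3, 10):
        print(f"{d_km:>2} km baseline: {curvature_drop_m(d_km * 1000.0):.2f} m")
    # ~0.08 m at 1 km, ~0.7 m at 3 km, ~7.8 m at 10 km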

However, in this case, both solutions were saved in LLH coordinates, and although converted to ENU coordinates by RTKPLOT, any effect from the coordinate transformation should be equal for both solutions.   So that’s not the answer.

After a little digging into the ComNav documentation I found that it reports solution heights as geodetic height, not ellipsoidal height.  This means they are relative to a geoid model of the earth, rather than a simpler ellipsoid model.  The geoid model approximates a surface of equal gravitational potential at all points.  The ellipsoidal model, on the other hand, is simply an ellipsoid that approximates the shape of the earth.  Of course, it would be too simple if there were only one ellipsoid model and one geoid model, and in fact there are multiple versions of both types of models.  Fortunately RTKLIB and ComNav both default to using the WGS84 ellipsoid and the EGM96 geoid, so this simplifies things at least a little bit.

In RTKLIB, the solution height can be output in either ellipsoidal or geodetic form using the “out-height” config parameter.  I usually set it to ellipsoidal height, and that is what it was set to for this experiment, so this is at least part of the answer.  However, the geoid model and the ellipsoid model differ by about 16.27 meters at my location, while the two solutions only differed by about 0.1 meters, so the explanation is a little more complicated than just changing this setting.
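
For reference, the two height types are related by h_ellipsoidal = h_geodetic + N, where N is the geoid undulation (about 16 meters in magnitude at my location, per the number above), and the RTKLIB setting in question looks something like this in a config file (exact value strings may vary slightly between RTKLIB versions):

    out-solformat      =llh            # output solution as lat/lon/height
    out-height         =ellipsoidal    # ellipsoidal or geodetic (geoid-referenced) height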

But first let’s start by doing that.  Here is the difference in U-D measurements after recalculating the RTKLIB solution with the “out-height” config parameter set to “geodetic”.

geoid2

Interesting, with both solutions calculated in geodetic mode, the time varying error has disappeared completely, but now the DC offset between the two models (16.27 meters) has appeared instead!

This appears to be caused by how the two solutions interpret the base station location.  It seems that RTKLIB interprets the base station height as ellipsoidal regardless of the output format of the solution.  This seems reasonable; otherwise you would have to change the base station location when you changed the format of the solution.  ComNav, on the other hand, appears to always interpret the base station height as geodetic, which isn’t wrong either, as long as you are aware of it.  I can eliminate the difference between the solutions either by specifying the ComNav base station location with geodetic height, or alternatively, the ComNav “UNDULATION” command has a parameter to specify an offset to the geoid model, which would probably work as well.

So, with this change, I now have a very good match between two quite independent solutions.  The first solution is using ComNav receiver hardware and ComNav solution algorithms with GPS and GLONASS satellites using the L1 and L2 measurements.  The second solution is using u-blox M8T receiver hardware and RTKLIB solution algorithms with GPS, GLONASS, Galileo, and SBAS satellites using only the L1 measurements.   Although it is not impossible that there is some systematic error that affects both solutions, I do find it quite encouraging to see such good matches between two solutions with so many differences in inputs.

If you are interested in reading a more detailed discussion of the challenges of using geoid models in GNSS solutions as well as links to discussions of similar challenges with horizontal coordinate systems, see this article.

Improved results with the ComNav K708 receiver

In a recent post, I compared a pair of u-blox M8T single frequency receivers to a pair of ComNav K708 dual frequency receivers.  For static rovers, the more expensive ComNav receivers showed a definite advantage with much faster times to first fix.  I was less impressed, however, with the ComNav results for a moving rover, especially since I have read several very positive reviews of the ComNav K501G receiver.  The K708 is supposed to be a newer model that is similar in capability and price to the K501G.

My comparisons of the internal ComNav real-time solution to an M8T RTKNAVI real-time solution showed very little difference between the two.  Digging into the data a little deeper, the results were actually more disappointing.  If I cut off the beginning and end of the data where the rover wasn’t actually moving, then the comparison between the two solutions looked like this, with the ComNav receivers on the left and the M8T receivers on the right.

k708_1

Fix percentage for the ComNav receivers was 73.3% and for the M8T receivers it was 84.5%.  Comparing the two solutions where they were both fixed showed very little difference between the two, so I think they were roughly equivalent in accuracy where they had a fix, but the single frequency M8T solution was fixed a higher percentage of the time.

I sent the results to ComNav and asked if they had any feedback or suggestions.  I got a very detailed answer in reply and a new firmware to try.  My receivers were running firmware version 3.5.7 and they had apparently also seen similar issues with this code.  They sent me firmware version 3.6.5 in a Windows executable form that made it very easy to upload to the receivers.

I then re-ran the previous experiment comparing the two receiver pairs with a moving vehicle, shared antennas, and a short baseline.  I was very pleased to see that the ComNav results were significantly better this time!  I actually had to deviate from my normal testing route to find some more challenging roads, since I was getting a near 100% fix rate on both the real-time internal ComNav solution and an RTKNAVI M8T real-time solution.  Fortunately I was able to find some narrower streets with larger trees that differentiated the two solutions.  Here are both solutions for the full route, ComNav on the left and M8T on the right.  The M8T solution was similar to the previous run with an 87.8% fix rate, but the ComNav fix rate jumped from 73.3% to 95.8%.

k708_2

Focusing on the more challenging part of the route showed an even bigger difference, with the ComNav fix rate at 90.0% (left) and the M8T fix rate at 61.1% (right).

k708_3

Comparing the two solutions for the fixed points showed a good match everywhere outside of the most challenging area, so I don’t believe there were any significant false fixes in that part of either solution.  In the more challenging section there were a couple of what looked like false fixes in the M8T data; the longest one lasted about 6 seconds.  They are visible as the shorter green blips in the U-D axis of the right plot.

Here are a couple of spots from Google Maps images that show what the more challenging environment looked like.  Of course, the leaves are off the deciduous trees now, so it is a little less challenging than when these pictures were taken.  I am surprised, though, that the differences between summer and winter are not as great as I would have expected.

k708_7
k708_6

Post-processing the M8T data using combined mode (running the Kalman filter forward and backward over the data) does help some.  Running a post-processed solution this way increased the overall fix rate from 73.3% to 86.9% and eliminated any false fixes over 1 second.  Better, but still not as good as the ComNav solution.

Here is the difference between the real-time ComNav internal solution and the RTKLIB post-processed M8T solution for the fixed points.  This is the combined error of the two solutions, so the error in each individual solution will be less than this.  The two solutions are quite independent given that they were computed with different software, measured with different receivers, and used different sets of satellites (GPS/GLO L1+L2 vs GPS/GLO/GAL/SBAS L1).  The E-W and N-S errors look quite small; the U-D error is a little larger than I would like to see, but it is difficult to know if this error is spread equally between the two solutions or dominated by one of them.

k708_6

I am not used to seeing this type of low frequency error in the U-D axis.  If I compare the U-D axis between the post-processed M8T and ComNav solutions (different receivers, different satellite sets), I do not see this slow drift of +/-0.1 meters.  I’ve plotted it below.  This makes me a little suspicious that it is coming from the ComNav solution, but it is far from convincing proof.

(Note 2/26/18:  It turns out that this difference in the U-D axis is because the ComNav solution uses geodetic height and the RTKLIB solution in this case was set for ellipsoidal height.  The mean difference between geodetic and ellipsoidal height cancelled out because the base location was specified in ellipsoidal height, but the variation between the two still appears as error between the two solutions.)

k708_7

To process the ComNav raw observations with RTKLIB I had to make the same edits to the headers of the observation files that I described in my previous post.  This is to prevent RTKLIB from throwing away the L2C data.  After making these edits and running a combined solution on the data, I get this solution.

k708_5

Unfortunately, with only a 68.5% fix rate, this is not nearly as good as the ComNav internal solution.  I hope to investigate this further to see if there are any improvements to the config settings or to the code that might help.  For now, though, I would not recommend using RTKLIB to post-process raw ComNav data, at least for more challenging data sets like this one.

However, if you can work with the real-time solution, then I will say that this was a significant milestone in that it was the first moving rover experiment where I have compared a low-cost dual frequency receiver to an M8T and found the dual frequency receiver to be significantly better!

I believe that ComNav sells software for post-processing their data so that might be an option as well for those that don’t want the limitations of real-time data processing.

That’s it for the general analysis.  For anyone interested in the details of the experiment setup I should mention a couple of things.  I described the setup in more detail in my previous post, and for the most part the setup here was the same.  However, this time I did change the default dynamics mode from “foot” to “land” using the command “rtkdynamics land”.  This setting is intended for land vehicles with speeds up to 110 km/hr and seemed more appropriate for my experiment.  The manual says this setting is only for advanced users and recommends most users leave it at the default setting of “air”, but my receivers seemed to default to “foot”.  Also, the command I used previously to set the RTK quality level, “rtkquality normal”, has been changed in the new firmware to “rtkqualitylevel normal”.  This change has not made it into the user manual yet.  ComNav recommends leaving this set to “quick” for moving rovers and only using “normal” if needed to avoid false fixes for static rovers.  For this experiment I left it set to “normal”, mostly because I forgot about it before running the experiment.

The new firmware is not yet available on the ComNav website but they tell me it will be available soon.  In the meantime, you can email them to request it.