Comparison of the u-blox M8T to the u-blox M8P

I recently acquired a couple of u-blox C94-M8P receivers and so I am now able to do a direct comparison between the u-blox M8T and M8P modules, something I’ve been wanting to do for a while.

I believe the hardware between the two receivers is identical; the differences are only in firmware.  The most significant difference in firmware is that the M8P includes an internal RTK solution.  In order to squeeze this into the firmware, however, they had to remove support for the Galileo and SBAS constellations, so this is another fairly significant difference.

Cost, of course, is also different.  The M8T receivers I usually use are available from CSGShop for $75.  CSG sells an M8P receiver for $240 or you can buy a kit with two C94-M8P eval boards from u-blox for $400.  The C94-M8P boards also include on-board radios.

In this post, I will describe how I used RTKLIB and a cell phone hot spot to connect the M8Ps rather than using the internal radios.  I will also compare the RTKLIB solutions for a pair of M8T receivers with the internal u-blox RTK solution for a pair of M8P receivers. I hope to compare the RTKLIB solution to the internal solution for a pair of M8Ps in a future post.  If you are mostly interested in the M8T to M8P comparison results, you can jump directly to the end of this post.

To set up the experiment I first connected both a C94-M8P receiver and a CSG M8T receiver to the GPS survey antenna on my roof using a signal splitter.  I then connected both receivers to a laptop with USB cables.  I configured the M8P using u-center following the instructions in the C94-M8P Setup Guide with a few modifications.  First of all, I normally use an NTRIP caster over a cell phone hot spot for my real-time data link, so I didn’t want to bother with the on-board radios.  I disabled the radios following the instructions in the setup guide.  This involves removing a plastic cover, soldering a wire to a capacitor, drilling a hole in the plastic cover to run the wire, then plugging the other end into a pin on the connector.  It is also possible to do this by connecting an external voltage and ground to the correct pins on the connector, but a simple jumper option would have been a lot more convenient.

The setup instructions are intended to run the setup and solution output messages over the USB port while running the RTCM raw observation messages between receivers over the UART interface.  The UART interface is internally connected to the radios.  Since I am not using the radios, I wanted to run all communication over the USB interface to avoid extra cables.  To do this, I disabled the UART interface and configured the USB interface for NMEA, UBX, and RTCM messages using the UBX-CFG-PRT configure command from u-center.  Following the C94-M8P setup instructions, I then specified a fixed base station location with the UBX-CFG-TMODE3 command and enabled the 1005, 1077, 1087, and 1230 RTCM messages, which include the raw observations, base station location, and GLONASS biases.
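
Just for reference, the commands that u-center sends when you do this are ordinary UBX binary packets, and it is straightforward to generate them from a script instead if you want to automate the receiver configuration.  Here is a minimal Python sketch (using the pyserial package) that frames an arbitrary UBX message with its two-byte Fletcher checksum and, as a hypothetical example, sends a UBX-CFG-MSG command to enable the RTCM 1005 output on the USB port.  The class/ID and payload constants are from my reading of the u-blox M8 interface description and the COM port is just a placeholder, so verify everything against your own setup before relying on it.

import struct
import serial  # pyserial

def ubx_frame(msg_class, msg_id, payload):
    # UBX packet: sync chars, class, ID, little-endian length, payload, checksum
    body = struct.pack('<BBH', msg_class, msg_id, len(payload)) + payload
    ck_a = ck_b = 0
    for b in body:
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return b'\xb5\x62' + body + bytes([ck_a, ck_b])

# Hypothetical example: UBX-CFG-MSG (0x06 0x01) enabling RTCM 1005 (class 0xF5, ID 0x05)
# at a rate of 1 on the USB port (rate bytes are I2C, UART1, UART2, USB, SPI, reserved)
payload = bytes([0xF5, 0x05, 0, 0, 0, 1, 0, 0])
with serial.Serial('COM3', 9600, timeout=1) as port:   # placeholder port name
    port.write(ubx_frame(0x06, 0x01, payload))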

I set up the base station M8T receiver as I usually do, to output the raw observations using the UBX-RXM-RAWX messages.

I then started two copies of the RTKLIB STRSVR app on the base station laptop and streamed both sets of base observations to the free RTK2GO.com community NTRIP caster, as I’ve described in an earlier post.  With this setup, I can receive the NTRIP streams anywhere I can get cell phone coverage, using a cell phone hot spot and a laptop connected to the two rovers, so I can test over much longer distances than the radios would allow.
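
I normally let STRSVR handle the NTRIP upload, but for anyone curious about what is actually happening underneath, here is a rough Python sketch of the NTRIP rev1 "SOURCE" upload that community casters like RTK2GO accept.  The caster address, mount point, password, and serial port are placeholders, and the handshake details can vary between casters, so treat this as illustrative only rather than a drop-in replacement for STRSVR.

import socket
import serial  # pyserial

CASTER, PORT = 'rtk2go.com', 2101            # placeholder caster address and port
MOUNTPOINT, PASSWORD = 'MY_MOUNT', 'MY_PASSWORD'

receiver = serial.Serial('COM4', 115200, timeout=1)   # base receiver output stream
sock = socket.create_connection((CASTER, PORT))
request = 'SOURCE %s %s\r\nSource-Agent: NTRIP simple_server\r\n\r\n' % (PASSWORD, MOUNTPOINT)
sock.sendall(request.encode('ascii'))
if b'ICY 200 OK' not in sock.recv(1024):
    raise RuntimeError('caster rejected the mount point or password')
while True:
    data = receiver.read(1024)               # forward whatever the receiver sends
    if data:
        sock.sendall(data)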

One thing to be aware of is that there are separate versions of the receiver firmware for the M8P base and M8P rover.  I didn’t realize this at first.  It turned out that both of my receivers came loaded with the rover firmware, so I spent an embarrassingly long time unsuccessfully trying to configure one of them as a base station.  Once I realized the problem, I was able to fairly quickly download the base station firmware from the u-blox website and use u-center to load it onto the receiver.

Next, I attached the two rover receivers to a laptop using USB cables and to a single antenna using a signal splitter.  To make the results more comparable to some of my other recent experiments I used the same GPS-500 L1/L2 antenna from SwiftNav that I used for those experiments.  This is a more expensive antenna than generally would be used with these receivers but I wanted to avoid mixing differences between antennas with differences between receivers.

The M8P rover does not need much configuration since it will by default process any incoming RTCM messages as inputs to an RTK solution.  I did configure the USB port to support NMEA, UBX, and RTCM messages just as I did for the base.  For the static rover test, I left the position message output rate at the default 1 Hz, but increased it to 5 Hz for the dynamic rover test.  I suspect this only affects the message output rate, and not the solution itself, since the solution is probably run at a faster internal rate.

The only other setting I modified in the M8P rover receiver was the dynamic model of the navigation mode using the UBX-CFG-NAV5 message.  I set it to “Pedestrian” for my static rover test and “Automotive” for my moving rover test.

I then started another copy of STRSVR on the rover laptop to stream the M8P base station data from the NTRIP server to the M8P rover over the USB COM port.

At this point, getting the solution output messages from the M8P rover is a little tricky.  Only a single user can attach to a Windows COM port at one time and the STRSVR app is not fully bi-directional.  One solution might be to use u-center to stream the NTRIP data and receive the rover messages but I did not verify if this is possible.

Instead, I used a feature of STRSVR that allows you to pipe the data coming from the other direction to a TCP/IP port which can then be processed by either another copy of STRSVR, or plotted with RTKPLOT, or used as an input to RTKNAVI, or all three at once.  Below, on the left, I show how I modified the STRSVR setup that streams the base data from the NTRIP client to the rover by checking the “Output Received Stream to TCP Port” and selecting port 1000.  On the right, I show the STRSVR setup necessary to stream this incoming data from the rover to a file.  This is a convenient work-around for the problem of only being able to connect one user at a time to a COM port.

strsvr2
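
If you would rather not run a second copy of STRSVR just to capture the rover output, anything that can open a TCP connection will work as the second client.  Here is a minimal Python sketch that connects to the TCP port configured above (port 1000 on the local machine in my setup) and appends whatever arrives to a file; the port number and file name are placeholders for whatever you configured.

import socket

# Connect to the "Output Received Stream to TCP Port" port configured in STRSVR
with socket.create_connection(('localhost', 1000)) as sock, open('rover_output.log', 'wb') as f:
    while True:
        data = sock.recv(4096)
        if not data:            # STRSVR closed the connection
            break
        f.write(data)           # raw NMEA/UBX bytes coming back from the rover
        f.flush()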

In my case I configured the rover receiver to send both the M8P internal solution position using NMEA messages and the raw observations using UBX messages so that I could post-process the raw observations later with RTKLIB.  In general though, it is only necessary to send the NMEA messages.  I configured the rover to send GGA and RMC NMEA messages, but others will work as well.

For the M8Ts, I ran a real-time RTKLIB solution using RTKNAVI and my normal kinematic solution configuration for both static and moving rover tests.  As usual, I used the demo5 version of RTKLIB for this test.

So, how did they compare?  First of all, a few differences to consider between the solutions.   The M8P solution only uses GPS and GLONASS satellites since the SBAS and Galileo satellites have been disabled in the M8P firmware as I mentioned earlier.  This should give the M8T solution an advantage.  The M8P also has limited processing power relative to the laptop running the RTKLIB solution, so this may also give the M8T solution an advantage.  On the other hand, I’m sure that u-blox has lots of smart people that know all the internal details of the hardware which should give the M8P solution an advantage.

For my first comparison, I did a static rover test with the antenna located on a tripod a few meters from the house and partially obstructed by trees.  To avoid different starting conditions between the two receivers, I connected both receivers to the antenna and waited till they both converged to fixed solutions before starting the test.  I then disconnected the antenna for about 30 seconds, and then reconnected it.  This will cause both receivers to lose lock with all satellites and the solutions will have to re-converge from scratch, so this should be a fair comparison.  I disconnected and reconnected the antenna several times to compare the times to re-acquire fix and the accuracies of the fixes.

The results are plotted below, the M8P internal solution on the top, and the real-time RTKLIB M8T solution on the bottom.  I re-connected the antenna five times.  Once, the internal M8P solution re-acquired fix slightly quicker than the RTKLIB M8T solution, once the M8T re-acquired fix slightly quicker than the M8P.  The other three times, the M8T re-acquired fix significantly quicker than the M8P.   The M8P times to re-acquire varied from 52 to 220 seconds.  The M8T took from 50 to 76 seconds.

The M8T RTKLIB solutions had no false fixes; the M8P internal solutions had false fixes with errors of several meters lasting over five seconds in two of the five tests.  The false fixes are circled in red in the plot below.

m8p1

The E-W and N-S axes matched within one centimeter between the two solutions.  The U-D axis at first appears to differ by 21.5 meters between the two solutions but this is because I specified the RTKLIB vertical solution to be relative to the ellipsoid, while the M8P solution is relative to the geoid.  If I subtract the geoid to ellipsoid offset (available in the GGA message from the M8P), then the two solutions match in the vertical axis as well.
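
For anyone who wants to make the same correction, the geoid separation is reported directly in the GGA sentence: field 9 is the height above the geoid (mean sea level) and field 11 is the geoid-to-ellipsoid separation, so the ellipsoidal height is just the sum of the two.  Here is a small Python sketch of that conversion; the example sentence and its checksum are made up purely for illustration.

def gga_heights(gga_sentence):
    # Return (orthometric, ellipsoidal) height in meters from a NMEA GGA sentence
    fields = gga_sentence.split(',')
    msl_height = float(fields[9])      # height above mean sea level (geoid)
    geoid_sep = float(fields[11])      # geoid height relative to the ellipsoid
    return msl_height, msl_height + geoid_sep

# Made-up example: 1610.0 m above the geoid with a -17.0 m geoid separation
example = '$GPGGA,123519.00,4005.25,N,10509.03,W,4,12,0.9,1610.0,M,-17.0,M,1.0,0000*00'
print(gga_heights(example))            # -> (1610.0, 1593.0)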

Overall, I would have to say that in this particular run the RTKLIB M8T solution outperformed the M8P internal solution by a fairly significant amount.

One other observation about the M8P solution is that the precision reported (at least in the GGA message) is only one centimeter in the horizontal axes and 10 cm in the vertical axis.  I didn’t see any obvious way to get outputs with better precision than this, at least with the NMEA messages.  [Note 8/16/18: It turns out that the UBX-CFG-NMEA message can be used to enable high precision mode to fix this.  Thanks to Charles Wang for pointing this out in a comment]

Next I mounted the same antenna on the roof of my car and drove around the neighborhood.  The sky view varied from nearly unobstructed to moderate tree canopy.  Here are the results of that test, the M8P internal RTK solution on the left, and the M8T real-time RTKLIB solution on the right.

m8p2

The fix rate for the M8P solution was 52.4%.  The M8T solution was noticeably better with a 95.2% fix rate.

Here’s a plot of the difference between the two solutions (green indicates both solutions are fixed, yellow indicates that at least one is float).  The U-D axis differences are fairly large, mostly because of the 10 cm vertical precision of the M8P position messages.  There is also a fairly large discrepancy between the two solutions around 22:55:00.  This corresponds to a 15 second gap in base station data, probably due to a loss of cell phone reception.  This seems to be a large error for such a short base outage, but I was not able to narrow down the cause beyond this or determine which receiver contributed more to this combined error.  This probably deserves more investigation in a future post.

m8p2d

Again, the real-time RTKLIB M8T solution appeared in this test to be better than the internal M8P solution.

The most likely reason the M8T is getting a better solution is that it is using more satellites.  Here are the rover raw observations for the M8P on the left, and the M8T on the right.  With the GPS and GLONASS constellations, both receivers are seeing 13 satellites.  With Galileo and SBAS, the M8T is seeing an additional 7 satellites.  One of these Galileo satellites is not fully operational yet and so is not used in the solution, but that still gives 6 extra satellites, nearly 50% more than the M8P solution.

m8p3

If the extra satellites are what is improving the M8T solution, then running a solution with the M8T data without the SBAS and Galileo satellites should give a solution similar to the M8P solution.

This is easy to do, so let’s try it.  Here’s a post-processed solution for the M8T data using only the GPS and GLONASS satellites.

m8p4

Fix rate is 66.5%, reasonably close to the 52.4% fix rate from the M8P internal solution.  This suggests that a large part of the improvement for the M8T solution is coming from the additional satellites and not from any differences in the solution algorithms.

As in all my comparisons, this is intended to be one user’s initial experience, and not any kind of rigorous comparison test.  Please interpret these results with that in mind.  If you have experiences that differ from these I would be very interested to hear about them.

Glonass Ambiguity Resolution with RTKLIB Revisited

To get a high precision fixed solution in RTKLIB we need to resolve the integer ambiguities that come from the carrier phase measurements.  Resolving the integer ambiguities for the GLONASS satellites is more challenging than resolving them for the other constellations.  This is because, unlike the other constellations, the GLONASS satellites all transmit on slightly different frequencies.  This introduces an additional bias error in the receiver hardware.

These hardware biases are constant, are generally the same for all receivers from the same manufacturer, are proportional to carrier frequency, and are similar between L1 and L2.
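
To make the “per frequency channel” units used later in this post concrete: each GLONASS satellite transmits L1 on 1602 MHz plus an integer channel number k (roughly -7 to +6) times 562.5 kHz, so a bias that is proportional to carrier frequency shows up as a roughly constant offset per channel.  Here is a tiny Python sketch with a made-up 2 cm per channel bias, just to illustrate the scale of the effect across the constellation.

L1_BASE_HZ, L1_STEP_HZ = 1602.0e6, 562.5e3   # GLONASS L1 = 1602 MHz + k * 562.5 kHz
bias_per_channel_m = 0.02                    # made-up 2 cm/channel hardware bias

for k in range(-7, 7):
    freq_mhz = (L1_BASE_HZ + k * L1_STEP_HZ) / 1e6
    bias_m = bias_per_channel_m * k          # bias grows linearly with channel number
    print('k=%3d  L1=%9.4f MHz  bias=%+.3f m' % (k, freq_mhz, bias_m))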

In the demo5 version of RTKLIB, there are four choices for how to handle GLONASS ambiguity resolution (AR). I will cover all four briefly, but then focus on the “autocal” setting which I have enhanced in the most recent version (b29c) of the demo5 code.

Off:  If Glonass  AR is set to “Off”, then the raw measurements from the Glonass satellites will be used for the float solution but ambiguity resolution will be done only with satellites from the other constellations.  If you are not using the demo5 version of RTKLIB, this is usually your only choice when using receivers from different manufacturers for the rover and the base.  However, you are giving up a significant amount of information by ignoring the GLONASS ambiguities and so I would not recommend this setting if you are using the demo5 code, unless of course your receivers don’t support the Glonass satellites.

On:  If Glonass AR is set to “On” then RTKLIB will treat the Glonass ambiguities the same as the ambiguities from the other constellations and will not make any attempt to account for the additional hardware biases.  Use this setting if your base and rover receivers are from the same manufacturer, since in this case, the biases will cancel and can be ignored.  There are also some cases in which different manufacturers have equal or nearly equal biases as we will see later, in which case you can also use this setting.  This is your best solution for dealing with Glonass ambiguities.  I always try to use matched receivers for base and rover if possible.

Fix-and-Hold: This is an option I have added to the demo5 code for Glonass AR.  It is an extension to the “fix-and-hold” method used for other constellations, but instead of using the additional feedback to track the ambiguities, it uses it to null out the hardware biases.  I recommend this setting when using the demo5 code with unmatched receivers.  It takes advantage of the additional information in the Glonass ambiguities most of the time.  However, fix-and-hold is not enabled until after a first fix has been achieved, and so the Glonass ambiguities are not available until then.  This can mean longer time to first fix and less robustness compared to the “On” option, so don’t use this option for matched receivers.

Auto-cal:  This option adds additional states to the Kalman filter to estimate the receiver hardware biases as a function of carrier frequency, one state for L1, another for L2.  In previous experiments I had not had any success with it.  Recently, however, I discovered that if I adjusted the filter settings, it can be effective for a zero baseline case, where base and rover are both connected to the same antenna so that almost all other errors are completely cancelled.  With a little more experimentation I also found that for short baselines it can also be effective if the Kalman filter state is pre-set to something close to the final value before the solution is started.  It will then usually converge to the correct bias value.  However, there is currently no mechanism in the code to adjust any of these values, so I have not found this mode to be useful in its current implementation.

To make the auto-cal option more flexible, and hopefully more useful, I made a few modifications to it in the b29c code.  I added the capability to pre-set the initial state value and also to adjust the internal filter settings, specifically the initial variance and process noise for this state.  The units for the state, and hence for the initial value, are meters per frequency channel, and values are generally within +/-5 cm per channel.  I used some existing config parameters that are currently unused to reduce the amount of code I needed to change.  Unfortunately this means that the names are not as descriptive as they could be.  The new config parameters are:

pos2-arthres2 = relative GLONASS hardware bias in meters per frequency slot
pos2-arthres3 = initial variance of the GLONASS hardware bias states
pos2-arthres4 = process noise for the GLONASS hardware bias states

Bias values have been published for some of the most popular geodetic quality receivers but are generally not available for lower-cost or less popular receivers.  Here is a table of values from a paper published by Lambert Wanninger in 2011 for nine receiver manufacturers.

biases

I was able to verify these results for Trimble, Leica, and Novatel, but I found a much lower value for Septentrio so I suspect the biases may have changed in their newer receivers.

To demonstrate the modified autocal option, I will start with a zero baseline case between a ComNav receiver and a Tersus receiver.  It is easiest to measure the hardware biases in the zero baseline case because most other errors will cancel and the hardware biases will be the dominant error.  In this case, I have significantly reduced the initial variance setting from the original value of 1.0 to 1E-7 and increased the process noise from 1E-6 to 1E-3.

I have run the solution several times with the initial bias value set to different numbers between -0.05 and 0.06.  Here are the results for both L1 and L2.

biases1

The convergence occurs just after first fix is achieved.  If a fix is not achieved, then the state will not converge, as you can see above for the 0.06 example.  In this case, the initial value was too far from the correct value and prevented getting a fix.  As you can see, all the other cases converged towards a single value around -0.022, both for L1 and for L2.

Another way to visualize the error in the initial value is to look at the GLONASS residuals after first fix is achieved.  The plot below shows the GLONASS L1 carrier phase residuals for different initial values: 0.03 on the left, -0.05 in the middle, and what I believe is the correct value for this receiver combination, -0.022, on the right.

 

acal1

Here are the same plots for the L2 carrier phase residuals.

acal2

Through a slightly tedious process, I was fairly easily able to iterate the residuals down to near zero for different pairs of receivers in my possession.  Note that this gives me the relative difference in biases for each receiver pair, and not absolute values for each receiver, unlike Wanninger’s table, which is for absolute biases.

Extending the table to receivers used in nearby CORS stations is a little more challenging because the initial bias value needs to be fairly close to get a first fix and hence a convergence, but still possible if the base station is not too distant.   I found data sets that included CORS data from Leica, Novatel, Trimble, and Septentrio receivers.  Using the above procedure to iterate the residuals down to near zero, I was then able to extend my table and make the values absolute by choosing the unknown offset to make my bias pairs align with Wanninger’s table.  This is the resulting table I created.

ComNav     =  2.3 cm
Leica      =  2.3 cm
Novatel    =  2.3 cm
Septentrio = -0.3 cm
SwiftNav   = -0.2 cm
Tersus     = -0.1 cm
Trimble    = -0.7 cm
u-blox     = -3.2 cm

To generate an initial value for the bias state from this table for an RTKLIB solution, subtract the base station bias from the rover bias, then divide by 100 to convert from centimeters to meters.  This value can then be used to set the “pos2-arthres2” config parameter in the config file.  For the RTKPOST and RTKNAVI GUI option menus I have labeled this “Glo HW Bias”.
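
As a quick sanity check on that arithmetic, here is a small Python sketch that computes the pos2-arthres2 value for any rover/base pair using the table above.  The table values are simply the ones I listed, so the usual caveat applies: they may not match your particular receivers.

# Relative GLONASS hardware biases in cm, from the table above
BIAS_CM = {'ComNav': 2.3, 'Leica': 2.3, 'Novatel': 2.3, 'Septentrio': -0.3,
           'SwiftNav': -0.2, 'Tersus': -0.1, 'Trimble': -0.7, 'u-blox': -3.2}

def glo_hw_bias(rover, base):
    # rover bias minus base bias, converted from centimeters to meters
    return (BIAS_CM[rover] - BIAS_CM[base]) / 100.0

print('pos2-arthres2=%g' % glo_hw_bias('u-blox', 'Leica'))   # -> pos2-arthres2=-0.055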

To test this code on an independent set of data after generating the table, I used a data set recently sent to me by a reader.  It consists of a u-blox M8T receiver for the rover and a Leica receiver just a few kilometers away for the base, and was collected in Europe.  The rover position was static but I ran the solution in kinematic mode to make the solution a little more challenging and to make any errors in the solution more visible.

To generate the correct config value for RTKLIB I subtracted the Leica bias of 2.3 cm in the above table from the u-blox bias of -3.2 cm to get a relative bias between receivers of -5.5 cm, or -0.055 m.  I added “pos2-arthres2=-0.055” to the config file and then ran the solution four times, with pos2-gloarmode set to “off”, “fix-and-hold”, “autocal”, and “on”.  Although I left the bias value set for all runs, it is ignored unless gloarmode is set to autocal.

Here are the times to first fix, the number of satellite pairs used for the initial fix, and the number of satellite pairs being used for fix after 10 minutes.

GLO AR mode    Time to first fix    Sat pairs used for initial fix    Sat pairs used for fix after 10 min
OFF            4:10                 7                                 7
Fix&Hold       4:10                 7                                 11
Autocal        1:05                 14                                14
On             6:47                 14                                14

As you would expect, the time to first fix for gloarmode=”off” was the same as for ”fix-and-hold”, since “fix-and-hold” does not use the GLONASS satellites for the initial fix.  After 10 minutes it was still only including four of the GLONASS satellites in the ambiguity resolution, which was a little unusual; typically I would have expected more GLONASS satellites to be included.

With gloarmode=”autocal”, the time to first fix was reduced from 250 seconds to 65 seconds and the number of satellites included in the first fix increased from 7 to 14, both significant improvements.

The most surprising thing in this data is that when gloarmode was set to “on” it acquired a fix at all.  In many similar cases it will never get a fix.  The GLONASS carrier phase residuals after initial fix were very high though as can be seen below.  The left plot is with gloarmode set to “on”, and the right plot is with it set to “autocal”.

biases3

The ambiguity resolution ratio was also much higher when autocal was enabled, as can be seen below (yellow/green = autocal, olive/blue = on), which improves robustness.

biases2

The large residuals did not affect the solution position, as the two solutions did not differ by more than 2 mm at any time.  The autocal solution, however, is much more robust in the sense that it is less likely to lose fix.

Although I have found the results with autocal enabled are generally excellent with relatively short baselines (<10 km), I have found the results less encouraging for longer baselines (>25 km).  In these cases I have found that I often get better results with pos2-gloarmode set to “fix-and-hold” than I do with “autocal”.  I don’t understand exactly why this is, but I suspect that the fix-and-hold correction is more general and may be correcting for more than just the GLONASS hardware biases.

The code changes for this feature are included both in my Github repository and in the newest (demo5 b29c) executables available to download from the rtkexplorer website.   If you choose to experiment with this feature, please let me know if you find any errors in my table, or can add values for any additional receivers.

[Note 6/17/18:  I had an issue with uploading the executables to the website.  If you downloaded them prior to 6/17/18, please download again to get the updated version.]

Swiftnav experiment: Improvements to the SNR

In my previous couple of posts, I evaluated the performance of a pair of dual frequency SwiftNav Piksi Multi receivers in a moving rover with local base scenario.  I used a pair of single frequency u-blox M8T receivers fed with the same antenna signals as a baseline reference.

It was pointed out to me that the signal to noise ratio (SNR) measurements of the rovers were noticeably lower than the bases, especially the L2 measurements and that this might be affecting the validity of the comparison.  This seemed to be a valid concern so I spent some time digging into this discrepancy and did indeed find some issues.  I will describe the issues as well as the process of tracking them down since I think it could be a useful exercise for any RTK/PPK user to potentially improve their signal quality.

Previously, in another post, I described a somewhat similar exercise tracking down some signal quality issues caused by EMI from the motor controllers on a drone.  In that case, though, the degradation was more severe and I was able to track it down by monitoring cycle slips.  In this case, the degradation is more subtle and does not directly show up in the cycle slips.

Every raw observation from the receiver generally includes a signal strength measurement as well as pseudorange and carrier phase measurements.  The SwiftNav and u-blox receivers both actually report carrier-to-noise density ratio (C/N0) rather than signal-to-noise ratio (SNR), but both are measures of signal strength.  They are labelled as SNR in the RTKLIB output, so to avoid confusion I will refer to them as SNR as well.  I will only be using them to compare relative values, so the difference isn’t important for this exercise, but for anyone interested, there is a good explanation of the difference between them here.  Both are logarithmic values measured in dB or dB-Hz, so a 6 dB difference represents a factor of four in signal power.
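
Since I will be quoting SNR differences in dB below, here is the conversion I am using, as a one-line sketch:

def db_to_power_ratio(delta_db):
    # Convert a difference in dB (or dB-Hz) to a ratio of signal power
    return 10.0 ** (delta_db / 10.0)

print(db_to_power_ratio(3))    # ~2: 3 dB is a factor of two in power
print(db_to_power_ratio(6))    # ~4: 6 dB is a factor of four in power
print(db_to_power_ratio(10))   # 10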

Since the base and rover have very similar configurations we would expect similar SNR numbers between the two, at least when the rover antenna is not obstructed by trees or other objects.  I selected an interval of a few minutes when the rover was on the open highway and plotted SNR by receiver and frequency for base and rover.  Here are the results, base on the left and rover on the right.  The Swift L1 is on the top, L2 in the middle, and the u-blox L1 on the bottom.  To avoid too much clutter on the plots, I show only the GLONASS SNR values, but the other constellations look similar.

snr1

Notice that the L1 SNR for both rovers is at least 6 dB lower than the base, and the Swift L2 SNR is more like 10 dB lower.  These are significant enough losses in the rover to possibly affect the quality of the measurement.

The next step was to try and isolate where the losses were coming from.  I set up the receiver configurations as before and used the “Obs Data” selection in the “RTK Monitor” window in RTKNAVI to monitor the SNR values in real time for both base and rover as well as the C/NO tracking window in the Swift console app.  I then started changing the configuration to see if the SNR values changed.  The base and rover antennas were similar but not identical so I first swapped out the rover antenna but this did not make a difference.  I then moved the rover antenna off of the car roof and onto a nearby tripod to see if the large ground plane (car roof) was affecting the antenna but this also did not make a difference.  I then removed the antenna splitter, but again no change.

Next, I started modifying the cable configuration between the receivers and my laptop.  To conveniently be able to both collect solution data and collect and run a real-time solution on the raw Swift observations, I have been connecting both a USB serial cable and an ethernet cable between the Swift board and my laptop.  My laptop is an ultra-slim model and uses an ethernet->USB adapter cable to avoid the need for a high profile ethernet connector.  So, with two receivers and my wireless mouse, I end up having more USB cables than USB ports on my computer and had to plug some into a USB hub that was then plugged into my laptop.

The first change in SNR occurred when I unplugged the ethernet cable from the laptop and plugged it into the USB hub.  This didn’t affect the L1 measurements much but caused the Swift L2 SNR to drop another 10 dB!  Wrong direction, but at least I had a clue here.

By moving all of the data streams between Swift receiver and laptop (base data to Swift, raw data to laptop, internal solution to laptop) over to the ethernet connection I was able to eliminate one USB serial port cable.  This was enough to eliminate the USB hub entirely and plug both the USB serial cable from the u-blox receiver and the ethernet->USB cable from the Swift receiver directly into the laptop.  I also plugged the two cables into opposite sides of the laptop and wrapped the ethernet->USB adapter with aluminum foil which may have improved things slightly more.

Here is the same plot as above after the changes to the cabling from a drive around the neighborhood.

snr2

I wasn’t able to eliminate the differences entirely, but the results are much closer now.  The biggest difference now between the base configuration and the rover configuration is that I am using a USB serial cable for the base and an ethernet->USB adapter cable for the rover, so I suspect that cable is still generating some interference and causing the remaining signal loss in the rover.  Unfortunately I cannot run all three streams I need for this experiment over the serial cable, so I am not able to get rid of the ethernet cable.

I did two driving tests with the new configuration, similar to the ones I described in the previous posts.   One was through the city of Boulder and again included going underneath underpasses and a parking garage.  The second run was through the older and more challenging residential neighborhood.  Both runs looked pretty good, a little better than the previous runs but it is not really fair to compare run to run since the satellite geometry and atmospheric conditions will be different between runs.  The relative solutions between Swift and u-blox didn’t change much though, which is probably expected since the cable changes improved both rovers by fairly similar amounts.

Here’s a quick summary of the fix rates for the two runs.  The fix rates for the residential neighborhood look a little low relative to last time but in this run I only included the most difficult neighborhood so it was a more challenging run than last time.

Fix rates

                      City/highway    Residential
Swift internal RTK    93.60%          67.50%
Swift RTKLIB PPK      93.70%          87.90%
u-blox RTKLIB RTK     95.70%          92.80%
u-blox RTKLIB PPK     96.10%          91.10%

Here are the city/highway runs, real-time on the top, post-processed on the bottom, with Swift on the left and u-blox on the right.  For the most part all solutions had near 100% fix except when recovering from going underneath the overpasses and parking garage.

snr4

Here is the same sequence of solutions for the older residential neighborhood.  This was more challenging than the city driving because of the overhanging trees and caused some amount of loss of fix in all solutions.

snr5

Here are the same images of the recovery after driving under an underpass and underneath a parking garage that I showed in the previous post.  Again, the relative differences between Swift and u-blox didn’t change much, although the Swift may have improved a little.

snr1

Overall, the improvements from better SNR were incremental rather than dramatic, but still important for maximizing the robustness of the solutions.  This exercise of comparing base SNR to rover SNR and tracking down any discrepancies could be a useful exercise for anyone trying to improve their RTK or PPK results.

Underpasses and urban canyons

[Update: 4/17/18:  Although I don’t think it changes the results of this experiment significantly, there was an issue with apparent interference from a USB hub and ethernet cable on the rover setup during this testing.  See the next post for more details. ]

In my last post I demonstrated fairly similar fix rates and accuracies between an M8T single-frequency four-constellation solution and a SwiftNav Piksi dual-frequency two-constellation solution.

One advantage often mentioned for dual frequency solutions for moving rovers is that their faster acquisition times should help when fix is lost due to a complete outage of satellite view caused by an underpass or other obstruction.  This makes sense since the dual frequency measurements should allow the ambiguities to be resolved again more quickly.

Since my last data set included several of these types of obstructions I thought it would be interesting to compare performance specifically for these cases.

To create the Google Earth images below I used the RTKLIB application POS2KML to translate the solution files to KML format files and then opened them with Google Earth.
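
POS2KML takes care of this automatically, but the underlying conversion is simple enough to sketch.  The Python snippet below reads the position lines of an RTKLIB .pos solution file (skipping the % header lines) and writes the points out as a bare-bones KML path.  It assumes the default lat/lon/height output format, and the file names are placeholders, so adjust as needed for your own solution files.

def pos_to_kml(pos_file, kml_file):
    # Convert an RTKLIB .pos file (default LLH output format) to a minimal KML path
    coords = []
    with open(pos_file) as f:
        for line in f:
            if line.startswith('%') or not line.strip():
                continue                                      # skip headers and blank lines
            fields = line.split()
            lat, lon, hgt = fields[2], fields[3], fields[4]   # deg, deg, meters
            coords.append('%s,%s,%s' % (lon, lat, hgt))       # KML wants lon,lat,height
    with open(kml_file, 'w') as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2"><Document><Placemark>\n'
                '<LineString><coordinates>\n')
        f.write('\n'.join(coords))
        f.write('\n</coordinates></LineString></Placemark></Document></kml>\n')

pos_to_kml('rover.pos', 'rover.kml')    # placeholder file names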

Here are the raw observations for the first underpass I went under, Swift rover on the left, M8T rover on the right.  In this case there was an overhead sign just before the underpass which caused a momentary outage on all satellites followed by about a two second outage from the underpass, followed by a period of half cycle ambiguity as the receivers re-locked to the carrier phases.

upass2

Here’s the internal Swift solution for the sign/underpass combo above at the top of the photo and a second underpass at the bottom of the photo.  For the first underpass, the solution is lost at the sign, achieves a float solution (yellow) after about 9 seconds, then re-fixes (green) after 35 seconds.

upass5

Here’s the RTKLIB post-processed solution (forward only) for the Swift receivers with fix-and-hold low tracking gain enabled as described in my previous post.  It looks like a small improvement for both underpasses.  The solution loses fix at the sign but in this case maintains a float solution until the underpass.

upass6

Here’s the RTKLIB post-processed solution (same config) for the M8T receivers.  Notice the no-solution gaps after the underpasses are shorter.  In this case, for the upper underpass, a solid fix was re-achieved after about 21 sec.

upass7

Here’s a zoom in of the M8T solution (yellow dots) for the lower underpass.  If the position were being used for lane management it looks like the float solution would probably be accurate enough for this.  The other yellow line with no dots is the gap in the Swift solution.

upass8

Here’s a little further down the road.  At this point the Swift solution achieves a float position at about the same time the M8T solution switches to fix.  Lane management would clearly be more difficult with the initial Swift float solution.

upass9

Next, I’ll show a few images from another underpass.  In this case I drove under the underpass from the left, turned around, then drove under the underpass again from the right.  The Swift internal solution is on the left, the Swift RTKLIB solution in the middle, and the M8T RTKLIB solution on the right.  Notice that the time to re-acquire a fix is fairly similar in all three cases.

upass1

Here is a zoom-in of the two Swift solutions; they are quite similar.

upass3

Here is a zoom-in of the M8T RTKLIB solution.  Again, the float solution is achieved very quickly, and appears to be accurate enough for lane management.

upass4

My last test case was a combination urban canyon and parking structure.  In the photo below, I drove off the main street to the back of the parking garage, underneath the pedestrian walkway, into the back corner, then underneath the back end of the garage and then back to the main street.  I would consider this a quite challenging case for any receiver.

ucanyon1

Here are the raw observations.

ucanyon0

Here are the three solutions, again the Swift internal is on the left, the Swift RTKLIB in the center, and the M8T RTKLIB on the right.

ucanyon1

 

Here is an image of the Swift internal solution.

ucanyon4

Here is an image of the Swift RTKLIB solution.

ucanyon3

And here is an image of the M8T RTKLIB solution.

ucanyon2

In this case, the M8T RTKLIB solution appears to be the best.

So, this experiment seems to show that a dual frequency solution will not always handle satellite outages better than single frequency solutions.  In this case, the extra Galileo and SBAS satellites in the M8T solution seem to have helped a fair bit, and the M8T solution is, at least to me, surprisingly good.

If anyone is interested in analyzing this data further, I have uploaded the raw data, real-time solutions, and config files for the post-processed solutions to the sample data sets on my website, available here.  I should mention that there is an unexplained outage in the Swift base station data near the end of the data set.  This could have been caused by many things, most of them unrelated to the Swift receiver, so all the analysis in both this post and the previous post was done only for the data before the outage.

Measuring a survey marker with the DataGNSS D302-RTK

 

In my last post I reviewed the D302-RTK receiver from DataGNSS.

datagnss1

Since the unit is designed for surveying, I thought it would be fun to use it to measure an actual survey marker, something I’ve never done before.

So, to find an official marker I went to the NGSDataExplorer website from the U.S. National Geodetic Survey group and brought up an easy-to-use interactive map of the local area with different types of survey markers marked.   Here’s a zoom in of the map showing a nearby marker I found with easy access and a decent sky view.

survey3

Clicking on “Datasheet” brings up the following information about the marker.

survey2

About halfway down, you can see that the NAD83 (2011) coordinates for this site are:

latitude               =   40 05 14.86880 N
longitude           = 105 09 01.68689 W
ellipsoidal height = 1613.737 meters

So let’s see how close we get to these numbers with the D302-RTK.  The D302-RTK uses a u-blox M8T receiver and a version of the demo5 RTKLIB code, so any results from this experiment should be valid not just for the D302-RTK, but for any M8T/RTKLIB based solution.

The survey marker is just over 1 km from my house and so I will use the antenna mounted on my roof connected to another M8T receiver as the base station.  The base observations are then broadcast to the internet with an RTK2GO.com NTRIP caster as I described in this post.  Using an M8T receiver for the base station allows me to enable GLONASS ambiguity resolution in the solution since both receivers are using identical hardware.

Since I am doing a real-time solution, I need internet service at the site of the measurement to get the base observations.  I got this by enabling a hot spot on my cell phone and connecting the D302-RTK to that.  I have surveyed the base station antenna with both RTK solutions from nearby CORS stations as well as online PPP solutions, so I have a fairly good idea of its location, maybe within a cm or so.

The clamp and surveying pole in the top photo look nice but I don’t own anything like that, so instead I used a $20 camera tripod, a piece of wood, two rubber bands, a wood clamp, a piece of string, and a carriage bolt.  Here is the resulting setup aligned over the survey marker.  Note that I am using the optional helix antenna which was included with the receiver that was sent to me rather than an external antenna.

survey1

Here’s a close-up of the marker itself with the carriage bolt and string aligned over the top of the marker pin.   The other end of the string is fastened directly underneath the D302-RTK, making it easy to align the receiver directly over the marker and measure the vertical distance between the two.    If you’re wondering about the film canister at the bottom of the well, it seems that the marker is also serving as a local geocache location.

survey4

At this point everything is ready to go.  I powered on the receiver, enabled the RTK service for a static solution, waited five minutes and then recorded the position.  I did this three times to get three different measurements.  Here are some not-so-good photos from the screen of the D302-RTK for the three runs.  It is possible to store these values to the unit using an additional Android app, then uploading with a USB cable later, but in my case I just copied the values manually to my computer.

datagnss_screen123

The numbers are a little hard to read but are very consistent between results, so I’ll just use the middle values.  The indicated height is the height of the base of the antenna, not the survey marker, so I need to subtract the difference, which I got by adding the length of the string and bolt to the height of the receiver, in this case 1.49 meters.

latitude              =   40 05 14.8870 N
longitude           = 105 09 01.7374 W
ellipsoidal height = 1614.393 – 1.49 = 1612.903 meters

Comparing to the published survey numbers, we are not very close.  It’s a little less intuitive to compare degrees of latitude and longitude but the height is obviously off by almost a meter, so something is wrong.

Usually this kind of large error suggests there may be a mismatch between coordinate systems.  Sure enough, when I double-checked the base location that I had entered into the D302-RTK, I realized that I had used WGS84 coordinates, not NAD83.  As I mentioned earlier, I had computed the base position using both RTK solutions from nearby CORS stations as well as online PPP solutions.  RTK solutions are always relative to the base station, so they will be in the same coordinate system as the base.  CORS station locations in the U.S. are normally specified in NAD83, so using those coordinates would have been fine, but instead I had used coordinates from the online PPP solution, which were in WGS84.  Since the RTK solution will be in the datum of the base station, my results are in WGS84 coordinates, while the published survey coordinates are in NAD83.

Fortunately there is another easy-to-use tool from the U.S. National Geodetic Survey that will translate from one coordinate system to another.  Entering the WGS84 coordinates from the measurement and translating to NAD83 gave the following:

survey3

You can see the WGS-84 coordinates on the left and the NAD83 translations on the right.  The translated coordinates from above are:

latitude                =   40 05 14.86828 N
longitude             = 105 09 01.68627 W
ellipsoidal height = 1614.393 – 1.49 = 1613.759 meters

Compare these to the marker coordinates we got from the NGS website earlier:

latitude              =   40 05 14.86880 N
longitude           = 105 09 01.68689 W
ellipsoidal height = 1613.737 meters

That’s looking much better.  The heights now differ by only about 2 cm, well within the expected vertical accuracy given the fact that my base station location was not exact, both antennas were uncalibrated, and my tripod setup was a little imprecise.

How about latitude and longitude?  The error in latitude is 0.00052 arc seconds and in longitude is 0.00062 arc seconds.  Those seem small, but unless you are a professional surveyor these numbers are probably not very meaningful to you; they certainly are not to me.

Meters would be much easier to interpret.  I usually use a free Matlab geodetic toolbox, available on the Matlab File Exchange, to do these sorts of translations, but this time, let’s do it by hand.

At the equator, one arc second of longitude or latitude is approximately equal to the circumference of the earth divided by 360 degrees, then by 60 min/degree, then 60 sec/min.  A quick Google search finds that the equatorial circumference of the earth is about 40.07 million meters.  Dividing that by 360*60*60 gives 30.92 meters per arc second.  This is not exact but good enough for our exercise.  Arc-seconds of latitude remain nearly constant with location, but arc-seconds of longitude get smaller as we approach the poles of the earth by the cosine of the latitude.  I am located at approximately 40 degrees latitude, so 30.92*cos(40 degrees) = 23.68 meters per arc second of longitude.

The error in latitude in our measurement from above was 0.00052 arc seconds.  Multiplying this by 30.92 meters/arcsec gives an error of 0.016 meters, or 1.6 cm.  For longitude, multiplying 0.00062 arc seconds by 23.68 meters/arcsec gives 0.015 meters, or 1.5 cm.
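
Here is the same back-of-the-envelope arithmetic as a short Python sketch, in case you want to plug in your own numbers:

import math

EARTH_CIRCUMFERENCE_M = 40.07e6            # approximate equatorial circumference

def arcsec_to_meters(arcsec, latitude_deg=None):
    # Rough conversion of arc-seconds to meters; pass a latitude for longitude errors
    meters_per_arcsec = EARTH_CIRCUMFERENCE_M / (360 * 60 * 60)   # ~30.92 m
    if latitude_deg is not None:           # longitude arc-seconds shrink with latitude
        meters_per_arcsec *= math.cos(math.radians(latitude_deg))
    return arcsec * meters_per_arcsec

print(arcsec_to_meters(0.00052))           # latitude error  -> ~0.016 m
print(arcsec_to_meters(0.00062, 40.0))     # longitude error -> ~0.015 m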

So the total error in this exercise was 2.2 cm of vertical error and about 2.2 cm of combined horizontal error.  Not too bad for two rubber bands, a piece of string, a bolt, and a wood clamp (as well as a nice low-cost receiver)!

Although I used the D302-RTK for this experiment, I believe the results would be very similar for any solution using a pair of M8T receivers and RTKLIB.

A look at the new DataGNSS D302-RTK single frequency receiver

DataGNSS recently introduced an Android-based single frequency GNSS receiver and gave me one of their units for evaluation.  It is based on the u-blox M8T and RTKLIB and appears to be aimed primarily at the surveying market.  Here’s a photo of it from their website.

datagnss1

First of all, let me say this is the most user-friendly receiver I have worked with.  From opening the box to getting a first RTK fix took less than half an hour.  If it were not for a bug in the user interface that they have since fixed, I would have had it working even quicker.  They have found a good balance between being easy to use and being configurable.  Several of the user options are directly from my demo5 version of RTKLIB, and it has an acknowledgment on the About screen for “Rtklibexplorer”, so it looks like they are using at least a derivative of the demo5 code.

I won’t go over all the technical details; you can read the datasheet here.  I also did not do any sort of extensive testing of the unit.  I will just give my initial impressions from playing around with it for a short time.

To get my initial RTK fix, I first needed to connect the unit to wireless and then configure it to use the NTRIP caster on my roof as base station.  Both tasks were very easy and intuitive, an advantage of having a built in screen and leveraging off the Android platform.  Once I had done this, I turned on the default RTK service, and had a fix in just a few minutes.

The D302-RTK can be used either with an external antenna or with an optional helix antenna.  This photo from the DataGNSS website shows the helix antenna attached to the top of the unit.  This can be quite convenient.  Helix antennas are more omni-directional than patch antennas so are considered a good choice for handheld applications and are often used without ground planes as in this case.

d302

The lack of a ground plane does concern me though for RTK measurements.  In their antenna application notes, u-blox has some good information about the advantages and disadvantages of patch vs helix antennas including the following passage:

ant_text1

That said, in my limited testing I used the helix antenna exclusively and seemed to be getting reliable fixes as quickly as I usually do with patch antennas.  However, I was not in an environment that would have particularly large amounts of multipath.  Of course, it is also very easy to switch to using an external antenna if multipath is an issue.

The RTK solution defaults to using GPS and GLONASS but it is easy to add Galileo or Beidou.  Configuring the basic settings is quite intuitive.  The advanced settings are much less intuitive and unfortunately not documented.  In most cases they are labelled with the same names as in RTKLIB and so can be just as cryptic.

As far as performance goes, since I believe it is using the demo5 version of RTKLIB, I would expect similar performance as you would get with any M8T receiver and the demo5 code, and my limited testing was consistent with that.

The biggest disadvantage I found was that there doesn’t appear to be an easy way to transfer solutions from the unit to a computer or other device for post-processing.  I found myself taking photos of the screen to record locations.  It does come with the SuperSurv application pre-loaded which will record tracks but I could not find an easy way to then download these to my computer.  I imagine this is something they are working on and hopefully will improve soon.  Or it may be already available in the SuperSurv app and just not obvious enough for me to find.  It may also be that there is another Android app that can be downloaded to serve this purpose.

It did have a few other issues that made it feel like an early production unit, which I am fairly sure it is; presumably they will get these issues ironed out over time.  In particular I found power management a problem, not when using the unit, but after you are done: it requires separate steps from different menus to stop the RTK solution and then power down the RTK module.  Even after doing both, it seemed like the battery ran out quite quickly when not being used.  I also experienced annoying screen flicker at times; at other times it was fine.  There were also several minor menu issues in the first version they sent me which, when I brought them to their attention, they fixed quite quickly and sent me a new version of firmware.  My expectation is that they are working on the other issues as well and will hopefully have improvements soon.

The list price for the unit is $1199.  This is a lot for a hobbyist or casual user but if it meets the needs of anyone using it in their work on a regular basis, I imagine it could be quite affordable, especially compared to other such easy to use options.

In my next post, I will describe my experience using the D302-RTK to measure an actual survey marker, something I’ve never done before.

[Update 3/17/18:   DataGNSS responded to my post with a couple of updates:  First of all, the SuperSurv recorded tracks are available in the SuperSurv folder on the D302-RTK and can be downloaded by connecting a USB cable between the PC and the receiver.  They also suggest MapItGIS as another Android app that can be downloaded to the D302-RTK for recording tracks.  Also, they have just released a new version of firmware which improves the power management during sleep mode ]

Post-processing ComNav receiver data with RTKLIB – a more in-depth look

In a couple of my recent posts,  I showed that with the latest firmware from ComNav, the internal RTK solution was very good even for a quite challenging moving rover test, significantly better than a post-processed solution using RTKLIB.   To recap, here are the results side by side, ComNav internal RTK solution on the left, demo5 RTKLIB post-processed solution on the right.  The internal ComNav solution had a 96% fix rate while the RTKLIB solution had only a 68% fix rate.  In the plots below of a drive around the residential streets near my house, green represents a fixed solution and yellow is a float solution.

comnav2_1

The RTKLIB solution shown is a forward solution to make it a more direct comparison to the internal solution.  Re-running it as a combined solution helps a little, but still only increased the fix rate to 71%.  Fix rates are only meaningful if the number of false fixes is small, but as I showed previously, the two solutions match quite closely where both are fixed.

Not only is the ComNav RTKLIB solution inferior to the internal ComNav solution, it is also inferior to an RTKLIB solution from a pair of much lower cost single frequency u-blox M8T receivers fed with the same antenna signals as the ComNav receivers.  As I showed in my last post, the RTKLIB M8T solution had an 88% fix rate and again matched the ComNav internal solution very closely when both were fixed.

So why does RTKLIB struggle so much with the ComNav data?  My suspicion is that there is nothing inherently lower quality about the data; it’s just that its flaws are different from the flaws of the u-blox data, and that RTKLIB, particularly the demo5 version, has evolved to handle the flaws of the u-blox data better than it has for the ComNav data.  Specifically, what I see is that the u-blox receiver is much better at flagging lower quality observations than ComNav (or other receivers) are.  This puts the burden on the solution code to appropriately handle the lower quality observations, something RTKLIB is not particularly good at.

To test this theory, I ran an experiment with a modified version of RTKLIB.  Usually I like to use the demo5 code for my experiments since it’s available to everyone, but for this exercise I used some experimental code I have been working on for a while now that helps RTKLIB handle a wider range of measurement quality.  This code doesn’t do anything fundamentally different from the current demo5 version of RTKLIB.  There are no wide-lane or ionospheric-free linear combinations or any other tricks that are only available with dual frequency measurements.  All it does differently is a better job of rejecting or de-weighting lower quality measurements and a more comprehensive search of the integer ambiguity space to find a clean set of ambiguities to use for a fix.
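
My experimental code is not public and I won’t try to reproduce it here, but to give a flavor of what I mean by de-weighting, a common approach in the GNSS literature is to inflate the measurement variance for low-SNR and low-elevation observations with something like the sketch below.  This is a generic illustration only, with made-up constants, and is not the actual weighting used in either RTKLIB or my experimental code.

import math

def phase_variance(snr_dbhz, elevation_deg, base_std_m=0.003):
    # Illustrative only: inflate carrier-phase variance for low SNR / low elevation
    snr_term = 10.0 ** (-(snr_dbhz - 45.0) / 10.0)     # grows quickly below ~45 dB-Hz
    elev_term = 1.0 / math.sin(math.radians(max(elevation_deg, 5.0))) ** 2
    return (base_std_m ** 2) * max(snr_term, 1.0) * elev_term

# A 30 dB-Hz observation at 15 degrees elevation gets a much larger variance than a
# 48 dB-Hz observation at 60 degrees, so it pulls far less weight in the filter.
print(phase_variance(48, 60), phase_variance(30, 15))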

My thinking is that if RTKLIB code with just these changes can match the internal ComNav solution then that would give me confidence that the core mathematical algorithms in RTKLIB are fundamentally sound and capable of handling dual frequency solutions.   If not, then maybe RTKLIB requires some more significant changes to the solution algorithms to take better advantage of the dual frequency measurements.

When I started working with RTKLIB a couple of years ago I found similar performance issues when processing u-blox solutions, and found that relatively minor code changes could make a big difference.  RTKLIB is a fantastic resource but sometimes it is more of an academic toolbox than a true engineering solution.  This is not surprising or unexpected, given that the developers are in fact using it for academic research.

So how did the modified code work?  Here’s a forward solution with the modified RTKLIB code on the left and the difference between this solution and the ComNav internal solution on the right.  The 16.27 meter difference in the vertical axis is due to different handling of the offset between geoid and ellipsoid as I described in my last post.

comnav2_2

You can see a big improvement.  The fix rate has increased from 68% to 99% and the vast majority of the differences between the two solutions are still less than 2 cm in the horizontal axes and 4 cm in the vertical axis.  Again, these are combined errors of both solutions, so I consider these numbers quite good.

Of course, the code improvements will affect the M8T single frequency solution too, so I also re-ran that solution to complete the comparison.  And just to make things a little more interesting, I also re-ran the ComNav solution with the modified code using only the L1 observations.  I did this to try and separate how much benefit is coming from the higher cost receiver in general and how much is coming from the L2 measurements specifically.

Here’s the M8T solution on the left with a 94.4% fix rate, and the ComNav L1-only solution on the right with a 98.9% fix rate.  The most challenging measurement environment was in the older neighborhood with larger trees, which shows up as the small squares at the far left.  You can see that the ComNav L1 solution is noticeably better than the u-blox M8T solution in this area.

comnav2_3

The two receivers did not start collecting data at exactly the same time so I did not include the time to first fix in this comparison.  If I remove that from the ComNav L1/L2 solution above, that increases the fix rate in that solution from 99.0% to 99.1%, slightly better than the 98.9% achieved by the L1 only solution.  The differences in both solutions relative to the internal ComNav solution appeared similar to the errors plotted above for the RTKLIB L1/L2 solution.

So, switching from the u-blox receiver to the ComNav receiver but using only L1 for both solutions improved the fix rate from 94.4% to 98.9% .  Adding L2 to the ComNav solution then increased the fix rate from 98.9% to 99.1%.

This data set is the most challenging I’ve run to date, and so I consider all of these results as quite good.  To provide some level of calibration, here are the observations for the ComNav rover.  You can see there are a significant number of cycle slips and missing data.  There are even more in the M8T observations, but the M8T data does include the Galileo and SBAS satellites as well.

comnav2_4

I’ve covered several different results very quickly, so here is a quick summary of all the experiments.

ComNav internal solution:                   Fix rate = 96%
ComNav demo5 RTKLIB L1/L2 solution:         Fix rate = 68%
M8T demo5 RTKLIB L1 solution:               Fix rate = 88%

ComNav modified RTKLIB L1/L2 solution:      Fix rate = 99.1%
ComNav modified RTKLIB L1 solution:         Fix rate = 98.9%
M8T modified RTKLIB L1 solution:            Fix rate = 94.4%

So what can you conclude from this experiment?  This is what I get from it:

1) The existing core RTKLIB algorithms are capable of high quality dual frequency solutions if the flaws in the observations are properly handled.

2) The current demo5 RTKLIB code is better matched to the M8T observations, so the opportunity for improvement is smaller than it is for the ComNav observations but there is still some opportunity even with the M8T.

3) A significant fraction of the improvement in the ability to maintain a fix on a moving rover between M8T and ComNav is likely not because of the additional L2 measurements, but simply because the overall quality of the more expensive receiver is higher.  The dual frequency measurements likely have a more significant advantage when it comes to faster first fixes and longer baselines.

The code is experimental only at this point and needs more work before it is ready for release but I do hope to make some form of it available eventually.