Comparison of the u-blox M8T to the u-blox M8P

I recently acquired a couple of u-blox C94-M8P receivers and so I am now able to do a direct comparison between the u-blox M8T and M8P modules, something I’ve been wanting to do for a while.

I believe the hardware of the two receivers is identical; the differences are only in firmware.  The most significant firmware difference is that the M8P includes an internal RTK solution.  In order to squeeze this into the firmware, however, u-blox had to remove support for the Galileo and SBAS constellations, so this is another fairly significant difference.

Cost, of course, is also different.  The M8T receivers I usually use are available from CSGShop for $75.  CSG sells an M8P receiver for $240 or you can buy a kit with two C94-M8P eval boards from u-blox for $400.  The C94-M8P boards also include on-board radios.

In this post, I will describe how I used RTKLIB and a cell phone hot spot to connect the M8Ps rather than using the internal radios.  I will also compare the RTKLIB solutions for a pair of M8T receivers with the internal u-blox RTK solution for a pair of M8P receivers. I hope to compare the RTKLIB solution to the internal solution for a pair of M8Ps in a future post.  If you are mostly interested in the M8T to M8P comparison results, you can jump directly to the end of this post.

To set up the experiment I first connected both a C94-M8P receiver and a CSG M8T receiver to the GPS survey antenna on my roof using a signal splitter.  I then connected both receivers to a laptop with USB cables.  I configured the M8P using u-center following the instructions in the C94-M8P Setup Guide with a few modifications.  First of all, I normally use an NTRIP caster over a cell phone hot spot for my real-time data link, so I didn’t want to bother with the on-board radios.  I disabled the radios following the instructions in the setup guide.  This involves removing a plastic cover, soldering a wire to a capacitor, drilling a hole in the plastic cover to run the wire, then plugging the other end into a pin on the connector.  It is also possible to do this by connecting an external voltage and ground to the correct pins on the connector, but a simple jumper option would have been a lot more convenient.

The setup instructions are intended to run the setup and solution output messages over the USB port while running the RTCM raw observation messages between receivers over the UART interface.  The UART interface is internally connected to the radios.  Since I am not using the radios, I wanted to run all communication over the USB interface to avoid extra cables.  To do this, I disabled the UART interface and configured the USB interface for UBX messages in and RTCM messages out using the UBX-CFG-PRT configuration command from u-center.  Following the C94-M8P setup instructions, I then specified a fixed base station location with the UBX-CFG-TMODE3 command and enabled the 1005, 1077, 1087, and 1230 RTCM messages, which include the raw observations, base station location, and GLONASS biases.
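For anyone who would rather script this step than click through u-center, the message-enabling part can be done by writing raw UBX-CFG-MSG frames to the receiver port.  This is only a minimal sketch: it assumes the standard UBX framing and the usual class/ID pairs for the RTCM3 output messages (class 0xF5, ID = message number minus 1000), the port name is a placeholder, and the UBX-CFG-PRT and UBX-CFG-TMODE3 steps are not shown, so verify the details against the u-blox protocol spec for your firmware.

# Minimal sketch: enable the RTCM3 base messages on the current port by sending
# UBX-CFG-MSG frames.  'COM5' is just an example port name.
import serial

def ubx_frame(msg_class, msg_id, payload):
    # UBX frame: sync chars, class, id, little-endian length, payload, Fletcher checksum
    body = bytes([msg_class, msg_id, len(payload) & 0xFF, (len(payload) >> 8) & 0xFF]) + payload
    ck_a = ck_b = 0
    for b in body:
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return b'\xb5\x62' + body + bytes([ck_a, ck_b])

def enable_msg(port, msg_class, msg_id, rate=1):
    # UBX-CFG-MSG (0x06, 0x01) with a 3-byte payload sets the rate on the current port
    port.write(ubx_frame(0x06, 0x01, bytes([msg_class, msg_id, rate])))

with serial.Serial('COM5', 115200, timeout=1) as ser:
    for rtcm_id in (0x05, 0x4D, 0x57, 0xE6):   # RTCM 1005, 1077, 1087, 1230
        enable_msg(ser, 0xF5, rtcm_id)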

I set up the base station M8T receiver as I usually do, to output the raw observations using the UBX-RXM-RAWX messages.

I then started two copies of the RTKLIB STRSVR app on the base station laptop and streamed both sets of base observations to the free RTK2GO.com community NTRIP caster, as I’ve described in an earlier post.  With this setup I can receive the NTRIP streams anywhere I can get cell phone coverage, using a cell phone hot spot and a laptop connected to the two rovers, so I can test over much longer distances than the radios would allow.
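For a rough idea of what STRSVR’s NTRIP server output is doing under the hood, here is a minimal Python sketch of the NTRIP 1.0 source protocol.  The mountpoint, password, and serial port names are placeholders, and unlike STRSVR this has no reconnect or monitoring logic.

# Rough sketch: push a raw receiver stream to an NTRIP caster such as rtk2go.com
# using the NTRIP 1.0 source protocol.
import socket
import serial

CASTER, PORT = 'rtk2go.com', 2101
MOUNTPOINT, PASSWORD = 'MY_MOUNT', 'MY_PASSWORD'       # placeholders

sock = socket.create_connection((CASTER, PORT))
sock.sendall(f'SOURCE {PASSWORD} /{MOUNTPOINT}\r\n'
             'Source-Agent: NTRIP simple-server\r\n\r\n'.encode())
if b'ICY 200 OK' not in sock.recv(1024):
    raise RuntimeError('caster rejected the source connection')

with serial.Serial('COM3', 115200, timeout=1) as rcv:  # base receiver port, example
    while True:
        data = rcv.read(1024)                          # raw UBX or RTCM3 bytes
        if data:
            sock.sendall(data)                         # forward to the caster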

One thing to be aware of is that there are separate versions of the receiver firmware for the M8P base and M8P rover.  I didn’t realize this at first.  It turned out that both of my receivers came loaded with the rover firmware, so I spent an embarrassingly long time unsuccessfully trying to configure one of them as a base station.  Once I realized the problem, I was able to fairly quickly download the base firmware from the u-blox website and flash it to the receiver with u-center.

Next, I attached the two rover receivers to a laptop using USB cables and to a single antenna using a signal splitter.  To make the results more comparable to some of my other recent experiments I used the same GPS-500 L1/L2 antenna from SwiftNav that I used for those experiments.  This is a more expensive antenna than generally would be used with these receivers but I wanted to avoid mixing differences between antennas with differences between receivers.

The M8P rover does not need much configuration since it will by default process any incoming RTCM messages as inputs to an RTK solution.  I did configure the USB port to support NMEA, UBX, and RTCM messages in and UBX and NMEA messages out.  For the static rover test, I left the position message output rate at the default 1 Hz, but increased it to 5 Hz for the dynamic rover test.  I suspect this only affects the message output rate, and not the solution itself, since the solution is probably run at a faster internal rate.

The only other setting I modified in the M8P rover receiver was the dynamic model of the navigation mode using the UBX-CFG-NAV5 message.  I set it to “Pedestrian” for my static rover test and “Automotive” for my moving rover test.

I then started another copy of STRSVR on the rover laptop to stream the M8P base station data from the NTRIP server to the M8P rover over the USB COM port.

At this point, getting the solution output messages from the M8P rover is a little tricky.  Only a single user can attach to a Windows COM port at one time and the STRSVR app is not fully bi-directional.  One solution might be to use u-center to stream the NTRIP data and receive the rover messages but I did not verify if this is possible.

Instead, I used a feature of STRSVR that allows you to pipe the data coming from the other direction to a TCP/IP port which can then be processed by either another copy of STRSVR, or plotted with RTKPLOT, or used as an input to RTKNAVI, or all three at once.  Below, on the left, I show how I modified the STRSVR setup that streams the base data from the NTRIP client to the rover by checking the “Output Received Stream to TCP Port” and selecting port 1000.  On the right, I show the STRSVR setup necessary to stream this incoming data from the rover to a file.  This is a convenient work-around for the problem of only being able to connect one user at a time to a COM port.

strsvr2
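Incidentally, the second copy of STRSVR can also be replaced with a few lines of Python that read the mirrored TCP stream and log it to a file.  A minimal sketch, assuming the port 1000 setting shown above and a hypothetical output file name:

# Minimal sketch: read the rover data that STRSVR mirrors on TCP port 1000 and
# append it to a log file (the same thing the second STRSVR instance does).
import socket

with socket.create_connection(('localhost', 1000)) as sock, \
        open('rover_out.ubx', 'ab') as log:
    while True:
        data = sock.recv(4096)
        if not data:
            break              # stream closed by STRSVR
        log.write(data)        # raw NMEA/UBX bytes coming back from the rover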

In my case I configured the rover receiver to send both the M8P internal solution position using NMEA messages and the raw observations using UBX messages so that I could post-process the raw observations later with RTKLIB.  In general though, it is only necessary to send the NMEA messages.  I configured the rover to send GGA and RMC NMEA messages, but others will work as well.

For the M8Ts, I ran a real-time RTKLIB solution using RTKNAVI and my normal kinematic solution configuration for both static and moving rover tests.  As usual, I used the demo5 version of RTKLIB for this test.

So, how did they compare?  First of all, there are a few differences to consider between the solutions.  The M8P solution only uses GPS and GLONASS satellites, since the SBAS and Galileo satellites have been disabled in the M8P firmware as I mentioned earlier.  This should give the M8T solution an advantage.  The M8P also has limited processing power relative to the laptop running the RTKLIB solution, so this may also give the M8T solution an advantage.  On the other hand, I’m sure that u-blox has lots of smart people who know all the internal details of the hardware, which should give the M8P solution an advantage.

For my first comparison, I did a static rover test with the antenna located on a tripod a few meters from the house and partially obstructed by trees.  To avoid different starting conditions between the two receivers, I connected both receivers to the antenna and waited until they both converged to fixed solutions before starting the test.  I then disconnected the antenna for about 30 seconds, and then reconnected it.  This causes both receivers to lose lock with all satellites, and the solutions have to re-converge from scratch, so this should be a fair comparison.  I disconnected and reconnected the antenna several times to compare the times to re-acquire fix and the accuracies of the fixes.

The results are plotted below, with the M8P internal solution on the top and the real-time RTKLIB M8T solution on the bottom.  I re-connected the antenna five times.  Once the internal M8P solution re-acquired fix slightly quicker than the RTKLIB M8T solution; once the M8T re-acquired fix slightly quicker than the M8P.  The other three times, the M8T re-acquired fix significantly quicker than the M8P.  The M8P times to re-acquire varied from 52 to 220 seconds; the M8T took from 50 to 76 seconds.

The M8T RTKLIB solutions had no false fixes; the M8P internal solutions had false fixes with several-meter errors lasting over five seconds in two of the five tests.  The false fixes are circled in red in the plot below.

m8p1

The E-W and N-S axes matched within one centimeter between the two solutions.  The U-D axis at first appears to differ by 21.5 meters between the two solutions but this is because I specified the RTKLIB vertical solution to be relative to the ellipsoid, while the M8P solution is relative to the geoid.  If I subtract the geoid to ellipsoid offset (available in the GGA message from the M8P), then the two solutions match in the vertical axis as well.
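The correction is easy to apply when parsing the GGA messages, since the geoid separation is carried in the sentence itself.  A small sketch (the example sentence is illustrative, not taken from this data set):

# Sketch: reconstruct ellipsoidal height from a GGA sentence by adding the geoid
# separation (field 11) to the MSL altitude (field 9), so the M8P heights can be
# compared directly with the RTKLIB ellipsoidal heights.
def ellipsoidal_height(gga: str) -> float:
    f = gga.split(',')
    msl_alt = float(f[9])       # antenna altitude above the geoid, meters
    geoid_sep = float(f[11])    # geoid height above the ellipsoid, meters (negative here)
    return msl_alt + geoid_sep

# illustrative sentence (checksum omitted), not from the actual log
gga = '$GNGGA,123519.00,4000.0000,N,10500.0000,W,4,12,0.9,1612.3,M,-21.5,M,1.0,0000'
print(ellipsoidal_height(gga))  # 1590.8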

Overall, I would have to say that in this particular run, the RTKLIB M8T solution out-performed the M8P internal solution by a fairly significant amount.

One other observation about the M8P solution is that the precision reported (at least in the GGA message) is only one centimeter in the horizontal axes and 10 cm in the vertical axis.  I didn’t see any obvious way to get outputs with better precision than this, at least with the NMEA messages.  [Note 8/16/18: It turns out that the UBX-CFG-NMEA message can be used to enable high precision mode to fix this.  Thanks to Charles Wang for pointing this out in a comment]

Next I mounted the same antenna on the roof of my car and drove around the neighborhood.  The sky view varied from nearly unobstructed to moderate tree canopy.  Here are the results of that test, the M8P internal RTK solution on the left and the M8T real-time RTKLIB solution on the right.

m8p2

The fix rate for the M8P solution was 52.4%.  The M8T solution was noticeably better with a 95.2% fix rate.
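For the RTKLIB side, the fix rate is just the fraction of epochs flagged as fixed (Q=1) in the .pos output.  A quick sketch of that calculation, assuming the default lat/lon/height solution format and a hypothetical file name:

# Quick sketch: compute the fix rate from an RTKLIB .pos file by counting epochs
# with quality flag Q=1 (fix) out of all valid epochs (Q=2 is float, Q=5 single).
def fix_rate(pos_file: str) -> float:
    q_vals = []
    with open(pos_file) as f:
        for line in f:
            if line.startswith('%') or not line.strip():
                continue                        # skip header and comment lines
            q_vals.append(int(line.split()[5])) # date, time, lat, lon, hgt, Q, ...
    return 100.0 * sum(q == 1 for q in q_vals) / len(q_vals)

print(f'{fix_rate("rover.pos"):.1f}% fixed')    # hypothetical file name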

Here’s a plot of the difference between the two solutions (green indicates both solutions are fixed, yellow indicates that at least one is float).  The U-D axis differences are fairly large, mostly because of the 10 cm vertical precision of the M8P position messages.  There is also a fairly large discrepancy between the two solutions around 22:55:00.  This corresponds to a 15 second gap in base station data, probably due to a loss of cell phone reception.  This seems to be a large error for such a short base outage, but I was not able to narrow down the cause beyond this or determine which receiver contributed more to this combined error.  This probably deserves more investigation in a future post.

m8p2d

Again, the real-time RTKLIB M8T solution appeared in this test to be better than the internal M8P solution.

The most likely reason the M8T is getting a better solution is that it is using more satellites.  Here are the rover raw observations for the M8P on the left, and the M8T on the right.  With the GPS and GLONASS constellations, both receivers are seeing 13 satellites.  With Galileo and SBAS, the M8T is seeing an additional 7 satellites.  One of these Galileo satellites is not fully operational yet and so is not used in the solution, but that still gives 6 extra satellites, nearly 50% more than the M8P solution.

m8p3

If the extra satellites are what is improving the M8T solution, then running a solution with the M8T data without the SBAS and Galileo satellites should give a solution similar to the M8P solution.

This is easy to do, so let’s try it.  Here’s a post-processed solution for the M8T data using only the GPS and GLONASS satellites.

m8p4

Fix rate is 66.5%, reasonably close to the 52.4% fix rate from the M8P internal solution.  This suggests that a large part of the improvement for the M8T solution is coming from the additional satellites and not from any differences in the solution algorithms.

As in all my comparisons, this is intended to be one user’s initial experience, and not any kind of rigorous comparison test.  Please interpret these results with that in mind.  If you have experiences that differ from these I would be very interested to hear about them.


Glonass Ambiguity Resolution with RTKLIB Revisited

To get a high precision fixed solution in RTKLIB we need to resolve the integer ambiguities that come from the carrier phase measurements.  Resolving the integer ambiguities for the GLONASS satellites is more challenging than resolving them for the other constellations.  This is because, unlike the other constellations, the GLONASS satellites all transmit on slightly different frequencies.  This introduces an additional bias error in the receiver hardware.

These hardware biases are constant, are generally the same for all receivers from the same manufacturer, are proportional to carrier frequency, and are similar between L1 and L2.
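To get a feel for why this matters, remember that GLONASS is an FDMA system, so a bias that is proportional to carrier frequency shows up as a per-channel offset between satellite pairs.  Here is a toy calculation, assuming the standard GLONASS frequency plan and a purely hypothetical receiver-pair bias of 2.5 cm per channel:

# Toy illustration of the GLONASS inter-channel bias problem, assuming the
# standard FDMA plan: L1 = 1602 MHz + k*0.5625 MHz, L2 = 1246 MHz + k*0.4375 MHz,
# with frequency channel k roughly in the range -7..+6.
C = 299_792_458.0                          # speed of light, m/s

def l1_wavelength(k):
    return C / ((1602.0 + 0.5625 * k) * 1e6)

# A hypothetical receiver-pair bias of 2.5 cm per channel, applied across the
# 13-channel spread between k=-7 and k=+6, shifts the double-differenced
# ambiguity by well over half a cycle, which is enough to break integer AR.
bias_per_channel = 0.025                   # meters per frequency channel
dd_shift_m = bias_per_channel * (6 - (-7))
print(dd_shift_m / l1_wavelength(0))       # ~1.7 L1 cycles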

In the demo5 version of RTKLIB, there are four choices for how to handle GLONASS ambiguity resolution (AR). I will cover all four briefly, but then focus on the “autocal” setting which I have enhanced in the most recent version (b29c) of the demo5 code.

Off:  If GLONASS AR is set to “Off”, then the raw measurements from the GLONASS satellites will be used for the float solution, but ambiguity resolution will be done only with satellites from the other constellations.  If you are not using the demo5 version of RTKLIB, this is usually your only choice when using receivers from different manufacturers for the rover and the base.  However, you are giving up a significant amount of information by ignoring the GLONASS ambiguities, so I would not recommend this setting if you are using the demo5 code, unless of course your receivers don’t support the GLONASS satellites.

On:  If GLONASS AR is set to “On”, then RTKLIB will treat the GLONASS ambiguities the same as the ambiguities from the other constellations and will not make any attempt to account for the additional hardware biases.  Use this setting if your base and rover receivers are from the same manufacturer, since in this case the biases will cancel and can be ignored.  There are also some cases in which different manufacturers have equal or nearly equal biases, as we will see later, in which case you can also use this setting.  This is the best way of dealing with the GLONASS ambiguities, so I always try to use matched receivers for base and rover if possible.

Fix-and-Hold: This is an option I have added to the demo5 code for GLONASS AR.  It is an extension to the “fix-and-hold” method used for other constellations, but instead of using the additional feedback to track the ambiguities, it uses it to null out the hardware biases.  I recommend this setting when using the demo5 code with unmatched receivers.  It takes advantage of the additional information in the GLONASS ambiguities most of the time.  However, fix-and-hold is not enabled until after a first fix has been achieved, and so the GLONASS ambiguities are not available until then.  This can mean longer time to first fix and less robustness compared to the “On” option, so don’t use this option for matched receivers.

Auto-cal:  This option adds additional states to the Kalman filter to estimate the receiver hardware biases as a function of carrier frequency, one state for L1, another for L2.  In previous experiments I had not had any success with it.  Recently, however, I discovered that if I adjusted the filter settings, it can be effective for the zero baseline case, where base and rover are both connected to the same antenna so that almost all other errors are completely cancelled.  With a little more experimentation I found that it can also be effective for short baselines if the Kalman filter state is pre-set to something close to the final value before the solution is started.  It will then usually converge to the correct bias value.  However, there is currently no mechanism in the code to adjust any of these values, so I have not found this mode to be useful in its current implementation.

To make the auto-cal option more flexible, and hopefully more useful, I made a few modifications to it in the b29c code.  I added the capability to pre-set the initial state value and also to adjust the internal filter settings, specifically the initial variance and process noise for this state.  The units for the state, and hence for the initial value, are meters per frequency channel, and values are generally within +/-5 cm per channel.  I used some existing config parameters that are currently unused to reduce the amount of code I needed to change.  Unfortunately this means that the names are not as descriptive as they could be.  The new config parameters are:

pos2-arthres2 = relative GLONASS hardware bias in meters per frequency slot
pos2-arthres3 = initial variance of GLONASS hardware bias states
pos2-arthres4 = process noise for GLONASS hardware bias states

Bias values have been published for some of the most popular geodetic quality receivers but are generally not available for lower-cost or less popular receivers.  Here is a table of values from a paper published by Lambert Wanninger in 2011 for nine receiver manufacturers.

biases

I was able to verify these results for Trimble, Leica, and Novatel, but I found a much lower value for Septentrio so I suspect the biases may have changed in their newer receivers.

To demonstrate the modified autocal option, I will start with a zero baseline case between a ComNav receiver and a Tersus receiver.  It is easiest to measure the hardware biases in the zero baseline case because most other errors will cancel and the hardware biases will be the dominant error.  In this case, I have significantly reduced the initial variance setting from the original value of 1.0 to 1E-7 and increased the process noise from 1E-6 to 1E-3.

I have run the solution several times with the initial bias value set to different numbers between -0.05 and 0.06.  Here are the results for both L1 and L2.

biases1

The convergence occurs just after first fix is achieved.  If a fix is not achieved, then the state will not converge, as you can see above for the 0.06 example.  In this case, the initial value was too far from the correct value and prevented getting a fix.  As you can see, all the other cases converged towards a single value around -0.022, both for L1 and for L2.

Another way to visualize the error in the initial value is to look at the GLONASS residuals after first fix is achieved.  The plot below shows the GLONASS L1 carrier phase residuals for different initial values: 0.03 on the left, -0.05 in the middle, and what I believe is the correct value for this receiver combination, -0.022, on the right.

 

acal1

Here are the same plots for the L2 carrier phase residuals.

acal2

Through a slightly tedious but straightforward process, I am able to iterate the residuals down to near zero for different pairs of receivers in my possession.  Note that this gives me the relative difference in biases for each receiver pair, and not absolute values for each receiver, unlike Wanninger’s table, which is for absolute biases.

Extending the table to receivers used in nearby CORS stations is a little more challenging because the initial bias value needs to be fairly close to get a first fix and hence a convergence, but still possible if the base station is not too distant.   I found data sets that included CORS data from Leica, Novatel, Trimble, and Septentrio receivers.  Using the above procedure to iterate the residuals down to near zero, I was then able to extend my table and make the values absolute by choosing the unknown offset to make my bias pairs align with Wanninger’s table.  This is the resulting table I created.

ComNav     =  2.3 cm
Leica      =  2.3 cm
Novatel    =  2.3 cm
Septentrio = -0.3 cm
SwiftNav   = -0.2 cm
Tersus     = -0.1 cm
Trimble    = -0.7 cm
u-blox     = -3.2 cm

To generate an initial value for the bias state from this table for an RTKLIB solution, subtract the base station bias from the rover bias, then divide by 100 to convert from centimeters to meters.  This value can then be used to set the “pos2-arthres2” config parameter in the config file.  For the RTKPOST and RTKNAVI GUI option menus I have labeled this “Glo HW Bias”.
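As a worked example of that recipe, here is the table above expressed as a small lookup with the subtraction and unit conversion applied.  The receiver names are just dictionary keys for illustration, not anything RTKLIB itself recognizes:

# Sketch of the recipe above: relative bias = rover bias - base bias, converted
# from centimeters to meters, for use as pos2-arthres2 ("Glo HW Bias" in the GUIs).
biases_cm = {
    'ComNav': 2.3, 'Leica': 2.3, 'Novatel': 2.3, 'Septentrio': -0.3,
    'SwiftNav': -0.2, 'Tersus': -0.1, 'Trimble': -0.7, 'u-blox': -3.2,
}

def glo_hw_bias(rover: str, base: str) -> float:
    return (biases_cm[rover] - biases_cm[base]) / 100.0

print(glo_hw_bias('u-blox', 'Leica'))   # -0.055, the value used in the example below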

To test this code on an independent set of data after generating the table, I used a data set recently sent to me by a reader.  It consists of a u-blox M8T receiver for the rover and a Leica receiver a few kilometers away for the base, and was collected in Europe.  The rover position was static, but I ran the solution in kinematic mode to make the solution a little more challenging and to make any errors in the solution more visible.

To generate the correct config value for RTKLIB, I subtracted the Leica bias in the table above (2.3 cm) from the u-blox bias (-3.2 cm) to get a relative bias between receivers of -5.5 cm, or -0.055 m.  I added “pos2-arthres2=-0.055” to the config file and then ran the solution four times, with pos2-gloarmode set to “off”, “fix-and-hold”, “autocal”, and “on”.  Although I left the bias value set for all runs, it is ignored unless gloarmode is set to autocal.

Here are the times to first fix, the number of satellite pairs used for the initial fix, and the number of satellite pairs being used for fix after 10 minutes.

GLO AR mode   Time to      # sat pairs used   # sat pairs used for
              first fix    for initial fix    fix after 10 min
OFF           4:10         7                  7
Fix&Hold      4:10         7                  11
Autocal       1:05         14                 14
On            6:47         14                 14

As you would expect, the time to first fix for gloarmode=“off” was the same as for “fix-and-hold”, since “fix-and-hold” does not use the GLONASS satellites for the initial fix.  After 10 minutes it was still only including four of the GLONASS satellites in the ambiguity resolution, which was a little unusual; typically I would have expected more GLONASS satellites to be included.

With gloarmode=“autocal”, the time to first fix was reduced from 250 seconds to 65 seconds and the number of satellites included in the first fix increased from 7 to 14, both significant improvements.

The most surprising thing in this data is that when gloarmode was set to “on” it acquired a fix at all.  In many similar cases it will never get a fix.  The GLONASS carrier phase residuals after initial fix were very high though as can be seen below.  The left plot is with gloarmode set to “on”, and the right plot is with it set to “autocal”.

biases3

The ambiguity resolution ratio was also much higher when autocal was enabled as can be seen below (yellow/green=autocal, olive/blue=on) which improves robustness.

biases2

The large residuals did not affect the solution position, as the two solutions did not differ by more than 2 mm at any time.  The autocal solution, however, is much more robust in the sense that it is less likely to lose fix.

Although I have found the results with autocal enabled are generally excellent with relatively short baselines (<10 km), I have found the results less encouraging for longer baselines (>25 km).  In these cases I often get better results with pos2-gloarmode set to “fix-and-hold” than I do with “autocal”.  I don’t understand exactly why this is, but I suspect that the fix-and-hold correction is more general and may be correcting for more than just the GLONASS hardware biases.

The code changes for this feature are included both in my Github repository and in the newest (demo5 b29c) executables available to download from the rtkexplorer website.   If you choose to experiment with this feature, please let me know if you find any errors in my table, or can add values for any additional receivers.

[Note 6/17/18:  I had an issue with uploading the executables to the website.  If you downloaded them prior to 6/17/18, please download again to get the updated version.]

Swiftnav experiment: Improvements to the SNR

In my previous couple of posts, I evaluated the performance of a pair of dual frequency SwiftNav Piksi Multi receivers in a moving rover with local base scenario.  I used a pair of single frequency u-blox M8T receivers fed with the same antenna signals as a baseline reference.

It was pointed out to me that the signal to noise ratio (SNR) measurements of the rovers were noticeably lower than those of the bases, especially the L2 measurements, and that this might be affecting the validity of the comparison.  This seemed to be a valid concern, so I spent some time digging into the discrepancy and did indeed find some issues.  I will describe the issues as well as the process of tracking them down, since I think it could be a useful exercise for any RTK/PPK user who wants to improve their signal quality.

Previously, in another post, I described a somewhat similar exercise tracking down some signal quality issues caused by EMI from the motor controllers on a drone.  In that case, though, the degradation was more severe and I was able to track it down by monitoring cycle slips.  In this case, the degradation is more subtle and does not directly show up in the cycle slips.

Every raw observation from the receiver generally includes a signal strength measurement as well as pseudorange and carrier phase measurements.  The SwiftNav and u-blox receivers both actually report carrier to noise density ratio (C/N0) rather than signal to noise ratio (SNR), but both are measures of signal strength.  They are labelled as SNR in the RTKLIB output, so to avoid confusion I will refer to them as SNR as well.  I will only be using them to compare relative values, so the difference isn’t important for this exercise, but for anyone interested, there is a good explanation of the difference between them here.  Both are logarithmic values measured in dB or dB-Hz, so 6 dB represents a factor of two in signal strength.

Since the base and rover have very similar configurations we would expect similar SNR numbers between the two, at least when the rover antenna is not obstructed by trees or other objects.  I selected an interval of a few minutes when the rover was on the open highway and plotted SNR by receiver and frequency for base and rover.  Here are the results, base on the left and rover on the right.  The Swift L1 is on the top, L2 in the middle, and the u-blox L1 on the bottom.  To avoid too much clutter on the plots, I show only the GLONASS SNR values, but the other constellations look similar.

snr1

Notice that the L1 SNR for both rovers is at least 6 dB (a factor of 2) lower than the base, and the Swift L2 SNR is more like 10 dB lower.  These are significant enough losses in the rover to possibly affect the quality of the measurement.

The next step was to try and isolate where the losses were coming from.  I set up the receiver configurations as before and used the “Obs Data” selection in the “RTK Monitor” window in RTKNAVI to monitor the SNR values in real time for both base and rover as well as the C/NO tracking window in the Swift console app.  I then started changing the configuration to see if the SNR values changed.  The base and rover antennas were similar but not identical so I first swapped out the rover antenna but this did not make a difference.  I then moved the rover antenna off of the car roof and onto a nearby tripod to see if the large ground plane (car roof) was affecting the antenna but this also did not make a difference.  I then removed the antenna splitter, but again no change.

Next, I started modifying the cable configuration between the receivers and my laptop.  To conveniently be able to both collect solution data and collect and run a real-time solution on the raw Swift observations, I have been connecting both a USB serial cable and an ethernet cable between the Swift board and my laptop.  My laptop is an ultra-slim model and uses an ethernet->USB adapter cable to avoid the need for a high profile ethernet connector.  So, with two receivers and my wireless mouse, I end up having more USB cables than USB ports on my computer and had to plug some into a USB hub that was then plugged into my laptop.

The first change in SNR occurred when I unplugged the ethernet cable from the laptop and plugged it into the USB hub.  This didn’t affect the L1 measurements much but caused the Swift L2 SNR to drop another 10 dB!  Wrong direction, but at least I had a clue here.

By moving all of the data streams between Swift receiver and laptop (base data to Swift, raw data to laptop, internal solution to laptop) over to the ethernet connection I was able to eliminate one USB serial port cable.  This was enough to eliminate the USB hub entirely and plug both the USB serial cable from the u-blox receiver and the ethernet->USB cable from the Swift receiver directly into the laptop.  I also plugged the two cables into opposite sides of the laptop and wrapped the ethernet->USB adapter with aluminum foil which may have improved things slightly more.

Here is the same plot as above after the changes to the cabling from a drive around the neighborhood.

snr2

I wasn’t able to eliminate the differences entirely, but the results are much closer now.  The biggest remaining difference between the base configuration and the rover configuration is that I am using a USB serial cable for the base and an ethernet->USB adapter cable for the rover, so I suspect that cable is still generating some interference and causing the remaining signal loss in the rover.  Unfortunately I cannot run all three streams I need for this experiment over the serial cable, so I am not able to get rid of the ethernet cable.

I did two driving tests with the new configuration, similar to the ones I described in the previous posts.   One was through the city of Boulder and again included going underneath underpasses and a parking garage.  The second run was through the older and more challenging residential neighborhood.  Both runs looked pretty good, a little better than the previous runs but it is not really fair to compare run to run since the satellite geometry and atmospheric conditions will be different between runs.  The relative solutions between Swift and u-blox didn’t change much though, which is probably expected since the cable changes improved both rovers by fairly similar amounts.

Here’s a quick summary of the fix rates for the two runs.  The fix rates for the residential neighborhood look a little low relative to last time but in this run I only included the most difficult neighborhood so it was a more challenging run than last time.

Fix rates

                      City/highway   Residential
Swift internal RTK    93.6%          67.5%
Swift RTKLIB PPK      93.7%          87.9%
U-blox RTKLIB RTK     95.7%          92.8%
U-blox RTKLIB PPK     96.1%          91.1%

Here are the city/highway runs, real-time on the top and post-processed on the bottom, with Swift on the left and u-blox on the right.  For the most part all solutions had near 100% fix except when recovering from going underneath the overpasses and parking garage.

snr4

Here is the same sequence of solutions for the older residential neighborhood.  This was more challenging than the city driving because of the overhanging trees, and caused some amount of loss of fix in all solutions.

snr5

Here are the same images of the recovery after driving under an underpass and underneath a parking garage that I showed in the previous post.  Again, the relative differences between Swift and u-blox didn’t change much, although the Swift may have improved a little.

snr1

Overall, the improvements from better SNR were incremental rather than dramatic, but still important for maximizing the robustness of the solutions.  This exercise of comparing base SNR to rover SNR and tracking down any discrepancies could be a useful exercise for anyone trying to improve their RTK or PPK results.

Underpasses and urban canyons

[Update: 4/17/18:  Although I don’t think it changes the results of this experiment significantly, there was an issue with apparent interference from a USB hub and ethernet cable on the rover setup during this testing.  See the next post for more details. ]

In my last post I demonstrated fairly similar fix rates and accuracies between an M8T single-frequency four-constellation solution and a SwiftNav Piksi dual-frequency two-constellation solution.

One advantage often mentioned for dual frequency solutions for moving rovers is that their faster acquisition times should help when fix is lost due to a complete outage of satellite view caused by an underpass or other obstruction.  This makes sense since the dual frequency measurements should allow the ambiguities to be resolved again more quickly.

Since my last data set included several of these types of obstructions I thought it would be interesting to compare performance specifically for these cases.

To create the Google Earth images below I used the RTKLIB application POS2KML to translate the solution files to KML format files and then opened them with Google Earth.
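For anyone scripting this, the command-line version of POS2KML can be run as something like “pos2kml solution.pos”, which writes a KML file with the same base name next to the input; check the RTKLIB manual for the exact options, as I am quoting this from memory.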

Here are the raw observations for the first underpass I went under, Swift rover on the left, M8T rover on the right.  In this case there was an overhead sign just before the underpass which caused a momentary outage on all satellites followed by about a two second outage from the underpass, followed by a period of half cycle ambiguity as the receivers re-locked to the carrier phases.

upass2

Here’s the internal Swift solution for the sign/underpass combo above at the top of the photo and a second underpass at the bottom of the photo.  For the first underpass, the solution is lost at the sign, achieves a float solution (yellow) after about 9 seconds, then re-fixes (green) after 35 seconds.

upass5

Here’s the RTKLIB post-processed solution (forward only) for the Swift receivers with fix-and-hold low tracking gain enabled as described in my previous post.  It looks like a small improvement for both underpasses.  The solution loses fix at the sign but in this case maintains a float solution until the underpass.

upass6

Here’s the RTKLIB post-processed solution (same config) for the M8T receivers.  Notice the no-solution gaps after the underpasses are shorter.  In this case, for the upper underpass, a solid fix was re-achieved after about 21 sec.

upass7

Here’s a zoom in of the M8T solution (yellow dots) for the lower underpass.  If the position were being used for lane management it looks like the float solution would probably be accurate enough for this.  The other yellow line with no dots is the gap in the Swift solution.

upass8

Here’s a little further down the road.  At this point the Swift solution achieves a float position at about the same time the M8T solution switches to fix.  Lane management would clearly be more difficult with the initial Swift float solution.

upass9

Next, I’ll show a few images from another underpass.  In this case I drove under the underpass from the left, turned around, then drove under the underpass again from the right.  The Swift internal solution is on the left, the Swift RTKLIB solution in the middle, and the M8T RTKLIB solution on the right.  Notice that the time to re-acquire a fix is fairly similar in all three cases.

upass1

Here is a zoom-in of the two Swift solutions; they are quite similar.

upass3

Here is a zoom-in of the M8T RTKLIB solution.  Again, the float solution is achieved very quickly, and appears to be accurate enough for lane management.

upass4

My last test case was a combination urban canyon and parking structure.  In the photo below, I drove off the main street to the back of the parking garage, underneath the pedestrian walkway, into the back corner, then underneath the back end of the garage and then back to the main street.  I would consider this a quite challenging case for any receiver.

ucanyon1

Here are the raw observations.

ucanyon0

Here are the three solutions, again the Swift internal is on the left, the Swift RTKLIB in the center, and the M8T RTKLIB on the right.

ucanyon1

 

Here is an image of the Swift internal solution.

ucanyon4

Here is an image of the Swift RTKLIB solution.

ucanyon3

And here is an image of the M8T RTKLIB solution.

ucanyon2

In this case, the M8T RTKLIB solution appears to be the best.

So, this experiment seems to show that a dual frequency solution will not always handle satellite outages better than a single frequency solution.  In this case, the extra Galileo and SBAS satellites in the M8T solution seem to have helped a fair bit, and the M8T solution is, at least to me, surprisingly good.

If anyone is interested in analyzing this data further, I have uploaded the raw data, real-time solutions, and config files for the post-processed solutions to the sample data sets on my website, available here.  I should mention that there is an unexplained outage in the Swift base station data near the end of the data set.  This could have been caused by many things, most of them unrelated to the Swift receiver, so all the analysis in both this post and the previous post were done only for the data before the outage.


Using RTKLIB to process ComNav receiver data

Previously I collected some data with a pair of ComNav K708 receivers and compared the internal real-time ComNav position solution with a real-time RTKLIB solution from a pair of u-blox M8T receivers.  The results showed, at least for the moving rover case, that the two were quite similar.  In this post I will post-process the ComNav raw data with RTKLIB and compare the results both to the internal ComNav solution and to the M8T post-processed RTKLIB solution.  The ComNav receiver configured to sell for sub-$1000 is set up to receive only GPS and GLONASS L1 and L2 frequencies, so this is how I ran this test.  The M8T will also receive Galileo and SBAS satellites, so I included these in the M8T solutions.

RTKLIB is not able to directly process raw binary ComNav data, but I had configured the receivers to output the raw data in RTCM3 and then converted it to RINEX using RTKCONV, so this was not a problem.

Before we can get a solution however, we need to deal with the issue that the ComNav GPS L2 observations are a mix of L2 and L2C as seen in the RINEX observation data below.

comnav4

RTKLIB handles multiple code types for the same frequency like this by choosing the code with higher priority as set in the code priority table in rtkcmn.c.  Unfortunately this choice is made for the full data set, not each individual observation, so the lower priority observations are thrown out even for epochs when the higher priority code observations are not present.  There is an option in the ComNav configuration to force all observations to L2 and this would have been one solution to the problem.  However, in theory, the L2C code should be more robust than the L2 code, so using L2C when we have the option as the ComNav default setup does is probably the right choice.

I was not able to find any simple fix for the RTKLIB code or configuration that would allow it to process both observation types so I simply edited the list of observation types in the RINEX file headers and changed “C2X L2X S2X” in the list to “C2W L2W S2W”.  With this change, all the observations are now described as L2 and none will be thrown out.  It would be a problem if we were mixing L2 observations from the rover with L2C observations from the base, but since both will be consistently L2 or L2C, the differences should cancel, and this should not cause any significant errors in the solution.
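The edit itself is trivial to script.  Here is a sketch that rewrites just the “SYS / # / OBS TYPES” header records (file names are placeholders):

# Sketch of the header edit described above: relabel the L2C observation codes
# (C2X/L2X/S2X) as L2 codes (C2W/L2W/S2W) in the RINEX 3 header so RTKLIB keeps
# every epoch.  Only the "SYS / # / OBS TYPES" header records are touched.
with open('rover.obs') as src, open('rover_l2w.obs', 'w') as dst:
    for line in src:
        if 'SYS / # / OBS TYPES' in line:
            for old, new in (('C2X', 'C2W'), ('L2X', 'L2W'), ('S2X', 'S2W')):
                line = line.replace(old, new)
        dst.write(line)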

I looked at three data sets, all from a moving rover with a local base station.  The first data set is configured as described in my earlier post, with both the M8T and the ComNav rover receivers sharing the same antenna.  In the second two data sets, the rovers are connected to separate antennas.  I ran with continuous ambiguity resolution and GLONASS ambiguity resolution enabled for all solutions.  The only change I made from the default ComNav config settings was to change the “rtkquality” setting from “quick” to “normal”, since I had trouble with false fixes when it was set to “quick”.

First of all, let’s compare the ComNav internal solution to the ComNav RTKLIB solution.  Here’s the internal solution on the left, and the RTKLIB solution on the right.  In this case RTKLIB had a slightly higher fix rate, 76.8% vs 68.8% for the internal solution.

comnav8

What I found more surprising was that the internal solution did not stay fixed for the circles I drove in the parking lot from 23:37 to 23:40.  I rarely have trouble maintaining 100% fix for this part of the route, since there are very few trees here and the sky views are nearly unobstructed.

Here’s a comparison of the difference between the two solutions.

comnav9

I expect the difference to be near zero for all fixed sections, and for the most part this is true.  There is an exception, though, between 23:32 and 23:33.  I don’t have an absolute reference to compare to, so I can’t be absolutely certain which solution is correct and which is in error.  However, I can compare to the M8T solution, and in this case the ComNav RTKLIB solution nearly exactly matches the M8T solution, so I am fairly certain that it is the internal ComNav solution that is wrong.  This error is fairly large and is most likely caused by a false fix.  Also, in general, the error in the U-D axis is fairly large in both cases relative to what I am used to, but is worse in the ComNav internal solution.  Below is the difference between the RTKLIB u-blox and RTKLIB ComNav solutions on the left, and the difference between the RTKLIB u-blox and internal ComNav solutions on the right, both for the U-D axis only.

comnav10

Here is the M8T RTKLIB solution for comparison.  Fix rate was 81.8%, just slightly higher than the ComNav RTKLIB solution.

comnav12

One advantage of post-processing the data over a real-time solution is that solutions can be run in combined mode, where the result is a combination of running the solution through the Kalman filter forwards and backwards.  Here are the combined mode results, ComNav on the left and M8T on the right.

comnav11

In this case both solutions were near 100% fix.  ComNav does sell post-processing software that would probably let you do the same thing, but I do not have a copy of it and don’t know how much it costs.
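For anyone running the post-processed solutions from a config file rather than through the RTKPOST options GUI, combined mode is selected with the solution type setting, which RTKPOST exposes as the “Filter Type” option:

pos1-soltype = combined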

The results for the second two data sets with separate antennas showed similar differences to this one so I’ll include the plots here but won’t go into any detailed analysis.  In both sets of plots, the left is the ComNav internal solution, the middle is the ComNav RTKLIB solution, and the right is the M8T RTKLIB solution.

comnav13

comnav14

The biggest difference in these two data sets is that the M8T results were slightly worse relative to the ComNav results than in the above example but this is most likely because the antennas were separate and I used a low-cost single feed L1 antenna instead of the higher performance dual feed L1 antenna I prefer to use in these comparisons.  There is a good description here of why the dual-feed antenna should give better results.

It is always dangerous to conclude too much from a single experiment, but this data does support what I have found in my other single frequency to dual frequency comparisons.  For short baselines with a pair of matched receivers and moving rovers, an L1+L2 GPS/GLONASS solution tends to be similar in performance to an L1-only GPS/GLONASS/SBAS/Galileo solution.  I expect this would be less true for longer baselines and stationary receivers.

New 1.2 Swift firmware with a moving rover

In the last couple of posts I compared the u-blox M8T receiver and the Swift Piksi Multi receiver for a stationary rover and an external base station using the latest 1.2 firmware from Swift.  I did this for both RTK and PPP solutions.  In this post I will look at a moving rover case with a pair of Piksi receivers and a pair of M8T receivers.

The new Swift firmware supports raw observations and float solutions for the GLONASS satellites, but does not yet support ambiguity resolution for GLONASS.  In the previous experiments the lack of GLONASS ambiguity resolution did not affect the comparison, since an external (non-matching receiver) base was used.  This meant that the RTKLIB solutions were not able to use GLONASS ambiguity resolution either.

In this experiment I did use matching receivers for the base data, so the RTKLIB solutions do include GLONASS ambiguity resolution, which should give them an advantage over the Swift internal solutions.  As I did last time, I will compare the real-time Swift RTK solution with post-processed RTKLIB solutions for the Piksi pair and the M8T pair.  In this case the M8T receivers are from CSG Shop and are running the 3.01 firmware, so they include Galileo observations in addition to GPS, GLONASS, and SBAS.

In my previous moving rover experiments I shared a base antenna for the two base receivers but used separate antennas for the two rover receivers.  In this case I decided to also share the rover antenna between the two rover receivers to enable some more direct comparisons.  The base antenna is the same Swift GPS-500 antenna mounted on my roof that I used for the rover in the previous experiment.  The rovers are using a Tallysman TW7872 antenna mounted on top of a car.  In both cases I use a capacitively coupled splitter to isolate the two receivers and allow only one DC supply voltage to reach the antenna.  As usual, the data was collected while driving around my local neighborhood.

Whenever connecting two receivers to one antenna, or even locating two antennas close to each other, there is always the concern that they may interfere with each other.  So, as a baseline, I first did a run with separate antennas for the rovers.  I used my inexpensive u-blox antenna for the M8T since it has a very long cable, allowing me to separate both the receivers and the antennas by over a meter.  Here are the position solutions for the baseline test.  The Swift internal solution is on the left, the RTKLIB post-processed solution in the middle, and the M8T post-processed solution on the right.  As always, green is fixed points and yellow is float.

swift_newfw10

The post-processed solutions were both run forward-only and with continuous ambiguity resolution including GLONASS.  The percent fixes for the three solutions were 69% for the Swift internal solution, 33% for the Swift RTKLIB solution, and 75% for the M8T RTKLIB solution.  I will use percent fix as a very coarse measure of goodness for now; later I will look at the solutions in more detail.  In this case I consider the Swift internal solution and the M8T RTKLIB solution roughly equal.  Both achieved 100% fix for a number of circles in a parking lot with open skies (22:30 to 22:35), and both struggled with partial fixes on the rest of the route, which had more obstructed sky views driving along residential streets.  The Swift RTKLIB solution was noticeably worse and had 0% fix in the parking lot, and a lower percent fix everywhere else.

I then re-ran a similar route a short time later, this time with the M8T rover sharing the dual frequency antenna with the Swift rover.  This produced the following results, in the same order as above.  The percent fixes were 53% for the Swift internal solution, 79% for the Swift RTKLIB solution, and 62% for the M8T RTKLIB solution.

swift_newfw9

Again I would consider the Swift internal and the M8T RTKLIB solutions roughly equal.  A different parking lot with open skies (1:43 to 1:48) gave no trouble to either solution, but both had only partial fix for the rest of the route.  In this case the Swift RTKLIB solution was noticeably better than the other two solutions, which is interesting since last time it was noticeably worse.  This has been my experience in general with the Swift RTKLIB solutions: they tend to be quite inconsistent.  With the previous firmware I traced some of the reason for this to unreported cycle slips.  I suspect the same is probably going on here but haven’t looked closely enough to fully verify that.  As for any interference occurring between the antennas, if it exists, it is too small to detect with this simple comparison.

Here are the raw observations for the two rovers for the second run, M8T on the left, Swift on the right.  Green lines are dual frequency, yellow are single frequency and red ticks are cycle slips.  There are 19 measurements available for the M8T solution and 22 for the Swift solution.  All are used for ambiguity resolution in the RTKLIB solutions but only the 11 GPS measurements are used for ambiguity resolution in the internal Swift solution.  The relatively cycle-slip-free region from 1:43 to 1:48 is from the parking lot where the sky views are open.

swift_newfw11

Since the exact same signal was fed into both receivers the observations can be compared more directly than in previous experiments.  Here is a zoom into the time in the parking lot and shortly after.   If you look carefully, you will notice that there are definitely more cycle slips reported in the M8T observations than in the Swift data.

swift_newfw7

We can verify if those cycle slips are real or not by plotting the double differences of the bias states from the RTKLIB statistics output file.  I selected two satellites that had more cycle slips in the M8T data, G03 and R04, and plotted their double differences relative to the most cycle-free satellite in their constellation.  In the plots below, the red lines are the double differences from the M8T bias states and the blue lines are the double differences from the Swift bias states.  The red circles are cycle slips reported in the observations and the discrete jumps are actual cycle slips.  The green x’s are half cycle invalid flags and not relevant to this particular example.

swift_newfw8

 

Both plots show good correlation between the reported cycle slips and the true cycle slips, as well as confirming that there are more slips in the M8T observations than in the Swift observations.  So, at least in this example, the quality of the Swift observations is noticeably better than the M8T observations, even though both came from the same antenna.  Of course this might be expected, since the Swift receiver, although low cost by dual frequency standards, is still significantly more expensive than the M8T receiver.

The solutions, however, don’t reflect this difference in observation quality.  The Swift internal solution appears to be no better, and possibly even a little worse, than the RTKLIB M8T solution.  This is probably because the M8T solutions include GLONASS ambiguity resolution and the Swift solutions do not.  Once Swift adds this feature to their firmware I would expect the Swift internal solutions to be better than the M8T solutions.

The variability of the results from the Swift RTKLIB solutions is harder to explain, and I don’t have any good answers for this yet.  I still do see cases of unreported cycle slips causing problems even with the new firmware and suspect this is a part of it.  I hope to investigate this further.

Another thing I wanted to look at is how good are the float solutions.  In Swift’s description of the new firmware they state that:

“Swift also suggests that users utilize the estimated accuracy fields
in navigation outputs for an indication of solution quality rather than using the
transition to RTK “fixed” mode as an indicator of solution quality, as the new
and improved float solution performance can often fulfill precision navigation
requirements.”

So, let’s take a look at the float portion of the internal Swift solution.  I loaded the .csv output file from the Swift internal solution into MATLAB, then plotted histograms of the horizontal and vertical accuracies for the fix and float solution points.  Here’s what it looks like:

swift_newfw_12

Notice that the float accuracies are significantly larger than the fix accuracies, but they are probably realistic, unlike the RTKLIB float accuracies, which I demonstrated to be at least a factor of two overly optimistic in a recent post.
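For anyone who wants to reproduce these histograms without MATLAB, here is a rough Python/pandas equivalent.  The CSV column names and flag values are placeholders that I have not checked against the Swift console output format:

# Rough sketch of the accuracy histograms using pandas/matplotlib instead of
# MATLAB.  Column names ('h_accuracy', 'v_accuracy', 'flags') and flag values
# are placeholders; check them against your Swift console CSV.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('swift_solution.csv')
fixed = df[df['flags'] == 4]     # assumed flag value for RTK fix
floated = df[df['flags'] == 3]   # assumed flag value for RTK float

fig, axes = plt.subplots(2, 2, sharex='col')
axes[0, 0].hist(fixed['h_accuracy'], bins=50)
axes[0, 1].hist(fixed['v_accuracy'], bins=50)
axes[1, 0].hist(floated['h_accuracy'], bins=50)
axes[1, 1].hist(floated['v_accuracy'], bins=50)
axes[0, 0].set_ylabel('fix')
axes[1, 0].set_ylabel('float')
axes[1, 0].set_xlabel('horizontal accuracy (m)')
axes[1, 1].set_xlabel('vertical accuracy (m)')
plt.show()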

Here is an example of the solution points from the above data set in which the RTKLIB solution re-converges more quickly than the Swift internal solution.  This allows us to evaluate the accuracy of the Swift float values.  The top plot is one component of the horizontal position, and the bottom plot is the vertical position.  Yellow is the Swift float solution, olive green is the RTKLIB float solution, and blue is the RTKLIB fix solution.  From the beginning of the data to 22:47:14 both solutions are float.  The RTKLIB solution converges to a fix at 22:47:14 and the Swift internal solution converges to a fix at 22:47:37.  Since both solutions eventually converge to the same value, I will assume the RTKLIB fix solution is correct.

swift_newfw_13

The errors in the float solutions appear to be similar between RTKLIB and Swift, and consistent with the Swift estimates of accuracy.  So the Swift float solution does not appear to be any more accurate than the RTKLIB float solution, but the Swift estimates of the accuracy do seem to be more realistic.  Three quarters of a meter of vertical accuracy is better than a single point solution, but I suspect it is too large to be useful for many applications.  Still, it is a noticeable improvement over the very noisy float values I saw in the internal Swift solution in my previous comparison.

So what does all this mean?  I would summarize by saying that the results for a pair of Swift receivers with the new firmware are noticeably better than the results in my previous comparison and so are definitely a step in the right direction.  At least in my particular example, though, it still only puts them roughly on par with the results from a pair of M8T receivers.  They certainly seem to have the potential to be better than the M8T’s and hopefully with further improvements from Swift and maybe improvements in RTKLIB as well, we will see this soon.

I did not include a single M8T or Swift receiver paired with an external CORS or similar base station in this moving rover comparison, but I would expect the Swift receiver to outperform the M8T receiver in that case because of a better overlap in satellite pairs.  Also keep in mind that the Swift receiver does have some definite advantages in static and long baseline experiments, as I showed in my previous two posts, especially the ability to get accurate locations for the base station using PPP solutions.

As always I want to emphasize that these are only one user’s results in one particular configuration, and other users’ experiences in other environments could be quite different.  I have uploaded the raw data and RTKLIB config files to the sample data section of the rtkexplorer.com website if anybody would like to explore further.

I am just getting my newly borrowed NavCom receivers up and running and have upgraded my Tersus receiver with their new firmware so I hope to have results for these other low-cost dual frequency receivers soon.