Glonass Ambiguity Resolution with RTKLIB Revisited

To get a high precision fixed solution in RTKLIB we need to resolve the integer ambiguities that come from the carrier phase measurements.  Resolving the integer ambiguities for the GLONASS satellites is more challenging than resolving them for the other constellations.  This is because, unlike the other constellations, the GLONASS satellites all transmit on slightly different frequencies.  This introduces an additional bias error in the receiver hardware.

These hardware biases are constant, are generally the same for all receivers from the same manufacturer, are proportional to carrier frequency, and are similar between L1 and L2.

In the demo5 version of RTKLIB, there are four choices for how to handle GLONASS ambiguity resolution (AR). I will cover all four briefly, but then focus on the “autocal” setting which I have enhanced in the most recent version (b29c) of the demo5 code.

Off:  If Glonass  AR is set to “Off”, then the raw measurements from the Glonass satellites will be used for the float solution but ambiguity resolution will be done only with satellites from the other constellations.  If you are not using the demo5 version of RTKLIB, this is usually your only choice when using receivers from different manufacturers for the rover and the base.  However, you are giving up a significant amount of information by ignoring the GLONASS ambiguities and so I would not recommend this setting if you are using the demo5 code, unless of course your receivers don’t support the Glonass satellites.

On:  If Glonass AR is set to “On” then RTKLIB will treat the Glonass ambiguities the same as the ambiguities from the other constellations and will not make any attempt to account for the additional hardware biases.  Use this setting if your base and rover receivers are from the same manufacturer, since in this case, the biases will cancel and can be ignored.  There are also some cases in which different manufacturers have equal or nearly equal biases as we will see later, in which case you can also use this setting.  This is your best solution for dealing with Glonass ambiguities.  I always try to use matched receivers for base and rover if possible.

Fix-and-Hold: This is an option I have added to the demo5 code for Glonass AR.  It is an extension to the “fix-and-hold” method used for other constellations but instead of using the additional feedback to track the ambiguities, it uses it to null out the hardware biases.  I recommend this setting when using the demo5 code with unmatched receivers.  It takes advantage of the additional information in the Glonass ambiguities most of the time.  However, fix-and-hold is not enabled until after a first fix has been achieved, and so the Glonass ambiguities are not available until then.  This can mean longer time to first fix and less robustness compared to the “On” option, so don’t use this option for matched receivers.

Auto-cal:  This option adds additional states to the Kalman filter to estimate the receiver hardware biases as a function of carrier frequency, one state for L1, another for L2.  In previous experiments I had not had any success with it.  Recently, however, I discovered that if I adjusted the filter settings, it can be effective for a zero baseline case, where base and rover are both connected to the same antenna so that almost all other errors are completely cancelled.  With a little more experimentation I also found that it can be effective for short baselines if the Kalman filter state is pre-set to something close to the final value before the solution is started.  It will then usually converge to the correct bias value.  However, there is currently no mechanism in the code to adjust any of these values, so I have not found this mode to be useful in its current implementation.

To make the auto-cal option more flexible, and hopefully more useful, I made a few modifications to it in the b29c code.  I added the capability to pre-set the initial state value and also to adjust the internal filter settings, specifically the initial variance and process noise for this state.  The units for the state, and hence for the initial value are in meters per frequency channel and values generally are within +/-5 cm per channel.  I used some existing config parameters that are currently unused to reduce the amount of code I needed to change.  Unfortunately it means that the names are not as descriptive as they could be.  The new config parameters are:

pos2-arthres2 = relative GLONASS hardware bias in meters per frequency slot
pos2-arthres3 = initial variance of GLONASS hardware bias states
pos2-arthres4 = process noise for GLONASS hardware bias states
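For example, in a config file these might be set as shown below.  The numerical values are just the ones I end up using later in this post for one particular receiver pair, and the comment text is mine, not part of the config format:

    pos2-gloarmode     =autocal      # GLONASS AR mode
    pos2-arthres2      =-0.055       # relative HW bias, meters per frequency slot (rover minus base)
    pos2-arthres3      =1E-7         # initial variance of the bias states
    pos2-arthres4      =1E-3         # process noise for the bias states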

Bias values have been published for some of the most popular geodetic quality receivers but are generally not available for lower-cost or less popular receivers.  Here is a table of values from a paper published by Lambert Wanninger in 2011 for nine receiver manufacturers.

biases

I was able to verify these results for Trimble, Leica, and Novatel, but I found a much lower value for Septentrio so I suspect the biases may have changed in their newer receivers.

To demonstrate the modified autocal option, I will start with a zero baseline case between a ComNav receiver and a Tersus receiver.  It is easiest to measure the hardware biases in the zero baseline case because most other errors will cancel and the hardware biases will be the dominant error.  In this case, I have significantly reduced the initial variance setting from the original value of 1.0 to 1E-7 and increased the process noise from 1E-6 to 1E-3.

I have run the solution several times with the initial bias value set to different numbers between -0.05 and 0.06.  Here are the results for both L1 and L2.

biases1

The convergence occurs just after first fix is achieved.  If a fix is not achieved, then the state will not converge as you can see above for the 0.06 example.  In this case, the initial value was too far from the correct value and prevented getting a fix.  As you can see, all the other cases converged towards a single value around -0.022, both for L1 and for L2.

Another way to visualize the error in the initial value is to look at the GLONASS residuals after first fix is achieved.  The plot below shows the GLONASS L1 carrier phase residuals for different initial values: 0.03 on the left, -0.05 in the middle, and what I believe is the correct value for this receiver combination, -0.022, on the right.

 

acal1

Here are the same plots for the L2 carrier phase residuals.

acal2

Through a slightly tedious process, I am fairly easily able to iterate the residuals down to near zero for different pairs of receivers in my possession.  Note that this gives me the relative difference in biases for each receiver pair, and not absolute values for each receiver, unlike Wanninger’s table which is for absolute biases.
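If you want to script that iteration rather than re-running the GUI by hand, the sweep can be driven through the rnx2rtkp command-line app.  Here is a rough sketch in Python; the file names are placeholders and the base config file is assumed to already contain a pos2-arthres2 line:

    import re, subprocess

    base_cfg = open("ppk.conf").read()    # config with pos2-gloarmode =autocal, etc.
    for bias in [-0.05, -0.03, -0.01, 0.0, 0.01, 0.03, 0.05, 0.06]:
        # swap in the trial initial bias value
        cfg = re.sub(r"(?m)^pos2-arthres2.*$", "pos2-arthres2 =%g" % bias, base_cfg)
        with open("sweep.conf", "w") as f:
            f.write(cfg)
        subprocess.run(["rnx2rtkp", "-k", "sweep.conf", "-o", "bias_%+.2f.pos" % bias,
                        "rover.obs", "base.obs", "base.nav"], check=True)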

Extending the table to receivers used in nearby CORS stations is a little more challenging because the initial bias value needs to be fairly close to get a first fix and hence a convergence, but still possible if the base station is not too distant.   I found data sets that included CORS data from Leica, Novatel, Trimble, and Septentrio receivers.  Using the above procedure to iterate the residuals down to near zero, I was then able to extend my table and make the values absolute by choosing the unknown offset to make my bias pairs align with Wanninger’s table.  This is the resulting table I created.

ComNav     =  2.3 cm
Leica      =  2.3 cm
Novatel    =  2.3 cm
Septentrio = -0.3 cm
SwiftNav   = -0.2 cm
Tersus     = -0.1 cm
Trimble    = -0.7 cm
u-blox     = -3.2 cm

To generate an initial value for the bias state from this table for an RTKLIB solution, subtract the base station bias from the rover bias, then divide by 100 to convert from centimeters to meters.  This value can then be used to set the “pos2-arthres2” config parameter in the config file.  For the RTKPOST and RTKNAVI GUI option menus I have labeled this “Glo HW Bias”.
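To make the arithmetic explicit, here is a small Python helper that encodes the table above; the dictionary values are the absolute biases in centimeters from the table, and the function returns the value to use for pos2-arthres2:

    # absolute biases from the table above, in cm
    biases_cm = {"ComNav": 2.3, "Leica": 2.3, "Novatel": 2.3, "Septentrio": -0.3,
                 "SwiftNav": -0.2, "Tersus": -0.1, "Trimble": -0.7, "u-blox": -3.2}

    def glo_hw_bias(rover, base):
        """Relative GLONASS HW bias (rover minus base), converted from cm to meters."""
        return (biases_cm[rover] - biases_cm[base]) / 100.0

    print(round(glo_hw_bias("u-blox", "Leica"), 3))   # -0.055, the value used in the example below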

To test this code on an independent set of data after generating the table, I used a data set recently sent to me by a reader.  It consists of a u-blox M8T receiver for the rover and a Leica receiver just a few kilometers away for the base, and was collected in Europe.  The rover position was static but I ran the solution in kinematic mode to make the solution a little more challenging and to make any errors in the solution more visible.

To generate the correct config value for RTKLIB I subtracted the Leica bias of 2.3 cm from the above table from the u-blox bias of -3.2 cm to get a relative bias between receivers of -5.5 cm, or -0.055 m.  I added “pos2-arthres2=-0.055” to the config file and then ran the solution four times, with pos2-gloarmode set to “off”, “fix-and-hold”, “autocal”, and “on”.  Although I left the bias value set for all runs, it is ignored unless gloarmode is set to autocal.

Here are the times to first fix, the number of satellite pairs used for the initial fix, and the number of satellite pairs being used for fix after 10 minutes.

GLO AR mode    Time to first fix    # sat pairs used for initial fix    # sat pairs used for fix after 10 min
OFF            4:10                 7                                   7
Fix&Hold       4:10                 7                                   11
Autocal        1:05                 14                                  14
On             6:47                 14                                  14

As you would expect, the time to first fix for gloarmode=“off” was the same as for “fix-and-hold”, since “fix-and-hold” does not use the GLONASS satellites for the initial fix.  After 10 minutes it was still only including four GLONASS satellite pairs in the ambiguity resolution, which was a little unusual; typically I would have expected more GLONASS satellites to be included.

With gloarmode=“autocal”, the time to first fix was reduced from 250 seconds to 65 seconds and the number of satellites included in the first fix increased from 7 to 14, both significant improvements.

The most surprising thing in this data is that when gloarmode was set to “on” it acquired a fix at all.  In many similar cases it will never get a fix.  The GLONASS carrier phase residuals after initial fix were very high though as can be seen below.  The left plot is with gloarmode set to “on”, and the right plot is with it set to “autocal”.

biases3

The ambiguity resolution ratio was also much higher when autocal was enabled as can be seen below (yellow/green=autocal, olive/blue=on) which improves robustness.

biases2

The large residuals did not affect the solution position, as the two solutions did not differ by more than 2 mm at any time.  The autocal solution however is much more robust in the sense that it is less likely to lose fix.

Although I have found the results with autocal enabled are generally excellent with relatively short baselines (<10 km), I have found the results less encouraging for longer baselines (>25 km).  In these cases I often get better results with pos2-gloarmode set to “fix-and-hold” than I do with “autocal”.  I don’t understand exactly why this is, but suspect that the fix-and-hold correction is more general and may be correcting for more than just the GLONASS hardware biases.

The code changes for this feature are included both in my Github repository and in the newest (demo5 b29c) executables available to download from the rtkexplorer website.   If you choose to experiment with this feature, please let me know if you find any errors in my table, or can add values for any additional receivers.

[Note 6/17/18:  I had an issue with uploading the executables to the website.  If you downloaded them prior to 6/17/18, please download again to get the updated version.]


Using SSR corrections with RTKLIB for PPP solutions

If you have been following recent announcements in precision GNSS, you may have been hearing a lot about SSR (State Space Representation).  SwiftNav recently announced their Skylark corrections service, and u-blox announced a partnership with Sapcorda to provide correction service for their upcoming F9 receivers.  Both of these services are based on SSR corrections.

So, what is SSR?  Very briefly, it refers to the form of the corrections.  In traditional RTK with physical base stations or virtual reference stations (VRS), the corrections come in the form of observations in which all of the different error sources are lumped together as part of the observation.  This is referred to as OSR (Observation Space Representation).  In SSR corrections, the different error sources (satellite clocks, satellite orbits, satellite signal biases, ionospheric delay, and tropospheric delay) are modeled and distributed separately.  There are many advantages to this form, but what seems to be driving industry towards it now is that it allows the current VRS model, where each user requires a unique data stream with observations tailored for their location, to be replaced with a single universal stream that can be used by all users.  This is a requirement if the technology is going to be adopted for the mass-market automotive industry for self-driving cars, since it is not practical to provide every car with its own data stream.

For more detailed information on SSR, Geo++ has a one page summary here and IGS has an 18 minute video presentation on the topic here.  Both are excellent.

Below is an image I borrowed from the IGS presentation which shows the flexibility of the SSR format.  It is intended to show how the same SSR data stream can be used globally for PPP quality corrections and also regionally for RTK quality corrections but it is also a good visual for understanding the message details I describe below.

ssr1

The RTCM standards committee is still in the process of finalizing the messages used to send the different correction components.  They have split the work into three phases.  Phase 1 includes the satellite clock, orbit, and code biases.  Phase 2 includes satellite phase biases and vertical ionosphere corrections, and Phase 3 includes ionospheric slant corrections and tropospheric corrections.

There are several real-time SSR streams accessible for free today.  Unlike the paid services, they do not contain enough detailed regional atmospheric corrections to use as a replacement for a VRS base but they can easily be used for static PPP solutions.

I used the CLK93 stream from CNES for an experiment to test how well RTKLIB handled the SSR corrections.  It includes the Phase 1 and Phase 2 RTCM messages but not the Phase 3 messages.  Here is the format of the messages in the CLK93 data stream:

clk93

You can register for free access to the CLK93 (and other) streams from any of these locations:

Unfortunately, RTKLIB currently only supports the Phase 1 RTCM messages and even this is not complete in the release version.  I have gone through the code and made a few changes to make the Phase 1 SSR functional and have checked those changes into the demo5 Github repository.  I also added some code to handle the mixed L2 and L2C observations from the ComNav and Tersus receivers.  After a little more testing I plan to release this code into the demo5 executables, hopefully in the next week or two.

With only the Phase 1 corrections, the RTKLIB PPP solutions will work much better with dual frequency receivers than with single frequency receivers.  This is because single frequency receivers rely on external ionospheric corrections, while dual frequency receivers can largely cancel the ionospheric errors themselves.  For this reason, I did not bother with collecting any single frequency data.  Instead, I collected both L1/L2C data with a Swiftnav Piksi Multi receiver and L1/L2/L2C data with a ComNav K708 receiver and a Tersus BX306 receiver.

RTKLIB is usually used to calculate PPP solutions without SSR corrections but this requires downloading multiple correction files for orbital errors, clock errors, and code bias errors and it is usually done with post-processing rather than real-time, after the corrections are available.  With SSR, the process is simpler because the solution can be done real-time and there is no need to download any additional files.  It does, however, require access to the internet to receive the real-time SSR data stream from an NTRIP caster.  The solution can be calculated real-time or the SSR corrections and receiver observation streams can be recorded and the solution post-processed.

To enable the use of SSR corrections in RTKLIB, you need to set the “Satellite Ephemeris/Clock” (pos1-sateph) input parameter to either “Broadcast+SSR APC” or “Broadcast+SSR CoM”.  Note that CoM stands for Center of Mass and APC for Antenna Phase Center.  They refer to the reference point for the corrections.  The CLK93 corrections are based on antenna phase centers.

To generate my PPP solution I set the solution mode to “PPP-Static”, ephemeris/clock (pos1-sateph) to “brdc+ssrapc”, ionosphere correction (pos1-ionoopt) to “dual-freq”, and troposphere correction (pos1-tropopt) to “est-ztd”.  I also enabled most of the other PPP options including earth tides, satellite PCVs, receiver PCVs, phase windup, and eclipse rejection.

RTKLIB PPP solutions don’t support ambiguity resolution so the ambiguity resolution settings are ignored.  I specified the satellite antenna file as “ngs14.atx” which is the standard antenna correction file and is available as part of the demo5 executable package.  I also needed to include the CLK93 data stream as one of the inputs in addition to the receiver observations (and navigation file if post-processing).
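For reference, here is roughly what the relevant part of my config file looked like.  The pos1-posopt* assignments for the individual PPP options follow the standard RTKLIB option list, but are worth double-checking against your version of the code:

    pos1-posmode       =ppp-static
    pos1-sateph        =brdc+ssrapc   # broadcast ephemeris plus SSR (APC) corrections
    pos1-ionoopt       =dual-freq
    pos1-tropopt       =est-ztd
    pos1-tidecorr      =on            # solid earth tides
    pos1-posopt1       =on            # satellite PCV
    pos1-posopt2       =on            # receiver PCV
    pos1-posopt3       =on            # phase windup
    pos1-posopt4       =on            # reject eclipsing satellites
    file-satantfile    =ngs14.atx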

I collected a couple hundred hours of observations with the SwiftNav receiver connected to a ComNav AT-330 antenna on my roof with moderately good sky visibility.  I then ran many four hour static solutions over randomly selected data windows.  I also collected a small amount of raw data from a ComNav K708 receiver and a Tersus BX306 receiver.

Below is a typical 12 hour static solution for a SwiftNav and a ComNav receiver.  The SwiftNav solution is in green and the ComNav solution is in purple.  Zero in these plots represents an online PPP solution from CSRS from data collected over a month earlier.  In this particular example, the SwiftNav solution is slightly better but this was not always the case.

 

ssr2

Here is a shorter data set from a Tersus BX306 receiver.  With the relatively small amount of Tersus and ComNav data I collected, I did not notice any differences in convergence rates or final accuracy between any of the three receivers.

ssr3

The solutions generally all converged to less than 6 cm of error in each axis after 4 hours with one or two minor exceptions that seemed to involve small anomalies at the day boundary.  Since the RTKLIB PPP solutions don’t include ambiguity resolution they do take longer to converge but the eventual accuracy should be similar.

I’ve uploaded some of the raw observation data for the different receivers and the CLK93 data stream as well as the config file that I used for the solution here.

This seems like a good start and I hope that RTKLIB will support phase 2 and phase 3 corrections in the future.

Swiftnav experiment: Improvements to the SNR

In my previous couple of posts, I evaluated the performance of a pair of dual frequency SwiftNav Piksi Multi receivers in a moving rover with local base scenario.  I used a pair of single frequency u-blox M8T receivers fed with the same antenna signals as a baseline reference.

It was pointed out to me that the signal to noise ratio (SNR) measurements of the rovers were noticeably lower than the bases, especially the L2 measurements and that this might be affecting the validity of the comparison.  This seemed to be a valid concern so I spent some time digging into this discrepancy and did indeed find some issues.  I will describe the issues as well as the process of tracking them down since I think it could be a useful exercise for any RTK/PPK user to potentially improve their signal quality.

Previously, in another post, I described a somewhat similar exercise tracking down some signal quality issues caused by EMI from the motor controllers on a drone.  In that case, though, the degradation was more severe and I was able to track it down by monitoring cycle slips.  In this case, the degradation is more subtle and does not directly show up in the cycle slips.

Every raw observation from the receiver generally includes a signal strength measurement as well as pseudorange and carrier phase measurements.  The SwiftNav and u-blox receivers both actually report carrier to noise density ratio (C/N0), rather than signal to noise ratio (SNR), but both are measures of signal strength.  They are labelled as SNR in the RTKLIB output, so to avoid confusion I will refer to them as SNR as well.  I will only be using them to compare relative values so the difference isn’t important for this exercise, but for anyone interested, there is a good explanation of the difference between them here.  Both are logarithmic values measured in dB or dB-Hz so 6 dB represents a factor of two in signal strength.

Since the base and rover have very similar configurations we would expect similar SNR numbers between the two, at least when the rover antenna is not obstructed by trees or other objects.  I selected an interval of a few minutes when the rover was on the open highway and plotted SNR by receiver and frequency for base and rover.  Here are the results, base on the left and rover on the right.  The Swift L1 is on the top, L2 in the middle, and the u-blox L1 on the bottom.  To avoid too much clutter on the plots, I show only the GLONASS SNR values, but the other constellations look similar.

snr1

Notice that the L1 SNR for both rovers is at least 6 dB (a factor of 2) lower than the base, and the Swift L2 SNR is more like 10 dB lower.  These are significant enough losses in the rover to possibly affect the quality of the measurement.

The next step was to try and isolate where the losses were coming from.  I set up the receiver configurations as before and used the “Obs Data” selection in the “RTK Monitor” window in RTKNAVI to monitor the SNR values in real time for both base and rover, as well as the C/N0 tracking window in the Swift console app.  I then started changing the configuration to see if the SNR values changed.  The base and rover antennas were similar but not identical so I first swapped out the rover antenna, but this did not make a difference.  I then moved the rover antenna off of the car roof and onto a nearby tripod to see if the large ground plane (car roof) was affecting the antenna, but this also did not make a difference.  I then removed the antenna splitter, but again no change.

Next, I started modifying the cable configuration between the receivers and my laptop.  To conveniently be able to both collect solution data and also collect and run a real-time solution on the raw Swift observations, I have been connecting both a USB serial cable and an ethernet cable between the Swift board and my laptop.  My laptop is an ultra-slim model and uses an ethernet->USB adapter cable to avoid the need for a high profile ethernet connector.  So, with two receivers and my wireless mouse, I end up having more USB cables than USB ports on my computer and had to plug some into a USB hub that was then plugged into my laptop.

The first change in SNR occurred when I unplugged the ethernet cable from the laptop and plugged it into the USB hub.  This didn’t affect the L1 measurements much but caused the Swift L2 SNR to drop another 10 dB!  Wrong direction, but at least I had a clue here.

By moving all of the data streams between Swift receiver and laptop (base data to Swift, raw data to laptop, internal solution to laptop) over to the ethernet connection I was able to eliminate one USB serial port cable.  This was enough to eliminate the USB hub entirely and plug both the USB serial cable from the u-blox receiver and the ethernet->USB cable from the Swift receiver directly into the laptop.  I also plugged the two cables into opposite sides of the laptop and wrapped the ethernet->USB adapter with aluminum foil which may have improved things slightly more.

Here is the same plot as above after the changes to the cabling from a drive around the neighborhood.

snr2

I wasn’t able to eliminate the differences entirely, but the results are much closer now.  The biggest difference now between the base configuration and the rover configuration is that I am using a USB serial cable for the base, and an ethernet->USB adapter cable for the rover, so I suspect that cable is still generating some interference and that is causing the remaining signal loss in the rover.  Unfortunately I cannot run all three streams I need for this experiment over the serial cable, so I am not able to get rid of the ethernet cable.

I did two driving tests with the new configuration, similar to the ones I described in the previous posts.   One was through the city of Boulder and again included going underneath underpasses and a parking garage.  The second run was through the older and more challenging residential neighborhood.  Both runs looked pretty good, a little better than the previous runs but it is not really fair to compare run to run since the satellite geometry and atmospheric conditions will be different between runs.  The relative solutions between Swift and u-blox didn’t change much though, which is probably expected since the cable changes improved both rovers by fairly similar amounts.

Here’s a quick summary of the fix rates for the two runs.  The fix rates for the residential neighborhood look a little low relative to last time but in this run I only included the most difficult neighborhood so it was a more challenging run than last time.

Fix rates

                       City/highway    Residential
Swift internal RTK     93.60%          67.50%
Swift RTKLIB PPK       93.70%          87.90%
U-blox RTKLIB RTK      95.70%          92.80%
U-blox RTKLIB PPK      96.10%          91.10%
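For anyone computing similar statistics, the fix rate can be pulled out of an RTKLIB .pos file with a few lines of Python.  This sketch assumes the default LLH output format, where the quality flag Q (1 = fix, 2 = float) is the sixth field on each solution line, and the file name is a placeholder:

    def fix_rate(pos_file):
        total = fixed = 0
        with open(pos_file) as f:
            for line in f:
                if line.startswith("%") or not line.strip():
                    continue                  # skip header and blank lines
                q = int(line.split()[5])      # 1=fix, 2=float, 5=single, ...
                total += 1
                fixed += (q == 1)
        return 100.0 * fixed / total if total else 0.0

    print("fix rate: %.1f%%" % fix_rate("rover.pos"))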

Here are the city/highway runs,  real-time on the top, post-process on the bottom with Swift on the left and u-blox on the right.  For the most part all solutions had near 100% fix except when recovering from going underneath the overpasses and parking garage.

snr4

Here are the same sequence of solutions for the older residential neighborhood.  This was more challenging than the city driving because of the overhanging trees and caused some amount of loss of fix in all solutions.

snr5

Here are the same images of the recovery after driving under an underpass and underneath a parking garage that I showed in the previous post.  Again, the relative differences between Swift and u-blox didn’t change much, although the Swift may have improved a little.

snr1

Overall, the improvements from better SNR were incremental rather than dramatic, but still important for maximizing the robustness of the solutions.  This exercise of comparing base SNR to rover SNR and tracking down any discrepancies could be a useful exercise for anyone trying to improve their RTK or PPK results.

Underpasses and urban canyons

[Update: 4/17/18:  Although I don’t think it changes the results of this experiment significantly, there was an issue with apparent interference from a USB hub and ethernet cable on the rover setup during this testing.  See the next post for more details. ]

In my last post I demonstrated fairly similar fix rates and accuracies between an M8T single-frequency  four-constellation solution and a SwiftNav Piksi dual-frequency two-constellation solution.

One advantage often mentioned for dual frequency solutions for moving rovers is that their faster acquisition times should help when fix is lost due to a complete outage of satellite view caused by an underpass or other obstruction.  This makes sense since the dual frequency measurements should allow the ambiguities to be resolved again more quickly.

Since my last data set included several of these types of obstructions I thought it would be interesting to compare performance specifically for these cases.

To create the Google Earth images below I used the RTKLIB application POS2KML to translate the solution files to KML format files and then opened them with Google Earth.
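If you prefer to script this step, the command-line build of the same app should do the conversion with something like the line below (the file name is a placeholder), writing a .kml file alongside the input:

    pos2kml solution.pos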

Here are the raw observations for the first underpass I went under, Swift rover on the left, M8T rover on the right.  In this case there was an overhead sign just before the underpass which caused a momentary outage on all satellites followed by about a two second outage from the underpass, followed by a period of half cycle ambiguity as the receivers re-locked to the carrier phases.

upass2

Here’s the internal Swift solution for the sign/underpass combo above at the top of the photo and a second underpass at the bottom of the photo.  For the first underpass, the solution is lost at the sign, achieves a float solution (yellow) after about 9 seconds, then re-fixes (green) after 35 seconds.

upass5

Here’s the RTKLIB post-processed solution (forward only) for the Swift receivers with fix-and-hold low tracking gain enabled as described in my previous post.  It looks like a small improvement for both underpasses.  The solution loses fix at the sign but in this case maintains a float solution until the underpass.

upass6

Here’s the RTKLIB post-processed solution (same config) for the M8T receivers.  Notice the no-solution gaps after the underpasses are shorter.  In this case, for the upper underpass, a solid fix was re-achieved after about 21 sec.

upass7

Here’s a zoom in of the M8T solution (yellow dots) for the lower underpass.  If the position were being used for lane management it looks like the float solution would probably be accurate enough for this.  The other yellow line with no dots is the gap in the Swift solution.

upass8

Here’s a little further down the road.  At this point the Swift solution achieves a float position at about the same time the M8T solution switches to fix.  Lane management would clearly be more difficult with the initial Swift float solution.

upass9

Next, I’ll show a few images from another underpass.  In this case I drove under the underpass from the left, turned around, then drove under the underpass again from the right.  The Swift internal solution is on the left, the Swift RTKLIB solution in the middle, and the M8T RTKLIB solution on the right.  Notice that the time to re-acquire a fix is fairly similar in all three cases.

upass1

Here is a zoom-in of the two Swift solutions; they are quite similar.

upass3

Here is a zoom-in of the M8T RTKLIB solution.  Again, the float solution is achieved very quickly, and appears to be accurate enough for lane management.

upass4

My last test case was a combination urban canyon and parking structure.  In the photo below, I drove off the main street to the back of the parking garage, underneath the pedestrian walkway, into the back corner, then underneath the back end of the garage and then back to the main street.  I would consider this a quite challenging case for any receiver.

ucanyon1

Here are the raw observations.

ucanyon0

Here are the three solutions, again the Swift internal is on the left, the Swift RTKLIB in the center, and the M8T RTKLIB on the right.

ucanyon1

 

Here is an image of the Swift internal solution.

ucanyon4

Here is an image of the Swift RTKLIB solution

ucanyon3

And here is an image of the M8T RTKLIB solution

ucanyon2

In this case, the M8T RTKLIB solution appears to be the best.

So, this experiment seems to show that a dual frequency solution will not always handle satellite outages better than single frequency solutions.  In this case, the extra Galileo and SBAS satellites in the M8T solution seem to have helped a fair bit, and the M8T solution is, at least to me, surprisingly good.

If anyone is interested in analyzing this data further, I have uploaded the raw data, real-time solutions, and config files for the post-processed solutions to the sample data sets on my website, available here.  I should mention that there is an unexplained outage in the Swift base station data near the end of the data set.  This could have been caused by many things, most of them unrelated to the Swift receiver, so all the analysis in both this post and the previous post were done only for the data before the outage.

Improved results with the new SwiftNav 1.4 firmware

I last took a look at the SwiftNav Piksi Multi low-cost dual-frequency receiver back in November last year when they introduced the 1.2 version of FW.  They are now up to a 1.4 version of firmware so I thought it was time to take another look.  The most significant improvement in this release is the addition of GLONASS ambiguity resolution to the internal RTK solutions but they also have made some improvements in the quality of the raw observations.

I started with a quick spin around the neighborhood on my usual test route.  The initial results looked quite good, so for the next test I expanded my route to include a drive to and around Boulder, Colorado, a small nearby city of just over 100,000.  The route included some new challenges including underpasses, urban canyons, higher velocities, and even a pass underneath a parking structure.  This is the first time I have expanded the driving test outside my local neighborhood.

My test configuration was similar to previous tests.  I used a ComNav AT330 antenna on my house roof for the base station, and a SwiftNav GPS-500 antenna on top of my car for the rover.  I split the antenna signals and in both cases fed one side to a Piksi receiver and the other side to a u-blox M8T single frequency receiver.  I ran an internal real-time RTK solution on the Piksi rover and an RTKNAVI RTK real-time solution on the M8T rover.  The M8T receivers ran a four constellation single frequency solution (GPS/GLONASS/Galileo/SBAS) to act as a baseline while the Piksi receivers ran a two constellation (GPS/GLONASS) dual frequency solution.  Both rovers were running at a 5 Hz sample rate and both bases were running at a 1 Hz sample rate.  The distance between rover and base varied from 0 to just over 13 km.  The photos below show different parts of the route.

niwot_boulder

Here are the real-time solutions for the two receiver pairs, internal Swift on the left, and RTKNAVI M8T on the right.

swift14_3

Both solutions had similar fix rates (79.9% for Swift, 82.6% for M8T) and in both cases the float sections occurred for the most part either in the older neighborhood with larger trees (top middle) or after underpasses (bottom left).  The higher velocity (100 km/hour) on the highway (center) did not cause any trouble for either solution.

Based on a comparison of the two solutions, accuracy was relatively good for the fix sections of both solutions.  Below on the left is the difference between the two solutions for points where both solutions had a fix.  In the center and right are plots of both solutions (Swift internal=green, M8T RTKNAVI=blue) for the two locations with the longest duration discrepancies of any magnitude.  Both look like false fixes by the Swift internal solution, based on the discontinuities.  Overall, though, the errors between the two were reasonably small and of short duration.

swift14_2

Post-processing the Swift data with RTKLIB produced the solution on the left below, with an 85.5% fix rate and a good match to the M8T solution.  The difference between the two solutions for the fixed points is shown on the right.  This solution was run with continuous ambiguity resolution.

swift14_4

 

For more challenging environments like this I often add some tracking gain to the ambiguities by enabling “fix-and-hold” for the ambiguity resolution mode but setting the variance  of the feedback (input parameter pos2-varholdamb) to a fairly large number (0.1 or 1.0) to effectively de-weight the feedback and keep the tracking gain low.  For comparison, the default variance for fix-and-hold mode feedback is 0.001 which results in quite a high tracking gain.  I find that with the low tracking gain, I generally do not have an issue with fix-and-hold locking on to false fixes.
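In config-file terms, this low-gain fix-and-hold setup is just two lines, with the values described above:

    pos2-armode        =fix-and-hold
    pos2-varholdamb    =1.0        # de-weighted feedback variance (default is 0.001)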

Running RTKLIB solutions for Swift and M8T with this change (fix-and-hold AR enabled, pos2-varholdamb=1.0) improved the fix ratio for the Swift RTKLIB solution from 85.5% to 91.1% and the M8T RTKLIB solution from 82.6% to 92.6% with no apparent degradation in accuracy.

Using a combined solution instead of a forward solution (only a choice for post-processing) improved the fix ratios even further, again with no apparent degradation in accuracy.  The Swift RTKLIB solution increased to a 96.2% fix rate and the M8T RTKLIB solution increased to 94.1%.

Overall, the Swift RTKLIB solutions were noticeably better and more consistent than in my previous test.  Considering the difficulty of the environment, I consider all of these solutions to be very good.

In my next post, I will look specifically at how the two receivers handled going through a narrow urban canyon, underneath three underpasses and underneath a parking structure.

New 1.2 Swift firmware with a moving rover

In the last couple of posts I compared the u-blox M8T receiver and the Swift Piksi Multi receiver for a stationary rover and an external base station using the latest 1.2 firmware from Swift.  I did this for both RTK and PPP solutions.  In this post I will look at a moving rover case with a pair of Piksi receivers and a pair of M8T receivers.

The new Swift firmware supports raw observations and float solutions for the GLONASS satellites, but does not yet support ambiguity resolution for GLONASS.  In the previous experiments the lack of GLONASS ambiguity resolution did not affect the comparison since an external (non-matching receiver) base was used.  This meant that the RTKLIB solutions were not able to use GLONASS ambiguity resolution either.

In this experiment I did use matching receivers for the base data, so the RTKLIB solutions do include GLONASS ambiguity resolution, which should give them an advantage over the Swift internal solutions.  As I did last time, I will compare the real-time Swift RTK solution with post-processed RTKLIB solutions for the Piksi pair and the M8T pair.  In this case the M8T receivers are from CSG Shop and are running the 3.01 firmware so they include Galileo observations in addition to GPS, GLONASS, and SBAS.

In my previous moving rover experiments I shared a base antenna for the two base receivers but used separate antennas for the two rover receivers.  In this case I decided to also share the rover antenna between the two rover receivers to enable some more direct comparisons.  The base antenna is the same Swift GPS-500 antenna mounted on my roof that I used for the rover in the previous experiment.  The rovers are using a Tallysman TW7872 antenna mounted on top of a car.  In both cases I use a capacitively coupled splitter to isolate the two receivers and allow only one DC supply voltage to reach the antenna.  As usual, the data was collected while driving around my local neighborhood.

Whenever connecting two receivers to one antenna, or even locating two antennas close to each other, there is always the concern that they may interfere with each other.  So, as a baseline, I first did a run with separate antennas for the rovers.  I used my inexpensive u-blox antenna for the M8T since it has a very long cable, allowing me to separate both the receivers and the antennas by over a meter.  Here are the position solutions for the baseline test.  The Swift internal solution is on the left, the Swift RTKLIB post-processed solution in the middle, and the M8T post-processed solution on the right.  As always, green is fixed points and yellow is float.

swift_newfw10

The post-processed solutions were both run forward-only and with continuous ambiguity resolution including GLONASS.  The percent fixes for the three solutions were 69% for the Swift internal solution, 33% for the Swift RTKLIB solution, and 75% for the M8T RTKLIB solution.  I will use percent fix as a very coarse measure of goodness for now; later I will look at the solutions in more detail.  In this case I consider the Swift internal solution and the M8T RTKLIB solution roughly equal.  Both achieved 100% fix for a number of circles in a parking lot with open skies (22:30 to 22:35), and both struggled with partial fixes on the rest of the route which had more obstructed sky views driving along residential streets.  The Swift RTKLIB solution was noticeably worse and had 0% fix in the parking lot, and lower percent fix everywhere else.

I then re-ran a similar route a short time later, but this time with the M8T rover sharing the dual frequency antenna with the Swift rover.  This produced the following results, in the same order as above.  The percent fixes were 53% for the Swift internal solution, 79% for the Swift RTKLIB solution, and 62% for the M8T RTKLIB solution.

swift_newfw9

Again I would consider the Swift internal and the M8T RTKLIB solutions roughly equal.  A different parking lot with open skies (1:43 to 1:48), gave no trouble to either solution, but both had only partial fix for the rest of the route.  In this case the Swift RTKLIB solution was noticeably better than the other two solutions which is interesting since last time it was noticeably worse.  This has been my experience in general with the Swift RTKLIB solutions, that they tend to be quite inconsistent.  With the previous firmware I traced some of the reason for this to unreported cycle slips.  I suspect the same is probably going on here but haven’t looked close enough to fully verify that.  As far as any interference occurring between the antennas, if it exists, it is too small to detect with this simple comparison.

Here are the raw observations for the two rovers for the second run, M8T on the left, Swift on the right.  Green lines are dual frequency, yellow are single frequency and red ticks are cycle slips.  There are 19 measurements available for the M8T solution and 22 for the Swift solution.  All are used for ambiguity resolution in the RTKLIB solutions but only the 11 GPS measurements are used for ambiguity resolution in the internal Swift solution.  The relatively cycle-slip-free region from 1:43 to 1:48 is from the parking lot where the sky views are open.

swift_newfw11

Since the exact same signal was fed into both receivers the observations can be compared more directly than in previous experiments.  Here is a zoom into the time in the parking lot and shortly after.   If you look carefully, you will notice that there are definitely more cycle slips reported in the M8T observations than in the Swift data.

swift_newfw7

We can verify if those cycle slips are real or not by plotting the double differences of the bias states from the RTKLIB statistics output file.  I selected two satellites that had more cycle slips in the M8T data, G03 and R04, and plotted their double differences relative to the most cycle-free satellite in their constellation.  In the plots below, the red lines are the double differences from the M8T bias states and the blue lines are the double differences from the Swift bias states.  The red circles are cycle slips reported in the observations and the discrete jumps are actual cycle slips.  The green x’s are half cycle invalid flags and not relevant to this particular example.

swift_newfw8

 

Both plots show good correlation between the reported cycle slips and the true cycle slips as well as confirming that there are more slips in the M8T observations than in the Swift observations.  So, at least in this example, the quality of the Swift observations is noticeably better than the M8T observations, even though both came from the same antenna.  Of course this might be expected since the Swift receiver, although low cost by dual frequency standards, is still significantly more expensive than the M8T receiver.
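For anyone who wants to repeat this check, the double difference itself is just a subtraction once the bias states have been extracted.  Here is a minimal Python sketch, assuming the single-difference bias states for each satellite have already been parsed out of the statistics file into per-epoch arrays:

    import numpy as np
    import matplotlib.pyplot as plt

    def double_diff(bias_sat, bias_ref):
        """Between-satellite difference of the (rover minus base) single-difference bias states."""
        return np.asarray(bias_sat) - np.asarray(bias_ref)

    # hypothetical usage, with G01 as the reference satellite:
    # dd = double_diff(bias["G03"], bias["G01"])
    # plt.plot(t, dd); plt.ylabel("DD bias (cycles)"); plt.show()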

The solutions however, don’t reflect this difference in observation quality.  The Swift internal solution appears to be no better, and possibly even a little worse than the RTKLIB M8T solution.  This is probably because the M8T solutions include GLONASS ambiguity resolution and the Swift solutions do not.  Once Swift adds this feature to their firmware I would expect the Swift internal solutions to be better than the M8T solutions.

The variability of the results from the Swift RTKLIB solutions is harder to explain and I don’t have any good answers for this yet.  I still do see cases of unreported cycle slips causing problems even with the new firmware and suspect this is a part of it.  I hope to investigate this further.

Another thing I wanted to look at is how good are the float solutions.  In Swift’s description of the new firmware they state that:

“Swift also suggests that users utilize the estimated accuracy fields
in navigation outputs for an indication of solution quality rather than using the
transition to RTK “fixed” mode as an indicator of solution quality, as the new
and improved float solution performance can often fulfill precision navigation
requirements.”

So, let’s take a look at the float portion of the internal Swift solution.  I loaded the .csv output file from the Swift internal solution into matlab, then plotted histograms for the horizontal and vertical accuracies for the fix and float solution points.  Here’s what it looks like:

swift_newfw_12

Notice that the float accuracies are significantly larger than the fix accuracies but they are probably realistic unlike the RTKLIB float accuracies which I demonstrated to be at least a factor of two overly optimistic in a recent post.

Here is an example of the solution points from the above data set in which the RTKLIB solution re-converges more quickly than the Swift internal solution.  This allows us to evaluate the accuracy of the Swift float values.  The top plot is one component of the horizontal position, and the bottom plot is the vertical position. Yellow is the Swift float solution, Olive green is the RTKLIB float solution and blue is the RTKLIB fix solution.  From the beginning of the data to 22:47:14 both solutions are float.  The RTKLIB solution converges to a fix at 22:47:14 and the Swift internal solution converges to a fix at 22:47:37.  Since both solutions eventually converge to the same value I will assume the RTKLIB fix solution is correct.

swift_newfw_13

The errors in the float solutions appear to be similar between RTKLIB and Swift and consistent with the Swift estimates of accuracy.  So the Swift float solution does not appear to be any more accurate than the RTKLIB float solution but the Swift estimates of the accuracy do seem to be more realistic.  Three quarters of a meter vertical accuracy is better than a single point solution but I suspect it is too large to be useful for many applications.  Still, it is a noticeable improvement from the very noisy float values I saw in the internal Swift solution in my previous comparison.

So what does all this mean?  I would summarize by saying that the results for a pair of Swift receivers with the new firmware are noticeably better than the results in my previous comparison and so are definitely a step in the right direction.  At least in my particular example, though, it still only puts them roughly on par with the results from a pair of M8T receivers.  They certainly seem to have the potential to be better than the M8T’s and hopefully with further improvements from Swift and maybe improvements in RTKLIB as well, we will see this soon.

I did not include a single M8T or Swift receiver paired with an external CORS or similar base station example in this moving rover comparison but would expect the Swift receiver to outperform the M8T receiver in that case because  of a better overlap in satellite pairs.  Also keep in mind that the Swift receiver does have some definite advantages in static and long baseline experiments as I showed in my previous two posts, especially the ability to get accurate locations for the base station using PPP solutions.

As always I want to emphasize that these are only one users results in one particular configuration and other users experiences in other environments could be quite different.  I have uploaded the raw data and RTKLIB config files to the sample data section on the rtkexplorer.com website if anybody would like to explore further.

I am just getting my newly borrowed NavCom receivers up and running and have upgraded my Tersus receiver with their new firmware so I hope to have results for these other low-cost dual frequency receivers soon.

 

 

PPP solutions with the Swiftnav Piksi Multi

I have had a couple recent questions about the Swiftnav receiver and PPP solutions so before leaving the stationary rover for a moving rover, I thought I would take a quick look at this subject.

Unlike RTK or PPK which are differential solutions using two receivers, PPP (precise point positioning) is an absolute solution done with just a single receiver.  Since we don’t have the advantage of eliminating the errors through differencing, it is a more challenging problem and is usually (but not always) done with dual frequency receivers for stationary targets.  RTKLIB does support PPP solutions and we will look at them too, but in general I prefer using one of the free online services since it is easier to do this and the answers are more accurate.  PPP solutions require precise clock and ephemeris data which must be downloaded from the internet.  Since you need to be connected to the internet anyways to download these, I see very little advantage to trying to run your own solution with RTKLIB unless you are using it as a learning tool.

There are several different online services available and they all have their own advantages and disadvantages.  I will use the CSRS service, provided by the government of Canada, in this experiment for a few reasons.

First of all, unlike some of the other services, CSRS uses the GLONASS satellites in the PPP solution.  This is particularly relevant for the Swift receiver since its second frequency is L2C, and only about half of the GPS satellites broadcast L2C, so only those provide dual frequency measurements.  In this case, including the GLONASS satellites roughly triples the number of measurements.  CSRS will also process L2C data directly.  Some of the other services won’t work with L2C data unless you modify the header of the observation file.  If you do run into this problem trying to use another service, manually editing the Rinex file header and changing the observation type from C2 to P2 will usually work.
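If you need to make that header edit on many files, a few lines of Python will do it.  This sketch simply rewrites the C2 code to P2 in the “# / TYPES OF OBSERV” header records of a RINEX 2.11 file; the file names are placeholders:

    with open("rover.obs") as f:
        lines = f.readlines()

    with open("rover_p2.obs", "w") as f:
        for line in lines:
            if "TYPES OF OBSERV" in line:     # RINEX 2.11 observation-type header record
                line = line.replace("C2", "P2")
            f.write(line)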

Another reason for using CSRS for this experiment is that it will solve for single frequency data sets as well as dual frequency.  The single frequency solutions are based on code observations only and are significantly less accurate than the dual frequency solutions, but they are available.  In this case I will take advantage of this to compare the PPP solutions for both single frequency M8T data and dual frequency Swift data.

CSRS also has a very convenient data submission tool that can be downloaded to your computer.  With this tool installed and configured, you simply need to drag any observation file onto the tool icon and it will email you a solution a few minutes later.  It’s hard to get much simpler than that!  You do need to setup an account before accessing the service or downloading the tool but that is a relatively quick and easy process and only has to be done once.

One last feature that CSRS provides that many of the other services don’t is the option to process kinematic data sets as well as static but I have not tried this out yet.

For this experiment, I used eight hours of data collected from the Swiftnav GPS-500 antenna on my roof which was connected to both a Swiftnav receiver and a u-blox M8T receiver through a splitter.  The antenna is mounted on a one meter pole at the bottom edge of a low-angled roof so has a reasonably good, but certainly not ideal, sky view.

The accuracy of the PPP solution will depend on the accuracy of the ephemeris used.  This will vary based on how long it has been since the measurement data was collected.  Here is a list from their website of the three possibilities along with their wait times and accuracies for the CSRS solutions.

  • FINAL (+/- 2 cm): combined weekly and available 13 -15 days after the end of the week
  • RAPID (+/- 5 cm): available the next day
  • ULTRA RAPID (+/- 15 cm): available every 90 minutes

In my case, I had collected this data a few days earlier so CSRS was able to use the rapid precise ephemeris data for the solution.

I converted both raw binary files to Rinex format using the RTKCONV app in RTKLIB.  CSRS only accepts the older 2.11 format so I did need to specify this in the RTKCONV options.  Usually I use the newer 3.03 format since it is easier to read and to parse with Matlab.
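For reference, the same conversion can be scripted with convbin, the command-line version of RTKCONV.  Something like the line below should work for the M8T log, where -r selects the input format, -v the RINEX version, and the file names are placeholders:

    convbin -r ubx -v 2.11 -o rover.obs -n rover.nav rover.ubx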

Next I dragged the two files onto the CSRS tool icon and a few minutes later both solutions appeared in my email folder.  I had previously configured the tool with a few bits of information including my email address.  The results included a pdf summary file and a csv file with the epoch by epoch convergence of the solution.

Here is the solution from the csv file for the Swift receiver.  The results in the file are in LLH format where latitude and longitude are both in degrees.  I converted both from degrees to meters using the appropriate meters per degree constants for my particular location, then subtracted the final point to generate the plot below.  Note that this is a different coordinate system than I used in the plots in my last post, in which I converted the LLH coordinates to earth-centered XYZ coordinates.  In XYZ coordinates, the Z coordinate is only equivalent to height if you made the measurement at the North or South pole.  In this case I have used the angle to meter conversion to preserve the separation between vertical and horizontal components to better show their relative accuracies.
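Here is a sketch of that conversion, assuming the latitude and longitude (degrees) and height (meters) columns from the csv file have already been read into numpy arrays.  One degree of latitude is roughly 111.3 km, and a degree of longitude scales with the cosine of the latitude:

    import numpy as np

    def llh_to_local_m(lat_deg, lon_deg, h_m):
        lat0, lon0, h0 = lat_deg[-1], lon_deg[-1], h_m[-1]   # reference = final epoch
        north = (lat_deg - lat0) * 111320.0                  # approx. meters per degree of latitude
        east  = (lon_deg - lon0) * 111320.0 * np.cos(np.radians(lat0))
        up    = h_m - h0
        return east, north, up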

ppp1

In this case the horizontal components converged to something very close to the final answer in about two and a half hours whereas the vertical component doesn’t seem to have fully converged even in eight hours.

The 95% confidence levels for the results reported in the summary file were about one cm for the combined horizontal components and 2.5 cm for the vertical component.  This was consistent with my best guess for the actual errors based on both RTK and PPP measurements made with multiple receivers and online services.  I estimate the actual error to be about one cm in the combined horizontal components and about 1.5 cm in the vertical component.

I did not include any antenna calibration corrections in my solutions since I am not aware of a calibration file available for the GPS-500 antenna.  This means my solutions will be for the location of the phase center of the antenna, not the geometric center.  In this particular experiment, since I am only using the results to compare with other results from the same antenna, the errors will cancel and can be ignored.  Normally though, this offset will add additional error to the position measurements.  Ideally, for accurate absolute measurements, a calibrated antenna would be used, in which case the calibration file can be specified in the solution and RTKLIB will apply the correction to remove this error.

Unfortunately the CSRS PPP single frequency results for the M8T data were much less accurate with about half a meter of error in the horizontal components and three quarters of a meter error in the vertical axis.

ppp2

I then ran a PPP solution with RTKLIB for both data sets using a configuration similar to what is recommended in this tutorial.   The Swift data produced a result with very similar accuracies to the CSRS result in the horizontal components but nearly five cm of error in the vertical component.  Note that the convergence times are longer in the RTKLIB solution.  It is likely that both solutions would have reduced vertical errors if I had run with a longer data set.  The typical recommendation for PPP solutions is at least two hours of measurement data but longer data sets will generally improve accuracy.  Here is a plot of the RTKLIB PPP solution for the Swift data.

ppp3

In this case I was not able to get an RTKLIB PPP solution for the M8T data because of too large residual errors.  In other cases I have got PPP solutions with single frequency data but the accuracy of the solutions has always been much lower than the dual frequency data.  I do not have a lot of experience with the PPP settings in RTKLIB so it is possible I am not getting the most out of RTKLIB.  I hope to dig into this side of things more in the future.

PPP is great for locating static receivers but if you need to track moving rovers, you will still want to use RTK or PPK solutions for that.  The ability to get accurate locations for your base station using a PPP solution though is a significant advantage of using dual frequency receivers rather than single frequency receivers.  This is particularly true  if your base station is not close enough to a CORS type reference station to get an RTK/PPK solution for your base station location.

There is a significant advantage in having two identical receivers for RTK/PPK solutions since it will give the maximum number of overlapping measurements to difference and will allow ambiguity resolution with the GLONASS satellites.   In this case the simplest configuration would be to use Swift receivers for both base and rover.  A less expensive alternative worth considering would be to connect a Swift receiver and M8T receiver to the base station antenna through a splitter and then use an M8T receiver for the rover.   You could then use the Swift receiver to find your base location and the two M8T receivers to find the rover location relative to the base position.

For me at least, this ability to locate the base with PPP would be the most compelling reason to justify the extra cost of the Swift receivers over the M8T receivers.

Hopefully, this time in my next post, I will actually get to looking at the moving rover case with the Swift receivers.  After that I hope to do a four way comparison between the M8T, and low cost dual frequency receivers from Swift, Tersus, and ComNav.  I met Andy from ComNav at the recent drone expo in Las Vegas and he was kind enough to lend me two of their receivers and antennas for a couple months to use for evaluation.  I also understand that Tersus has recently updated their firmware so I’m quite excited about all the different options becoming available in the low cost dual frequency receiver market.