Evaluating results: Moving base vs fixed base


In the last few posts, I have been focusing on the relative position solution between two receivers where both are moving but fixed relative to each other rather than the more typical scenario where one receiver is fixed (the base) and the other is moving (the rover). As described in an earlier post, I do this because it makes it possible to accurately measure the error in the solution when we don’t have an absolute trajectory reference to compare to.

Normally, though, from a functional viewpoint, we are more interested in the second case where one receiver is fixed and the other is moving. In this post I will spend a little time looking at how we can evaluate the results for that scenario.

Since we don’t know what our exact trajectory was when we collected the data, we cannot measure our solution error directly like we can in the first case. However, we can look for consistency between multiple solutions calculated from the same rover data but with varying input configurations, base data, and satellites. The solution should not vary by more than our error tolerance no matter how much we change the input configurations.  If we believe a solution is accurate to a couple of centimeters, then it would be very disturbing to find it moved 50 cm when we used the same rover data but adjusted something in the input configuration, satellites, or base data, particularly if the differences occur for epochs with fixed solutions.  We cannot know which of the two solutions is incorrect, but we do know that at least one of them must have large undetected errors in it, which means we really can’t trust either solution. Of course, if the two solutions agree, that does not guarantee they are correct; they may just both be wrong in the same way. However, the more we can vary the input conditions and still get the same answer, the more confidence we will have in the solution.

To test the consistency of the solution for the previous data set, I processed the data from the two receivers in a different way. Instead of measuring the relative distance between the two receivers, I measured the distance between each receiver and two independent base stations, giving me a total of four solutions, two for each Ublox receiver. Since the two base receivers are in different locations the solutions will be different, but if we ask RTKLIB to calculate absolute position rather than relative distance then we can compare the two directly. By changing the out-solformat parameter in the input config file from “enu” to “xyz”, RTKLIB will add the absolute location of the base station to the solution, converting it from relative to absolute. With this change, the two solutions calculated from different base stations should be identical.
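For reference, the relevant line in a standard RTKLIB config file looks something like this (the comment follows the convention used in the sample config files):

out-solformat      =xyz        # (0:llh,1:xyz,2:enu,3:nmea)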

For the two base receivers, I used data from two nearby stations in the NOAA CORS network: TMG0, which is about 5 km away from where I collected the data, and ZDV1, which is about 13 km away. The TMG0 data includes Glonass satellites and the ZDV1 data does not, so this also helps differentiate the two solutions by giving them different but overlapping satellite sets. To further increase the difference between the two satellite sets, I changed the minimum elevation variables in RTKLIB from 15 degrees to 10 degrees for the TMG0 solution while leaving them at 15 degrees for the ZDV1 solution. This adds the low elevation satellites to the TMG0 solution.
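In the config file, the elevation mask is set with pos1-elmask; for the TMG0 run it would look something like:

pos1-elmask        =10         # (deg)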

Now we can compare the two solutions for each Ublox receiver, each calculated with different base stations and different sets of satellites. If everything is working properly they should be identical within our expected error tolerance. Since our baseline distances between base and rover have increased from roughly 30 cm in the previous analysis to 5-13 km in this case, we should expect the errors to be somewhat larger than they were before.
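One simple way to difference two solutions like this, epoch by epoch, is sketched below. It is not part of RTKLIB, the name posdiff and all of the parsing details are my own, and it assumes both .pos files were written with out-solformat=xyz and GPST week/tow time tags (out-timeform=tow); adjust the parsing if your output format differs.

/* posdiff.c - minimal sketch for differencing two RTKLIB solution files
 * written in xyz (ECEF) format with week/tow time tags */
#include <stdio.h>
#include <math.h>

/* read the next data epoch (GPST tow, ECEF x/y/z, quality flag) from a .pos file */
static int next_epoch(FILE *fp, double *tow, double *pos, int *q)
{
    char line[1024];
    int week;
    while (fgets(line, sizeof(line), fp)) {
        if (line[0] == '%') continue;   /* skip RTKLIB header/comment lines */
        if (sscanf(line, "%d %lf %lf %lf %lf %d",
                   &week, tow, pos, pos+1, pos+2, q) == 6) return 1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    FILE *f1, *f2;
    double t1, t2, p1[3], p2[3];
    int q1, q2;

    if (argc < 3) { fprintf(stderr, "usage: posdiff sol1.pos sol2.pos\n"); return 1; }
    if (!(f1 = fopen(argv[1], "r")) || !(f2 = fopen(argv[2], "r"))) {
        fprintf(stderr, "file open error\n"); return 1;
    }
    if (!next_epoch(f1, &t1, p1, &q1) || !next_epoch(f2, &t2, p2, &q2)) return 0;
    for (;;) {
        if (fabs(t1 - t2) < 0.01) {     /* matching epochs: print ECEF difference */
            printf("%10.1f %8.3f %8.3f %8.3f  Q=%d/%d\n",
                   t1, p1[0]-p2[0], p1[1]-p2[1], p1[2]-p2[2], q1, q2);
            if (!next_epoch(f1, &t1, p1, &q1) || !next_epoch(f2, &t2, p2, &q2)) break;
        }
        else if (t1 < t2) { if (!next_epoch(f1, &t1, p1, &q1)) break; }
        else              { if (!next_epoch(f2, &t2, p2, &q2)) break; }
    }
    fclose(f1); fclose(f2);
    return 0;
}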

The plots below show the results of this experiment. The plots on the left are from the Ebay receiver and the plots on the right are from the CSG receiver. The first row of plots shows the ground tracks for both solutions overlaid on top of each other; the second row shows position in the x, y, and z directions, again for both solutions overlaid on top of each other. The x and y directions are indistinguishable at this scale but we do see some error in the z direction. The third row of plots shows the difference between the two solutions. We can interpret these differences as a lower bound on our error. As we would expect with the longer baselines, the errors are somewhat larger than before, but at least the x and y errors are generally still quite reasonable, most of them falling between +/- 2 cm for epochs with fixed solutions (green). The z errors are somewhat larger, but fortunately we usually care less about this direction. I have made no attempt to reduce the ephemeris errors, either through SBAS or precise orbit files. Presumably, at these longer baselines, doing so would help reduce the errors.

[Plots sol18a, sol18b, sol18c: ground tracks, x/y/z positions, and solution-to-solution differences for the Ebay receiver (left) and CSG receiver (right)]

So, how do we interpret these results? I believe this analysis is consistent with the previous analysis. It is not as conclusive since we have no absolute error measurement, but since the solutions are more similar in format to how they would be done in practice, it should give us more overall confidence in our results.

Maybe more importantly, it also provides a baseline for applying similar consistency analysis to other data sets, where we often won’t have the luxury of two receivers tracking the same object.

Let me add a few words to help put these results in context in case you are comparing them with other data sets.  This data was all taken under very open skies with fairly low velocities (<6 m/sec), so it is relatively easy data from which to get a good solution.  It will be more difficult to get similar results for data taken with obstructions from trees or buildings, or data that includes high velocities, accelerations, or vibration.  In particular, there were very few cycle slips in this data.  None of the modifications described here will help if the data has a large number of cycle slips.

Also remember that this data was taken with very low cost hardware (the Ebay receiver cost less than $30 for receiver and antenna combined) so the data will be lower quality relative to data taken under similar conditions with more expensive hardware, especially higher quality antennas.  

That is the goal of my project, though: to get precise, reliable positioning with ultra-low cost hardware under benign environmental conditions.


Improving RTKLIB solution: Fix-and-Hold

RTKLIB performs integer ambiguity resolution after calculating a float solution for the carrier-phase ambiguities, to take advantage of the fact that the carrier-phase ambiguities must be integer numbers of cycles. Constraining the solution to integers improves both the accuracy and the convergence time.

RTKLIB provides three options for integer ambiguity resolution: instantaneous, continuous, and fix-and-hold. “Instantaneous” is the most conservative and recalculates the phase-bias estimates every epoch. “Continuous” is the default option and uses the Kalman filter to estimate the phase biases continuously over many epochs. In general, this provides much less noisy estimates of the phase biases and is essential for working with low cost receivers. The downside of the continuous solution is that if bad data is allowed to get into the filter states it can remain for multiple epochs and corrupt subsequent solutions, so some care must be taken to avoid that.
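The choice between them is made with the pos2-armode setting in the config file, for example:

pos2-armode        =fix-and-hold   # (0:off,1:continuous,2:instantaneous,3:fix-and-hold)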

“Fix-and-Hold” takes the concept of feeding information derived from the current epoch forward to subsequent epochs one step farther. In the “Continuous” method, only the float solution is used to update the phase-bias states in the Kalman filter. In the “Fix-and-Hold” method, an additional Kalman filter update is done using pseudo-measurements derived from the fixed solution. This is only done if the fixed solution is determined to be valid. All of this is explained in greater detail in Appendix E.7(5) of the RTKLIB user manual.

In general, taking advantage of all the information we have in this epoch to improve the solution for the following epochs seems like a good idea, especially given the high levels of noise in the low-cost receivers and antennas. Even more so than in the “Continuous” method, however, we need to be careful about feeding erroneous data into the Kalman filter, since it may take a long time to work its way out.

Testing the “Fix-and-Hold” method on a few data sets shows the results we would expect: the solution usually improves, but occasionally it has difficulty recovering from a period of bad data that has corrupted the phase-bias estimates.

To minimize the chance of this happening, we need to do a better job of validating the solution before feeding it forward. Looking at a couple of examples of bad data, what I find is that the errors are introduced when solutions are derived from a very small number of valid satellites. The current integer ambiguity validation is based only on the ratio of residuals between the best solution and the second best solution and does not take into account the number of satellites used to find the solutions.
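For reference, the existing validation in resamb_LAMBDA() boils down to a ratio test roughly like the following (paraphrased and simplified; s[0] and s[1] are the residuals of the best and second-best integer candidate sets returned by lambda(), and the threshold comes from the pos2-arthres setting):

/* ratio test: accept the integer solution only if the second-best candidate
   fits significantly worse than the best one */
if (s[0]<=0.0||s[1]/s[0]>=opt->thresar[0]) {
    /* validation passed: use the fixed solution */
}
else {
    /* validation failed: fall back to the float solution */
}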

I have introduced two new input parameters, “minfixsats” and “minholdsats”, into the code. These set the number of satellites required to declare a fixed solution valid, and the number of satellites required to implement the fix-and-hold feature. I have found that setting the minimum number of satellites to 4 for fix and 5 for fix-and-hold seems to work quite well for my data, but I have made them input parameters in the config file so they can be set to any value.
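In the config file the new entries would look something like the lines below (I am guessing at the exact key names here, so check the sample config file on the demo1 branch for the actual spelling):

pos2-minfixsats    =4          # min sats required to validate a fix
pos2-minholdsats   =5          # min sats required to apply fix-and-hold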

I won’t include all the code here, but it is available on GitHub at https://github.com/rtklibexplorer/RTKLIB on the demo1 branch.

The two key changes are:

For minfix, in resamb_LAMBDA() from:

if ((nb=ddmat(rtk,D))<=0) {
    errmsg(rtk,"no valid double-difference\n");

to:

if ((nb=ddmat(rtk,D))<(rtk->opt.minfixsats-1)) { // nb is sat pairs
    errmsg(rtk,"not enough valid double-differences\n");

and for minhold, in holdamb() from:

if (nv>0) {
    R=zeros(nv,nv);
    for (i=0;i<nv;i++) R[i+i*nv]=VAR_HOLDAMB;

    /* update states with constraints */
    if ((info=filter(rtk->x,rtk->P,H,v,R,rtk->nx,nv))) {
        errmsg(rtk,"filter error (info=%d)\n",info);
    }

to:

if (nv>=rtk->opt.minholdsats-1) { // nv=sat pairs, so subtract 1
    R=zeros(nv,nv);
    for (i=0;i<nv;i++) R[i+i*nv]=VAR_HOLDAMB;

    /* update states with constraints */
    if ((info=filter(rtk->x,rtk->P,H,v,R,rtk->nx,nv))) {
        errmsg(rtk,"filter error (info=%d)\n",info);
    }

Again, the changes to the position solution are subtle and difficult to see in a plot, but are much more obvious in the Ratio Factor for AR validation. A higher ratio represents greater confidence in the solution. The green is before the changes, the blue is after. The RTKLIB code limits the AR ratio to a maximum of 1000.

[Plot ar2: AR validation ratio factor, before the change (green) and after (blue)]

As you can see, there is a very significant increase in the AR ratio with the change.  The improvement is coming from switching the ambiguity resolution from continuous to fix-and-hold, not from the code changes, which were made to try to reduce the occasional occurrence of corrupting the filter states with bad data.

Let’s also look at the metrics described earlier for both this change and the bug fix described in the previous post. On the x-axis, 1-3 are from before the two previous changes, 4 is with the bias sum bug fix and 5 is with fix-and-hold and the min sat thresholds for fix and hold.

[Plot metrics2: solution metrics for each code revision]

From the metric plots you can see that at this point we have maxed out the % fixed solutions and median AR ratio metrics and have got the distance metric down below 0.5 cm, so I will stop here and declare success, at least for this data set.

If you’d like to try this code with your data, the code for all the changes I have made so far are available at https://github.com/rtklibexplorer/RTKLIB on the demo1 branch.

If anyone does try this code on their own data sets please let me know how it goes. I’m hoping these changes are fairly universally useful, at least for open skies and low velocities, but the only way to know is to test it on multiple data sets.

Update 4/17/16:  Enabling the “fix-and-hold” feature on other data sets has had mixed success.  I still see “fix-and-hold” locking to bad fixes and corrupting the states.  It looks like the extra checks I added are not sufficient to prevent this.  So, for now, I recommend turning off this feature, or if you do use it, be aware that it can cause problems.  I hope to take another look at this soon.

Improving RTKLIB solution: Phase-bias sum error

While working through the code to add comments as described in the last post, I stumbled across what looks like another bug. This one is more subtle than the previous bug (arlockcnt) and only has a small effect on the results, but still I thought it was worth discussing.

During each epoch, RTKLIB estimates the phase bias for each satellite using the differences between the raw carrier-phase measurements and the raw pseudorange measurements. In a perfect system, these should always differ by an integer number of cycles, because the carrier-phase measurements have an uncertainty in the number of cycles whereas the pseudorange measurements do not. These phase-bias estimates are then used to update the Kalman filter.

Before updating the Kalman filter, however, RTKLIB calculates the component common to all satellites and subtracts it from each of the phase-bias states (the Kalman filter outputs). My guess is that this code is left over from the GPS-only days when all satellites operated at the same frequency, since the estimates are all in units of cycles. As long as the frequencies of each satellite are identical, it is fine to add the cycles from one satellite to another, but this no longer works once the satellites transmit at different frequencies.
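A quick numerical illustration of why mixing cycles across satellites breaks down (this little program is just for illustration and is not part of RTKLIB):

#include <stdio.h>

#define CLIGHT 299792458.0      /* speed of light (m/s) */

int main(void)
{
    double f_gps = 1575.42e6;                 /* GPS L1 frequency */
    double f_glo = 1602.0e6 + 6*0.5625e6;     /* GLONASS L1, frequency channel +6 */

    /* one "cycle" is a different physical length on each satellite,
       so summing cycles across GPS and GLONASS mixes units */
    printf("GPS L1 wavelength      : %.4f m\n", CLIGHT/f_gps);   /* ~0.1903 m */
    printf("GLONASS L1 (k=+6)      : %.4f m\n", CLIGHT/f_glo);   /* ~0.1867 m */
    return 0;
}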

My code modification simply converts the biases to meters first, using the wavelength for each satellite, before summing them, and then converts back to cycles for each satellite. The changes all occur in the udbias() routine. The following lines of code:

1) bias[i]=cp-pr/lami;

2) offset+=bias[i]-rtk->x[IB(sat[i],f,&rtk->opt)];

3) if (rtk->x[IB(i,f,&rtk->opt)]!=0.0) rtk->x[IB(i,f,&rtk->opt)]+=offset/j;

become

1) bias[i]=cp*lami-pr;

2) lami=nav->lam[sat[i]-1][f];
   offset+=bias[i]-rtk->x[IB(sat[i],f,&rtk->opt)]*lami;

3) if (rtk->x[IB(i,f,&rtk->opt)]!=0.0) {
       lami=nav->lam[i-1][f];
       rtk->x[IB(i,f,&rtk->opt)]+=offset/lami/j;
   }

This improves the solution slightly but is most obvious when the receiver does a clock update. Since the receivers use inexpensive crystal clocks, unlike the satellites which use atomic clocks, there is a continuous drift between the two. At some point, when the difference between these two clocks gets too large, the receiver will update its clock to remove the error. Most of the time the error introduced by this bug is too small to easily pick out of the noise, but when there is a large adjustment from the clock correction it becomes quite obvious and shows up as a large spike in the residuals. Adding this code change eliminates the spikes from the residuals.

While exploring this issue, I modified the code that outputs to the state file to also output the phase biases, since I felt they provide a fair bit of insight into what is happening in the solution. What I found, however, when looking at these phase-bias outputs is that they are dominated by this common term (presumably some sort of clock bias), and it is difficult to see the behavior of the individual phase biases. To avoid this problem I made another modification to the code. Instead of adding the common component to every phase-bias state, I saved it in a separate common-bias variable and used it when initializing the phase biases for new satellites. Since all position calculations are done with the differences of the phase biases and not the phase biases themselves, this change does not have any effect on the output solution. It does, however, remove the large common variation from the phase-bias states and makes them easier to analyze.
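A rough sketch of that idea is below. The field name com_bias is my own (assumed added to the rtk_t struct), the bookkeeping in the actual udbias() code has more detail, and the units follow the earlier change (bias[i] and offset in meters, states in cycles), so treat this as illustrative rather than the exact implementation:

/* instead of pushing the common component into every phase-bias state,
   store it separately (offset and j are computed as in the loop above) */
if (j>0) rtk->com_bias=offset/j;

/* when initializing the phase-bias state for a newly tracked satellite,
   remove the stored common bias so the new state is consistent with the
   existing ones */
lami=nav->lam[sat[i]-1][f];
initx(rtk,(bias[i]-rtk->com_bias)/lami,SQR(rtk->opt.std[0]),IB(sat[i],f,&rtk->opt));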

Here are the residuals before and after the code modification (zoomed way out to see the spike).

[Plots res1a, res1b: residuals before and after the code modification, zoomed out to show the spike]

The position solution doesn’t change much; the improvement is more obvious in the confidence of the solution, which can be seen in the Ratio Factor for AR validation. Here it is before (green) and after (blue) the code modification.

[Plot ar1: AR validation ratio factor, before (green) and after (blue) the phase-bias fix]

As you can see there is a small but persistent increase in the AR ratio with this change.

Anyone else run across this issue and solved it in a different way? Or believe this is not a bug, and is actually the intended behavior? Or possibly know of other cases where cycles of different wavelengths are not handled properly in RTKLIB?

RTKLIB: Code comments + new Github repository

If you’ve spent any time perusing the RTKLIB source code you will surely be aware that it is very much what I would call “old-school” code. Comments are sparse and variable names tend to be short and not always very descriptive. The upside is that the code is quite compact, significant pieces of it can be seen on a single page which can sometimes be quite nice. The downside is it can be difficult to follow if you have not been poring over it for a long time. To help myself understand the code I have added a fair bit of commenting to most of the code used to calculate the carrier-phase solution, which is really the heart of the RTKLIB kinematic solution.  This is mostly in the rtkpos.c file.

The changes are too extensive to spell out here so I have forked the RTKLIB code in Github and posted my commented code there. If you are interested, you can download it from the demo1 branch at https://github.com/rtklibexplorer/RTKLIB.  

All the previous code changes are also there, as well as a couple I have not yet posted about.  The receiver data I have been using for these posts is also there in the rtklib/data/demo1 folder.  The solution file updates for VS2010 C++ are in the rtklib/app/rnx2rtkp/msc folder and the modified executable is in the rtklib/bin folder.