The ambiguity resolution ratio (AR ratio) threshold is used by RTKLIB to determine if there is enough confidence in an integer ambiguity resolution solution to declare it as a fixed solution. If not, the float solution is used instead. More specifically, the AR ratio is the ratio of the sum of the squared residuals of the second-best fixed solution to the sum of the squared residuals of the best fixed solution.
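As a minimal sketch (with made-up residual values, not RTKLIB's actual variables), the test amounts to:

```python
# Minimal sketch of the AR ratio test: given the sums of squared residuals
# of the best and second-best integer candidates from the ambiguity search,
# accept the fixed solution only if the ratio clears the threshold.
def ar_ratio_test(ssq_best, ssq_second, threshold=3.0):
    """Return (ratio, fixed): fixed is True if the best candidate is accepted."""
    ratio = ssq_second / ssq_best   # second-best over best, so ratio >= 1
    return ratio, ratio >= threshold

# Example with invented residuals: second-best candidate is 4x worse than best
ratio, fixed = ar_ratio_test(ssq_best=0.5, ssq_second=2.0)
print(ratio, fixed)   # prints: 4.0 True -> the fixed solution is used
```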

The higher this ratio, the higher the confidence that the best solution is the correct one. However, there is no simple way to determine the optimal value for this threshold. In practice it is usually set based on empirical results, with typical values ranging from 1.5 to 3.0. RTKLIB uses 3.0 as the default, which is fairly conservative, especially when large numbers of satellites are available for ambiguity resolution. If too low a value is chosen, false fixes will be common, but too high a value will lead to unnecessarily low fix rates. In general, the stronger the underlying GNSS model, the lower the threshold can be. The GNSS model strength is affected by many parameters, including the number of tracked satellites, the measurement precision, the number of measurement epochs, the number of frequencies used, and many others. If the model strength is high, the threshold can be reduced; if the model strength is very low, the threshold should be increased.

Teunissen and Verhagen make a compelling argument in this paper (and several others) that the AR ratio threshold should be adjusted not only for different solution environments but also on an epoch-by-epoch basis within a single solution. They argue that as the model strength varies, the threshold should be adjusted so that there is a fixed probability of an incorrect solution. It turns out that the exact model strength is very difficult to calculate, and that the relationship between model strength and the optimal threshold is also complicated. They propose a solution in which model strength is estimated from two parameters: (1) the number of satellites, and (2) an estimate of the ambiguity resolution failure rate based on the covariance matrix of the float ambiguities. These two values are then applied to a set of lookup tables, along with a desired failure rate, to calculate an optimal threshold value. The lookup tables are generated using Monte Carlo simulations of various models and model strengths. They named this technique the fixed failure-rate ratio test (FFRT).
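As a toy illustration only, the lookup step might look like the sketch below. The breakpoints and table values here are invented for demonstration and are not taken from Teunissen and Verhagen's actual tables, which also interpolate rather than snapping to the nearest entry:

```python
# Toy sketch of a fixed failure-rate ratio test (FFRT) lookup.
# The real method interpolates Monte-Carlo-generated tables indexed by
# the number of satellites and an ILS failure-rate estimate (Pf_ILS).
# All numbers below are invented for illustration.

PF_BREAKS  = [0.005, 0.035, 0.85]    # model-strength (failure-rate) breakpoints
SAT_BREAKS = [6, 9, 12, 18]          # satellite-count breakpoints
THRESH_TABLE = [
    [2.0, 1.6, 1.3, 1.1],   # strong model (low Pf_ILS): low thresholds
    [2.6, 2.1, 1.7, 1.4],
    [3.5, 3.0, 2.5, 2.0],   # weak model (high Pf_ILS): high thresholds
]

def ffrt_threshold(nsat, pf_ils):
    """Nearest-breakpoint lookup of the AR ratio threshold (a real
    implementation would interpolate between table entries)."""
    i = min(range(len(PF_BREAKS)), key=lambda k: abs(PF_BREAKS[k] - pf_ils))
    j = min(range(len(SAT_BREAKS)), key=lambda k: abs(SAT_BREAKS[k] - nsat))
    return THRESH_TABLE[i][j]

print(ffrt_threshold(9, 0.85))    # prints: 3.0 (weak model, few sats)
print(ffrt_threshold(18, 0.005))  # prints: 1.1 (strong model, many sats)
```

The point of the structure is simply that more satellites or a stronger model both push the threshold down for a given target failure rate.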

They provide example Matlab and Python code to demonstrate their algorithm, which can be downloaded from here along with some documentation explaining how it works. For anyone who wants to understand their methods more completely, this is a very useful resource.

One of the things they predicted in their paper, written a number of years ago, is that as the number of available satellites increases with the newer constellations, the need to move away from a fixed threshold will grow. I believe that with the introduction of Galileo and BeiDou, and the newer low-cost dual-frequency receivers, we have reached the point where a fixed threshold is no longer adequate. The problem is that, especially in a dynamic rover scenario, the number of available satellite pairs can vary from just a handful in obstructed sky conditions to several dozen in open skies. This results in a huge variation of model strength within a single solution. A fixed threshold chosen to be adequate for the obstructed sky conditions is much too conservative for the open sky conditions.

So the next step was to try to add some of this capability to the demo5 version of RTKLIB. I decided early on to reduce the complexity of the solution by aiming simply for an improvement over the existing algorithm rather than for an optimal solution. The first step in this direction was to replace the lookup tables with a set of cascaded polynomial coefficients that approximate the contents of the tables with a much smaller set of numbers. I then rewrote the FFRT pieces of the above-mentioned Python code in the RTKLIB C code. The example code also allows the user to choose one of two targeted failure rates. To simplify the solution further, I eliminated the higher fixed failure-rate target option and only supported the 0.001 failure-rate target.

So far, so good. I was able to get very similar results between the Python example code and the RTKLIB code. Unfortunately, I was not completely happy with these results. The threshold adjustments based on the number of satellites seemed to behave well, but the adjustments based on the covariance matrix of the float ambiguities were often too optimistic and would cause the threshold to be adjusted too low. I suspect the reason is very similar to the reason the accuracy estimates of the Kalman filter also tend to be very optimistic: the math is based on the assumptions that the input measurements all have zero-mean errors and are uncorrelated with each other and in time, none of which is true in the real world. To some extent the lookup tables should compensate for this, but that assumes the error characteristics in the simulations matched the actual errors, which is difficult to achieve for a general case.

My conclusion was that the ambiguity covariance matrix is not sufficient in many cases to accurately estimate the remaining model strength once the number-of-satellites component is accounted for. Fortunately, however, I believe that the largest variation in model strength within a solution is generally going to be caused by the variation in the number of available satellites, not by the other components. I ended up, at least as an initial implementation, with a solution in which the threshold is adjusted for the number of satellites while the remaining components of the model strength are still specified by the user. To keep the user interface compatible with the existing RTKLIB code, I kept the existing convention that model strength is accounted for by adjusting the AR ratio threshold directly. However, it is now a nominal AR ratio threshold that is adjusted rather than a fixed threshold. The nominal AR ratio threshold is defined as the threshold for the case where eight satellite pairs are used for ambiguity resolution. If the number of satellite pairs in the ambiguity set increases, the threshold is reduced; if it decreases, the threshold is increased.
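To make this concrete, here is a sketch of that adjustment using the `ar_poly_coeffs` array from the demo5 source (also quoted in a comment below). The evaluation order, a quadratic in 1/(npairs+1) whose three coefficients are each quartic polynomials in the nominal threshold, is my reconstruction of the demo5 code, not a verbatim copy:

```python
# Sketch of the demo5 satellite-count adjustment of the AR ratio threshold.
# The coefficients are the ar_poly_coeffs[3][5] array from the demo5 source;
# the evaluation order below is my reading of the code and may differ in detail.
AR_POLY_COEFFS = [
    [-1.94058448e-01, -7.79023476e+00, 1.24231120e+02, -4.03126050e+02, 3.50413202e+02],
    [ 6.42237302e-01, -8.39813962e+00, 2.92107285e+01, -2.37577308e+01, -1.14307128e+00],
    [-2.22600390e-02,  3.23169103e-01, -1.39837429e+00,  2.19282996e+00, -5.34583971e-02],
]

def adjust_ar_threshold(nominal, npairs):
    """AR ratio threshold for a given nominal threshold (defined at eight
    satellite pairs) and the number of satellite pairs used for AR."""
    # each row is evaluated as a quartic in the nominal threshold (Horner form)
    c = []
    for row in AR_POLY_COEFFS:
        v = row[0]
        for a in row[1:]:
            v = v * nominal + a
        c.append(v)
    # cascade: the three results are the coefficients of a quadratic
    # in 1/(npairs + 1), so the threshold falls as pairs are added
    x = 1.0 / (npairs + 1)
    return (c[0] * x + c[1]) * x + c[2]

print(round(adjust_ar_threshold(3.0, 8), 2))   # prints: 3.02 (~nominal at 8 pairs)
```

With a nominal threshold of 3.0 this reproduces the intended behavior: roughly 3.0 at eight satellite pairs, dropping below 1.5 beyond about 25 pairs, and rising above the nominal value when fewer than eight pairs are available.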

The plot below attempts to show how this method correlates with the FFRT algorithm given a fixed target failure rate of 0.1%. I won’t describe Pf_ILS in detail here; it is enough to know that it is the previously mentioned estimate of model strength calculated in FFRT from the float ambiguity covariance matrix.

The three trajectories of dots in the plot below are calculated in the FFRT python code using the lookup tables for specified model strengths (Pf_ILS) of 0.005, 0.035, and 0.85. The x-axis is the number of satellites used in ambiguity resolution and the y-axis is the corresponding AR ratio threshold. The three colored lines are the equivalent values calculated in RTKLIB for the nominal AR ratio thresholds that correspond to those model strengths.

One detail worth mentioning is that the FFRT method uses the number of satellites as input, while the RTKLIB implementation uses the number of satellite pairs. The two are aligned in the computations for the single-frequency, single-constellation case, which is why the nominal AR ratio thresholds in the legend occur where the number of satellites on the x-axis is nine (one reference satellite and eight differencing satellites). As the number of reference satellites increases with multiple constellations and frequencies, the difference between the number of satellites and the number of satellite pairs will grow.
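A sketch of the pair count, assuming the same satellites are tracked on every frequency and one reference satellite is consumed per constellation (the function name is mine, not RTKLIB's):

```python
def sat_pairs(nsats_per_const, nfreqs):
    """Number of double-difference satellite pairs: each constellation
    contributes (nsats - 1) pairs per frequency, since one satellite
    per constellation serves as the reference."""
    return sum(max(n - 1, 0) for n in nsats_per_const) * nfreqs

print(sat_pairs([9], 1))      # prints: 8  (one constellation, one frequency)
print(sat_pairs([9, 7], 2))   # prints: 28 (two constellations, dual frequency)
```

The second case shows how quickly the gap opens up: 16 satellites yield 28 pairs, well past the 9-satellites-to-8-pairs correspondence of the simple case.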

I’ve introduced these code changes into the demo5 b34c code, which is available from the rtklibexplorer Github page or the download section of rtkexplorer.com.

I’ve added two new config parameters to RTKLIB in an attempt to make this feature as flexible as possible. The two new parameters, minimum AR ratio and maximum AR ratio, are highlighted in the image below from RTKPOST. They allow the new feature to be completely disabled (the default), partially enabled, or fully enabled. The AR ratio threshold for each epoch is first calculated as shown in the plot above, based on the nominal AR ratio threshold and the number of satellite pairs. It is then clipped to the minimum or maximum value if it exceeds either one.

The default values are 3.0 for minimum, maximum, and nominal. This forces the ratio to always equal 3.0 and is equivalent to the previous single default value of 3.0. To completely enable the feature, set the minimum value to something less than or equal to 1.0 and the maximum value to 10.0 or greater; the calculated threshold will then never be clipped. In the example above (min=1.5, nom=3, max=10), I have partially enabled the feature. The calculated value from the blue line in the above plot will be used as long as it is greater than or equal to 1.5. If the number of satellite pairs is greater than 25, the calculated threshold will drop below 1.5, but the final value will be clipped to the minimum threshold of 1.5.
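The clipping logic itself is just a min/max clamp; in sketch form (the names here are illustrative, not the actual RTKLIB config keys):

```python
def clip_threshold(calculated, min_ar, max_ar):
    """Clip the per-epoch calculated AR ratio threshold to [min_ar, max_ar].
    Setting min_ar == max_ar == 3.0 forces a fixed 3.0 threshold (the
    default behavior); a very wide range leaves the calculated value
    untouched (feature fully enabled)."""
    return min(max(calculated, min_ar), max_ar)

print(clip_threshold(1.2, 1.5, 10.0))  # prints: 1.5 (clipped by the minimum)
print(clip_threshold(4.7, 3.0, 3.0))   # prints: 3.0 (feature effectively disabled)
```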

These changes make the largest difference when running real-time solutions with the u-blox F9P or other dual-frequency receivers with multiple constellations, since the number of satellite pairs used in ambiguity resolution can be quite large and time to first fix is more critical. In this case the feature can significantly reduce time to first fix in the nominal case while still protecting against false fixes when only a few satellites are available.

Be aware however that a variable threshold will not always improve fix rate and in some cases will lower it. Any time the number of available satellite pairs is below eight, the AR ratio threshold will increase. This will reduce the number of false fixes but will also reduce the fix rate. Overall, though, it should give a more constant failure rate over the full range of satellite counts.

In my next post, I will compare the internal F9P real-time solution with the RTKNAVI real-time solution using this feature.

Hi Tim, excellent work!! I keep reading your articles. The work below really solved most of my issues. Recently I have run into this issue again with an increasingly strong model, so I want to research the source of the work below, but the URL no longer works. Do you have any plans to update it?

/* poly coeffs used to adjust AR ratio by # of sats, derived by fitting to example from:
https://www.tudelft.nl/citg/over-faculteit/afdelingen/geoscience-remote-sensing/research/lambda/lambda*/
static double ar_poly_coeffs[3][5] = {
    {-1.94058448e-01, -7.79023476e+00, 1.24231120e+02, -4.03126050e+02, 3.50413202e+02},
    {6.42237302e-01, -8.39813962e+00, 2.92107285e+01, -2.37577308e+01, -1.14307128e+00},
    {-2.22600390e-02, 3.23169103e-01, -1.39837429e+00, 2.19282996e+00, -5.34583971e-02}};


The link is actually fine, but the ‘*/’ at the end is the comment delimiter and is not part of the link. Sorry about that, I’ll update it in my next code commit. By the way, it links to the same document as the link in the post above.


Hi Sir,

In the rtkpos.c file, where you solve the ambiguities in the relpos function, you wrote:

if (stat==SOLQ_FLOAT) {
    /* if valid fixed solution, process it */
    if (manage_amb_LAMBDA(rtk,bias,xa,sat,nf,ns)>1) {...}

while in the original version (RTKLIB 2.4.3), this part was written as:

I didn’t find where the variable ‘stat’ was set to ‘SOLQ_FLOAT’ before the AR function is called.

Did I miss some critical code?

Thanks for reading, and I look forward to your reply!


Yes, you are correct, stat was not explicitly initialized in the b34f code. This is fixed in the latest code in the repo and will be in the b34g code. I believe the compiler was implicitly initializing this variable, so I don’t think it affects the code functionally, but it was sloppy code. Please check the most recent code and see if that addresses your concern.


Dear Tim, I also studied the paper by Teunissen and Verhagen. In my precision fork I replaced the simple residual ratio test with an F-test, which accepts a FIX solution if the probability based on the degrees of freedom (number of satellite pairs minus 2) is above 85%, although from the paper of Teunissen and Verhagen I know that the F-distribution does not match the applicable statistical distribution perfectly. Using an M8T, a Trimble III, and GPS satellites only, I was able to measure a 28 km baseline for more than twelve hours with astonishing, outstanding precision.

Observed standard deviations were EW=0.17 cm, NS=0.16 cm, H=0.3 cm.

However, monitoring the Galileo satellite system under the same conditions gave results more than 10 times worse than with the GPS satellites. This is confusing to me because the atomic clocks on the Galileo satellites are supposed to be more than 10 times more accurate than those of the GPS satellites.
