A couple of posts back I mentioned that it should be fairly easy to measure the solution residuals in order to validate or disprove my somewhat arbitrary choice to increase the “stats-eratio1” input parameter from its default value of 100 to 300. So I decided to go ahead and do it. Like many things, it turned out not to be quite as easy or as definitive as I had hoped, but I will share in this post what I found.

For those of you who don’t have the definitions of all 99 different RTKLIB input options at the tips of your fingers, eratio1 is basically an estimate of how much more accurate the receiver’s carrier-phase measurements are than the pseudorange measurements. Specifically, it is defined as the ratio of the standard deviations of the two. The standard deviations of the measurements are used by the Kalman filter to weight the relative importance of each measurement.
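To make the weighting a little more concrete, here is a simplified sketch of an elevation-dependent error model of the kind RTKLIB uses: the carrier-phase sigma grows as elevation drops, and the pseudorange sigma is just eratio times the phase sigma. The constants and function name below are illustrative, not RTKLIB’s exact internal code.

```python
import math

def measurement_sigma(el_deg, eratio=100.0, a=0.003, b=0.003):
    """Sketch of RTKLIB-style measurement standard deviations (meters).

    a and b play the role of the stats-errphase / stats-errphaseel
    inputs (values here are illustrative); eratio is the code/phase
    ratio discussed above (stats-eratio1).
    """
    sin_el = math.sin(math.radians(el_deg))
    sigma_phase = math.sqrt(a**2 + (b / sin_el)**2)   # carrier phase
    sigma_code = eratio * sigma_phase                 # pseudorange
    return sigma_phase, sigma_code

# The Kalman filter weights each measurement by its inverse variance,
# so a larger eratio down-weights the pseudorange relative to the phase.
phase, code = measurement_sigma(30.0, eratio=300.0)
weight_phase, weight_code = 1 / phase**2, 1 / code**2
```

Note that with this form the ratio of the two sigmas is constant over elevation, which is exactly the assumption the plots later in this post put to the test.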

RTKLIB has a couple of features that will help do this analysis. First of all, the residuals for each satellite for each epoch are recorded in the stats file. The residuals are the difference between the estimated distance from the receiver to each satellite (range) and the actual range measurements after adjusting them to remove errors. There are separate residuals recorded for both the pseudorange and the carrier-phase measurements.
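As a toy sketch of that calculation (the names and the single lumped `corrections` term are hypothetical simplifications; RTKLIB models each error source, such as clocks and atmosphere, separately):

```python
import math

def geometric_range(rcv_pos, sat_pos):
    """Euclidean distance from receiver to satellite (ECEF meters)."""
    return math.dist(rcv_pos, sat_pos)

def residual(measured_range, rcv_pos, sat_pos, corrections=0.0):
    """Corrected measurement minus the range implied by the estimated
    positions; this is what gets written to the stats file, once for
    the pseudorange and once for the carrier phase."""
    return (measured_range - corrections) - geometric_range(rcv_pos, sat_pos)
```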

What we are actually interested in is the measurement errors rather than the measurement residuals; the two are slightly different. The residuals are calculated as the difference from the **estimated** ranges, while the errors are calculated as the difference from the **actual** ranges. The estimated ranges in this case are derived from a best fit of the measurement data.

Fortunately, RTKLIB has a solution mode we haven’t talked about before, the “Fixed” mode, which is specifically intended for this sort of residual analysis. It is similar to “Static” in that the rover is assumed to be stationary. In this mode, however, we give RTKLIB the exact locations of both the base and the rover, and it then calculates the residuals based on the input rover location rather than the estimated rover positions. The position output does not vary from sample to sample as it does in “Static” mode; it remains fixed at the value specified in the input parameters. In reality I’m not sure how much difference this makes for the level of analysis we are doing here. I actually ran the experiment twice, once in “Fixed” mode and once in “Static” mode, and got fairly similar answers. The results I show here, though, are all from the “Fixed” mode solution.
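For reference, a minimal sketch of the relevant solution options in the RTKLIB .conf format (the position values below are placeholders, not my actual site, and the option names are as I recall them, so check them against your config file):

```
pos1-posmode   =fixed    # solve residuals at a known rover position
ant1-postype   =llh      # rover position given explicitly below
ant1-pos1      =40.0     # rover latitude (deg)
ant1-pos2      =-105.0   # rover longitude (deg)
ant1-pos3      =1600.0   # rover height (m)
ant2-postype   =llh      # base position given explicitly as well
ant2-pos1      =40.0
ant2-pos2      =-105.0
ant2-pos3      =1600.0
```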

To collect some data for this experiment, I set up three receivers within a couple of meters of each other: two M8T receivers and one M8N receiver. I chose a location fairly close to a house and also close to several trees to replicate a less than ideal situation, since I’d like to optimize the input parameters for a more challenging environment. There is more margin in ideal conditions, so tuning of the input parameters should be less critical there. I collected roughly four hours of measurements at a one-second sample rate from each of the three receivers and also pulled the same four hours of data from a nearby CORS reference station. This gave me several different receiver pairs and baseline lengths to include in the analysis.

RTKLIB estimates measurement uncertainties as a function of satellite elevation and satellite system so I split my data up that way as well. I won’t bother to label each line in the plots below but each line represents one satellite system and one combination of receivers. The upper plot is for a short baseline, and the lower plot is for a long baseline. The x-axis represents satellite elevation. Each point is the standard deviation of errors from all epochs from all satellites in that system over a range of five degrees of elevation. The data below 15 degrees is not very relevant to this experiment since I usually set the elevation threshold to throw out those measurements. There is no SBAS data in the long baseline plots because the CORS station does not include the SBAS satellites.
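The bucketing itself is straightforward. Here is a sketch of the per-bin standard deviation calculation using numpy (the function name and the input arrays of per-epoch errors and elevations are my own, not RTKLIB output):

```python
import numpy as np

def binned_std(elev_deg, errors, bin_width=5.0, max_el=90.0):
    """Standard deviation of measurement errors grouped into elevation
    bins of bin_width degrees; NaN for bins with too few samples.
    Returns (bin centers, std per bin)."""
    edges = np.arange(0.0, max_el + bin_width, bin_width)
    stds = np.full(len(edges) - 1, np.nan)
    for i in range(len(edges) - 1):
        mask = (elev_deg >= edges[i]) & (elev_deg < edges[i + 1])
        if mask.sum() > 1:
            stds[i] = errors[mask].std()
    return edges[:-1] + bin_width / 2, stds
```

Run once per satellite system and receiver pair, this produces one line on the plots; dividing the pseudorange curve by the carrier-phase curve gives the measured eratio as a function of elevation.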

So how do we interpret this data? As you can see from the plots, the ratio of the standard deviations varies from roughly 100 to 300 most of the time. That doesn’t help a lot, since the purpose of this experiment was to help decide between 100 and 300. Maybe it should make everybody happy: if you like 100, you can probably use this data to justify that number, and if you like 300, you could use it to justify that instead. Still, I find it useful to know that neither answer is terribly wrong and that it is probably OK to adjust the value within this range to whatever you find works best. Sometimes in cases like this it makes sense to use a conservative value, which could be defined as either 100 or 300, and sometimes it makes sense to use a mean, which in this case would be about 200.

I will probably change my default setting from 300 to 200 and use that unless I find that causes a degradation in my results.

Below I have also plotted the standard deviations for the two measurements since they are of interest as well, particularly the carrier-phase numbers since RTKLIB also estimates those based on the input parameters stats-errphase, stats-errphaseel and stats-errphasebl. I will leave analysis of this data to a future post as well as looking at independent estimates that are made by the Ublox receivers themselves.

In order to avoid cluttering the discussion above, I left out some of the details of calculating the standard deviations, particularly the handling of non-zero means. The errors in the measurements range in frequency from very low to high. The very low frequency (or possibly DC) errors do not average to zero in four hours and show up as non-zero means over the full data set. I removed these means since I felt they were caused by factors not relevant to this experiment, and they do not affect the standard deviations anyway. Other errors are higher frequency and do average to zero over the length of time each satellite spanned five degrees of elevation. Since this was the bucket size of my analysis, I did not need to do any mean adjustment for these.

There are also errors that are slowly varying: while they average out to zero over the full four hours, they do not average to zero over the five-degree buckets. These errors get under-estimated by the standard deviation, since slicing the errors into five-degree segments effectively acts as a high-pass filter on the standard deviation calculation. To compensate for this effect, instead of calculating the standard deviation of the errors within each bucket, I used their RMS (the square root of the mean of the squared errors), which does not subtract the bucket mean. For a zero-mean population the two calculations give identical results, but for a non-zero-mean population the RMS gives a better estimate of what the standard deviation would have been without the high-pass filtering. The reason I removed the means of the full data set as described above is that, even though they don’t affect the normal standard deviation measurement, they would have affected this modified calculation.
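A small numerical example makes the distinction between the two calculations concrete (the data here is made up):

```python
import numpy as np

# errors within one elevation bucket, with a non-zero bucket mean
x = np.array([0.8, 1.1, 0.9, 1.2])

std = x.std()                   # subtracts the bucket mean: high-pass filtered
rms = np.sqrt(np.mean(x**2))    # keeps the slowly-varying component

# for zero-mean data the two calculations agree exactly
y = x - x.mean()
zero_mean_match = np.isclose(y.std(), np.sqrt(np.mean(y**2)))
```

For this bucket the RMS is much larger than the standard deviation, because the bucket mean dominates; for a bucket whose errors are centered on zero the two numbers are identical.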

I don’t believe any of these adjustments significantly affected the result, but I do believe they should improve its accuracy. This is somewhat subjective though and another person’s analysis would probably be at least slightly different.