When georeferencing with the GDAL georeferencer, the mean error (root-mean-square error, RMSE) is the standard value for evaluating how much error the chosen control points and transformation algorithm will produce. However, how large this error can or should be depends strongly on the quality of the input data. For example, with a high-resolution aerial image at 1 cm ground resolution I would not accept an RMSE of 1 m, while for a scanned old map with a ground resolution of 25 m that would be a very good result. But, as far as I can tell, how much error counts as tolerable is generally left up to the user.
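For context, the metric itself is simple to state: the RMSE is the square root of the mean squared distance between each ground control point and its position under the fitted transformation. A minimal sketch (the residual values below are hypothetical, purely for illustration):

```python
import math

def rmse(residuals):
    """Root-mean-square error over 2-D GCP residuals.

    residuals: list of (dx, dy) tuples, each the offset in map units
    between a ground control point and its transformed position.
    """
    squared = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical residuals in metres for four control points:
print(rmse([(0.3, -0.4), (0.1, 0.2), (-0.2, 0.0), (0.4, 0.3)]))
# -> approximately 0.384
```

This also shows why the acceptable value scales with the data: the RMSE is expressed in map units, so the same 0.38 m figure is enormous relative to a 1 cm pixel but negligible relative to a 25 m pixel.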

However, "whatever the user will accept" is not a very systematic approach.

Is there any rule of thumb, formula, or table listing the acceptable RMSE values for given resolutions and data types?

If not, I would be interested in how you approach this problem, and what an acceptable error is for you.
