I’ve noticed that the reflectance RGB values obtained from the same photos (same standards and same ROIs) differ between the old version and the new version (micaToolbox-2.0.2) of micaToolbox: for example, with the old version an ROI measures 1639.047, while with the new version it measures 27.16768505.
Do the two versions use different scales, standardization, or linearization models?
How should I read the output of the two versions?
Are they comparable?
Furthermore, I’ve noticed that the same photo (same standards and same ROIs) generates slightly different results in the decimal digits if measured twice…
The old toolbox used a 16-bit scale because in the old days we used to save TIFF images (these were huge files, and the new toolbox made them redundant). This meant the old images set 100% reflectance at 65,535. Though the processing was all done on 32-bit (floating point) images (as it still is).
Anyway, this caused lots of confusion. So now as standard all reflectance images set 100% reflectance (relative to the standard) at 100, and all cone-catch images set the same value at 1. Note that pixels can and often do go well above 100% (or 1 for cone-catch), e.g. due to specular reflectance.
In practice you can just divide your old measurement values by 655.35 to get them on a percent reflectance scale, or by 65535 to get them on a 0-1 scale.
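For anyone scripting this, the conversion can be sketched in Python (the function names here are my own, not part of micaToolbox):

```python
# Old micaToolbox measurements set 100% reflectance at 65,535 (16-bit scale).
# New reflectance images set 100% at 100; cone-catch images set it at 1.

def old_to_percent(value):
    """Convert an old 16-bit-scale measurement to percent reflectance (0-100 scale)."""
    return value / 655.35

def old_to_unit(value):
    """Convert an old 16-bit-scale measurement to a 0-1 scale (as used for cone-catch)."""
    return value / 65535.0

# Sanity check: full-scale old value maps to 100% / 1.0
print(old_to_percent(65535.0))  # 100.0
print(old_to_unit(65535.0))     # 1.0
```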
I hope that helps explain it! Fingers crossed this should reduce confusion in the future.
Differences in the same thing measured twice are odd though. There’s a chance that the ROI selection you use is sub-pixel resolution when first made, but when the ROIs are saved they’re converted to the nearest pixel. This would mean that when you re-open the image you should always get identical results, but the first time you draw an ROI it might be slightly different. The differences should be tiny though!
Thanks Jolyon for your timely and fully effective response.
Yes, the difference when the same thing is measured twice is very slight, so it’s not a problem, but I just wanted to know why, and your answer is plausible.