I have a question regarding measuring natural illuminant spectra underwater. I see that in the sample data file there is a csv for “5m green water 400-700 nm”. Could you share the technique of how this was measured? I’m interested in doing something similar but would like to know if the illuminant spectra should be measured in the same viewing plane as the animal or if measuring downwelling irradiance is ok.
I’m not sure exactly how Cedric measured these irradiance spectra underwater, though I imagine using a cosine corrector and spectrometer.
The illuminant you measure should ideally match the illumination falling on the object of interest you’re measuring. On the sea floor things will normally be receiving light almost exclusively from above, so simply measuring the light arriving from the upper hemisphere (as with an upward-facing cosine corrector) would make most sense. But if you’re dealing exclusively with, e.g., pelagic conditions and sideways-facing surfaces, then measuring the light coming from the side would make most sense (i.e. turn the cosine corrector 90 degrees to point towards the horizon). If the surface being measured is highly non-diffuse (e.g. glossy), then you might want to drop the cosine diffuser altogether.
The grey standard you use should also then match the orientation of the measured illuminant.
You can certainly use the toolbox as-is to work with relative radiance (e.g. looking horizontally underwater). To convert to absolute radiance you will need to calibrate your camera against something of known radiance. Assuming you keep aperture and ISO constant, the linear (non-normalised) pixel value divided by the integration time (exposure time) is proportional to radiance. Just compare these exposure-normalised values to your measured radiance value of the reference to find the scale factor for absolute radiometric measurements. Let me know if you’d like more of an explainer on this. There are certain assumptions made here (e.g. about sensor linearity), but in my testing this has worked extremely effectively.
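If a sketch helps, here’s roughly what that calibration looks like in Python. All the numbers and names are illustrative (not from the toolbox or the sample data); the only idea being shown is that, with aperture and ISO fixed and a linear sensor, pixel value divided by exposure time is proportional to radiance, so one reference of known radiance gives you the scale factor.

```python
# Illustrative sketch: convert linear pixel values to absolute radiance by
# calibrating against a target of known radiance. Assumes fixed aperture
# and ISO and a linear sensor response. All values below are made up.

def exposure_normalise(pixel_value, exposure_time_s):
    """Linear pixel value per second of exposure (proportional to radiance)."""
    return pixel_value / exposure_time_s

# One-off calibration against a reference of known radiance
# (e.g. a grey standard whose radiance you measured with a spectrometer).
ref_pixel = 1800.0       # linear pixel value of the reference
ref_exposure = 1 / 100   # exposure time in seconds
ref_radiance = 0.25      # measured radiance of the reference (e.g. W sr^-1 m^-2)

scale = ref_radiance / exposure_normalise(ref_pixel, ref_exposure)

# Apply to any other shot taken with the same aperture and ISO:
def to_absolute_radiance(pixel_value, exposure_time_s):
    return scale * exposure_normalise(pixel_value, exposure_time_s)

# Half the pixel value at half the exposure time -> same radiance:
print(to_absolute_radiance(900.0, 1 / 200))  # -> 0.25
```

In practice you’d do this per channel (each channel has its own scale factor), but the arithmetic is the same.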
Re. spectra at different depths – absolutely, whether you measure these yourself or use published spectra at the relevant depths. You just need to build a new model at each depth.
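If you don’t have measurements at every depth, one common approximation (a sketch only – the attenuation coefficients below are placeholders, not real green-water values) is to scale a known spectrum with the Beer–Lambert relation E(z) = E(0)·exp(−Kd·z), where Kd is the wavelength-dependent diffuse attenuation coefficient:

```python
import math

# Illustrative sketch: scale a surface irradiance spectrum to other depths
# using E(z) = E(0) * exp(-Kd * z). Kd varies with wavelength, which is why
# the spectrum (and hence the model) changes shape with depth.
# All values here are hypothetical placeholders.

wavelengths = [400, 500, 600, 700]           # nm
surface_irradiance = [1.0, 1.2, 1.1, 0.9]    # relative units at z = 0
kd = [0.30, 0.12, 0.25, 0.55]                # per metre, hypothetical

def irradiance_at_depth(depth_m):
    """Spectrum at the given depth under the Beer-Lambert approximation."""
    return [e * math.exp(-k * depth_m) for e, k in zip(surface_irradiance, kd)]

# One spectrum (and so one cone-catch model) per depth of interest:
for z in (1, 5, 10):
    print(z, [round(e, 3) for e in irradiance_at_depth(z)])
```

You’d then feed each depth’s spectrum into the model-building step as a separate illuminant.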
Just to clarify: if I load a multispectral image as a linear image and then convert to cone-catch, can I calculate the radiance contrast of an object against its background (horizontally underwater) using the colour and luminance JND calculator, or are the equations not suitable for this? I have also noticed that when the linear image is converted to cone catch the resulting image is white for each slice… not sure why that is.
Thanks for the reply! I appreciate the advice. I’m curious, is it possible to compare an object against the background radiance of a horizonless landscape (e.g. open water) using the toolbox? Can radiance values be calculated using the camera’s RGB response and the sidewelling irradiance data? Also, if I calculate how the irradiance spectrum changes at different depths, can this be simulated in the model?