I am conducting a natural history study of a rare marine crustacean that exhibits unique color polymorphism. I was wondering whether I could use approximations of the illuminant and spectral-sensitivity inputs in micaToolbox, as opposed to empirically measured in situ spectra, to generate hypotheses about underwater detection of this animal.
To give you a bit of background, this animal is very difficult to observe in situ since it is so small, cryptic, and lives in very cold and turbulent waters. I have spent the last several years finding these critters by SCUBA, collecting their substrate habitat and sorting them out in a laboratory setting. I have managed to gather hundreds of images of their dorsal surfaces, with a waterproof 20-patch Kodak greyscale in each image. Ideally, I would like to use the reflectance data from these images to hypothesize how their predators might perceive different color morphs in their natural environment. For reference, images were taken in a field laboratory setting with a Nikon D7100 and a Venus Optics macro lens positioned directly above a 30 cm² glass tank filled with water to a depth of ~5 cm. The animal (~0.7 cm long) was placed in the middle of the tank with the color card. Since our field setup did not allow for vertical placement of the illumination, the flash units were placed on either side of the tank in the same horizontal plane as the subject (~15 cm from the subject on each side).
I was hoping to use micaToolbox to convert my images to cone-catch images before running QCPA, by generating a cone-catch model from spectral sensitivities. While I cannot collect illuminant spectra myself due to rough field conditions, published illuminant spectra are available for the geographic region and depth range of my field sites. However, I am concerned that using the D65 option for the model illuminant will be too inaccurate, since I took my photos with artificial flashes in a laboratory setting (and in a horizontal plane). I am also concerned that the camera spectral sensitivity will be too inaccurate, since I used a lens that is not listed in the toolbox and do not have the resources to measure new spectral data for my particular camera/lens combination. The input data does not need to be perfectly accurate, since I only want to hypothesize about possible predator-prey detection, but it would be nice to have some degree of confidence in the model results. This is one of those instances where I cannot gather new data and have to work with what I have, unfortunately.
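For anyone following along, the quantity a cone-catch model ultimately estimates is the receptor quantum catch, Q_i = ∫ R(λ) I(λ) S_i(λ) dλ, i.e. the wavelength integral of patch reflectance × illuminant × receptor sensitivity. The sketch below is only an illustration of that calculation with made-up placeholder spectra (the Gaussian illuminant and sensitivity curves are not real data); it is not micaToolbox code, but it shows why swapping the illuminant (flash vs. published downwelling spectra) changes the modelled catches:

```python
import numpy as np

# Placeholder wavelength grid, nm (visible range, 10 nm steps)
wl = np.arange(400, 701, 10, dtype=float)

# Hypothetical spectra -- placeholders only, not measured data:
reflectance = np.full_like(wl, 0.5)                         # flat 50% reflector
illuminant = np.exp(-(wl - 480.0) ** 2 / (2 * 60.0 ** 2))   # blue-shifted light at depth
sensitivity = np.exp(-(wl - 500.0) ** 2 / (2 * 40.0 ** 2))  # receptor peaking at 500 nm

def quantum_catch(R, I, S, wl):
    """Relative quantum catch: trapezoidal integral of R(λ)·I(λ)·S(λ) over wavelength."""
    integrand = R * I * S
    return np.sum((integrand[:-1] + integrand[1:]) / 2.0 * np.diff(wl))

# Normalise against a perfect white standard (R = 1), as in
# von Kries-style adaptation used by receptor-noise models
q_patch = quantum_catch(reflectance, illuminant, sensitivity, wl)
q_white = quantum_catch(np.ones_like(wl), illuminant, sensitivity, wl)
print(q_patch / q_white)  # 0.5 for a flat 50% reflector, regardless of illuminant
```

Note that for a flat reflector the normalised catch is illuminant-independent, but for any spectrally structured reflectance (like a color morph) the illuminant shape matters, which is exactly the concern about substituting D65 or flash spectra for in situ light.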
What are your thoughts? Do you think I could squeeze some useful (albeit speculative) data out of my images?
Please see my email response for a detailed answer. It does seem like you would need the spectral sensitivity of your camera-lens combination.