I am having trouble generating results from the QCPA without relying on the RNL model. I am working with a turtle, and unfortunately turtles seem to be one of the few cases where RNL is not recommended (van den Berg 2020 specifically cites Rocha et al. as a circumstance in which RNL should be avoided). However, I am not sure where that leaves me for getting values out of the framework.
I think I am after values like patch Dmax and patch luminance, since I am measuring arm stripe color to relate to immune function, and a reviewer suggested that I should be reporting chromatic and achromatic contrast. I am not sure how to get those values from QCPA: every time I uncheck the RNL ranked filter option and set clustering to anything other than RNL cluster, I get no values in the output, and sometimes errors. Possibly relevant: I have a specific ROI of the arm selected. Do you know how I could get these values for turtle arm stripes, or whether I am simply heading in the wrong direction? Thank you!
As always, I appreciate your time!
Happy Friday to you too! Yes, indeed, as far as I am aware, current research points towards significant deviations from our ‘standard’ assumptions about colour opponent processing in turtles. You are therefore exactly right in considering the Dmax parameters, which were designed precisely for the case of an unknown opponency system. I think your errors can be explained by the QCPA’s need to work with a segmented image: since image segmentation is done using either the Naive Bayes classifier or RNL clustering, those are your two options. You should be able to obtain the Dmax pattern parameters using the Naive Bayes classifier.
When I try the QCPA analysis without the RNL ranked filter, clustering with Naive Bayes on the selected ROI of the entire arm (which contains the dark background and the red stripe), I consistently get the error that the image does not have an active selection, or “There are no results or ROIs – either open some suitable measurements or specify some ROIs”. Does this mean the ROI can’t be clustered into separate background and arm-stripe sections? Using the whole photo is counterproductive, since it includes our hands holding the turtle; when I try it with Naive Bayes, it clusters the whole photo into just two clusters, which doesn’t really make sense either. If the clustering won’t work, is there any way for me to specify the stripe myself by eye and calculate Dmax? I recognize that the boundary I identify as stripe and what the turtle perceives won’t be identical, given differences in vision, but I also know that behaviorally the turtles can see and distinguish the stripe colors, so it doesn’t make sense that the stripe would cluster together with the background arm color. Thanks for your help!
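In case it clarifies what I mean by calculating it myself: my understanding is that Dmax boils down to the largest pairwise distance between patch values, so I imagined something like the sketch below, computed from the per-ROI mean values I can export from the ROI Manager. This is just an illustration of that idea, not the QCPA/MICA implementation, and the cone-catch triplets are made-up numbers:

```python
# Illustrative sketch only -- NOT the QCPA/MICA implementation.
# Assumes mean cone-catch values per hand-drawn patch (e.g. per-ROI
# means exported from ImageJ's ROI Manager); all numbers are invented.
import math

def dmax(patch_means):
    """Largest pairwise Euclidean distance between patch mean values."""
    best = 0.0
    for i in range(len(patch_means)):
        for j in range(i + 1, len(patch_means)):
            best = max(best, math.dist(patch_means[i], patch_means[j]))
    return best

# Hypothetical (lw, mw, sw) cone-catch means for background skin
# vs. the red stripe, measured from two manually drawn ROIs.
patches = [(0.12, 0.10, 0.08), (0.55, 0.20, 0.10)]
print(dmax(patches))
```

If that is roughly the right quantity, I could draw the stripe and background ROIs by hand and still report a Dmax-style contrast, even without the automated clustering.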