I first learned to run QCPA a few months back, using a trial photo of a bird beak with orange coloration. After converting the result to a presentation image, the output was this:
Recently I went through the framework again after creating a better cone catch model from a photo of my ColorChecker taken in better lighting. I then analysed the same photo used in the presentation image above, but this time it gave me only 1 cluster instead of multiple (I assumed this was because of the different cone catch model). More worryingly, the output image had a very low number of pixels, which makes me think I'm doing something wrong:
Also, when making the presentation image this time, I was only offered the red input channel in the "Colour and False-Colour Image Creator" window. As far as I know, the only thing that changed between the two runs on this photo was the cone catch model. Can you suggest a way to fix the low pixel count?
The image dimensions are determined by the acuity/angular width/distance/pixels-per-millimetre settings in the acuity stage, so I imagine you're simply using a different setting there this time. The settings from your previous attempt may well have been saved, and the settings you use are recorded in the image filename – very useful when looking back at old images.
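To see why those acuity-stage settings have such a large effect on output pixel count, here is a minimal back-of-the-envelope sketch. It is NOT QCPA's actual code – the function name, the `px_per_mra` parameter, and the exact formula are illustrative assumptions – but it captures the basic geometry: the viewing distance and real-world size set the angular width of the object, the viewer's acuity sets the minimum resolvable angle (MRA), and the rescaled image only keeps a handful of pixels per MRA.

```python
import math

def estimated_width_px(real_width_mm, distance_mm, acuity_cpd, px_per_mra=5):
    """Rough estimate of image width (in pixels) after acuity correction.

    Illustrative assumptions (not QCPA's exact implementation):
    - angular width of the object from simple trigonometry,
    - minimum resolvable angle (MRA) = 1 / acuity (in degrees),
    - the rescaled image keeps `px_per_mra` pixels per MRA.
    """
    # Angular width subtended by the object at the given viewing distance
    angular_width_deg = math.degrees(2 * math.atan(real_width_mm / (2 * distance_mm)))
    # Minimum resolvable angle for a viewer with this acuity (cycles/degree)
    mra_deg = 1.0 / acuity_cpd
    # Pixels needed to represent the resolvable detail across the object
    return angular_width_deg / mra_deg * px_per_mra

# Hypothetical example: a 50 mm patch viewed by a 4 cycles/degree observer.
near = estimated_width_px(50, 100, 4)    # viewed from 10 cm
far = estimated_width_px(50, 1000, 4)    # viewed from 1 m
```

In this sketch, moving the modelled viewer from 10 cm to 1 m shrinks the output width roughly tenfold, so an accidentally changed viewing-distance (or acuity) setting between runs can easily explain a drastic drop in pixel count.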