Hello! Is it possible to convert my .mspec image to a cone-catch image using the visual system models provided by micaToolbox, without generating a camera-specific cone mapping model, and then use it to run analyses with QCPA and compare results between different images? I'm specifically interested in human visual models. I understand that generating a model for my specific camera is ideal, but I would like to know whether, lacking the equipment to generate such a model, using the models bundled with the software would be a viable option. Thanks in advance!
From what you describe, creating a cone-catch image with the help of a colour standard (the chart-based cone-catch model) would be the appropriate way to go: http://www.empiricalimaging.com/knowledge-base/chart-based-cone-catch-model/