Hello! Is it possible to convert my .mspec image to a cone-catch image using the visual system models provided by micaToolbox, without generating a cone-mapping model specific to my camera, and then use it to run analyses with QCPA and compare results between different images? I am specifically interested in human visual models. I imagine that generating a model for my specific camera is ideal, but in the absence of the equipment needed to generate one, would using the models included in the software be a viable option? Thanks in advance!

Using QCPA without a cone-catch model
Cedric van den Berg Changed status to publish March 30, 2023