Hi Jolyon & Cedric,
I’m thinking of using Thomas Pike’s saliency paper (10.1111/2041-210X.13019) as the basis for comparing two different visual systems, i.e. calculating saliency maps for the same image converted to two different cone-catch models. My understanding (okay, hope) is that I can basically get that from the LEIA chromatic and achromatic outputs somehow. I don’t mind taking the micaToolbox output and putting it through MATLAB, but I’m wondering whether any micaToolbox users or developers have thought about this problem and made any progress on it before I start reinventing the wheel.
I am a big fan of the saliency method and of Tom Pike’s work on this. My student has recreated the code in ImageJ, but it’s not yet ready for general release.
You should at least be able to export the cone-catch images from ImageJ and get MATLAB to load them.
The LEIA output is potentially useful too, but once you’re in MATLAB you might as well use Tom’s code. I confess I haven’t used it myself, as I gave up on MATLAB a long time ago in favour of open source.
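For the export/import step, the usual route is to save the cone-catch stack from ImageJ as a 32-bit multi-slice TIFF (one slice per receptor channel) and read it back elsewhere. A minimal sketch, in Python rather than MATLAB for illustration; the helper name, file name, and channel count are placeholders, not part of micaToolbox:

```python
import numpy as np
from PIL import Image, ImageSequence

def load_cone_catch_stack(path):
    """Load a multi-slice 32-bit TIFF (one slice per receptor channel)
    into a single (height, width, channels) float array."""
    with Image.open(path) as im:
        frames = [np.asarray(frame, dtype=np.float32)
                  for frame in ImageSequence.Iterator(im)]
    return np.stack(frames, axis=-1)

# Demo round trip with a synthetic 3-channel "cone-catch" stack
# (random placeholder data standing in for an ImageJ export).
slices = [Image.fromarray(np.random.rand(8, 8).astype(np.float32))
          for _ in range(3)]
slices[0].save("cone_catch_demo.tif", save_all=True,
               append_images=slices[1:])

stack = load_cone_catch_stack("cone_catch_demo.tif")
print(stack.shape)  # (8, 8, 3)
```

The same shape convention (rows × columns × channels) is what MATLAB's `imread`/`Tiff` readers produce, so code written against one should port to the other with little change.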
I share your feelings on MATLAB, hence the question… I suppose we’ll try that for now, or maybe just use the LEIA scores for a preliminary analysis, as those have other technical & biological appeal for our question. Thanks Jolyon!