I started a conversation on another board about this 'new' feature (new as of 2011, really).
Understanding of it seems to be a bit fuzzy unless you are an electrical engineer.
I am not an engineer, so my understanding is entirely relative to what I have been reading.
Basically, a 'variant' sensor uses the ISO setting to apply analog gain, amplifying the voltage read off the sensor before it is digitized, and captures that amplified result.
An 'invariant' sensor uses the camera processor instead: the underlying gain really never changes, and the ISO value instructs the processor to brighten the capture digitally.
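To check my own reading, here is a minimal sketch of that difference using a toy noise model I made up for illustration (the function names, photon counts, and noise figures are all assumptions, not measurements from any real sensor):

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(mean_photons, iso_gain, n,
            pre_noise_e=1.5, post_noise_dn=5.0):
    """Simulate n pixel reads under a toy noise model.
    Shot noise is Poisson; pre_noise_e is read noise (electrons)
    added BEFORE the ISO amplifier; post_noise_dn is downstream
    noise (ADC etc.) added AFTER it, in output units."""
    signal_e = rng.poisson(mean_photons, n) + rng.normal(0, pre_noise_e, n)
    return signal_e * iso_gain + rng.normal(0, post_noise_dn, n)

n = 100_000
photons = 20  # a deliberately underexposed shadow region

# Raise ISO in camera (analog gain x16, about +4 EV) ...
hi_iso = capture(photons, iso_gain=16, n=n)
# ... versus shoot at base gain and push +4 EV in software:
pushed = capture(photons, iso_gain=1, n=n) * 16

snr = lambda x: x.mean() / x.std()
print(f"SNR, analog gain in camera: {snr(hi_iso):.2f}")  # ~4.2
print(f"SNR, digital push in post:  {snr(pushed):.2f}")  # ~2.9
```

With post_noise_dn well above zero (the variant case), the analog gain wins because it lifts the signal over the downstream noise. Set post_noise_dn close to 0 and the two printed numbers converge, and that convergence is what 'ISO invariant' means, as far as I can tell.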
In case one (variant), we have been relying on the sensor's DR and various methods like ETTR, EBTR, ETTL, and PP to achieve an acceptable result. Some of us even use uniWB in order to get a true histogram out of the camera.
In case two (invariant), we just need uniWB and PP. We keep the camera's DR at its nominal/optimal value, meaning it does not shrink or degrade as it does in case one when ISO goes up. I am not sure what this does to color accuracy.
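For what it's worth, here is a small sketch of why uniWB gives a truer histogram (the multipliers and raw values below are invented for illustration, not from any particular camera):

```python
# Per-channel raw maxima from a hypothetical 14-bit shot (white = 16383):
raw_max = {"R": 9000, "G": 15000, "B": 7000}

daylight_wb = {"R": 2.0, "G": 1.0, "B": 1.6}  # typical-looking multipliers (assumed)
uni_wb      = {"R": 1.0, "G": 1.0, "B": 1.0}  # uniWB: no channel scaling

def looks_clipped(raw_max, wb, white=16383):
    """Which channels the camera histogram would *show* as clipped."""
    return {ch: raw_max[ch] * wb[ch] >= white for ch in raw_max}

print(looks_clipped(raw_max, daylight_wb))  # R reads as clipped: a false alarm
print(looks_clipped(raw_max, uni_wb))       # matches the actual raw data
```

The camera builds its histogram from the white-balanced preview, so normal WB multipliers can show a channel as blown when the raw data underneath is fine; uniWB keeps the multipliers at 1.0 so the histogram tracks the raw file.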
In short, shooting at the ISO with optimal DR and brightening the image ourselves in PP seems to be the best approach.
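And to make "brighten it ourselves in PP" concrete, a hedged sketch of a digital push (the function and numbers are mine, not from any particular raw converter):

```python
import numpy as np

WHITE = 16383  # white level of a hypothetical 14-bit raw file

def push_stops(raw_linear, stops):
    """Brighten linear raw data by `stops` EV in post; values clip at
    the white level, just as analog gain would have clipped at capture."""
    return np.minimum(raw_linear * 2.0 ** stops, WHITE)

# Deep shadow (50 DN) and bright highlight (4000 DN), shot at base ISO:
scene = np.array([50.0, 4000.0])

print(np.minimum(scene * 8, WHITE))  # in-camera ISO +3 EV: highlight clips
print(push_stops(scene, 3))          # a flat +3 EV push clips the same way...

# ...but because the base-ISO raw still holds the unclipped highlight,
# PP can lift only the shadows and protect it, which analog gain cannot:
selective = np.where(scene < 1000, scene * 8, scene)
print(selective)                     # -> [400., 4000.]
```

The point, if I have it right, is that the flat push gains nothing over in-camera ISO on an invariant sensor, but doing it in PP leaves the highlight headroom intact so the push can be selective.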
Now, can anyone correct my understanding and set me right? I am almost certain that I have missed something.
When I think, the world ends.