We have a new AC4040 BSI VIS-NIR. Testing has shown significant nonlinearity in both high-gain and low-gain modes. Images were taken with a temperature setpoint of 0 °C. Discounting short exposure times, we see greater than 1% deviation from linearity. The residuals are in opposite directions for the two gain modes, so the problem is not a nonlinear ADC. At short exposure times, we see a divergence that superficially looks like a fixed offset between the EXPOSURE recorded in the FITS header and the actual exposure time, but it is not removed by fitting a zeropoint term in the linear fit (a0 + a1*t). These two effects may be unrelated, or two different manifestations of one physical problem. In any event, we should be seeing 0.1% nonlinearity residuals to allow proper stellar photometry (yes, I know, actual 0.1% photometry is nearly unattainable, but students should aim for better than 1%, and we don't want nonlinearity to overwhelm their error budget). 1% or more cannot be considered "linear".
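For reference, the kind of check described above can be sketched in a few lines of numpy (this is an illustrative sketch with synthetic data, not the actual analysis used): fit mean signal versus exposure time, then report residuals as a percent of each point's own signal.

```python
import numpy as np

def linearity_residuals(t, signal):
    """Fit signal = a0 + a1*t and return percent residuals
    relative to each point's own signal level, plus (a0, a1)."""
    a1, a0 = np.polyfit(t, signal, 1)   # polyfit returns highest power first
    fit = a0 + a1 * t
    return 100.0 * (signal - fit) / signal, (a0, a1)

# Synthetic, perfectly linear data: residuals should be ~0%
t = np.linspace(0.1, 10.0, 20)           # exposure times (s)
resid, (a0, a1) = linearity_residuals(t, 50.0 + 1000.0 * t)
```

A real camera exceeding the 1% figure quoted above would show residuals well outside 0.1% in `resid` over the long-exposure points.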
The converter subsystem in CMOS sensors is of course entirely built into the sensor itself. Gpixel's sensor specifications are as follows:

1.4X gain mode: typical 0.2%, maximum 1%
16.5X gain mode: typical 0.7%, maximum 1%

(Just FYI, Gpixel specifies these numbers using HDR mode, where the sensor simultaneously uses 8 high-gain A/D converters and 8 low-gain A/D converters.)

Some questions:
- What is your camera serial number?
- What gain settings did you use for the measurements? (Gain is adjustable for both HG and LG modes.)
- Can you provide your raw test data so we can review? (Big files - will require Dropbox or similar.)
Thanks. Serial number: AC4040BM-22060905. Gains were the default High and Low gain settings, resulting in numeric values 0.388364 and 9.772891 (plus change) respectively, as reported in the FITS headers. I will arrange to post the raw data; it is indeed rather large. FYI, all images were taken with 2x2 co-add set in MAXIM, resulting in 2048x2048 images, and during analysis (not in MAXIM) the reference signal levels were the average over the central region 512:1536 in both dimensions.
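For clarity, the reference level described above amounts to the following (a sketch assuming each frame is loaded as a 2048x2048 numpy array; the constant image is just a stand-in for real data):

```python
import numpy as np

# Hypothetical frame standing in for one 2048x2048 co-added image
img = np.full((2048, 2048), 1200.0)

# Reference signal level: mean over the central 512:1536 region
# in both dimensions (a 1024x1024 sub-array)
ref = img[512:1536, 512:1536].mean()
```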
So what exact function did the linear regression use? Y = a0 + a1*X, or just the linear (a1) term? Were the signal averages weighted according to their inverse amplitude, or unweighted? Are the two values (2545.545, 128.7047) the two fitted terms? If there is an offset (the a0 term), that is a serious problem: for the system as a whole (camera + readout) it corresponds to a deviation from linearity. If it corresponds to a fixed latency in the stated exposure time (which should be measured in ns), it should be corrected before we see the data and the header values; if not, we should at least know about it. In any case, I would like to see your Excel sheet, so that I can see at least those details. I am not convinced the values you are working with are the same as the ones I am working with, for whatever reason.
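To make the question concrete, here is how the fit variants being asked about differ in numpy (synthetic numbers; the offset and slope are hypothetical, not the fitted values quoted above):

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # exposure times (s)
y = 120.0 + 1000.0 * t                         # synthetic data with an offset

# Two-term fit Y = a0 + a1*X: the a0 term absorbs any fixed offset
a1_2, a0_2 = np.polyfit(t, y, 1)

# One-term fit Y = a1*X (forced through zero): the offset
# leaks into the slope and into the residuals
a1_1 = np.sum(t * y) / np.sum(t * t)

# Inverse-amplitude weighting (w = 1/y) emphasizes low-signal points
a1_w, a0_w = np.polyfit(t, y, 1, w=1.0 / y)
```

With a positive offset, the zero-intercept slope `a1_1` comes out larger than the true slope, which is why the choice of fitting function matters when comparing results.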
If you are normalizing the error to the maximum signal, that is not a useful way to measure nonlinearity on a high-dynamic-range measuring device (and if you are, it is conventional to state so clearly: "relative to full scale"). It is clear that your nonlinearity percentages are low (compared to mine) because of that normalization. My other questions still stand.
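The distinction can be made explicit with hypothetical numbers: the same deviation in ADU looks far smaller when normalized to full scale than when normalized to the signal itself, especially at low signal levels.

```python
import numpy as np

full_scale = 65535.0                           # assumed 16-bit full scale
signal = np.array([500.0, 5000.0, 50000.0])    # hypothetical signal levels (ADU)
error = np.array([10.0, 50.0, 400.0])          # hypothetical deviations (ADU)

pct_of_signal = 100.0 * error / signal         # what matters for photometry
pct_of_fullscale = 100.0 * error / full_scale  # always looks small at low signal
```

At the 500 ADU point, a 10 ADU error is a 2% photometric error but only about 0.015% "relative to full scale", which is how two parties can quote very different nonlinearity numbers for the same data.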
Since this is a CMOS device, the linearity characteristics are intrinsic to the sensor - the analog electronics and A/D converters are built into the device. The sensor manufacturer's specification is 0.7% typical in 3X gain mode up to 39 ke-.
Still waiting for my questions to be addressed. I now have two (conflicting) answers from Gpixel about how linearity is measured, and I want to know the details of your linearity measurement. Seeing your Excel file would be most helpful.