I am occasionally getting a 10-fold difference in my darks; it has happened twice now. Same temperature (-15C), same duration (1280s), same stack size (40s), same gain (1500). I always allow 30+ minutes for stabilization, and the cooling power runs at about 50%. One set of darks was taken over multiple nights and there was variation between them. It gave me an over-correction in my images, so I took another set in a single night; that set showed no variation, but its average values were 1/10 those of the previous set (the second time I have seen this weird behavior), and I got a huge under-correction. Here is a link to the raw dark files: https://can01.safelinks.protection....W+CVR9gPyNSTZRSGx82fgeQajA7JrXvkk=&reserved=0

Windows 11 Pro
Maxim 6.50
Firmware V15
dlapi.dll ......................... 2.8.0.0
DLAPIWrapper.dll .................. 2.8.0.0
ASCOM.DLImaging.Camera.dll ........ 6.4.23.0
ASCOM.DLImaging.CameraExt.dll ..... 6.4.23.0
ASCOM.DLImaging.FilterWheel.dll ... 6.4.23.0
ftd2xx.dll ........................ 3.2.14.0
ftd3xx.dll ........................ 1.3.0.4
wpcap.dll ......................... Not Installed
packet.dll ........................ Not Installed

Thx for your help.
Make sure you're in the exact same imaging mode. StackPro will dramatically change your pedestal level (1X to 16X, depending on exposure time and subexposure duration). What is the serial number of your camera?
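For anyone trying to sanity-check their frames, the pedestal scaling described above could be estimated like this. This is only a sketch under an assumed model (StackPro co-adds subexposures, each contributing a fixed pedestal, capped at the 16X figure mentioned above); the function name and the base pedestal value are hypothetical:

```python
# Hypothetical model: if StackPro co-adds N subexposures and each sub
# carries its own fixed pedestal, the pedestal of the stacked frame
# scales with N, capped at 16X.
import math

def stackpro_pedestal(exposure_s, subexposure_s, base_pedestal=100):
    """Estimate the pedestal of a StackPro frame (assumed model)."""
    n_subs = max(1, math.ceil(exposure_s / subexposure_s))
    return min(n_subs, 16) * base_pedestal

# A 1280 s exposure from 40 s subs -> 32 subs, capped at 16X pedestal
print(stackpro_pedestal(1280, 40))   # 1600
# A flat shorter than the subexposure -> one sub, same pedestal as HighGain
print(stackpro_pedestal(30, 40))     # 100
```

This is why a dark taken in StackPro and a flat taken in High Gain can sit at very different baseline levels even with identical sensor settings.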
Oops. I was taking flats (which are shorter than my subexposure setting) and had the camera set to High Gain instead of StackPro. Sorry about that. Still, I have 2 questions: 1) When taking flats, if I stay below the subexposure setting, does it matter whether I am in High Gain or StackPro? 2) Is the variation you see in the “multiple night” folder, which were all StackPro, normal? Should I be taking them all on the same night? Thanks for your help.
There is no difference between StackPro and HighGain if your exposure duration is less than or equal to the subexposure length. The 4040BSI sensors are very fussy. I do recommend taking the darks and flats at the same time, so conditions are as stable as possible.
Thx for your reply. It's a big ask to take calibration images "live", but I get it. It would require me to take darks after my imaging sessions, when there will be some daylight leakage around the roof's perimeter. How good is the camera's mechanical shutter, considering the 21-minute exposure time? I ran an experiment: from my 37 21-minute darks taken over multiple nights, I selected the 15 with the lowest statistical average (about 0.5% lower) and combined those. I reran my project, and the resulting OIII stack (45 x 1280s = 16 hours) was much better, but not perfect. I have included a link to the 2 images. I know I am making a big ask too, because the target, PUWE1, is apparently the faintest planetary known. The target is flying barely above the trees (noise), so to speak. I measured it at about 0.4%, which is on par with the variance in the darks, so I should not be surprised. This is certainly testing the camera's limits. Do you think taking more data will improve the contrast in this situation? What about imaging with some moon ... to be avoided? https://can01.safelinks.protection....WI65r00cLEePkLdXKWgX5ysYJcADUJy3I=&reserved=0
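For reference, the "lowest 15 of 37" selection above can be sketched with numpy. The synthetic frames below stand in for the real FITS darks (with astropy you'd load each via fits.getdata), and the 1000/1050 levels are made-up numbers just to show the mechanics:

```python
# Sketch of the selection: keep the n darks with the lowest mean pixel
# value and average them into a master dark.
import numpy as np

def master_from_lowest(darks, n_keep):
    """Average the n_keep frames with the lowest mean pixel value."""
    means = [d.mean() for d in darks]
    order = np.argsort(means)[:n_keep]
    return np.mean([darks[i] for i in order], axis=0)

rng = np.random.default_rng(0)
# 37 fake darks: most near level 1000, every fifth with an elevated baseline
darks = [rng.normal(1000 + (50 if i % 5 == 0 else 0), 5, (8, 8))
         for i in range(37)]
master = master_from_lowest(darks, 15)
print(round(master.mean()))  # close to the lower (1000) baseline
```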
You're definitely pushing the limits! The shutter is designed for operation at night; there could be some scattering around the shutter vanes in bright conditions. If you have a filter wheel, put a blank in the last slot and select it prior to taking your darks. I expect that, combined with the camera's shutter, would be very effective.

I would avoid moonlight for sure, simply because it adds photon shot noise to your data. How much you can tolerate is something you'd have to determine empirically; you can evaluate the impact by looking at the background noise in your images.

Back in 1998 Paul Boltwood won the Deep Field Challenge proposed by Bradley Schaefer in Sky & Telescope. Here's his image: https://apod.nasa.gov/apod/ap990414.html - it was 767 two-minute exposures on a 16-inch telescope, reaching magnitude 24.1. This was from his back yard, not a proper dark-sky site, so pretty impressive. Paul ultimately produced a slightly better limiting magnitude by reprocessing his images: he wrote software to weight each image by its quality prior to stacking. (I don't recall the quality metric he used, but it would make sense to measure the SNR of a specific star or stars.) He gained something like half a magnitude doing that, which suggests there's some potential to improve things on the processing side.

Due to its relatively high dark current for a CMOS sensor, the presence of onboard readout electronics that heat up significantly during sensor readout, and the very large number of electrical connections the sensor requires, stabilizing the pixel temperature on these sensors is more difficult than on a CCD. Despite going through major hoops to minimize any effects, we do see a small shift in baseline under different ambient conditions. It can show up when you really push things, as you are doing. In your data you can see a shift between two sets that have a 10C difference in heat sink temperature, suggesting a big shift in ambient conditions.
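The quality-weighted stacking idea could look something like this. A minimal sketch under stated assumptions: inverse-variance (SNR-squared) weights and a single reference star as the quality metric; the frames and numbers are synthetic, not Boltwood's actual method:

```python
# Sketch of quality-weighted stacking: weight each frame by a quality
# metric (here, the SNR of one reference star) before combining.
import numpy as np

def weighted_stack(frames, snrs):
    """Combine frames with weights proportional to SNR^2 (inverse-variance)."""
    w = np.asarray(snrs, dtype=float) ** 2
    w /= w.sum()
    return np.tensordot(w, np.asarray(frames), axes=1)

rng = np.random.default_rng(1)
signal = np.zeros((16, 16))
signal[8, 8] = 100.0                       # one synthetic "star"
noises = [2.0, 4.0, 8.0]                   # per-frame background noise
frames = [signal + rng.normal(0, s, signal.shape) for s in noises]
snrs = [100.0 / s for s in noises]         # star SNR per frame
stack = weighted_stack(frames, snrs)
print(stack[8, 8])  # near 100: the low-noise frames dominate the combine
```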
(Just FYI, we measure the sensor temperature with a copper cold finger right under the pixels. There is an onboard temperature sensor, but it's located in the readout area, not at the pixels, and it tracks the pixel temperature less well than the cold finger does.) Anyway, I expect this is something you can look at in your data reduction.
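One way to handle that baseline shift in data reduction (an assumed approach, not a vendor recommendation) is to estimate the residual offset between each light frame and the master dark from a star-free background patch and remove it before subtracting:

```python
# Sketch: match the baseline of the master dark to each light frame
# using the median difference in a star-free background region, then
# subtract. The region slice and the 50 ADU shift are made-up examples.
import numpy as np

def baseline_matched_subtract(light, master_dark, bg=np.s_[:4, :4]):
    """Subtract the dark after removing the median offset in a background patch."""
    offset = np.median(light[bg] - master_dark[bg])
    return light - (master_dark + offset)

light = np.full((8, 8), 1050.0)        # light frame with a 50 ADU baseline shift
master_dark = np.full((8, 8), 1000.0)  # dark taken under different conditions
out = baseline_matched_subtract(light, master_dark)
print(out.max())  # 0.0: the shifted baseline is removed along with the dark
```

The caveat is that the background patch must be genuinely star-free and the shift must be a uniform pedestal, not a change in the dark-current pattern itself.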
That’s a deep shot, wow! If I want to get this object, I will have to do better. That includes taking darks and flats every (moonless) night. Thanks for your help.