Has there been any progress on the issues with the quadrant pattern? It really is distracting in the images. I've been taking 400-second LRGB frames and 600-second narrowband frames, all using High Gain Stack Pro. That helps some, but after aligning and stacking the quadrant pattern is still obvious. The imaging system is an AP3600, ASA600 24" f/7 RC, and SBIG AC4040BSI. Any thoughts on this? The system is remote at SRO, running MaxIm DL version 6.5.
We have an update to the GSENSE Calibration tool in the next release, which should be out shortly. We've streamlined the tool a bit: the built-in camera sequencing has been dropped, and the manual now explains how to set up an Autosave sequence to collect the needed calibration frames. That eliminates a whole bunch of things that could potentially go wrong when setting up the tool. There have also been some other tweaks that should make it work more reliably. I'm not going to guarantee that it's perfect yet, but it should be better.
Doug, is the GSENSE Calibration done when imaging or when processing the data? If I'm reading this right, does it mean that imaging modes such as High Gain Stack Pro have been changed? As the remote system is operated now at SRO, the AC4040BSI runs via ACP Scheduler using MaxIm v6.5 set to High Gain Stack Pro. I'm not clear what changes, if any, will be needed to improve the quadrant issue, even with the extended exposures of 400 seconds for LRGB and 600 s for NB.
GSENSE Calibration is applied alongside regular calibration; in fact, it can be set up to happen automatically when you calibrate. We've reworked the feature for 7.2.0. We removed the built-in calibration frame acquisition feature because it was confusing and redundant with what you can do via Autosave. I've also done a major update to the manual to better explain what the tool needs and how to use it.

Let me explain what is going on with the quadrant pattern, and how this tool can fix it. Large semiconductor chips cannot be printed in one go, because they are too big for the optics that project the patterns onto the silicon. Instead, a stepper machine prints individual panels that together form the entire chip. In the case of the GSENSE4040 there are four panels. That's where the quadrants come from.

Why can we see the quadrants? Due to optical distortion in the projection lenses, there is a small difference in the transistor geometry at the edges of the panels. In the GSENSE4040 this results in a subtle difference in low-level linearity in the readout amplifiers at the panel edges. That's why you get the "+"-shaped quadrant pattern. It's a very tiny difference, but it shows up when you're doing astronomy because so much of the image is black or nearly black.

The GSENSE Calibration tool tries to adjust things so that all pixels have the same linearity curve. To do this, we take a series of flat-field images at different intensities. I recommend taking 5-10 images at each of the following settings:

- Bias frame (no light, shortest possible exposure)
- Roughly 1% illumination
- Roughly 2% illumination
- Roughly 5% illumination
- Roughly 10% illumination
- Roughly 20% illumination
- And so on up to 90% illumination

From these images, the tool calculates an adjustment curve on a pixel-by-pixel basis. You can then apply that curve to all images from the sensor.
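To make the idea concrete, here is a rough Python sketch of a per-pixel linearity correction built from flats at several illumination levels. This is only an illustration of the concept, not MaxIm DL's actual algorithm; the sensor size, ADU scale, and the simulated pixel responses are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: one averaged flat at each illumination level.
# In practice each level would be the average of 5-10 bias-subtracted frames.
levels = np.array([0.0, 0.01, 0.02, 0.05, 0.10, 0.20, 0.50, 0.90])
full_scale = 4000.0          # made-up ADU scale, not the AC4040's real value
h = w = 8                    # tiny "sensor" for the sketch

# Give each pixel a slightly different response (gain plus a small quadratic
# term), mimicking the panel-edge linearity differences.
gain = 1.0 + 0.02 * rng.standard_normal((h, w))
quad = 0.05 * rng.standard_normal((h, w)) / full_scale
signal = levels * full_scale
flats = np.stack([gain * s + quad * s**2 for s in signal])

# Reference curve: the sensor-wide mean signal at each illumination level.
ref = flats.mean(axis=(1, 2))

# Fit, per pixel, a low-order polynomial mapping measured ADU onto the
# reference ADU, then apply it to remap every pixel onto the common curve.
deg = 2
coeffs = np.empty((deg + 1, h, w))
for y in range(h):
    for x in range(w):
        coeffs[:, y, x] = np.polyfit(flats[:, y, x], ref, deg)

corrected = sum(c * flats ** (deg - i) for i, c in enumerate(coeffs))

# After correction, the pixel-to-pixel spread at a given light level shrinks.
print(flats.std(axis=(1, 2))[-1], corrected.std(axis=(1, 2))[-1])
```

The point of the sketch is just that with enough illumination levels you can characterize each pixel's response curve and remap it onto a shared reference, which is why the tool needs flats spanning the whole range from bias up to ~90%.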
There is some temperature dependence, so it's best to take the calibration exposures at a temperature similar to that of the light frames.
What process is used to take these % illumination images? Can you give a step-by-step method, please? I'm guessing that past 20% we increase in steps of 10% (30, 40, 50 ... 90)? This is totally new to me.
It's just like doing flat fields. Figure out what exposure gives you roughly 90%, by trial and error, using the Information panel to read the average pixel values. Then just take percentages of that exposure time. As an example, let's say you target 3400 ADU as your "bright" field, and that required a 10-second exposure. You could get a reasonable set of data with 0.2, 0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10 second exposures. Set up Autosave to do that.
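The arithmetic is simple enough to script. A minimal Python sketch, using the example numbers above (assumes your flat source is stable, so signal scales linearly with exposure time):

```python
# Derive the calibration exposure ladder from one "bright" reference flat.
# bright_exposure is whatever exposure reached your ~90% target ADU level
# (found by trial and error while watching the Information panel).
bright_exposure = 10.0  # seconds, from the example above

# Fractions of the bright exposure, roughly matching the 1%-90% ladder.
fractions = [0.02, 0.05, 0.10, 0.20, 0.30, 0.40,
             0.50, 0.60, 0.70, 0.80, 0.90, 1.00]
exposures = [round(f * bright_exposure, 2) for f in fractions]
print(exposures)
# [0.2, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
```

You would then enter those exposure times into an Autosave sequence.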
I'm certain this will expose my ignorance of this camera, but wouldn't the ADU be different depending on the imaging settings? By that I mean I use High Gain Stack Pro, so the ADU would be measured on the combined image, correct? Using a different setting would result in a different value, wouldn't it? As is, I've been doing all LRGB exposures at 400 s and NB at 600 s using that mode. My sky flats have been based on 600 ADU. I'm sorry to admit it, but I'm totally lost here.
Another issue I've thought of: how does someone control the light level with a remote setup? This rig is at SRO in a shared building. There isn't a light panel or other flat source available for this 24" open-truss scope.
Last question for now, I hope: is there any difference in GSENSE Calibration between v6.5 and 7.2? 6.5 has been in use at SRO and the license is still very active, but I would prefer to stay with 6.5 for now until this is sorted.