I am working on getting a new OTA/camera system going on a multi-user robotic system focused primarily on various forms of photometry but also imaging. This camera presents some challenges in our application. First is the requirement to have darks tailored to every light exposure. It is possible, but will require some additional work. I would be interested in a short explanation as to why that is necessary with a CMOS camera and not a CCD, where one can create bias frames and let the software scale the darks.

Experimenting with different exposures to make flats using a flat panel also raises some questions. Having not thought through the implications of changing exposure length, I took a series of flats at 1" intervals and noticed that the odd-numbered exposures showed a banding effect. Since each subframe is 1", I could imagine that this is picking up the oscillation from the dimmed flat panel. But the mystery is why these would only show up in the odd-second exposures, and why they would not resolve as images were stacked. Again, this is really a curiosity, because as I get my head around this system, I realize that there is never going to be a need to take an exposure shorter than 16", since that is really better thought of as a 1" exposure in terms of saturation etc.

Thanks, Bruce
Sorry for the follow-up, but getting my head around this brings up another question. In my mind, a 16" exposure is longer than a 15" one, which is longer than a 14" one, etc. While they are indeed longer in terms of time required, they are all actually 1" exposures in terms of saturation. So a 360" dark is really only 22.5" long in terms of saturation. If we wanted a dark in which the subframes were 360" long, we would need a 5760" dark. Thirty of those, which is what I would normally take to make a master, would take a while. But in this system, every image is itself a stack of sixteen images. Would two HSP darks, therefore, not be equivalent to 32 normal darks?
Glad to hear you are getting things going. Suggestions: make sure you have the latest software, drivers, and firmware in the camera. We updated these on Friday of last week, so you may need to bring your setup current.

- MaxIm: Help... Check for Updates will take you to a page for 6.30.
- Camera firmware: the DL Config Utility will show the version. The current AC2020 firmware, Revision 3, is here: https://diffractionlimited.com/aluma-fpga-firmware/
- DL Config: the current version is 2.4.1.0, linked to on that page.

Initial usage: MaxIm can be configured to set the readout mode. Set this to High Gain; it is the fastest way to get usable images. Images will be 12-bit. Saturation will happen around 3600 on a 4095 scale (equivalent to about 57600 on the 65535 scale of a 16-bit CCD).

Hi Bruce,

Unlike CCD cameras, which have passive pixels that are influenced by substrate voltage (bias) and have dark current that can be approximated as proportional to exposure length, sCMOS APS detectors are a completely different technology. The sCMOS APS (Active Pixel Sensor) is structured with 4-5 transistors surrounding each active pixel; there are two gain channels; and the pixels are multiplexed to 16 or more parallel Analog-to-Digital Converters (ADCs). That's why they are so fast - most CCDs have 1 or 2 ADCs, 4 at best. Each gain channel (High Gain, Low Gain) is run to a different set of ADCs and digitized in parallel. Each pixel's voltage is affected by dark current (similar to the CCD, fairly linear with time), then by the gain channel characteristics, and then by its unique path through the amplifiers to the ADC. Further, as it is an active device, the on-chip logic - the multiplexers, ADCs, clocks, and control logic - generates heat and an LED-like glow, and this shows up at the edges of the chip where the logic resides.

So the trivial computation used to cheat in CCD calibration doesn't apply in the CMOS APS world. Do your darks at the same length as the lights, and at the same temperature. This is what most pro CCDers have done for years, instead of scaling darks. Also, bias frames don't seem to be very useful, unlike with CCDs, because of the device differences.

We'd need details on your setup and sample images - FITS, unprocessed, binned 1x1, light frames. For example, what light source do you have? Is it LED or electroluminescent? You could try a sky flat or t-shirt flat and compare. You could be getting AC ripple (non-continuous light) affecting the exposure, depending on the panel technology.

I suggest you start with High Gain images, 12-bit, and keep it under saturation (3600). Try 25% of that, and see how it goes. The sensor is ridiculously sensitive (high QE), and you could be experiencing effects from the panel. If you could send samples in FITS format, that would help. Feel free to send to chaig (at) diffractionlimited (dot) com via DropBox or WeTransfer.

I'll get to your second post in a moment.
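If it helps, matching a light to a library dark can be automated along these lines. This is just a minimal sketch, and the keyword names (EXPTIME, SET-TEMP) are assumptions - check what your setup actually writes:

```python
# Minimal sketch: pick the library dark whose exposure time and set-point
# temperature match a given light frame. Keyword names (EXPTIME, SET-TEMP)
# are assumptions -- verify them against your own headers.
from pathlib import Path

from astropy.io import fits

def match_dark(light_path, dark_dir, temp_tol=0.5):
    """Return the first library dark matching the light's EXPTIME and SET-TEMP."""
    hdr = fits.getheader(light_path)
    exptime, settemp = hdr["EXPTIME"], hdr["SET-TEMP"]
    for dark in sorted(Path(dark_dir).glob("*.fit*")):
        dhdr = fits.getheader(dark)
        if dhdr["EXPTIME"] == exptime and abs(dhdr["SET-TEMP"] - settemp) <= temp_tol:
            return dark
    return None  # no match: shoot a new dark at this length/temp, don't scale
```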
I'm not quite following you. In standard High Gain operation, you're getting a 12-bit ADC image for the exposure length you specify. The sensor typically saturates around 3600-4095 ADU on the 12-bit (0-4095) scale. That's equivalent to 3600 x 2^4 = 3600 x 16 = 57600 out of 65535 when converted to the 16-bit ADU scale of a typical CCD.

As you shoot longer exposures, the pixels pick up photons from your target (stars), plus shot noise and the "logic glow". At a certain point (say 300 seconds) the logic glow will likely exceed the dark current and shot noise. That's the practical limit of a single exposure. Using High Gain StackPro will take sub-exposures and combine them in-camera; the default I recommend to start with is a 15 second sub-exposure time. This is set in the camera properties menu.

Keep in mind that the darks don't scale. This is critically important. The glow is non-linear. Temperature and time are not the sole drivers of pixel brightness; the thermal output of the on-chip digitization logic is consequential. Small subframes (smaller regions of interest) near the centre of the detector do not suffer as much glow as those near the edges, being further from the logic - but they still have the inherent dark current and the APS transistors at each pixel.
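To make that arithmetic concrete, a tiny sketch (the 3600 ADU level and the 15 s default sub-exposure come from this thread; the rest is illustrative):

```python
# Back-of-envelope numbers from the discussion above (illustrative only).
SAT_12BIT = 3600                    # ~saturation in 12-bit ADU (0-4095 scale)
print(SAT_12BIT * 2**4)             # 57600: the equivalent level on a 16-bit scale

def n_subs(total_exposure_s, sub_s=15.0):
    """Number of StackPro sub-exposures in a requested total exposure."""
    return int(total_exposure_s // sub_s)

print(n_subs(360))                  # 24 subs of the default 15 s in a 360 s stack
```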
The other thing to keep in mind here is that short frames combined off-camera take a ton of disk space. It is far more efficient to do this in-camera.
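For a sense of scale, assuming purely for illustration a 2048 x 2048 sensor stored as 16-bit FITS (about 8 MB per frame): thirty 16-sub exposures kept as individual subs would be 480 frames, close to 4 GB, versus roughly 240 MB for the same thirty exposures stacked in-camera.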
Thanks, but I mangled my question. My question is: how many HSP darks of 16 stacked sub-exposures do I need to make a decent master dark? Normally I would take thirty or so darks to make a master. But a single HSP dark is composed of 16 images. So would two HSP darks not be the functional equivalent of thirty-two individual unstacked frames?
I would consider the individual HG StackPro images as single images. A single 16-stack CMOS image has about the same read noise as a very good CCD detector. So why wouldn't we stack them? From a dark current perspective, they are also similar: noise in a dark frame is proportional to the square root of the number of thermal electrons, and again, stacking averages that out. Finally, there will be cosmic ray strikes on the detector during any exposure (light or dark), and you want to remove them via Sigma Clip or SD Mask. So from my point of view, you should handle the HG StackPro images exactly the same way you'd handle single CCD frames.
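To put rough numbers on why the frame count still matters (the read noise figure here is an assumption for illustration, not a measured spec for this camera):

```python
# Illustrative numbers only (assumed, not measured specs for this camera).
# A 16-sub in-camera stack sums the subs, so its read noise is the sub read
# noise added in quadrature: rn_sub * sqrt(16). Averaging N such dark frames
# into a master still reduces the master's noise by sqrt(N), which is why
# two HSP darks are not a substitute for thirty.
import math

rn_sub = 1.6                           # assumed read noise of one sub, in e-
rn_stack = rn_sub * math.sqrt(16)      # ~6.4 e-, in good-CCD territory

def master_noise(frame_noise, n_frames):
    """Noise of an averaged master dark built from n_frames independent frames."""
    return frame_noise / math.sqrt(n_frames)

print(master_noise(rn_stack, 2))       # ~4.5 e- : two HSP darks
print(master_noise(rn_stack, 30))      # ~1.2 e- : thirty HSP darks
```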
We are struggling with this CMOS system, which, to recap, is a multi-user robotic setup focused primarily on photometry but also imaging. The CMOS camera and the use of a camera rotator have complicated the process. To work around the inability to scale darks, we have created a library of darks that users match their light exposures to. To get flats that will work with the existing darks, we have modified ACP's autoflat program so that it takes all the flats at varying panel intensities, making every flat the same duration as one of the library darks. How we are going to get enough flats made each morning prior to dawn is a challenge, even if we don't attempt to generate them for all the filters. We are thinking about writing a script that would combine flats made over several days; any advice on this is welcome.

But where we are really stumped is that MaxIm does not distinguish between flats at different camera angles, as I understand it. So while ACP will happily make zero and one-eighty flats, MaxIm just stacks them together when making a master, and even if we manually constructed 0 and 180 master flats, it seems that MaxIm would not know which to use in an autocalibration process. The latter is a real problem. Surely we are not the only users to employ a camera rotator on an equatorial mount with autocalibration. Ideas?
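For what it's worth, here is a rough sketch of the multi-night flat-combining script we have in mind, using AstroPy and NumPy. The grouping keywords (FILTER, EXPTIME, ROT_PA) and the directory layout are assumptions based on our setup:

```python
# Rough sketch: gather flats from several mornings, group them by filter,
# exposure time, and rounded rotator angle, then median-combine each group
# into a master flat. Keyword names (FILTER, EXPTIME, ROT_PA) and the
# directory layout are assumptions -- adjust to your own headers.
from collections import defaultdict
from pathlib import Path

import numpy as np
from astropy.io import fits

groups = defaultdict(list)
for f in Path("flats").rglob("*.fit*"):            # e.g. flats/<date>/*.fits
    hdr = fits.getheader(f)
    key = (hdr["FILTER"], hdr["EXPTIME"], round(hdr["ROT_PA"]))
    groups[key].append(f)

for (filt, exp, angle), files in groups.items():
    data = np.stack([fits.getdata(f).astype(np.float32) for f in files])
    master = np.median(data, axis=0)               # median rejects outliers
    fits.writeto(f"master_flat_{filt}_{exp}s_{angle}.fits",
                 master, overwrite=True)
```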
Hi Colin & James.

Currently, ACP writes a FITS keyword, ROT_PA, to the image header, with the mechanical angle of the rotator as the value. Because the mechanical angle of the rotator should be identical for both flats and lights, this value could be used by MaxIm to group flats and lights into matching calibration sets, rounding ROT_PA to the nearest whole number to accommodate rotators that report the mechanical angle to several decimal places. That precision is IMO not really necessary for calibrating with rotated flats, where you are largely correcting for broad offset vignetting patterns. For example, my PrimaLuceLab ARCO rotator under ACP control records ROT_PA to two decimal places, e.g., 175.67 degrees, but you would be hard pressed to detect any calibration errors if the flat was taken at a mechanical angle of 176 or 175 degrees - always assuming, of course, that the rotator is ahead of the filter wheel in the image path.

By grouping flats and lights on ROT_PA rounded to the nearest whole number, you also avoid the issue where flats and lights taken over multiple sessions have fractional differences in the recorded angle, leading to flats being rejected because of, say, a 0.01 degree difference in mechanical angle, which would be pointless. But that's just my opinion; I guess some science imagers might disagree with that stance.

FWIW, as a workaround I've been using AstroPy in a Conda environment to batch-duplicate my FITS files, read the ROT_PA angle in each file, round it to the nearest whole number, and append it to the FITS keyword value for filter. My FITS headers for the lights and flats then have a modified filter keyword that looks like, e.g., "Red_180", "Red_0", or "Blue_90", "Blue_270", for lights and flats taken on either side of the pole. With these modified filter keywords, most post-processing apps can sort and match flats to lights according to the mechanical rotator angle. Not a very elegant solution, but it works for me until an official one is found; the core of the script is sketched below.

William.
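The simplified core of that AstroPy workaround looks something like this (ROT_PA and FILTER are the keywords as written on my setup, so adjust to taste):

```python
# Simplified core of the workaround: duplicate each FITS file, round ROT_PA,
# and fold it into the FILTER keyword (e.g. "Red" at 180.02 deg -> "Red_180").
import shutil
from pathlib import Path

from astropy.io import fits

src, dst = Path("originals"), Path("tagged")
dst.mkdir(exist_ok=True)

for f in src.glob("*.fit*"):
    out = dst / f.name
    shutil.copy2(f, out)                 # work on a copy, keep the originals
    with fits.open(out, mode="update") as hdul:
        hdr = hdul[0].header
        angle = int(round(hdr["ROT_PA"])) % 360
        hdr["FILTER"] = f"{hdr['FILTER']}_{angle}"
```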
The hope is that MaxIm would be able to identify flats taken at different angles, as it does flats taken at different times, use matching flats of the same angle to make masters for that angle, and then use those appropriately in the autocalibration process for images having the same angle as the master flat. If it could just do that at 0 and 180, it would be a huge step. Ideally, the angle would not matter.
Bruce, William, this is a useful discussion and influences our thinking. When I think about flats, here's what comes to mind:

- vignetting
- dust and other contaminants or imperfections on the camera, the filters, and the OTA + Reducer/Flattener
- mirror flop or gravitational effects on the OTA, depending on its orientation (e.g., meridian flip, high vs. low altitude, extremes of the axes)
- mechanical rotation angle of the camera on the OTA

So we're just talking about being able to flat against a mechanical rotation angle of the camera, right?
Yes, the rotation angle, but that incorporates many of the imperfections and vignetting. In our case, with the filter wheel behind the rotator, the worst of the "imperfections" are not impacted by rotation. That does not mean there are none coming from the OTA.
It's been on our to-do list for a while to add a rotator angle capability to the MaxIm DL calibration tools. Obviously we have to be careful because this is such an important command in the software; validation is a major concern. Right now we're working on the 64-bit release, which is a huge project. We won't be able to start a new project until that is complete. My suggestion would be to manually create flat groups with a series of angles that you expect to use. Then you can manually enable/disable the groups with the check box in the Set Calibration command. That would provide a workflow until we can add the rotation angle feature.
Hello Doug, James. Just following up on your reply from yesterday.

I don't know the focal length of James's system, but with the large sensors available in modern cameras you can easily rotate for display purposes in post-processing, and the only remaining reason for using a physical rotator is to place a suitable guide star on the sensor of an OAG guide camera, which becomes more and more difficult with longer focal length systems. A set of standard angles for the rotator is therefore not really possible, IMO; many science targets at long focal lengths will require a specific mechanical rotation angle.

The reasons that I installed a rotator were:

1: The difficulty of placing a bright guide star on a fixed-angle OAG guide camera when imaging exoplanet transits in very guide-star-sparse areas of the sky while still including suitable reference stars in the same FOV (APS-C sensor at 2600mm FL).

2: When using a spectrometer, it is necessary to rotate the slit so that close doubles with large magnitude differences can be separated at ninety degrees to the slit angle.

In retrospect, for photometry of exoplanet transits I would have found it much easier, for both acquisition and post-processing, to use an ONAG on-axis guider and not to have bothered with a physical rotator at all, which might also be a simpler solution for James's problem. While an ONAG would not solve the problem of slit angle when using the spectrometer, the manufacturer now offers an upgraded slit holder with motorised rotation, and I will most likely take that route, together with an ONAG, and drop the physical rotator entirely. For regular imaging I don't know how suitable these ONAG units are; I notice that the manufacturer gives no specification for the wavefront error of the semi-transparent beam splitter: https://www.innovationsforesight.com/products/full-frame-on-axis-guider-onag-xm-lambda-unit-only/

William.