CCD Misc. Topics --Page 2
Subject: Anti-blooming Considerations --Part 1
From: Michael Hart
Your decision as to what type of imaging chip to choose should be based on YOUR primary use of the camera. My original posts (see following) on this subject were in response to questions on anti-blooming gate (ABG) implementation, which has minimal impact in many CCD chips but significant impact as implemented in the Kodak 0400/1600 chips. If you plan significant tri-color imaging, the chip selection is even more important due to the low Q/E (quantum efficiency) of the 0400/1600 in the blue, requiring a 3X blue exposure for reasonably balanced colors. The ABG gate further decreases Q/E, making tri-color arguably impractical for all but the brightest objects. For serious tri-color, the Apogee with the 512 X 512 back-illuminated SITe chip is worth serious consideration. Of course, at $10,000, you'll need deep pockets and a computer with a spare card slot. Gary Campbell wrote:
This conclusion may be based on the assumption that an "anti-blooming camera" must have a CCD chip with an ABG (anti-blooming gate). Meade states in their catalog on page 68, "All Meade CCD systems have built in blooming correction, reducing the probability of streaking in the image". PictorView 2.0 software will automatically determine exposure, take all exposures short of blooming, subtract a dark frame in memory from each exposure, and add them together into one final image. Since blooming is prevented in the final image, you have anti-blooming- arguably an "anti-blooming camera". The price is the same for a KAF-0400 chip with or without ABG.
There is no evidence that Meade is trying to re-define any accepted definitions of anti-blooming gate chips to pass off non-ABG chips as ABG-equipped chips. Rather, Meade provides a different, automated method to control blooming. Non-ABG chips are the rule in large scientific arrays. The Kodak KAF-1000, KAF-4200, SITe SIA502AB, SITe SIA003AB, Thomson THX7899M and Hubble CCD imaging chips come to mind as not having an anti-blooming gate option. This is not to say that everyone should replace their ABG chips with non-ABG chips. I suspect the desire of many amateurs to produce "pretty" pictures drives the demand for ABG chips as an effective and simple way to control blooming, at the expense of significantly reducing the already poor quantum efficiency of the KAF-0400/1600 chips. For tri-color imaging, the optimal exposure times approach those of gas-hypered color film, so this should be considered. The ST-7 and 416 I own are both without ABG chips.
Yes- for the ABG versions of the Kodak KAF-0400 and KAF-1600, add 40% more time to get a signal/noise ratio similar to the same chips without ABG (anti-blooming gate). In addition, full well capacity is reduced from 85,000 electrons to 45,000. Also, don't forget to add 40% exposure time to the already long blue exposure during tri-color imaging, and 40% more time for dark frames. Finally, remember the ABG chip becomes non-linear above 40,000 electrons, limiting photometry studies.
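The arithmetic here is simple enough to put in a few lines. A minimal sketch (my own, using only the figures quoted in this post: a ~40% exposure penalty for the ABG version and a full-well drop from 85,000 e- to 45,000 e-):

```python
# Sketch of the ABG exposure and dynamic-range arithmetic described above.
ABG_EXPOSURE_PENALTY = 1.40   # ~40% more time to match the non-ABG S/N
FULL_WELL_NON_ABG = 85_000    # electrons per pixel, non-ABG KAF-0400/1600
FULL_WELL_ABG = 45_000        # electrons per pixel, ABG version
ABG_LINEAR_LIMIT = 40_000     # ABG chip goes non-linear above this

def abg_exposure(non_abg_minutes):
    """Exposure the ABG chip needs to match a non-ABG exposure's S/N."""
    return non_abg_minutes * ABG_EXPOSURE_PENALTY

def dynamic_range_lost():
    """Fraction of full-well capacity lost to the ABG structure."""
    return 1 - FULL_WELL_ABG / FULL_WELL_NON_ABG

print(abg_exposure(22))      # 22 min non-ABG becomes ~31 min on the ABG chip
print(dynamic_range_lost())  # ~0.47: nearly half the dynamic range is gone
```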
Meade did not use the term "stacking images" as a synonym for anti-blooming, nor did I. The term stacking was used as part of an explanation of a fairly complex method of using software to control blooming.
In the above example, I have imaged M-101 for 30 minutes and was still way below the 85,000-electron full well capacity of my unbinned ST-7. Rarely is blooming so much a problem that I am forced to take so short an image that noise is a dominant factor in stacked images.
This is a good rule of thumb, but exceptions are frequent- for example, minimizing the thermal noise that remains after dark-frame subtraction, which arises from statistical fluctuations in the number of thermal electrons in each pixel. Another reason for stacking exposures would be double star imaging.
This is a good example but, practically speaking, it rarely applies to images I have taken to reduce or eliminate blooming. Rarely do I need individual exposures of less than 10 minutes for unbinned images. The purpose of my original post was to answer a previously unanswered question on a Meade forum about the subject "Anti-Blooming setting" in PV 6.1. The merits of ABG versus non-ABG Kodak KAF-0400 chips, and any Meade conspiracies, are debatable as possible important issues for a prospective CCD buyer, but a moot point for a current Pictor 416/1616 owner interested in learning about his new camera. Gary Campbell wrote:
But your conclusions about blooming control in a Pictor 416 were based on my previous statements about a product you don't own, the Pictor 416. This resulted in an allegation of misrepresentation by Meade: "Which means it isn't an anti-blooming camera at all, contrary to what Meade says". Good intentions aside for a moment, what purpose is served by making a new owner anguish over their product of choice at the moment they are trying to get results?
Not exactly.... My earlier statement was based on using some sort of blooming control on a Kodak KAF-0400 chip; my statement that the signal/noise ratio remains about the same is sound and very accurate, based on over 500 hours of experience with the Pictor 416 camera and standard accepted CCD imaging techniques. An advantage of one long exposure over 2-4 added frames is the reduction of readout noise from the amplifier. Thermal and background noise is greatly reduced, but readout noise, although very small, is added with each frame. Until the composite noise level of a stacked image exceeds the 16% noise that is added as a result of adding the ABG, the stacked image has a better signal/noise ratio, period. So, taking a few exposures of relatively long length and adding them together can be LESS noisy than one single exposure on the same, noisier chip with the ABG. If most exposures are going to be very bright, M-42-like objects with fainter details, the increased number of exposures required to minimize or prevent blooming will result in a noisier composite image than the single exposure. William Sommerwerck wrote:
Often in CCD discussions, various types of "noise" are mentioned. We know background noise is reduced by minimizing light pollution sources, imaging from a dark site and image processing techniques. We are used to hearing of thermal noise minimized by dark frame subtraction and sensitivity noise minimized by flat frame division, but there are others less commonly mentioned, primarily because we cannot improve them in a given system: sensor noise, amplifier noise, readout noise and A/D converter noise. On a Kodak 0400/1600 chip, sensor noise is minimized by longer exposures, which let the signal grow faster than the noise, increasing the S/N ratio. Another way to minimize sensor noise is to use a less noisy version of the same chip- the non-ABG version.
Yes, the primary reason to use an anti-blooming gate (ABG) chip is its ability to bleed off excess electrons BEFORE they spill into adjacent well sites after reaching full well capacity. For the Kodak KAF-0400/1600 series, this bleeding starts at 40,000 electrons per pixel and peaks at the full well capacity of 45,000 electrons per pixel. Hardware anti-blooming allows one exposure (usually) that, if exposed long enough, will have a good signal/noise ratio (S/N). The result is images that usually show little to no blooming. This may be useful in separating faint components near bright stars. If most of your images are of M-42 type objects, the ABG Kodak KAF-0400/1600 might be right for you. However, there is a cost for controlling excess electrons with an ABG chip: a reduction in sensitivity, an increase in noise, and a reduction of full well capacity from 85,000 electrons per pixel (e-) to 45,000 for the Kodak KAF-0400/1600 series. The lower well capacity decreases dynamic range considerably, which means objects with a large range of brightness will not be recorded as accurately- the moon, Saturn and M-31 come to mind. Still, most objects don't utilize the full dynamic range of even the 45,000 e-/pixel of the ABG chip. At this point, the same chip with ABG is noisier than the non-ABG version. This allows one to add images short of blooming in the non-ABG chip and maintain a better S/N ratio, and consequently a better final image, until the summed amplifier + A/D converter + readout noise from each exposure exceeds the extra sensor noise. Since the sensor noise for the Kodak KAF-0400 ABG chip is quite high (16%) and the sums of the amplifier, A/D converter and readout noise per image are quite low, one can add several images together- up to around 12- before similar S/N ratios are reached. If you have camera control software that does this automatically, this process is pretty painless.
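To see why stacking non-ABG frames can beat a single ABG exposure, here is a minimal signal/noise sketch. The model and numbers are my own illustrative assumptions (15 e- read noise, example electron counts), not the author's exact accounting; only the ~40% ABG exposure penalty comes from the text.

```python
import math

# Illustrative model: treat the ABG chip as collecting ~1/1.4 of the signal
# in the same clock time, and charge each stacked non-ABG frame one extra
# dose of readout noise.
READ_NOISE_E = 15.0   # e- RMS per readout (assumed value)
ABG_PENALTY = 1.4     # ~40% exposure penalty quoted in the text

def snr_single_abg(signal_e, sky_e):
    """One long exposure on the ABG chip."""
    s, b = signal_e / ABG_PENALTY, sky_e / ABG_PENALTY
    return s / math.sqrt(s + b + READ_NOISE_E ** 2)

def snr_stacked_non_abg(signal_e, sky_e, n_frames):
    """Same total time split into n frames on the non-ABG chip, then added:
    shot noise is unchanged, but read noise is paid n times."""
    return signal_e / math.sqrt(signal_e + sky_e + n_frames * READ_NOISE_E ** 2)

print(snr_single_abg(40_000, 4_000))           # ~161
print(snr_stacked_non_abg(40_000, 4_000, 4))   # ~189: the stack wins
```

With these assumed numbers a 4-frame non-ABG stack clearly wins; only at a very large frame count does the accumulated read noise hand the advantage back to the single ABG exposure, which is the crossover the post describes.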
The sensitivity loss is primarily due to the added physical anti-blooming gate (ABG) structure that literally overlaps the individual pixels. In the Kodak KAF-0400/1600 chips, the ABG takes up 2.7 microns on one side of each 9 X 9 micron pixel, so the effective pixel size is now 9 X 6.3 microns. The ABG structure increases noise in the chip itself (sensor noise) by 16%, which now becomes a dominant form of noise, so you must increase exposure by at least 40% to get back a signal/noise ratio (S/N) similar to the same chip type without ABG. As you approach full well (which is reduced from 85,000 electrons (e-) per pixel to 45,000 with the ABG addition), the sensor noise increases dramatically at those pixels, because the ABG starts bleeding electrons at about 40,000 electrons per pixel. Now that part of the chip is said to have become non-linear- not a desired effect when doing photometry work. In addition, even if the object you're imaging does not exceed the full well capacity of your chip, the ABG structure will still be there, increasing noise and the necessary exposure times. For example, an excellent unbinned B&W image of M-51 with an excellent S/N ratio can be done at f/6.3 on a non-ABG KAF-0400 in 22 minutes; for the ABG chip, the exposure needed is at least 31 minutes. For tri-color imaging with the non-ABG chip, that 22 minute image becomes 110 minutes (the blue requires almost a 3X exposure). This increases to at least 155 minutes with the ABG version chip. What if you want to take dark frames? Are you sure? Now we're talking some serious time- over 5 hours for one image, before processing! While the tri-color exposures above give a good color balance to compensate for the poor blue response of the CCD chip, they DO NOT achieve the same S/N ratio, due to light losses through the filters.
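Worked as code, the exposure budget above looks like this (a sketch using the post's own figures: 40% ABG penalty, ~3X blue exposure, and one matching dark per light frame):

```python
# Tri-color exposure budget for the KAF-0400/1600, per the figures above.
ABG_PENALTY = 1.4
BLUE_FACTOR = 3   # blue needs ~3X the exposure on these chips

def tricolor_minutes(mono_minutes, abg=False, dark_frames=False):
    total = mono_minutes * (1 + 1 + BLUE_FACTOR)   # red + green + 3X blue
    if abg:
        total *= ABG_PENALTY
    if dark_frames:
        total *= 2   # one dark frame of equal length per light frame
    return total

print(tricolor_minutes(22))                              # 110 min, non-ABG
print(tricolor_minutes(22, abg=True))                    # ~154 min (~155 in the text)
print(tricolor_minutes(22, abg=True, dark_frames=True))  # ~308 min: over 5 hours
```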
In fact, it is not practical to get a good unbinned S/N ratio of 25 with tri-color imaging. With 2 X 2 binning, you may be able to get close in 3-1/2 hours for the non-ABG chip and over 4-1/2 hours for the ABG chip. Now the high quantum efficiency back-illuminated CCD chips start looking good- except the price, of course. Still, in 155 minutes, I can get a couple of pretty good images of M-51 on gas-hypered color film that can be double-stacked to reduce the grain.
Yes, because "flipped over and ground down" (back-illuminated and thinned) chips bypass the column isolation structure and the charge transfer gate structure deposited on top of the silicon, which dramatically improves quantum efficiency to over 80%, versus 40% for a KAF-0400 chip without ABG (anti-blooming gate) or 28% for the KAF-0400 with ABG. The downside is that the necessary thinning of the silicon is difficult to do well, so a lot of chips are scrapped, vastly increasing the price (over 10X). Subject: Anti-Blooming Considerations --part 2: Photometry and Astrometry
From: Michael Hart
As in other camcorder chips, the Sony monochrome chip is an interline transfer device. Every other column of pixels is used as a transfer register and doesn't collect light. The Sony chip uses a layer of tiny lenses on the chip surface to divert some of the light that would fall on the non-active transfer columns to active photosites. For accurate photometry calculations, we must determine the total amount of light spread over many pixels from a point source. Any CCD chip structure that doesn't record, or incorrectly records, actual photons increases uncertainty. This results in a distorted photometric calculation. The Sony color chip, with its cyan, magenta and yellow-green filter matrix, further adds to uncertainty, because additional photometric filters are typically used to isolate U, B and V bandpasses with a monochrome CCD imaging chip or photomultiplier tube. Certain conventional CCD chips, such as the Kodak 0400 and 1600, have ABG (anti-blooming gate) structures that physically overlap the pixel site; the Kodak 0400/1600 is one of the worst at 30%. This amount of overlap, along with the nonlinear response of the ABG-equipped Kodak chip when approaching full-well capacity, will significantly increase the uncertainty and hence reduce the accuracy of measurements. The best photometric measurements are made with non-ABG Kodak 0400/1600 chips on scopes with sufficient focal length to accurately sample the point source. Shorter focal lengths that undersample the point source will produce less accurate measurements. The Sony interline device adds further uncertainty, with the transfer columns approaching 50% of the pixel site area. This results in additional uncertainty over the Kodak chips, even at very low well fills. One can compensate somewhat by using a long focal length telescope to oversample the point source, decreasing uncertainty. For astrometry, we must accurately determine the center of distribution.
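The "center of distribution" is an intensity-weighted mean of pixel positions. A toy sketch (my own illustration) shows the computation, and how structures that under-report light- an ABG overlap or an interline transfer column- pull the measured centroid away from the true position:

```python
def centroid(pixels):
    """Centroid (x, y) of a 2-D array of background-subtracted counts."""
    total = sum(v for row in pixels for v in row)
    cx = sum(x * v for row in pixels for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(pixels) for v in row) / total
    return cx, cy

# A symmetric star image centred on pixel (1, 1):
star = [[0, 1, 0],
        [1, 4, 1],
        [0, 1, 0]]
print(centroid(star))     # (1.0, 1.0)

# The same star with the right-hand column recording nothing (as a transfer
# column would): the measured centre shifts left of the true position.
clipped = [[0, 1, 0],
           [1, 4, 0],
           [0, 1, 0]]
print(centroid(clipped))  # x is ~0.86 instead of 1.0
```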
In this case, the transfer columns again significantly increase uncertainty. This is not to say casual photometry or astrometry are impossible on Kodak ABG 0400/1600 chips and the Sony chip, especially if longer focal lengths are used. Still, with a Sony interline imaging chip, the results would arguably have limited scientific value. Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 1 of 7
From: Richard Bennion <rbennion
Christian: I am going to attempt to take on this question, but only from a CCD point of view, although the basic concepts apply to eyepiece magnification as well. It is very important that you match the proper CCD camera to your telescope configuration. Let me give some examples first of all and then some more explanation.

***********

8" SCT @ f/10 matched up with a SBIG ST2000XM
  Summary: Large focus zone, but image scale is well below seeing.

8" SCT @ f/6.3 matched up with a SBIG ST2000XM
  Summary: Harder to find exact focus, but because you are never going to

8" SCT @ f/10 matched up with a SBIG ST9XE
  Summary: You would get a smaller total image size, but you could image safely on most nights at the 2.06 arc second image scale. You could also get some great close-up views of many deep sky objects: M57, M13, remote galaxies.

TV101 @ f/5.4 matched up with a SBIG ST2000XM
  Summary: Even harder to find exact focus, but because of the 2.8 arc second image scale, you can almost image on any night regardless of seeing. The large FOV is ideal for shots of parts of the Veil, Lagoon Nebula, North American Nebula, mosaics of M31, etc. Taking images of smaller objects would be at a poor overall scale.

Let me summarize all of this data: Matching up your scope with the right camera (or the other way around) is essential for good image acquisition. The same can be said for matching eyepieces up with a telescope. You are never going to get 400x magnification on a night of bad seeing with an 8" f/10 scope. But stick to 75mm and everything will look fine. But go to a remote dark sky site on a night of 1.0 arcsecond seeing, and you can bump up to 300-400x and see breathtaking views. As an astrophotographer, I have multiple scopes and multiple cameras so that I can put together configurations for different targets and different seeing conditions. No need to image M51 on a bad night of seeing with the 8". But pull out the 4" refractor and I can get a wonderful image of the Lagoon Nebula even in bad seeing. I suggest you download Ron Wodaski's CCD calculator to try different
combinations of scopes and CCDs to see what works best for you. You
can download this valuable tool (which he so kindly makes available
for free) at: I hope this helps you understand some of the issues with scope/camera/eyepiece matching based on seeing conditions. ---------------------------------------------- Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 2
From: Mark de Regt <deregt
I believe that the critical focus zone is narrower at faster focal ratios; however, I also agree that it is difficult to see that, probably for the reason you mention. An f/10 scope allows for higher resolution imaging, if you can guide that well. I started, with my 10" LX200, at f/4.6, since the mount has the wobblies and that hides them well. However, when I got an AO-7, I could image effectively at f/10 (actually, with all that stuff hanging off the back of the scope, and with the queer geometry of the SCT, I image at f/12). I love being able to do high resolution imaging, so I wouldn't have it any other way. When I need a larger FOV, I use the .63 reducer and image at f/6.3, which gives almost four times the FOV (twice the linear dimensions). ---------------------------------------------- Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 3
From: Richard Bennion
Mark: I agree. By adding an AO-7, you add a precision of guiding not normally available to built-in guiders or even separate guide scopes like I have. BTW- I still need to get me one of those... ;) I still believe an 8" SCT at f/10 without an AO-7, with an unbinned image scale of .76 arc seconds, is not a good combo to image with, as I have tried it many times. Change the bin mode to 2x2 and you are fine. I did not want to add confusion by bringing binning into the equation, as everyone normally wants to take full advantage of the max resolution of their camera. But you are correct: by stepping down the binning to 2x2 or even 3x3, your arc second per pixel scale improves dramatically at the expense of smaller images. I still hope the post provides some guidance and wisdom on matching scopes with CCDs. This issue could be elaborated on in an entire book. ---------------------------------------------- Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 4
From: Peter Erdman <erdmanp
I believe that this is also wrong. I have an 8" f/10 system and therefore image at the aforementioned ~0.7 arcsec/pixel. When I bin at 2x2, I can clearly see that the quality of the image is much cruder, and my average seeing is about 2-3 arcseconds, as previously mentioned. The difficulty of imaging at the full potential of the optics should not be confused with the difficulty of imaging at the potential of the mount. Don't mix the problems of the mount with the excuse of "seeing." One pixel per "seeing" resolution element is an abysmal measure of image quality and a guarantee of poor images. ---------------------------------------------- Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 5
From: Mark de Regt
Your original post was very useful; all I wanted to do was add the thought that, with the SBIG dual-chip cameras, the AO-7 really changes the traditional "formulas" for what camera would work with which scope.
I certainly agree that there is no way an LX200 could effectively image at .76 arcseconds per pixel without the AO-7. ---------------------------------------------- Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 6
From: Gene Horr <genehorr
The focal "plane" thickness is determined by f-ratio, although optical design, seeing, and detector size can distort the picture slightly. Therefore f/10 is more forgiving. But that forgiveness can make focusing more difficult if using your eye to focus, as there is a relatively large region where the image all looks the same. That's where tools like focusing microscopes, diffraction spikes, and FWHM calculators come in handy. At f/6.3 you can see more of a change for the same amount of focus movement, making it easier in some respects. The problem is that if you are right on the inner edge of the "plane", the contraction of the tube from cooling will have a greater effect at the shorter f-ratios. As for the usefulness of an f/10 scope: it exists because this is primarily a visual instrument. In general, for the same optical design in an obstructed instrument, the "faster" the system, the larger the obstruction. The larger the obstruction, the lower the contrast. F/10 gives you a good balance between a reasonably sized secondary and not too long a focal length. ---------------------------------------------- Subject: Depth of Focus at F/10, F/6.3, F/3.3 --part 7 of 7
From: Mark de Regt
You state that the ST-2000 with an 8" SCT at f/10 is a "bad combination," because the image scale is "well below seeing capabilities of scope and average seeing conditions." I don't really know what that last statement means; however, I disagree with the conclusion, given the availability of the AO-7 at a reasonable price. I image at .61 arcseconds per pixel with my 10" LX200 using an ST-8XE, with the aid of an AO-7. On a night of decent (not even great) seeing, I have no trouble getting good (I think) results at that image scale; if seeing is particularly bad, I can always either bin, resample the image down to a smaller image scale, or use a focal reducer.
Likewise, with the combination you pan, adding an AO-7 would allow very good results at .76 arcseconds per pixel, and the operator can easily bin or resample, or use a reducer, on off nights. I am grateful that I ignored the advice to get an ST-9E when I was getting into imaging a couple of years ago. A camera with large pixels does give better quantum efficiency (typically), and is very forgiving of guiding problems, but it is very limited in its ability to do anything like high resolution imaging; also, while the field of view is pretty large, the large pixels make the image quite small. All the gurus frequently image at significantly below 1 arcsecond per pixel, with spectacular results. It demands long exposures and careful guiding, but it is certainly not a bad idea. Subject: Optimizing a CCD Imaging System-
Pixels and Focal Length
From: Michael Hart
I believe you do not see posts on this subject because it is difficult to explain in a reasonable number of words and/or not completely understood. Matching pixels to an optical system- here is the essence of what is going on. Essentially, we are optimizing CCD resolution against signal/noise ratio. I believe it is best to have a mental picture of the concepts in order to practically apply the mathematics. Consider this: Your 12" F/10 scope has a fairly long focal length, 3048 mm (120"). This makes a fairly large image of, say, M-42- the Orion Nebula- on film or CCD chip. In fact, M-42 is too large for the largest CCD chips. Add a common F/6.3 SCT (Schmidt-Cassegrain Telescope) focal reducer, which will reduce the focal length to 1920 mm (75.6"). (Actually, the moving SCT mirror means the actual focal length may be more or less than calculated, depending on mirror position.) Now the image of M-42 is smaller, so it concentrates more light on a fixed photosite (pixel) size, which significantly lowers exposure time (by about 2-1/2 times) at the expense of image size (smaller image scale). Visualize an even smaller projected image of the Orion Nebula, but brighter. Now, consider that because stars are so far away, it is not possible to resolve a disk. For all practical purposes, stars can be considered points of light (except the sun) UNTIL the starlight reaches the earth's atmosphere and is spread out. If we are lucky, that point of light is enlarged to less than 3 to 3-1/2 arcseconds. The amount of enlargement is known as the point source spread, usually called the point spread. Then, as we pass the light through mirrors and lenses, the point spread is increased a little more. So, the most EFFICIENT way (and highest signal/noise ratio) to record that star under typical observing conditions is to have 2 pixels span the star (based on the Nyquist sampling criterion), or about 1-1/2 to 2 arcseconds per pixel for a point spread of 3 to 3-1/2 arcseconds.
Harry Nyquist was a mathematician working at AT&T who found that if enough samples are taken, the original signal can be recovered without any loss. To do this, samples must be no larger than 1/2 the size of the finest detail in the signal. With imaging, if this sampling criterion is not met, the image will be subject to a distortion called aliasing, which cannot be filtered out by processing, resulting in permanently lost information. Too long a focal length (large image scale) and the star is oversampled, with an increase in exposure time needed, reducing your detector's ability to record faint stars because the light is spread over several pixels. Too short (small image scale) and the star is undersampled, resulting in too few pixels to accurately measure total star brightness (useful in photometry) or the exact centroid (useful in astrometry and some image processing). To calculate the approximate amount of sky that your ST-6 will "see" per pixel in arcseconds, divide your ST-6 pixel size in microns (23 X 27, or an average of 24.9) by your 12" telescope's focal length in millimeters (3048) and multiply by 206.265, giving about 1.7 arcseconds per pixel- which is just about right. Reduce the focal length with an F/6.3 focal reducer to 1920 mm and the result is 2.67 arcseconds per pixel- by conventional wisdom a little undersampled, but quite nice for illustrative imaging. By the way, a star's size in the image is determined by the full width measured at the half maximum point of the intensity profile. I often use 1.15 arcseconds per pixel, which is oversampled a bit for 3 to 3-1/2 arcsecond nights, but necessary for my work, especially on better nights. But the most efficient way to record a star doesn't mean the resulting image looks good when enlarged. Even if you resample the image to smooth pixels, there is one very good reason NOT to follow conventional wisdom- mathematics. You cannot add information that is not there by resampling to smooth out irregularities (anti-aliasing).
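The image-scale arithmetic above reduces to one formula: arcseconds per pixel = 206.265 x pixel size (microns) / focal length (mm), where 206.265 is just the 206,265 arcseconds in a radian with the micron-to-mm conversion folded in. A short sketch using the ST-6 numbers from the text:

```python
# Image scale and Nyquist-sampling check, per the paragraph above.
def arcsec_per_pixel(pixel_um, focal_length_mm):
    return 206.265 * pixel_um / focal_length_mm

def nyquist_sampled(seeing_fwhm_arcsec, scale_arcsec_per_pixel):
    """Nyquist criterion: at least 2 pixels across the seeing disk."""
    return scale_arcsec_per_pixel <= seeing_fwhm_arcsec / 2

print(arcsec_per_pixel(24.9, 3048))   # ~1.69 "/pixel: ST-6 on a 12" f/10
print(arcsec_per_pixel(24.9, 1920))   # ~2.67 "/pixel with the f/6.3 reducer
print(nyquist_sampled(3.5, 2.67))     # False: a little undersampled
```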
You can use mathematics to increase the amount of detail or apparent resolution. Fast Fourier transforms and deconvolution work best with oversampled images and 14-bit or greater brightness levels. For resolvable objects, such as Jupiter, you lose resolution with an undersampled image. Oversampling may increase exposures to the point that you record atmospheric fluctuations (similar to the blurring produced when a regular camera is moved). Adjust your eyepiece projection or Barlows used on Jupiter to create a focal ratio that results in an image size of 150-200 pixels. A 3X Barlow or two 2X Barlows will get you to about 0.15 to 0.20 arcseconds per pixel, probably in the range you're looking for, because even though the Dawes resolution limit for a 12" scope is 0.4", Dawes' limit really applies to double stars, so you can resolve lunar details below this, to 0.2" or better. Subject: Hartmann Mask Focus Aid URL
From: Richard Robinson
See my Hartmann Mask Focus Aid at: <http://rao.150m.com/Focusaid.html> Subject: Hartmann Mask Focus Construction
--part 1 of 2
From: Radu Corlan <rcorlan
I get excellent results with a Hartmann mask with triangular openings and focusing on the diffraction spikes. The repeatability is great: at f/10 and 3m of focal length, the focus position is within 0.05mm every time I try. It also doesn't depend on seeing. The problem with focusing software is that it takes images at different positions through focus and then has to be able to return to exactly the right spot- which of course depends a lot on the quality of the focuser. Also, at long FL and bad seeing, extracting the best position is tricky, especially with short exposures. With the diffraction/Hartmann method, you don't have to go past the best focus and then back; you know you are focused when the spikes align. The mask is easy to make: you make one triangular opening on one side of the aperture, and a second one on the other side, making sure that the sides of one triangle are perpendicular to the sides of the other triangle. You can make the openings as large as your secondary allows. Point at a bright star, and as you near focus, you will see two 6-spoked stars. As you get nearer, the spoked stars will overlap; when you have one symmetrical 12-spoked star, you're focused. Even the slightest change in focus will separate the spokes of one star from the other. All you need is the mask and the ability to see images from your camera. ----------------------------------
Figure two circles that are tangent to the secondary holder and the inner edge of the aperture. For a 10'' scope, the circles will be a little larger than 3'' each. They would be placed one at 12 o'clock and one at 6 o'clock. The two equilateral triangles will be inscribed in these circles.
The top triangle will have vertices at the 4, 8 and 12 o'clock positions; the bottom triangle at 3, 7, and 11. If you rotate the top triangle 90 degrees, it will overlap the bottom triangle.
The triangle would fit in a 3'' circle. -------------------------------------------------- Subject: Hartmann Mask Focus Construction --part 2 of 2
From: John Mahony <jmmahony
Each side of each triangle will produce a diffraction spike- actually two spikes out of the star in opposite directions, so it's like a line through the star. So a single triangle will produce 3 intersecting lines for a 6-spoked effect. You can make the two triangles with one rotated 30 degrees relative to the other (this is actually the same as Radu's 90 degrees, since rotating an equilateral triangle 120 degrees maps it onto itself), and when the two stars merge, you get a 12-pointed star. Alternatively, you can make the two triangles with the same orientation; then, as the two 6-pointed stars merge together, the corresponding spokes are parallel and the two images merge to make one 6-pointed star. Some like that because you can see the pairs of spokes coming together. If the focus is just a hair off, the spokes perpendicular to the direction between the two triangles will be slightly wider than the others. Hartmann masks are easy to make, so you can make both kinds and see which works better for you. If you still want to try a software solution, Astrosnap <http://astrosnap.free.fr/index_uk.html> webcam freeware will handle just about everything that can be done with a webcam, including autofocus if you have an LX200 with the 1206 focuser. Another description of building a Hartmann Mask is here. Subject: Cable Pin-Out for ST-4 to LX200
From: Gary Campbell & Nigel Puttick <Nigel_Puttick
Here is how mine is wired. I've found the numbering scheme on the modular connectors to be confusing, so I'll try a different approach. If you hold the modular connector cable in your right hand with the connecting wires facing up, the leads come off the main cable in this order:

(A) blue
(B) yellow
(C) green
(D) red
(E) black
(F) white

Colors might be different, so I'll focus on the letter designations instead:

(A) goes to pin 10
(B) goes to pin 13
(C) goes to pin 7
(D) goes to pin 4
(E) goes to pins 11, 5, 14, 8**
(F) goes to nothing

** On the (E) lead

This corresponds exactly to the wiring that Philip Perkins described to me over the phone last night, except his flat cable is the other way round in terms of colour. It also makes sense of the pin-outs described in the SBIG manual. My cable, on the other hand, has these connections completely reversed, so it is just as well it has the wrong D plug (male) on the ST-4 end, or I might have had disastrous results! I will now be able to rebuild it correctly. It should also be noted that, according to both Gary's and Philip's (both successfully working) wiring diagrams, the Meade manual on p. 29 is showing the BACK (i.e. cable exit side) view of the plug, NOT the front. Meade really should have made this clear! Check out the ground connection (commoned pins 11, 5, 14, 8), which should go to pin 4 on the modular plug: this pin is second from left in Gary's diagram, so the diagram of the plug must be showing a view from the rear, with pin 4 second from left. Subject: Pictor 416XT & ST-7 Cool-Down
and Warm-Up Considerations
From: Michael Hart
Several months ago, a thread started on MAPUG concerning the lack of a progressive software cool-down of the Pictor 416/ST-7 and warm-up of the Pictor 416 Kodak 0400 imaging chip. That thread never reached complete consensus. Since then, I have been asked about this subject.

BACKGROUND: Some CCD chips may be sensitive to sudden temperature changes that could cause chip damage. Thermoelectric coolers (TECs) themselves may well be the primary source of those sudden chip temperature changes. What applies to some CCD chips may apply to a greater or lesser degree to other CCD chips -- specifically, the Kodak 0400 chip as used in the Pictor 416XT and the ST-7. Jon Brewster was concerned that Pictor 6.X software lacked the warm-up control on shut-down that was provided with the same software for the Pictor 2XX series cameras. Paul Laughton wrote to Jon Brewster that "Kodak recommends that the chip not be heated or cooled at a rate of more than one degree C per minute." If Paul was right about the 0400 chip, we would have to allow 30-40 minutes upon start-up and another 30-40 minutes upon shutdown, manually increasing the setpoint 1 degree C every minute, since there is no popular CCD camera software that supports such small changes. This would be a significant hassle, unless truly warranted. Quite possibly, one could write a macro to do this, but you're still looking at 60-80 minutes of cooling and warming time.

WHY ST-7 OWNERS SHOULD CONSIDER PROGRESSIVE WARM-UP: Cooling on my ST-7 initially exceeds 13 degrees C per minute. Even with software temperature regulation upon shutdown activated for my ST-7, I get initial temperature rises exceeding 11 degrees C per minute on warm-up. According to SBIG, Kodak has denied several times that there are problems with rapid cool-down and warm-up with their camera software. Both the Pictor and ST-7 use 100% power during cool-down.
The ST-7 may need warm-up regulation because upon shutdown, a single-stage peltier rapidly dumps heat from the warm side into the cold side, much faster than during cool-down. In addition, the ST-7 contains the Texas Instruments TC-411 chip used for guiding, which IS affected by sudden cool-down and warm-up. This chip is used in the ST-4 and ST-4X as well.

WHY PICTOR 416XT OWNERS DO NOT NEED PROGRESSIVE WARM-UP: The 2-stage peltier in the Pictor cameras has more thermal inertia, as the warm side must transfer heat back through 2 stages, which slows warm-up -- consistent with my observations of stacked peltiers. In addition, warm-up does not have to accommodate another chip cooled to the same temperature as the imaging chip, as in my ST-7. John Hoot posted a similar conclusion about 416 warm-up. It was possible Paul Laughton's information was not based on current Kodak specifications, so I asked Paul for a Kodak contact whom I could call to discuss the issue (his information was contrary to my indirectly related discussions with Kodak engineers prior to that). I promised to report back to MAPUG. It is also quite possible that certain designers at Kodak knew limitations that were not reported to Meade and SBIG by Kodak Sales and Marketing. Paul replied privately that he would ask his source if it was OK. After several months, I followed up with a private note asking Paul, "Did you call your Kodak source & get his permission for me to talk to him/her about the 0400/1600 chip limitations?" Paul replied, "Yes. I called. No, he did not give permission...."

CONCLUSIONS: Based on the confirmable information at hand at that time and personal experience, I had personally concluded that software warm-up control was desirable for the ST-7 and not necessary for the 416XT. I did not post these conclusions. I now have new information that allows me to be comfortable with posting a definitive conclusion at this time.
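For anyone who does want to approximate the one degree C per minute ramp by hand, the macro idea mentioned above could be sketched as follows. This is only a hypothetical sketch: `ramp_setpoint`, `set_setpoint` and `wait` are stand-ins for whatever control calls your camera software actually exposes, not real Pictor or SBIG API functions.

```python
import time

def ramp_setpoint(set_setpoint, start_c, target_c,
                  rate_c_per_min=1.0, wait=time.sleep):
    """Step a cooler setpoint toward target_c no faster than
    rate_c_per_min. set_setpoint is a stand-in for your camera
    software's temperature-setpoint call."""
    step = rate_c_per_min if target_c > start_c else -rate_c_per_min
    temp = start_c
    # Move one step per minute until we are within a single step.
    while abs(target_c - temp) > abs(step):
        temp += step
        set_setpoint(temp)
        wait(60)  # one minute between one-degree steps
    set_setpoint(target_c)

# Example: warm up from -10 C to +20 C at 1 degree C per minute
# (about 30 minutes), printing each setpoint as it is commanded:
# ramp_setpoint(print, -10.0, 20.0)
```

With a 30-40 degree swing this takes the 30-40 minutes discussed above, which is exactly why the question of whether Kodak actually requires it matters.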
I have some very good news for everyone with the Pictor 416XT that use the Kodak 0400 imaging chip. I spoke directly with Kodak Scientific Imaging Systems engineering. They were quite surprised at my question about CCD chip cooling rates. According to them, Kodak has NO recommended cooling rate. In fact, I was informed that going from 26 degrees C. to -40 degrees C. (79 degrees F. to -40 degrees F.) in ONE SECOND was quite acceptable. This was confirmed by the Kodak Scientific Imaging Systems Chief Engineer who has used liquid nitrogen to achieve those fast rates. For those that may still have reservations, David Dixon has written me details of what I consider reasonably practical advice for the Pictor 416XT:
Subject: CCD Operation in Warm Temperatures
From: Gene Horr <genehorr
Several people have written expressing worries about using CCD cameras in warm climates. I have worked with various CCD cameras in ambient temperatures from -5 C to 30 C. My conclusion, from both my own experience and from technical information supplied by various manufacturers, is that while there are some problems with operating in warmer temperatures, they are easily overcome. Unfortunately, there is some basis for your worries.

The first problem is noise. Pictures taken at cooler temperatures are better, but it is not a dramatic difference. You can still take good images at a hot ambient temperature. The biggest effect of the hotter temperature is that the dark current is higher. The purpose of the dark frame procedure is to remove this "noise", and dark frame subtraction does a very good job of it. But since the noise level is higher at hotter temperatures, more of the noise leaks through. All is not lost, though: you can fight this with a higher signal/noise ratio (more on this later).

The second problem you might run into is that since the dark current is filling up your CCD "well" faster, you have less "room" for the signal. In short, you are more limited in your exposure times. As with the higher noise level, this is a problem but not a show stopper. I have found that other factors, such as sky glow and bright stars in the field bleeding, usually limit the exposure before well depth becomes a worry. But just in case you were planning a 1 hour exposure, you need to know that most likely it can't be done in warm/hot temperatures.

Fortunately, there is a procedure that fights both of these problems at the same time: stacking images. Since you are taking multiple short exposures, you don't have to worry about "filling up" the CCD "well". And the main reason for stacking images is that it improves the signal/noise ratio. Voila! Both problems solved at the same time!
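As a rough illustration of why stacking helps, here is a small synthetic sketch. The frame size, signal level and noise level are made-up numbers, not data from any real camera; the point is only that averaging n frames cuts the random noise by roughly the square root of n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 16 short exposures: a faint uniform signal of 10 counts
# buried in zero-mean random noise with a standard deviation of 20.
signal, noise_sigma, n_frames = 10.0, 20.0, 16
frames = [signal + rng.normal(0.0, noise_sigma, size=(64, 64))
          for _ in range(n_frames)]

single = frames[0]
stacked = np.mean(frames, axis=0)  # average the short exposures

# Averaging 16 frames should reduce the random noise by about
# sqrt(16) = 4, i.e. roughly a 4x better signal/noise ratio.
improvement = np.std(single) / np.std(stacked)
print(round(improvement, 1))
```

The same idea is why short stacked exposures in a warm climate can approach the quality of one long exposure taken on a cold night, without ever filling the well.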
Subject: Field Testing CCD Camera Desiccant
From: Michael Hart
CCD cameras often use thermoelectric coolers (TECs) to lower the imaging chip temperature, which is very useful in reducing thermal noise and increasing maximum exposure times. Unfortunately, as you reduce the chip temperature, you encourage any moisture inside the camera to condense on the cold imaging chip. Cooled CCD cameras use a desiccant that absorbs moisture and provides a dry atmosphere, minimizing the possibility of moisture condensing directly on the CCD chip or optical window. Some manufacturers state a useful desiccant life of a year or more. This leads to the misconception that the desiccant "goes bad" or "gets old" and must routinely be replaced. Since the Pictor 416/1616 cameras do not have user-replaceable desiccant, owners of these cameras may assume they need routine desiccant replacement as well. In fact, a well sealed camera capable of holding a partial vacuum will never need the desiccant replaced. If the seal is less than perfect, the desiccant will need to be recharged or replaced when saturation is reached. How do you know if your camera needs desiccant recharge or replacement?
Very likely, his desiccant is saturated with moisture. If your camera is sealed well enough to hold a partial vacuum, the desiccant has an indefinite life. The 416 face is o-ring sealed, so anyone that opens the camera must be extra careful to ensure that seal is properly in place. In addition, you risk getting contaminants on the CCD chip, so a very clean and dry environment is desired. See the attached S, Q & A (Statement, Question, and Answer) I put together from a couple of earlier posts:

S: The 416 must be sent back to the factory for desiccant replacement.
A: This is true for warranty purposes. If the camera is out of warranty, you do not have to send it back to the factory to recharge the desiccant, if you know how to recharge the desiccant in the camera. It is worth considering that Meade is set up to do this job properly.

Q: When and/or how often and/or how do I know?
S: Maybe never. Maybe once a week. It all depends on your local humidity conditions.
A: This is partially correct. Local humidity would only affect desiccant life if the camera seal is incomplete. If the camera pulls in moisture when it is cooled, the desiccant must absorb it. The more frequently you cycle the camera through cooling and warming, the faster the desiccant's absorption capability is depleted. The 416 has a quantity of desiccant packed around and behind the imaging chip; the ST-7 has about 1/3 that amount, in a threaded tube removable from the back. Both cameras are capable of indefinite desiccant life provided both can hold a partial vacuum. I tested my 416 for this; it was fine. My ST-7 had some leakage where silicone is used for through-camera connections and the TEC condenser. I sealed these and purged with dry nitrogen. My ST-7 has used the same desiccant cartridge for over 2-1/2 years. The dry nitrogen purge will allow -35 degrees C chip temperatures without frosting, which I sometimes use, hence the modifications.
S: One sign of saturated desiccant is random, jagged and pretty lines/shadows shaped like lightning bolts on your images. Another sign can be shutter problems. Look down onto the CCD and see if there is frost on it. To do this while at operating temperature, tell the camera to grab an image so that it will open the shutter for you.
A: The 416 uses a directly coupled dual shutter that is not particularly sensitive to moisture freezing at lower chip temperatures. It is very likely that shutter will continue to function quite well regardless of desiccant saturation. The ST-7 uses a larger shutter directly coupled to a motor that can slow or stop if moisture freezes on the motor or coupling, so the statement above would apply to that camera.

You can determine in the field if your desiccant has reached saturation with the following simple procedure: First, power up your camera as you would for a normal image, but set chip cooling to OFF. Allow the camera to warm up for half an hour or so. This will allow the camera interior to hold additional water vapor that may have condensed on other areas inside the camera. Take a light frame of sufficient duration to allow careful observation of the imaging chip. The imaging chip will be quite dark with possible metallic reflections. The optical window should be clear on the 416 except for the multi-coatings. The ST-7 optical window should be clear. Next, put the camera in the refrigerator with the head cable (parallel port and power cables for the ST-7) and set the camera chip temperature to -30 degrees C. Allow the camera time to reach or approach the setpoint. We want to condense moisture inside the camera on the imaging chip. Take another light frame and look closely at the imaging chip. How does it compare to the warm camera? Is the optical window clear?
If you see moisture, your desiccant will need to be recharged or replaced and it is useful to check that all seals are intact and to dry out the camera to maximize desiccant life. Keep in mind that unless you have the capabilities for purging the camera, it may take an hour or so for the new or recharged desiccant to absorb the moisture, depending on where the desiccant is located. Subject: Dark Frames & Flat Fields
From: Ralph Pass <rppass
Dark frames and flat fields correct two separate and distinct problems with CCD images.

First, a dark frame is a correction to the raw image that accounts for the 'base' image that would be created at the same exposure and temperature setting but without light (hence the term dark). It consists of two distinct parts: the bias image and the thermal image. The bias image should be consistent from dark frame to dark frame and represents the minimum readout value for each pixel. The thermal frame depends on the time an image is exposed and the temperature of the chip. These two are summed to get a dark frame. Most CCDers take dark frames with an exposure equal to that of the raw image and subtract them from the raw image to create a 'light' only picture. Sophisticated users will take a series of bias images (basically zero-length dark frames) and average them to get a good bias frame. They will then take a long thermal exposure (take a long dark frame and subtract the good bias frame) and save that as a master thermal frame. They can then create a thermal frame for any exposure by scaling the master thermal. The correction is then to subtract both the scaled thermal and the good bias frames from the raw image to create a 'light' only picture. The difficulty with long thermal frames and the Kodak 0400 and 1600 chips is that some of the pixels (known as population III pixels) saturate very quickly, which does not allow scaling a master thermal to get the proper correction for those pixels. This is why I always shoot dark frames of the same exposure length as my raw images.

Second, each pixel has a different response factor to light. In addition, dust in the optical train (particularly on the CCD chip or on the window to the CCD chip) and vignetting of the telescope cause variations in response to light that are pixel-location dependent. Correction for these effects is through the use of a flat field image.
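The arithmetic described in this post -- subtract the bias and an exposure-scaled thermal frame, then apply the flat-field multiply/divide covered next -- can be sketched with NumPy. The bias level, dark current and vignetting numbers below are purely illustrative, not from any real camera, and the sketch assumes no pixels saturated in the master thermal frame.

```python
import numpy as np

def calibrate(raw, bias, master_thermal, master_exp_s, raw_exp_s, flat):
    """Calibrate a raw CCD frame: subtract the bias and an
    exposure-time-scaled thermal frame, then flat-field by
    multiplying by the flat's median and dividing by the flat."""
    scaled_thermal = master_thermal * (raw_exp_s / master_exp_s)
    light = raw - bias - scaled_thermal
    return light * np.median(flat) / flat

# Illustrative numbers: 100-count bias, 2 counts/s of dark current,
# and a flat whose corners get only 80% of the light the center does.
bias = np.full((2, 2), 100.0)
master_thermal = np.full((2, 2), 600.0)    # 300 s dark minus bias
flat = np.array([[0.8, 1.0],
                 [1.0, 0.8]])

# A 60 s raw frame of a uniform 50-count scene, dimmed by the
# vignetting pattern and sitting on top of bias plus thermal signal.
raw = 100.0 + 2.0 * 60.0 + 50.0 * flat
result = calibrate(raw, bias, master_thermal, 300.0, 60.0, flat)
print(result)  # a uniform field: the vignetting has been removed
```

Note that the corrected field is uniform but scaled by the flat's median rather than restored to exactly 50 counts; that is a consequence of the multiply-by-median convention the post describes.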
In this case you create a 'light' only image that is the raw image, taken through the same telescope with the same focus (and, if your chip is offset from the optical axis, the same chip orientation), of a uniform ('flat') light source. Creation of this image is a source of much discussion. I use a sheet of white paper over the end of the telescope and point it at a diffuse light source. Once the image is obtained (and I get one for each evening), flat field correction consists of a pixel-by-pixel operation: multiply by the median value of the flat field image and divide by the pixel value at the corresponding location in the flat field image. Flat field corrections are required when you are doing photometry or when you are really stretching the image to show information. They are not needed, in general, for moon images, etc. Finally, the histogram for the raw image will change (most significantly in terms of the bias value) with a dark frame correction. Failing to adjust the histogram limits will result in a 'darker' image.
Subject: Digital SLR (DSLR) Yahoo Group
From: Russell Croman <rcroman Check out the digital_astro Yahoo group. They also have a home page
here with many images: There are all brands of digital cameras used there, including the Canon 10D, which is a favorite. I've used the D60 (the predecessor to the 10D) quite a bit for lunar and solar (i.e., bright object) work. It's quite nice, and apparently the 10D improves on the noise even more. Of course there's still no beating a cooled, optimized CCD camera for long-exposure work, but the new digital SLRs are pretty good, especially for the money. Subject: New Yahoo Group : 3D-AstroPics
From: Sylvain Weiller <sweiller
Anyone interested in stereo astronomy pictures can join this group. Our goals will be 3D aesthetic and/or scientific images, technical discussions and help for newbies entering the astro-3D field. Members can share knowledge on techniques, ask for beta-testing of homemade software and, most important, organize simultaneous imaging sessions with people overseas to capture the Moon (and Moon eclipses). The favorite 3D subject is obviously the solar system, including far away planets (synthesizing 3D from time difference). Pictures made with digital cameras, webcams, camcorders, video cameras and film scans, and links to 3D astro images, are welcome. Programming efforts will be made to accommodate simultaneous pictures made with different instruments (focal length, orientation...). As a lot of scientifically minded people speak it, we will use English as a common language, but others (mainly French) can be accepted. I invite you to join.
Subject: CCD Color Imaging Concepts
From: R. A. Greiner and Michael Hart
I am pleased to report that Michael Hart has completed the second part of his article, which discusses concepts of doing CCD color imaging. It is located on my website under the topic Imaging/CCD Imagers and Accessories/
Subject: Image Processing Freeware URL
From: Dirk van den Herik Here's a source for good image processing software--Mac and PC. And
for free! Subject: Image Processing Software
From: Gary McKenzie <reiki
I have been going through the process of evaluating software for my home-made (Cookbook 245) CCD. I have had the chance to evaluate a couple of the packages you have mentioned. A current user spent an evening last week guiding me through his copies of "CCDsoft" and "Maxim DL." I have decided to purchase other software for a number of reasons: Maxim DL's only claim to fame is a very nice implementation of deconvolution. The stacking function is poor -- my friend uses CCDsoft to do this -- and that is only marginally better. To my eyes at least (and to many on the CCD list), deconvolution is not any better than unsharp masking. CCDsoft is an expensive, inflexible package that looks pretty but does nothing any better than much cheaper packages. Apparently MIRA crashes regularly (private correspondence with a MIRA user). Most experienced imagers that I have had contact with are using one of two packages: <http://www.wvi.com/~rberry/> Note: should open a new window over this one. Richard Berry's suite of software (designed for the Cookbook, but usable for any FITS images) -- half the cost of Maxim, more flexible, with batch processing, quadcolor, RGB and CMY color, unsharp masking, etc. Quantum Image is slow, but a new version should be out in a week or so; it doesn't stack well, and color is not well handled. If you want the best, then go to: <http://iraf.noao.edu/> Note: should open a new window over this one. A suite of free image processing software that runs on UNIX/LINUX -- THE BEST stuff available, but hard to learn; it's what the professionals use!! I am buying Richard Berry's stuff. I LOVE the idea of having 200 Jupiter images and then just running a script to automatically process them all!!! I can have a coffee, then come back after it's finished and throw 199 images out, and concentrate on THE ONE good image for the night. Editor's Note: see the Software Topics in the left column of the Topical Archive's main page.
Subject: Astro Image Search from Google
From: Ed Stewart <stargazer While doing a Google.com search, I noticed that there was a choice for searching for images at the top of the results page
by clicking Options or: So I tried searching on NGC6888 and got 268 hits. What impressed me was that each results page had 20 linked thumbnail images along with the image title, pixel dimensions, file size, and direct web address, so it is very easy to decide which to explore further. Thinking about it some more in a general way, I realized that this is a very fast way to:
I'm sure there are other good reasons to take advantage of this search function-- check it out. Subject: True Technology Flip Mirror
From: Alistair Symon <asymon
I am using the True Technology Flip Mirror with my MX5 CCD camera and LX10 scope. I am very happy with it. It has proved extremely useful in helping focus my CCD and centre objects in its field of view. True Technology sell a number of different adapters that allow you to attach each end of the flip mirror to just about anything. The flip mirror and adapters have been designed with a low profile in order to minimize the amount of backfocus required to focus your camera with the mirror attached. I use my flip mirror in combination with an f/6.3 focal reducer and a Celestron OAG, and I am still able to focus the camera. There was no problem getting the flip mirror eyepiece parfocal with the camera, and I have not noticed any sign of flexure. True Technology also sell a version of the flip mirror that includes a filter holder. This allows you to change filters without moving the camera or changing focus. It therefore provides a relatively inexpensive way to do tricolour imaging. The only word of caution I would give is that this flip mirror is specifically designed for CCD cameras. I believe it may cause vignetting if used with a 35mm camera. For a full description of the True Technology Flip Mirror check out the equipment page on my website. There is also a link to the True Technology home page there. <http://www.gushie.demon.co.uk/> Note: should open a new window over this one.