CCD Miscellaneous Topics -- MAPUG-Astronomy Topical Archive

-------------------------------------------------

Subject: Review of "CCD Imaging Techniques" Video

From: Ed Stewart <stargazera_tskymtn.com> Date: Dec 2001

Greg Pyros graciously sent me a copy of the new video tape his company produced for review. I'm very typical of a potential purchaser of such a video on basic considerations for getting started in CCD imaging since I have never used this type of camera or related equipment. My only knowledge of the subject comes from reading and editing the posts that come across this list and become a part of the MAPUG-Astronomy Topical Archive.

After popping the tape into the VCR, the opening scene shows the high production values as parts of telescopes fly from behind the viewer out into the foreground and assemble into three complete telescopes-- very cool! They are all Meades, as Meade was kind enough to loan the equipment primarily used in the presentation. The first information given is very basic stuff about the designs of refractors, Newtonian reflectors, SCTs and Maks, and mounts, but not much time is devoted to this so it passes quickly (not assuming anything on the part of the viewer is the best approach). As other concepts are presented, such as periodic error correction, battery power, dew prevention, off-axis guiders, polar alignment, etc., the broad picture of just how much effort and resources are going to be required begins to pile up. And along the way good tips are dropped in, like running the camera's cables up to the Dec axis and then down to remove strain and possible weight shift. The tape ends with basic image processing where the concepts are graphically illustrated; showing ten short exposures as a stack of pages that collapses down to a single image when the stacking command is used goes a long way toward creating a mental picture of what is happening. Again, the production values are high.

I believe this tape is best for those who haven't made their purchase decisions yet, but are in the process of exploring what all is involved in getting into this part of the hobby. It reminds me a lot of a similar video presentation on building a log home that my wife & I purchased several years ago-- after it ended 1.5 hours later, we knew that the project was beyond our abilities, time, and resources. I think this video will do the same for prospective imagers-- it will either excite them into making the plunge or will make them pause to think about the commitment before investing hundreds or thousands of dollars.

My only suggestion to Greg would be to include the word "basic" or a similar phrase in the title, but otherwise I think it is a well-thought-out production, informative, and enjoyable.

For more information and/or to order: <http://www.AstroVideos.com>. The price is $39.95.

-------------------------------------------------

Subject: What are the Definitive Book(s) about CCD?

From: Doug Carroll <voxra_tattbi.com> Date: May 2002

There are really two good books about CCDs, depending on what you want. One is "The New CCD Astronomy" at: <http://www.newastro.com/newastro/default.asp> which covers everything about imaging.

The other is "The Handbook of Astronomical Image Processing" (the book that comes with the AIP4WIN software) at: <http://www.willbell.com/> -- a very detailed book that goes into the depths of how a CCD works and how to get the most out of the sensor. I have both and they are excellent books.

-------------------------------------------------

Subject: Pixels per Arc Second Calculation --part 1 of 2

From: Duane Baker <DBaker1047a_taol.com>

The Winter 1995 issue of CCD Astronomy has an article, "Optimizing a CCD Imaging System," with a chart of pixels/arc-seconds/focal length, which is what I used to determine my scale.

The formula is: (M/FL) * 206
Where M is the size in microns of each pixel of the CCD chip
Where FL is the focal length of the scope in millimeters
The result is the image scale in arc-seconds per pixel.

Example: ST-7 CCD camera with 9 micron pixel size + 8" SCT with 2000mm focal length = 9/2000*206 = 0.93 arc-seconds per pixel

Now if you were to use the f/6.3 focal reducer you get: 9/1260*206 = 1.47 arc-seconds per pixel

Now if you were to use Optec's f/3.3 focal reducer (this reducer only works with f/10 scopes) you get: 9/660*206 = 2.81 arc-seconds per pixel
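To put the formula in code, here is a minimal Python sketch (the function name and the use of the more precise constant 206.265 are my own; results may differ from the rounded figures above in the last digit):

    # Image scale in arc-seconds per pixel:
    #   scale = 206.265 * pixel_size_microns / focal_length_mm
    # (206,265 is the number of arc-seconds in one radian.)

    def image_scale(pixel_size_um, focal_length_mm):
        """Return the image scale in arc-seconds per pixel."""
        return 206.265 * pixel_size_um / focal_length_mm

    print(image_scale(9, 2000))          # 8" SCT at f/10: ~0.93"/pixel
    print(image_scale(9, 2000 * 0.63))   # with f/6.3 reducer: ~1.47"/pixel
    print(image_scale(9, 2000 * 0.33))   # with f/3.3 reducer: ~2.81"/pixel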

This works for all focal lengths even when using a focal reducer.
As I understand the situation, the main reasons for larger pixels are:

  •    Larger pixels are more sensitive -- shorter exposure times.
  •    Larger pixels mean larger field of view -- more sky coverage.
  •    Larger pixels mean less-stringent guiding requirements.

The problem with too-large pixels is undersampling. The main downsides of undersampling are square star images and poor data sampling for analysis purposes. When an image is undersampled, star images can't be properly centroided, so the resolution of astrometry is limited (I do mean resolution, not accuracy, don't I?). Photometry is more difficult -- imagine that a star image is just under the size of a pixel. In this instance, some stars will be right in the center of a pixel and put all of their energy into it, while others might be right over a border, and split their energy over two pixels. Some might even be at "the four corners", and only put 1/4 of their energy into each pixel.

The best trade-off seems to be where the pixel size is about half the seeing. As I live at sea-level, I usually use a focal length that gives me 2.4 arc-sec pixels -- any smaller and my images don't have any more resolution, they just take longer to integrate and cover less sky. An approach that I have used with some success is to undersample an image -- say by using a telephoto lens -- so that I get a nice big swath of the sky, and then resample and smooth the image to make the stars look less square.

BTW, it seems to me that the actual size of the pixels is not the issue -- the question is the size of a pixel in terms of sky coverage. Since changing the focal length of the system changes the effective size of the pixels, the question really becomes "What focal length do I want to use?" For a discussion of this, take a look at the Apogee site's CCD University: <http://www.ccd.com/>

-------------------------------------------------

Subject: Pixels per ArcSec --part 2 of 2

From: Chris Fry <cfryea_tix.netcom.com>

2 arc-secs per pixel is about optimum for CCD imaging in average amateur seeing conditions (below 5,000 feet elevation). Here is the equation:
    Arc-secs/pixel = 206,265 arc-secs-per-radian x CCD-pixel-size (in mm) / focal-length (in mm).

Your scope has a focal length of 10" x f10 = 100" = 2540mm. So:
    Arc-secs/pixel = 206,265 secs-per-radian x .009mm/2540mm = .7308602

This is too small for average seeing conditions at f/10.
   Using an f6.3 focal reducer will reduce your focal length to 2540 x .63 = 1600.2mm.
   So: Arc-secs/pixel = 206,265 secs-per-radian x .009mm / 1600.2mm = 1.1600956.

Q.E.D. The focal reducer gets you closer to the ideal goal of 2 arc-secs/pixel.

Actually you would be better off with an "Optec f/3.3" focal reducer; that will get you: Arc-secs/pixel = 206,265 arc-secs-per-radian x .009mm / 838.2mm = 2.2147279. This is almost ideal, as it is near the target 2 arc-secs/pixel! You can take CCD images at f/10 with 9 micron pixels, and they will probably look OK, but you will be over-sampling and therefore reducing your sensitivity and introducing unnecessary noise into the image.

-------------------------------------------------

Subject: Calculating CCD Pixel Size

From: Doug Carroll <voxra_tattbi.com>

Hi Rick, I think this is what you're after. I wanted to know the same thing, so I created a spreadsheet to figure it all out for me and posted it to my website. Take a look at it and see if it is what you need. You can download the spreadsheet from this site:

<http://www.ccdguy.com/other/ccdpss.htm>

-------------------------------------------------

Subject: Calculating Pixel Size Needed

From: Doug Carroll <voxra_tattbi.com>

-----Original Message-----
From: Doug David:
I have a Meade 8" LX200 and am considering a Starlight Express MX7C for astrophotography. My question is: for an 8" LX200, which focal reducer would perform better for deep space pictures, the f/3.3 or f/6.3? Pixel size of the MX7C is 8.6 x 8.3 microns. Am I correct in using the following to obtain 2 arc-seconds per pixel (formulas from Starlight Express)?

F = Pixel size * 205920 / Resolution (in arc seconds)

In the case of the MX7C and a 2 arc seconds per pixel resolution:

F = 0.0082 * 205920 / 2 = 844mm

For a 203mm (8") SCT, this is an F ratio of 844 / 203 = F4.15, which falls right in the middle of the F/3.3 and F/6.3 reducers. So which one would perform better?

-----Answer-----
I currently use a Genesis camera with 9 micron pixels, image at f/10 with 2x2 binning, and get great results. You don't have to image at 1x1 binning, so you have the added flexibility of combining the two (binning and f-ratio) to get the pixel size you want. The 2 arc-seconds per pixel figure is not a rule--it is a guideline based on the average seeing at most sites. Ultimately I am going to be using the f/6.3 and f/3.3 reducers as well; I have the f/6.3 already.
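To make the binning/reducer arithmetic concrete, here is a small Python sketch (the helper name and example values are mine, not from the post): binning n x n multiplies the effective pixel size by n, and a reducer multiplies the focal length by its ratio.

    # Effective image scale with a focal reducer and on-chip binning.

    def effective_scale(pixel_size_um, focal_length_mm, reducer=1.0, binning=1):
        """Arc-seconds per (binned) pixel."""
        return 206.265 * pixel_size_um * binning / (focal_length_mm * reducer)

    # ~8.45 micron pixels (MX7C average) on an 8" f/10 SCT (2000 mm):
    print(effective_scale(8.45, 2000))                           # ~0.87"/pixel, unbinned
    print(effective_scale(8.45, 2000, binning=2))                # ~1.74"/pixel, binned 2x2
    print(effective_scale(8.45, 2000, reducer=0.63, binning=2))  # ~2.77"/pixel, 2x2 + f/6.3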

If you want to check various combinations, I have a spreadsheet that I did to do this with different optical configurations at:
    <http://www.ccdguy.com/other/ccdpss.htm>

-------------------------------------------------

Subject: "Seeing" vs. Pixel Size      Top

From: Doc G

I have followed this thread about seeing and pixel size with interest. Let me take a slightly different tack on telescope focal length and CCD chip size. I have both a 12" f/10 and a 10" f/6.3 telescope (LX200s). I also have the 0.63 field reducer and the Optec 0.33 field reducer. Why? I feel that the way to think about imaging is to make a list of the object sizes you are interested in. These will vary from planets, which are a few to a few dozen arc-seconds across, to nebulae and galaxies, which can be as large as a few degrees. It is clear that no single telescope with a given CCD chip will be suitable for imaging all of these objects. We need a few thousand millimeters of focal length to fill a Kodak KAF-0400 chip with an image of, say, M57. For a planet we need not only 3000 mm of focal length but a strong Barlow to get a nicely sized image on the same chip.

If we want an image of the larger nebulae or M31, we don't need a telescope at all; we need a medium focal length camera lens. That said, I have a philosophy for imaging which is simply this: pick a focal length which gives an image size that uses up most of the chip area. That's it! If you use most of the chip area, you get the most detailed image you can. This is what I learned from photography: you choose the perspective first, and then you choose the lens focal length to fill the frame. You need to use most of the chip area for the image in order to get an image that is not "pixelated."

It is all very simple but you need a lot of different focal lengths to cover the objects from Jupiter or Mars to M27 or M31.

-------------------------------------------------

Subject: Image Size in Microns --part 1 of 2

From: John Mahony <jmmahonya_thotmail.com>

Art Morton wrote:
>Thank you for your responses. I use a MX916 imager, which I believe has
>pixels that are 11.2 by 11.6 microns. I have not done the correct math to
>relate my most focused star images to (calculated theoretical) size, but it
>seems upon measuring star images with AIP4WIN (star image tool) that most
>of the time stars are not really pin point, but blobs. I am using a LX200 at
>f/6.3 that is collimated using CCD images.
>
>It does not matter if I use a computerized mechanical focuser with FocusMax
>or focus by hand, the smallest star image regardless of Max Pixel Value
>seems to be 3, 4 and 5 pixels across in the X/Y axis. Very, Very rough
>calculation would put the star images on the CCD at 35, 47 and 58 microns.
>
>I am just starting to investigate what the optics of a SCT can do, and after
>reading the advertisement of Meade, (Diffraction Limited Optics), just
>wondered what that meant in real world seeing. I have not taken in the other
>real world affects like the atmosphere, angle of where the star was when
>imaged or other factors. I was just wondering what is the starting point of
>potential star focus size.

I use CCD on a 12" LX200 for asteroid astrometry (discovery and follow-up tracking of asteroids). Star size is important there because you're trying to measure the position of the asteroid to a few tenths of an arcsecond. But it's normal for the image to be a few pixels wide (at 2" per pixel), primarily due to atmospheric turbulence. A more useful measurement is FWHM (full width half maximum) which means the width at which the intensity (at the edge of this width) is one half the peak center intensity. This number is usually much smaller, and is generally independent of a star's magnitude, as long as the star's image is not saturated. More generally, a psf (point spread function, basically a bell-shaped intensity curve) is fitted to the star images, and this allows sub-arcsecond accuracy in determining position, even though that's much smaller than 1 pixel.
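To illustrate the FWHM idea numerically, here is a rough Python sketch (entirely my own illustration, not from the post) that estimates the full width at half maximum of a 1-D cut through a star image by interpolating the half-maximum crossings. A real measurement fits a 2-D PSF, but the 1-D cut shows the idea:

    import numpy as np

    def fwhm_1d(profile):
        """Estimate FWHM (in pixels) of a 1-D star profile."""
        p = np.asarray(profile, dtype=float)
        p = p - p.min()                    # crude background removal
        half = p.max() / 2.0
        above = np.where(p >= half)[0]     # indices at or above half maximum
        left, right = above[0], above[-1]
        def crossing(i0, i1):              # linear interpolation of the crossing
            return i0 + (half - p[i0]) * (i1 - i0) / (p[i1] - p[i0])
        x_left = crossing(left - 1, left) if left > 0 else float(left)
        x_right = crossing(right + 1, right) if right < len(p) - 1 else float(right)
        return x_right - x_left

    # A roughly Gaussian star profile sampled at 2"/pixel:
    print(fwhm_1d([10, 12, 30, 80, 100, 78, 28, 13, 10]))  # ~3 pixels, i.e. ~6 arc-sec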

--------------------------------------------------------------

Subject: Image Size in Microns --part 2 of 2

From: Radu Corlan <rcorlana_tpcnet.ro>

Dennis Persyk wrote:
> I have pondered your question a bit more and have come up with an
> alternative answer: Dispersion.
>
> Here's what I observe:
> Bright class O stars bloat the most.
> Problem worst with camera lenses piggyback; least at Cassegrain focus.
> Problem mitigated with IR blocking filter or stopping down lens 3 stops.
>
> So perhaps it is simple chromatic aberration. Perhaps the class O stars
> produce the greatest signal in the blue and this exacerbates the problem as
> the CCD is least sensitive in the blue and most sensitive in the near IR.

It's almost certainly chromatic aberration. The O stars put out a lot of UV and violet light; even if the CCD is less sensitive there, there's still enough response to bloat the stars.

Once you get near 350nm, even a corrector plate or filters/windows can introduce chromatic aberration.

Seeing is also generally worse at shorter wavelengths.

It's apparently paradoxical that the IR filter improves bloat on blue stars. But I think this paradox can be resolved: when you focus the camera with the IR filter, you adjust optimum focus for a bluer region, so the bloat is less. Most lenses don't correct for infrared, so when you focus on the full spectrum, you tend to shift the focus towards the infrared position.

-------------------------------------------------

Subject: Flat Field/Dark Frame

From: Gary Campbell

A member wrote:
>The Meade manual states:Take Dark Frame-Cover the Telescope. This seems
>simple enough. But what about "Take Flat Field". What exactly does it mean
>to "Prepare the Telescope"?

Blow off taking flat fields for a while. They're hard to take effectively, and they aren't normally required to get some pretty good pics. After you're comfortable with everything else in your imaging, then try them. "Prepare the Telescope" undoubtedly refers to pointing the scope at a flat field (see below). Below are some of my experiences:

In a flat field exposure you want to image a very evenly illuminated field. The best way I know to describe this is to liken it to a photographer's grey card. Some people use a twilight sky as their grey card, while others actually use some sort of card. By imaging this very flat field, you'll record the unevenness in your optical path and within the chip itself. You can then divide this pixel info into your 'real' image. All chips, and most if not all optical systems, will have some unevenness. Maybe the biggest benefit is that you can 'flat field out' little artifacts caused by dust on your imager's glass window. These artifacts will resemble little black donuts. Once you see them, you will instantly recognize them. Proper flat fielding can also reduce the effects of vignetting.

Taking decent flat fields is harder than it seems. You want to expose your flat field image long enough to obtain pixel values somewhere near 2/3 of the saturation level. For an imager using the KAF-0400 chip, such as the Pictor 416, this would be pixel values around 45,000. If you're using a different chip, calculate the 2/3 amount. You don't have to be real close to this, but it's a good rule of thumb. What you want to avoid is (1) your flat field image having saturated pixels, or (2) the pixel values being so low that you don't get enough info. Hence the 2/3 generalization.

To be able to make a usable flat field image, your camera has to be in exactly the same position as it was during your 'real' imaging. This means leaving the camera on your scope once you start imaging. Removing the camera, then putting it back on, will make flat-fielding very difficult, if not impossible. So will changing the focus. So most people end up taking flat fields at the end of an imaging run.

If you're taking flats of the sky, you need to wait until dawn. This way you can get the amount of illumination you need to reach 45,000 pixel counts. Timing is critical. If you shoot too early, your sensitive chip will record stars. If you wait too long, the brightness of the morning Sun will saturate the pixel values. And in either case, your flat field images will be worthless.

Or, if you're taking flat fields of a 'grey card': if your timing is good, you can use the ambient light at dawn to illuminate the card, then adjust your exposure to reach ~45,000 counts. Timing is slightly less critical using this method, but only slightly. You'll really notice how fast the Sun comes up after trying this.

If you can't wait till morning, it's possible to bounce a light off something onto your grey card, much like bouncing an SLR camera's flash unit off a ceiling. A flashlight beam bounced off a white shirt works well. Again, use your length of exposure to obtain the ~45,000 value. This method will *not* help your chances of winning a popularity contest if you're observing with others <g>.

This previous method is fairly simple for people with observatories. Once you've done it once or twice, and get everything set up, it's repeatable, so subsequent nights are much quicker.

A third method involves making some sort of light box to put on the front end of your scope. This is harder than it sounds, because it's not easy to diffuse the light and get a perfectly illuminated field. Plus, your light source can provide the wrong kind of light. I wasted a lot of time trying this method and gave up. Improperly flat-fielded images will make your pics look far worse, not better. Averaged, or better yet median-combined, flat fields provide better results than single exposures. The same holds true for dark frames.
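As a minimal sketch of the calibration arithmetic described above (the array names and the NumPy approach are my own illustration, not from the post): subtract the dark, median-combine the flats, normalize the master flat, and divide it out.

    import numpy as np

    def calibrate(light, dark, flats):
        """Dark-subtract and flat-field a raw image.

        light: raw image; dark: master dark of matching exposure;
        flats: list of dark-subtracted flat field frames.
        """
        master_flat = np.median(np.stack(flats), axis=0)  # median-combine the flats
        master_flat = master_flat / np.mean(master_flat)  # normalize to mean 1.0
        return (light - dark) / master_flat               # removes vignetting, dust donuts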

-------------------------------------------------

Subject: CCD Selection Questions

From: Dick Green <dick.greena_tvalley.net>

1. Why would one pay the same for an ST-6, with grainier images and no self-guiding, as for the ST-7? What is its strength? Please explain in layman's terms.

Hard to explain in layman's terms, but I'll try. Ask yourself this question: When I image a star, just how small a spot of light will it cast on a CCD chip? In theory, the spot should be a point of light because at stellar distances the light is coming from a point source (i.e., a star is too far away for you to be able to see the actual disk). In practice, of course, many factors conspire to make that spot bigger than a point. For example, I'm sure you've noticed that when the seeing is really good, and your telescope optics are clean and well collimated, and your focusing has been really precise, stars appear as sharp "pinpoints" of light. Under such conditions, you can see the Airy disk of a star with a high magnification eyepiece. On other nights, when the seeing isn't so good, or your telescope optics are dirty or a little out of collimation, or you just can't quite get things in focus (because it's 30 below), the stars look a bit larger or smudgy and you can't see the Airy disk at all. Sometimes the seeing alone will be so bad that a star looks blurred in a high mag eyepiece -- i.e., it makes a bigger spot.

Of course, I don't have to tell you that your LX200 doesn't have perfect optics, either. The primary mirror is relatively small, there's a BIG central obstruction, and the figure of all of the optical components probably doesn't compare to, say, one of the Keck scopes. That makes the spot bigger, too. To make matters worse, when you do a time exposure with a camera or CCD, imperfect tracking and/or flexure in the mount is going to smear or wobble that image. So the spot gets bigger. One can summarize the factors thusly: Focus, Optics, Seeing, Tracking, and Experience (of the operator). Hey... that almost spells FOSTER, which is the title of Software Bisque's treatise on the subject (the R stands for Review -- weird). Guess it's obvious where I learned about all this...

Anyway, the theory is that the best resolution is obtained when a star image takes one or two pixels to represent. Let's assume that it's a great night and you've done everything you can to optimize Focus, Optics (clean and collimated), Seeing, Tracking, and your own Experience. Then it boils down to how well the specific characteristics of your optical path match up with the size of the pixels used in your CCD. If the best your optical path can do is make star images that are 50 microns in diameter, the 9 micron pixels used in the 416 and ST-7 won't improve resolution over the roughly 27 micron pixels (they aren't square) used in the ST-6.

You might say, "So what -- the 9 micron pixels certainly don't degrade resolution." True, but they are a whole lot less sensitive to light than the 27 micron pixels. You can easily see this if you use 2x2 binning for images from a 416 or ST-7. The resulting 18 micron pixels take about 1/4 the exposure time of the unbinned 9 micron pixels. Longer exposure time is a real liability because it requires much more precise tracking (that's why the 416 and ST-7 come with autoguiders), steady seeing throughout, and no mirror flop. The problem is exacerbated by the fact that the blue light sensitivity of the KAF-0400 chip is relatively poor. And tri-color images? Oh boy. Now you need precise tracking through three long exposures, you've introduced filters that reduce the light reaching the chip, and the poor blue sensitivity just kills you.

Finally, the ST-6, big old rectangular pixels and all, actually has a much larger field of view than the ST-7, but with a lot fewer bits to transfer to the computer. Typical exposure times are short enough that autoguiding is needed less often. As for graininess, try comparing a well-exposed ST-6 image with an underexposed unbinned ST-7 image. The latter is much grainier. It takes a lot of precision work (and some luck) to get that "photo-like" quality from an unbinned ST-7 image.

This is not an ad for the ST-6. In fact, I have an ST-7 because I don't like square stars, I couldn't bear the thought of waiting for very slow serial port transfers, I wanted to do long autoguided exposures, and I didn't know all this when I bought the camera. Maybe someday I'll get a better telescope to go along with my CCD camera. What with all the time it takes to optimize tracking and take long exposures, it should come as no surprise that I built a permanent pier -- both for improved stability and minimum setup time.

One last thought -- from what I've read, the ST-6 is a better choice for photometry due to its better blue light sensitivity and short exposure times. Square pixels aren't a problem in this case.

> 2. You mentioned using a barlow... does that mean the CCD physically pops right into the star diagonal? The visual back? Where exactly do you put the darn thing?

The 416 and ST-7 both have nosepieces that are just like the barrel on an eyepiece. You stick the nosepiece into the visual back and use the thumbscrew. If you use a barlow, it goes in the visual back and the CCD nosepiece goes into the barlow. The single thumbscrew mounting is a lousy way to mount a CCD camera (the camera can rotate on its axis, requiring a new flat field image, and do you want one little screw holding your $2,500 camera in place as the scope slews wildly around the universe at the end of a long moment arm formed by the barlow?). The ST-7 has T-threads, so it can be mounted directly to the rear cell or focal reducer. This makes for a much sturdier mount, but you can't use it with a barlow.

> 3. If one took a camera, and mounted it a prime focus, it would have a very
> wide, rich field. Aside from the obvious inability to take in a lot of data,
> why are CCD images so much smaller in area size, but so much larger in terms
> of magnification? In other words, at f/6.3, on an 8" M51 looks like a huge
> whirlpool, yet if you were to use a camera at prime focus, it would come out
> as a tiny little thing. Is that a limitation of emulsion?

The size of the image projected on the chip and on the film is the same, but the chip is a whole lot smaller than a piece of 35mm film, so its field of view is much smaller. M51 fills up almost the whole chip, while it's just a tiny spot on the film. The chip image, which is almost all M51, is blown up to nearly the full size of your computer screen (maybe about 11 inches on my screen), while the film, with a tiny image of M51 in the middle, is blown up to maybe 4 x 5 inches. If the grain of the emulsion is fine enough, you should be able to blow the image up to the same size as the CCD image, with much better resolution.
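The field-of-view difference is easy to quantify; this Python sketch (sensor dimensions are approximate, from memory) uses FOV = detector size / focal length, converted to arc-minutes:

    # Field of view in arc-minutes = 3437.75 * detector_width_mm / focal_length_mm
    # (3437.75 is the number of arc-minutes in one radian.)

    def fov_arcmin(width_mm, focal_length_mm):
        return 3437.75 * width_mm / focal_length_mm

    FL = 1260  # 8" SCT at f/6.3, in mm
    print(fov_arcmin(6.9, FL))   # KAF-0400 chip, ~6.9 mm wide: ~19 arc-min
    print(fov_arcmin(36.0, FL))  # 35mm film frame, 36 mm wide:  ~98 arc-min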

> 4. With short shots, can one get away with alt-az. alignment on the LX200
> rather than Polar.. it is much easier for me, I have seen this done
> successfully in 2 minute shots, but would like more info ( I know we have gone
> over this before, but we seem to be more experienced now as a group, and would like more opinions on it)

Sure, it works fine. But longer exposures will show field rotation, so it has limited utility. It would be nice if Meade came out with a field derotator for the small scopes like the one on the 16". I, for one, would much prefer the stability of the ALTAZ mount without the wedge. The drawback is that you can't use a tube-mounted guider.

-------------------------------------------------

Subject: Equipment & Software Used for CCD Imaging     Top

From: Mark de Regt <deregta_tearthlink.net> Date: Aug 2002

Michael Wyatt asked:
> Could you provide more details as to your setup and how you
> were able to get such a great image. Did you have the LX200 PEC
> trained and what focal reducer did you use and how was it located.

From the ground up, here is my equipment list:

  1. I set up on grass; not preferred, but my only available option at the moment.
  2. Three one foot square ceramic tiles (the kind normally used for tiling floors).
  3. Celestron "shock absorbers" for all three feet (I have removed the rubber pads from the feet of the tripod).
  4. Meade Giant Field Tripod (the one for the 12"); I only just acquired this, and all pictures on my website other than the NGC6946 were made with the Standard Field Tripod. (I got the Giant Tripod with the plan to use it as a pier, just leaving it in place, with the wedge attached.)
  5. Milburn Wedge--a real quality product, both in form and function.
  6. 10" LX200 (four years old); non-stock equipment on the scope: a two pound weight taped to the East fork arm; an NGF-S focuser; a RoboFocus stepper motor for the NGF-S focuser (the RoboFocus has only been attached for the last four images I have made--M63, M82, M51 and NGC6946); rails top and bottom from Bonnie Lake Astro Works (Ken Milburn's shop), with various weights for balancing at different points in the sky; a mirror stabilizer lock,
    from ScopeStuff <http://www.scopestuff.com/>;
    an EZ Focuser from Peterson Engineering <http://www.petersonengineering.com/SkyDiv/sky_division.htm>,
    which is a must for good control of the difficult SCT focuser.
  7. Meade .33 reducer, without spacers, yields an effective focal ratio of a bit over f/4.6.
  8. I should use the Hutech light pollution filter, which I have, but I have not used it yet.
  9. SBIG CFW8 color filter wheel.
  10. SBIG ST-7E CCD camera--aside from the high quality of the camera, the built-in guide chip is a lifesaver.
  11. Software used in image acquisition--CCDSoft v5 from Software Bisque <http://www.bisque.com/>. This is a very nice piece of software, particularly in that it allows one to orient the camera at any angle and still guide as well as the mount can. It also includes @Focus, a very good automatic focusing routine. Finally, working with TheSky, also from Bisque, one can figure out precisely where a guide star is, which saves a lot of blind searching.
  12. Software used in image processing--AIP4WIN, which I typically use for calibration (darks and flats), alignment, Richardson-Lucy deconvolution, and Digital Development Processing (another sharpening routine); CCDSoft can be useful for alignment (but I don't have a lot of luck with it), and it is good for image combining (combining the subimages into one image, after calibration and alignment). Final image combine, tweaking levels and curves, and some useful filters are to be found in Photoshop. AIP4WIN is, IMO, a must have for all imagers, and is a bargain at well under $100. Although it's not software, another must is Ron Wodaski's book "THE NEW CCD ASTRONOMY" <http://www.newastro.com/newastro/default.asp>.

As to how the images turn out well, it is a combination of a lot of things. First, PRACTICE. While one is typically able to get recognizable images the first night, it takes a lot of work to achieve good focus, combine images effectively, and process them optimally. I still have much to learn, but I have benefited from the second imperative--doggedness. Your images will only be as good as the effort you are willing to put into them. I started imaging in the Spring of 2001; it took a couple of months before I felt good enough about my nascent skills to attempt a color image, and it took several more months until I started getting pretty consistent results. The other thing which cannot be overstressed is the help one can get on the internet groups, in my case the SBIG group on yahoogroups. The world's best imagers are regular contributors on the group, and are happy to answer questions. Particular kudos must go to Ron Wodaski, who is tireless in helping out, but many, many others post frequent responses to us newbies as we claw our way up the learning curve.

You asked particularly about PEC training. In doing so, you hit on the biggest sore spot I have in trying to image with the LX200. I believe that the RA mechanism is not machined to particularly high tolerances, and PEC is not particularly effective in getting rid of the odd bumps in the mechanism which appear at various times. My next significant purchase will be an AO-7 from SBIG, which seems to be quite effective at neutralizing much of the annoyance of the less-than-perfect LX200 mount/drive system (don't get me wrong, I think that the LX200 is a marvelous value, but it does have its shortcomings).

Even if you acquire quality data, you cannot get a good final result without careful image processing. All of my images have pretty detailed descriptions of the image processing I have performed on them, but it is a long, hard road learning processing, and I still am feeling my way on that.

Astro Images: <http://home.earthlink.net/~akilla/MAD/>

-------------------------------------------------

Subject: Useful Prime Focus Camera Adapter

From: Bill Nicoll <billa_tbillnicoll.com> Date: Sep 2003

I recently purchased a 2" Prime Focus Camera Adapter, made by Orion, which I find to be very useful. One end has a standard T thread which screws directly to the front of the CFW8 filter wheel on the front of my SBIG ST-7 camera. The other end has a 2" metal barrel. A unique feature is that the barrel unscrews from the body of the adapter and the two sections have mating Schmidt threads. This allows me to sandwich a Meade Focal Reducer and/or a deep sky filter between the two halves of the adapter. In addition, the front of the adapter barrel is internally threaded for standard 2" filters.

The complete stack on the back of my 12" LX200 for imaging is as follows: Eyeopener Visual Back, JMI NGF-S Focuser, Orion Prime Focus Camera Adapter with sandwiched reducer & filter, and the CFW-8 coupled to the ST-7 camera. I have no problem achieving focus and have the largest possible diameter light path back to the camera.

I purchased the adapter from Oceanside Photo & Telescope.

-------------------------------------------------

Subject: LX200 Rear Cell Adapter

From: Dave Schanz <dave23scha_tvalleytranscription.com> Date: Mar 2004

----- Original Message -----
From: Clifford PETERSON
Does anyone know who makes a 3-3.25" to 2" rear cell adapter for a classic LX200 10"?

Cliff, I bought this adapter. It's very well made, and I like the compression ring with the three thumbscrew set up. It's very solid: <http://www.buytelescopes.com/product.asp?pid=5669>

-------------------------------------------------

Subject: Camera Connections to Avoid Vignetting

From: Roger Hamlett <ttelmaha_tntlworld.com> Date: Apr 2003

----- Original Message ----- From: Lawrence Harris:
> After spending quite some time I am still not managing to fit my MX916
> CCD to the LX200 without getting considerable vignetting. I have used the train:
> LX200 - NGF focuser - f/6.3 converter - T-adapter - CCD camera
> But the result gives excessive vignetting. What I really want is the
> camera right next to the converter, but the adapter is several cm long
> and I think this increases the vignetting.
> I then tried to use the camera with a filter but found it impossible to
> find a combination of fittings to allow this. Someone must have an answer here?

Start at the beginning. If you put the camera 'right next to the compressor', then you might as well not have the compressor at all!... The compression ratio is dependent on the spacing. The formula is: rf = (f - s) / f

Where 'f' is the focal length of the compressor, and 's' is the separation. The f/6.3 compressor has a focal length of about 240mm, and is designed for a spacing of about 90 to 100mm. The spacing here is the distance to the optical centre of the compressor lens, but the distance to the corner where the threads end on the compressor is close to the right distance. So if you put the camera tight to the compressor (reducing this distance to only perhaps 40mm to the CCD), you would get a compression ratio of only (240-40)/240 = 0.83.
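Here is the same relationship as a small Python sketch (the function name is mine; the focal length and spacings come from the discussion above); note how quickly the compression weakens as the camera moves toward the reducer:

    # Effective reduction factor of a focal reducer: rf = (f - s) / f
    # f = reducer focal length, s = reducer-to-CCD separation (same units).

    def reduction_factor(f_mm, s_mm):
        return (f_mm - s_mm) / f_mm

    F = 240  # approximate focal length of the f/6.3 reducer, in mm
    print(reduction_factor(F, 90))  # ~0.63 at the designed ~90 mm spacing
    print(reduction_factor(F, 40))  # ~0.83 with the camera close up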

The next question is how bad the vignetting is. It is worth understanding that even without a compressor, the scope itself will not produce a completely 'flat' illuminated field. Many scopes have a 'fully illuminated' field that is only a very few mm across, and will display some vignetting beyond this diameter. However, the amount is normally small. If (for instance) the illumination varies by a few hundred counts out of several thousand, this is both normal and acceptable (and easily corrected by using a flat field). When looking at the causes of vignetting, you need to work out which part of the light path is the likely culprit. So imagine a situation where you have the optical train as you describe, with the separation from the CCD to the compressor at (perhaps) 90mm, and the final optical ratio (measured using the drift method, or by scaling off a star image) at f/7 (the scope itself will be generating a slightly longer than quoted focal ratio, because of the increased back focus distance). If you work 'forwards' from the CCD, things progress as follows: at the CCD, the field required is 10.9mm diagonally. The cone of light needed to illuminate this tapers at f/7 (from the imaginary measurement taken) forward from here to the compressor. So over 90mm, it will taper outwards by 12.9mm, bringing the clear field needed to avoid vignetting to 23.8mm at the compressor.

This is still comfortably smaller than the clear diameter of the compressor module. From this point forward, the cone now tapers at perhaps f/11, so by the time you reach the front of the NGF-S (perhaps 100mm forwards of the compressor?), the field diameter has risen to 32.8mm. From here, the light cone carries on tapering out at the same rate, down the entire length of the telescope, to the secondary (you do not say whether this is an 8", 10", or 12" scope, which changes the length and the diameter of the baffle tube). If at any point the outside edge of this imaginary cone is larger than the available diameter, then vignetting will result. In the case of an 8" scope, the diameter needed will rise to perhaps 75mm at the secondary, and will be slightly vignetted, not by the compressor, but by the front edge of the baffle tube and the secondary. However, the amount involved will be less than 10% (in intensity measurement), and it is better to have this than to increase the secondary obstruction...
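The "work forwards from the CCD" procedure is easy to mechanize; this Python sketch (structure and variable names are mine) grows the required clear diameter by length/f-ratio for each stage of the light path:

    # Required clear diameter along the light path, working forward from
    # the CCD: each stage adds (length / f_ratio) to the field diameter.

    def required_diameter(start_mm, stages):
        """stages: list of (length_mm, f_ratio) tuples, CCD outwards."""
        d = start_mm
        for length, f_ratio in stages:
            d += length / f_ratio
            print(f"after {length} mm at f/{f_ratio}: {d:.1f} mm needed")
        return d

    # MX916 diagonal ~10.9 mm; f/7 cone for 90 mm to the compressor,
    # then roughly f/11 for 100 mm to the front of the NGF-S:
    required_diameter(10.9, [(90, 7), (100, 11)])   # 23.8 mm, then 32.8 mm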

There is a balancing act between the size of field that can be used, the blocking of 'stray' light, and the size of the central obstruction. Most image programs make vignetting look far worse than it actually is (by applying automatic 'stretches' to the dark parts of the image), and a bit of careful processing can comfortably deal with normal vignetting levels.

-------------------------------------------------

Subject: CCD Focusing Aid

From: Bruce Johnston

My method of focusing either my Cookbook camera or ST-7 is to get close with an eyepiece as you now do; then I flop a cardboard mask over the end of my 10" LX200, the mask having two round holes in it, each about 1" or so in diameter.

Making the mask is easy and takes about 10 minutes. I just lopped off the end of a cardboard box that was more than 10" to a side, leaving about 2" of each side of the box intact. I then cut off two of these sides, so what I then had was a flat box end with two edges of the original box. I flopped the box end over the end of my scope so that the two "sides" were resting against the tube. A quick session of measuring showed me where to cut the two 1" holes so that they'd be on opposite sides of the central obstruction, and equal distances from it.

From then on, to make a very fast focus, I aim at a bright star, begin downloading and displaying the star, and I see two representations of the star. The further from focus, the further apart the stars are. Just tweak the focus a bit at a time... I use a JMI zero-image-shift focuser to keep the star from shifting during focus -- and presto, when only one clean star is visible... perfect focus! Takes maybe one or two minutes, max, to get focus bang-on. Quick and cheap!
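For intuition about why the two images merge at focus, here is a small geometric sketch (my own illustration, with made-up dimensions): two holes separated by a distance D act as sub-apertures whose beams cross at focus, so at a defocus of delta the images are separated by roughly D * delta / f.

    # Two-hole focusing mask geometry: image separation ~ D * delta / f.

    def image_separation_mm(hole_sep_mm, defocus_mm, focal_length_mm):
        return hole_sep_mm * defocus_mm / focal_length_mm

    D, F = 180.0, 2500.0  # holes ~180 mm apart on a 10" f/10 SCT (2500 mm FL)
    for delta in (2.0, 0.5, 0.0):  # mm of defocus
        sep = image_separation_mm(D, delta, F)
        print(f"defocus {delta} mm -> {sep:.3f} mm apart "
              f"({sep / 0.009:.0f} pixels at 9 microns)")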

-------------------------------------------------

Subject: An Improved CCD Focuser Mask

From: Paul Gitto <PaulGittoa_taol.com>

I've recently developed an improved focusing mask for CCD imaging. After working with diffraction focusing aids and Hartmann masks, I still found them time consuming.

Basically, a focusing mask is an opaque material placed in front of the telescope's objective. Usually 2 holes are placed on opposite sides of the mask to let light through. When the 2 images merge as one, as seen in an eyepiece or the CCD camera's image, the telescope is focused. I have determined that placing a triangular hole on a standard 2-hole mask simplifies the process. The third hole is used as a guide to inform the user if the focus has gone beyond the desired focus: if it has, the triangle will move to the opposite side of the focus point.

First, the triangular hole is placed at the North (top) of the telescope's objective. The 2 round holes represent East & West. The telescope should be pointed at a fairly bright star. The first images are at a low resolution to speed download time. Even with the telescope way out of focus, the mask will automatically guide you as to which direction you need to turn the focuser: the triangle will move either north or south, depending on which direction the focuser is turned.

Images and further explanations can be found at my webpage:

<http://www.cometman.com/>

-------------------------------------------------

Subject: Color Filters Used for Three Color Imaging

By: Doc G

For 50 years I have used Wratten color filters for photography and have done everything from minor color correction to color separation photography. This includes minor color correction to correct for tungsten to 3200 K or from daylight to 3200 K. I have also spent several years in the 1940s and 50s doing three color separation photography. I believe that I understand color correction and three color photography quite well. With that long introduction, I will make a few comments about the use of filters for astronomical imaging using both film and CCD imagers.

The topic of color filters has become of interest to several CCD imagers on the MAPUG site recently as a result of questions about the thickness of filters and the exposure times required in order to obtain correct color balance. A list of the thickness of some filters is appended at the end of this note. I became interested when I purchased a set of color filters from Optec and found a rather large ratio of red to green to blue exposures for this filter set. As an aside, which I will not discuss, I note that the color filters provided by Meade with their color filter wheel are not all of the same thickness and thus all but impossible to use. This problem was addressed by J Hoot in a recent post. However, in the same note he recommended a set of Wratten color filters which included the choice of an 80A. I was quite astonished by this recommendation and will discuss this choice in the following comments.

It must be noted that the choice of filters depends upon the application, including the spectral sensitivity of the film in the case of photography, and of the CCD chip in the case of electronic imaging. There is probably no best set of color filters to use in general, but there may be a best set to use when all factors are known. In a recent experience, I heard a talk at the Florida Keys Winter Star Party in which images very much more blue than any I have ever seen were presented, with the comment that these images were more accurate than most of those seen in the past. They were beautiful.

We do not know, in terms of what the eye might see, exactly what color the many objects in the sky really are. Most dim extended objects appear gray to our eyes. I recently saw M57 through a 40" scope and for the first time I saw slight color. For bright objects, like stars and planets, we do of course see color. Obviously the lack of color is a defect of the eye. The objects of course have color, which may be captured with film and CCD.

Color filters may be of two types. Multi-layer dichroic filters can have very sharp and well defined pass bands. Wratten filters are dye-controlled filters and have pass bands which are in general much less sharp. As a starting point, and to limit this discussion, we will assume that a three color system of imaging will be used. This will require a set of three filters that are broadly Red, Green, and Blue. There are numerous sets of RGB filters that are traditionally used for three color imaging. We must also note that the reconstituted image might be displayed on a computer screen, with RGB colors of their own hue and saturation, or by printing processes which call for their own color representation and balance.

Common sets of RGB filters for photographic purposes are Wratten filters: Some possible sets are: (in order RGB)

  • Separation filters 29, 61, 47
  • Tricolor filters 25, 58, 47

Interjection: Because the filters will be represented by numbers, a brief note is interjected at this point to describe the filter colors as they are described in the Kodak manual:

Red filters:

  • 23A, Light red.
  • 25, Medium red. Red tricolor. For color separation work.
  • 29, Deep red for use with No. 58

Green filters:

  • 56, Light green.
  • 57A, Medium green. (but lighter than the 58)
  • 58, Medium green. Green tricolor. For color separation and tricolor printing.
  • 61, Deep green tricolor. For use with Nos. 29 and 47.

Blue filters:

  • 80A, Light blue. Color correction to convert from 3200 K to 5500 K light (tungsten to daylight)
  • 38A, Medium blue. Absorbs red and some green.
  • 47, Deep blue. Blue tricolor. For color separation work with Nos. 29 and 61.

Because of the way these filters, all of which are pass band filters (except the 80A, which is a correction filter used with color film), behave in the pass and stop bands, and particularly how rapidly they cut off, they will produce distinctly different color balance in the final image. Astronomical printed color images rarely give all of the information necessary to be able to judge the accuracy of the color displayed. Often, though, the type of film (if color film) and the exposure times are given, as well as the technique used to generate the image. Malin gives great detail about these matters in his book. But even then, the variety of photographic techniques used is largely directed to making nice looking images. That is, nice detail, nice color and the like. Photometric color accuracy is not usually the goal.

An example of deviation from the normal (traditional) filter set is suggested by Wallis and Provin, whose color images are quite wonderful. They suggest using the Wratten filter set for photography of 23A, 57A and 47. This set has a slightly lighter red filter and a slightly lighter green filter. Their filter suggestion gives a smoother coverage of the spectrum when used with monochrome film, and fills in the spectral gaps where there is appreciable astronomical information. They also recommend an exposure ratio of 1:1.5:2. This seems to me to be a wise choice of filters and exposures.

We must realize that the spectral response of the CCD chip is very different from that of photographic film. CCD chips of the types used in popular amateur imagers are highly sensitive to red and infra-red and very insensitive to blue light. This means that images taken with the traditional filter sets used for photography will require very long exposures for the blue filter compared to the red filter in order to obtain good color balance. Additionally, the infra-red portion of the spectrum must be suppressed or it will alter the color balance of the composite image. Note that newer blue-sensitive chips are being worked on, but they are currently expensive and not generally available in amateur imagers.

Exposure ratios for the RGB of 1:2:4 or even 1:3:6 are not uncommon. It is generally required for accurate color rendition to use an infra-red reject filter since leakage of infra-red through the color filters will spoil the color balance of the image. Infra-red rejection filters are of two types. The "hot mirror" or multi-layer type has very sharp rejection characteristics and is generally considered the best. Another type is simply heat absorbing glass which works fairly well but has a very slow cut-off and not complete absorption of the infra-red wavelengths.
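As a trivial illustration of what such exposure ratios mean at the telescope (the example numbers are mine):

    # Per-filter exposure times from a base red exposure and an R:G:B ratio.

    def rgb_exposures(red_minutes, ratio=(1, 2, 4)):
        r0 = ratio[0]
        return tuple(red_minutes * k / r0 for k in ratio)

    print(rgb_exposures(5, (1, 2, 4)))  # (5.0, 10.0, 20.0) minutes, 35 total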

There is still considerable difference among professionals about the exact filter sets to use. Clearly it depends on the exact spectral sensitivity of the CCD chip and the spectral accuracy required by the particular application. All manufacturers of filter sets have shown images which are very beautiful. The color balance is controlled by exposure times, and by the color balancing attained in the reproduction of the images, as much as by the original choice of filter set.

This brings me to a few final comments. I have a set of filters from Optec, a company well known for its excellence in photometry, which has a light red filter, a normal green filter, and a very dark blue filter when compared visually to the normal tricolor or separation filter sets. It appears that this filter set requires a very long blue exposure, both because of the density of the blue filter and the lack of blue sensitivity of the typical CCD chip. I have seen a filter set consisting of 25, 58 and 38A recommended. This seems to me to be a very good filter set for CCD imaging, since it has a slightly lighter blue filter than the 47 recommended for photographic film. Because the CCD chip is so insensitive to blue, the exposure for the blue image should not have to be unreasonably out of proportion to the red and green exposures.

On the other hand Mr. Hoot has recommended a filter set consisting of a 23A, 56 and 80A. The 23A is a light red filter, the 56 is a light green filter and the 80A is a very light blue filter. In fact the 80A is not really a band pass filter at all but a color correction filter used to change 3200 K tungsten light to daylight balance of 5500 K. It passes 80% blue and 20% red and is used to correct daylight color film for use with 3200 K (tungsten) illumination.

This does not mean that this filter set is necessarily bad. This set has an exposure ratio of 1:1:1. Such a ratio is useful since it conserves time: it makes the blue exposure shorter than that usually required with blue filters, and the blue image is often nearly half of the total exposure time. It was also stated that this filter set gives nice looking images rather than color-accurate images. Since the red and green filters are also lighter than usually recommended, the whole set gives overall shorter exposure times. Since the blue filter is very light, it gives an image which is heavy in blue but with considerable green and red. The very lightness of the blue filter makes up for the lack of sensitivity of the CCD chip in the blue region of the spectrum. If this set of filters gives nice looking color images, that is fine. But we should have no illusions that it is accurate color. For astronomical color imaging a great latitude is allowable, since no side-by-side comparison can be made as it can with photographic imaging of earthly objects. We are happy to see images that are not just gray. A photometrically accurate set of color filters, combined with the lack of blue sensitivity of the CCD chip, requires an almost excessively long blue exposure. We may have to wait for the general availability of blue-sensitive CCD chips to solve this problem.

Then again, what is accurate color? Is it the image in one book or another, the color slides I saw a year ago, the color I see on the computer screen, the color I saw at the Winter Star Party, or some other color balance? It's hard to tell. One might now say: what about the color that color film shows? Is that accurate? Not so simple to say. Color balance is a strong function of exposure time, and reciprocity failure often dominates. Color film can give quite unbalanced color images as well. Professional films are made either for short exposures of under 1/10 second or for exposures of several seconds, to ensure controlled color balance.

It is probably best to judge a color image on the basis of its detail, its range of colors, and its general beauty. These are subjective judgments indeed. Filter thickness is an important issue when taking three color images. The filters must be exactly the same thickness or there will be an image focus shift at the imager. Filters vary in thickness considerably. I have always made a practice of keeping a record of filter thicknesses for my photographic work.

For a reference, here are some values for a few 2" filters I use.

  • Lumicon Deep Sky Filter 2.55 mm
  • Lumicon UHC Filter 3.30 mm
  • Lumicon H alpha pass 2.18 mm
  • Lumicon Minus Violet 1.90 mm
  • Clear Filter 2.60 mm
  • Clear filter 3.30 mm

I keep these to match the Lumicon filters above when used in the Optec 2" filter slider.

  • Optec Hot Mirror (minus IR) 4.25 mm -- this filter is used with all color filters to get rid of IR
  • Optec separation filters Red, Green and Blue 2.00 mm

The color filters in this set are each exactly the same thickness, which is essential for color imaging.

  • Hoya ND Filter 2.5 mm
  • Hoya Orange Filter 1.94 mm
  • Hoya Red Filter 1.98 mm
  • Hoya Blue Filter 2.18 mm

I hope that this information will be of some value to those considering color imaging.

-------------------------------------------------

Subject: Tricolor Filters for CCD--Part 1 of 3

From: Michael Hart

John Hopper wrote:

> I have added to my web site a long description, with pictures, of the
> Optec filter slider. It is under Attachments/Filters/Optec. I have also
> added some photos to the article on focal reducers that show the
> Lumicon, Meade and Optec reducers in some detail.
> Finally, I have just read a fine book, called Pluto and Charon. It is
> reviewed briefly in my bibliography section under Bibliography/History
> Doc G
> Doc, Thanks for posting this practical guide to filters and tricolor! I was
> looking for information just like this. I have a question and comments:
> Has anyone else on the list used different filter combinations with good results?

I believe over time I have posted considerable information about this, based on manufacturers' data, transmission curves, and actual use. Some of that information is on Doc G's web site under Tricolor Imaging. I have probably tried every conceivable combination you can imagine to test the viability of various combinations that remap wavelengths to compensate for non-linear CCD chip response. I have also tried various dichroic filter sets made for CCD imagers as well. Some use creative ways to improve response at problematic wavelengths. I believe nothing in the following comments contradicts Doc G's filter information. I have tried subtractive dichroic sets (CMY), which are quite useful for camcorders at high light levels with custom automatic color compensation circuitry built in, but somewhat marginal with software-based traditional CMY color models. The reason is likely the nature of the subtractive color filter: it passes all light minus one of the additive colors. Thus, C (cyan) is green and blue minus red, M (magenta) is red and blue minus green, and Y (yellow) is red and green minus blue. As expected, a filter combining two colors has excellent throughput -- arguably excellent for the ST-7/8 with built-in autoguider chip. Unfortunately the nonlinear response of the popular Kodak chips in these cameras considerably complicates the process of reproducing the original image.
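For reference, the algebra implied by those definitions is easy to invert under ideal assumptions (this sketch is mine; real filter leakage and nonlinear chip response are exactly what breaks it, as noted above):

    # Ideal subtractive filters: C = G+B, M = R+B, Y = R+G.
    # Inverting the linear system recovers the additive channels.

    def cmy_to_rgb(c, m, y):
        r = (m + y - c) / 2.0
        g = (c + y - m) / 2.0
        b = (c + m - y) / 2.0
        return r, g, b

    # A pure green pixel (R=0, G=100, B=0) seen through ideal CMY filters:
    print(cmy_to_rgb(c=100.0, m=0.0, y=100.0))  # -> (0.0, 100.0, 0.0)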

Thus far, I have not seen a software-based modified CMY color model that addresses the fundamental problem of accurately restoring the original wavelengths while accounting for the frequency response differences among CCD chips operating at low light levels. I have used photometric filters (BVR) as well. After careful study of transmission curves, I believe there are no significant shortcuts to good tricolor results in substituting wavelengths that are more readily available and/or more efficiently recorded for the desirable true visible wavelengths. The bottom line-- after a considerable amount of tricolor imaging over the last several years, I have found no filter set I prefer over straight RGB filters, because the results are predictable and precisely match RGB color models.

This is not to say RGB color filters do not require balancing, but I believe remapping or shifting visible wavelengths is best left for scientific endeavors such as displaying far infra-red, visible, and ultraviolet wavelengths simultaneously. One can deviate a bit from true RGB filters, but the closer, the better.

> Are the "filter thickness" values you give the actual thickness of the glass?

Knowing the thickness of the filter is important to maintaining focus between filters. Mixing a 2 and a 3 mm filter means the telescope must be refocused between shots, though using micrometer/DRO readouts will minimize this inconvenience.

> I think your discussion of the 80A as giving good aesthetic results with short
> exposure times was extremely interesting. I'd done a little thinking about
> this myself, and came to the conclusion that a good "tricolor" setup, now that
> most astrophotographers have Photoshop or other good software for combining
> the multiple images any way they'd like, would be to use something like
> 80A/(either "no exposure", "no filter" or some type of light or medium
> green)/85B and then experiment as to exposure ratios and how to combine the
> two or three images using various fudge factors summing them. The exposure
> ratios should be closer to 1:1:1 (or maybe 1:0:1) than with cutoff filters.

I do not generally prefer the results of using a pale blue (80A) filter, though John Hoot has suggested its possible use. The leakage from other wavelengths gives somewhat unpredictable, though "colorful," results. Attaching a camera lens adapter to a CCD camera and shooting terrestrial scenes may be a good way to illustrate why. I do not generally recommend such techniques as you describe above for a number of reasons related to color balance, S/N ratios, kernel filters and more. For example, non-linear CCD chip response in the visible wavelengths often requires long blue exposures to produce an adequate S/N ratio. Using an 80A passes so much non-blue light that the results are unpredictable. Of course, all that non-blue leakage is misrepresented as blue in the RGB color model. We can approach a 1:1:1 ratio for the popular 0400 chip using 770 nm, 650 nm and 550 nm, and that may produce visibly balanced star colors.

When mapped to an RGB color model, however, the result is wonderfully blue images, because all that is really green is remapped to blue-- imagine the terrestrial experiment above and the resulting blue grass. We can also synthesize a third color from two others, with mixed results. It is difficult to use programs such as Photoshop to compensate for the lack of adequate color information with predictable results. If we don't care about color accuracy, we can use an image processing program to do just about anything, again with mixed results. However, armed with accurate color information, predictable results are not difficult, and arguably worth the effort needed for longer exposures. In the case of RGB, we want signals as close as possible to accurately representative RGB wavelengths.
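For what it's worth, the usual way a third color is synthesized is as a weighted blend of the two measured channels. A minimal Python sketch; the function name and default weight are mine, tuned by eye in practice, and nothing here can recover emission only a real green filter would have seen, which is why the results are mixed.

    import numpy as np

    def synthesize_green(red, blue, weight=0.5):
        # Crude synthetic green: interpolate between the measured
        # red and blue frames (registered float arrays).
        return np.clip(weight * red + (1.0 - weight) * blue, 0.0, None)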

> The 80 series moves the color temp upward with shallow bandpass
> characteristics, the 85 series moves it downward an equivalent amount with
> shallow bandpass, "no filter" leaves it alone, and hopefully there's a green
> which when added in some proportion will smooth out the frequency response of
> the sum. To move the color temp, these filters have nice, bell-shaped
> response curves centered on different wavelengths, somewhat like bandpass
> filters but not nearly so steep at the sides and a bit less flat on top.
> While it's not as rigorous-sounding an approach as using steep, perfectly
> matched bandpass filters to do a true separation, it throws away fewer
> photons, and would very likely give more accurate color rendition to the eye
> if my photographic instincts are correct. The reason for this is that a
> response curve shaped like a bell or sine wave is more forgiving than square
> waves which don't have their cuts at the exact same frequency or aren't
> perfectly square (none do and none are!). It's much like the speaker
> crossover design problem before and after arbitrarily steep electronic
> crossovers became available.

Reproducing true color from individual additive colors is best done using sinusoidal bandpasses with overlapping curves. The weighted center of each curve should be quite close to the actual wavelength represented. Dichroic filters are quite efficient but have transmission curves with rather steep cutoffs and flat tops. Camcorders use dedicated circuitry, not found in CCD cameras, to compensate; however, camcorders typically see adequate S/N ratios, low dynamic range, and low resolution. Your audio analogy is good, though a dichroic filter can in fact have greater photon throughput at a particular wavelength than an absorption-type filter. Overlapping transmission curves are important because the overlap (mixing of colors) is what produces even color representation in the RGB color model.
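One way to check how close a filter's weighted center sits to the wavelength it is supposed to represent is to compute the transmission-weighted centroid of the manufacturer's published curve. A short Python sketch, with an invented bell-shaped passband standing in for real data:

    import numpy as np

    def weighted_center_nm(wavelength_nm, transmission):
        # Transmission-weighted centroid of a filter passband.
        w = np.asarray(wavelength_nm, dtype=float)
        t = np.asarray(transmission, dtype=float)
        return (w * t).sum() / t.sum()

    wl = np.arange(400.0, 701.0, 5.0)
    tr = np.exp(-((wl - 550.0) / 40.0) ** 2)   # made-up "green" curve
    print(weighted_center_nm(wl, tr))          # ~550 nm, as it should be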

We cannot forget that most CCD cameras are sensitive to the infrared leakage of most color filters. An infrared-blocking (hot mirror) filter should be used to stop it.

> At least with electronics, you can twiddle with the
> parameters more easily than you can by looking for exactly the right color
> glass. Murphy's law tells us that the downward spikes due to mismatch on
> steep bandpass filters will fall at some important emission line of our
> subject, and the upward spikes at the emission line of our town's streetlights.
> If we remove any narrow wavelength in the visible spectrum, it affects the image.

That effect may be undesirably enhanced by image processing techniques such as background and range adjustments (levels), saturation enhancement, etc., limiting the ultimate potential of our results.

> The frequency response curve of this approach might not be as flat at most
> frequencies as using bandpass filters, but its lack of horrible atrocities at
> crossover frequencies is a major advantage. It also makes the ratios of the
> three exposure times less critical to maintaining a good overall response curve.

I presume you are stating an advantage of traditional colored filters, whose transmission curves produce smooth tops, gradual slopes, and good crossovers, as compared to dichroic filters, which function much like interference filters, reflecting undesirable wavelengths and passing the desirable ones. Fundamentally, I believe traditional colored filters have an edge at producing good results without the aid of dedicated circuitry, though in practice a CCD image without an adequate S/N ratio in one color arguably produces poorer overall results. The exposure ratios used are not really that critical by themselves: we are looking for a good initial S/N ratio, then an initial color balance which may be adjusted as needed.

-------------------------------------------------------

Subject: Tricolor Filters for CCD--Part 2 of 3     Top

From: Michael Hart

Jon A. wrote:

> I have tried subtractive dichroic sets (CMY)... arguably excellent for
> the ST-7/8 with built-in autoguider chip. Unfortunately the non-linear
> response of the popular Kodak chips in these cameras considerably
> complicates the process of reproducing the original image. Thus far, I
> have not seen a software-based modified CMY color model that addresses
> the fundamental problems of accurately restoring the original
> wavelengths...
>
> Michael and John,
> Great stuff. Have you monitored Al Kelly's work on the CCD list with CMY?
> How would you judge this effort?

I have not monitored Al Kelly's work directly, though I have had inquiries such as yours. I have examined images sent to me using current CMY color models and have tested CMY filters independently. CMY filters are not new, which begs the question: why haven't they been used before? The answer is, they have. Emulsion-based print films use the CMY process. Camcorders use CMY (often with a variation of the yellow filter), and of course printers use CMYK (K is for black-- mixing all colors of print inks does not produce a very dark black, so printers add a black ink). With CMY, the combining of wavelengths makes accurate color reproduction of the original scene quite difficult in low light with a non-linear CCD chip, though achieving color balance is not very difficult. While CMY filters DO pass more photons, all the light in the blue wavelengths is split between the cyan and magenta filters. If we balance CMY colors against a gray card at the effective temperature of the sun (5770 K, spectral type G2), the cyan and magenta filters pass all the blue wavelengths needed to accurately represent the blue emission present in the original scene.

However, the CCD detector often needs a disproportionate amount of blue to compensate for its poor response in the shorter wavelengths. This is where CMY color models fall short: the total exposure time of actual blue wavelengths contained in the COMBINED exposures of cyan and magenta needed to achieve a color-balanced exposure is less than that used to produce a single color-balanced blue exposure in the RGB color model. The result is the remapping of other wavelengths to blue, which does not accurately represent the original scene. I believe the possible solution for CMY color accuracy is to have a CMY color model written for specific CCD chips. Even CCD chips that have a more linear response, such as the TC series, are not likely to produce more accurate color than straight RGB filters whose transmission curves produce smooth tops, gradual slopes, and good crossovers. Using a light blue 80A filter for the blue channel is also very efficient because it passes non-blue wavelengths the CCD chip can readily record-- all the blue light plus a lot of red and green light. I have described a process of using IR, red, and cyan filters with a resulting throughput exceeding CMY; however, this also does not produce accurate visible color, though the colors (channels) are well balanced.
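The blue-deficit argument reduces to toy arithmetic. A Python sketch, assuming the ideal passbands above and gray-card-balanced CMY exposures; every number is invented for illustration.

    # Balanced CMY exposures, in arbitrary units of time.
    t_c = t_m = t_y = 1.0
    # Blue is recovered as (C + M - Y) / 2, so its effective
    # integration is limited to the cyan and magenta frames.
    blue_time_cmy = (t_c + t_m) / 2.0   # 1.0 unit of blue signal

    # With RGB we simply expose the blue frame ~3x longer to offset
    # the chip's poor short-wavelength response.
    t_blue_rgb = 2.8

    print(blue_time_cmy, t_blue_rgb)    # 1.0 vs 2.8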

> I'll be attempting color work for the
> first time this summer and have heard the siren call of more photons,
> but my current plan is to use my 616 and its stock filters. Al and others
> have also been promoting an NRGB method. I think I follow it as generally
> a long mono exposure which gets colored by shorter ones. Are you
> thumbs up or down?

Many times simplest is best, like plain old RGB. LRGB and the various synonyms for this acronym are not particularly new, though the process may be new to some in CCD imaging. Jerry Lodriguss proposed using the Photoshop CMYK color model 2-3 years ago for tricolor film imaging. There have been a lot of new terms tossed about recently, such as WRGB (White RGB), MRGB (Monochrome RGB), LRGB (Luminance RGB), quadcolor (which doesn't really use 4 colors), and more. I have no idea what "NRGB" stands for. All (and likely NRGB) are essentially based on a 50-year-old television process which recognizes the eye's inability to discern detail in color. As details become very small, all the eye can discern are changes in brightness. Beyond a certain level of detail, color cannot be distinguished, and the human eye, in effect, becomes color blind.

The color television signal is composed of luminance (higher resolution black and white) and chrominance (low resolution color). Doc G and I describe details of this process on Doc G's Info Page under Tricolor Imaging. Essentially, substituting a higher resolution luminance image for the one contained in a lower resolution color image produces what appears to be a higher resolution color image. As a result, the color accuracy is maintained while displaying the details of the luminance signal. If we increase the luminance signal enough, we can mask over irregularities in color balance, color filter choices, and the color model used.
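A minimal Python sketch of that luminance-substitution trick follows. It is not any particular package's LRGB routine, just the principle; it assumes registered float frames scaled 0 to 1.

    import numpy as np

    def lrgb_combine(lum, red, green, blue):
        # Crude brightness estimate of the low-resolution color image.
        old_lum = (red + green + blue) / 3.0
        # Per-pixel factor that swaps in the sharper luminance frame.
        scale = lum / (old_lum + 1e-6)
        # Scaling all three channels equally preserves their ratios
        # (the color), while the detail now comes from `lum`.
        return tuple(np.clip(ch * scale, 0.0, 1.0)
                     for ch in (red, green, blue))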

In conclusion, I believe the RGB color model, along with color filters that reject (either through absorption or reflection) undesirable frequencies, produces the best, most predictable, and most accurate CCD color results, readily adjusted to a variety of CCD chips with non-linear color response. The color television process designed to conserve bandwidth, regardless of what it is called, can play a useful role in RGB tricolor imaging with minimal effect on color accuracy. I believe we should strive for color accuracy if we are representing images as true color; then, if a colorful result is desired, the use of filters that leak other wavelengths into the represented color channel may be considered, as can tinting a converted monochrome image.

However, we must keep in mind the results misrepresent the original emissions and may mask important details.

---------------------------------------------

Subject: Color Filters for CCD Imaging--Part 3 of 3     Top

From: Ric Ecker <rleckera_tjuno.com>

This is some information that I got from John Hoot about filters used for CCD imaging. He suggests two basic sets, the Easiest Set and the Fidelity Set. The Easiest Set to image with is: #23A red, #56 green, and #80 blue. John uses this set on the Pictor 216 XT because of its exposure times; it gives reasonable images at almost 1:1:1 exposure times.

The Fidelity Set consists of: #23A red, #58 green, and #38 blue. This set gives the best results but takes considerable exposure time with the green and blue filters. It will give reasonable color balance and good signal-to-noise (S/N) ratios.

The best way to achieve color balance is to defocus a 2nd or 3rd magnitude G2 star, make a 1-minute exposure through each filter, and record the brightness readings. Then take the inverse of the average brightness to figure the proper exposure ratio for each image. You want to use the proper exposure times so that faint background objects are recorded in all three colors; just changing the gain and contrast on equally exposed images will not record faint objects in the weak exposures. If you don't, faint objects will all shift towards the red.
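That procedure reduces to a few lines of arithmetic. A Python sketch with invented counts; substitute the readings you record through each filter on the defocused G2 star.

    # Mean counts per minute through each filter (invented numbers).
    brightness = {"red": 1200.0, "green": 950.0, "blue": 430.0}

    # Inverse brightness, normalized so the strongest channel is 1.0,
    # gives the exposure ratio for each color.
    inverse = {band: 1.0 / b for band, b in brightness.items()}
    smallest = min(inverse.values())
    ratios = {band: round(v / smallest, 2) for band, v in inverse.items()}
    print(ratios)  # {'red': 1.0, 'green': 1.26, 'blue': 2.79}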

Deep sky objects can be taken without a filter wheel, although one is a nice piece of equipment to have. Using a device like the Optec Maxfield reducer, which can use 48mm filters and has a registered fit, helps in taking color images. Making sure all filters are the same thickness helps keep each color in focus. Making sure each filter fits the holder, and that the holder holds each filter in the same fashion as the others, also helps give each image the same focus.

Ric has investigated John Hoot's work and posted the results here. I am pleased that John uses the terms "Easiest Set" and "Fidelity Set" to distinguish color filter sets that remap colors from those that only pass colors representative of the actual wavelengths. Since color balance is related to color temperature, I would suggest the use of a spectral class G2 star as the calibration source as well. I use the light of the full moon well above the horizon and image a neutral gray test card (from a camera store), noting the light intensities recorded to determine exposure ratios. -- Michael Hart

rule

Subject: Filter Wheel Recommendation --part 1 of 3    Top

From: Gregg Ruppel <ruppelgla_tslu.edu> Date: Jan 2003

From: Michael Blaber
> I am considering the purchase of a filter wheel and am trying to decide between:
> 1. Optec Intelligent Filter Wheel
> 2. Adirondack Custom Filter Wheel
> 3. SBIG CFW-8 Filter Wheel
> If anyone has any experience with these,
> and can provide any helpful comments, I would appreciate it.

Mike: I have the True Technology Custom Filter Wheel, which I believe is what you referred to as the 'Adirondack' wheel (it is sold by Adirondack Video Astronomy in the US). I have been using the True Tech wheel for about 9 months and it has worked very well. It can be operated manually with the included handbox and is also supported by several different software packages. It has 2" attachment ports and can use wheels with different size filters (1.25" or 2"), so you can adapt it to various cameras. True Technology also has a variety of adapters so that you can hang it onto almost anything. The wheel does hang up on rare occasions, but there is a reset button on the hand control that quickly fixes the problem; this can also be done by re-initializing the device remotely. I use the TT wheel with my MX916 camera on an LX200. You can see a picture of the device on my web site at:
<http://www.biz1.net/~ruppelgl/tech.htm>

--------------------------------------------------------------

Subject: Filter Wheel Recommendation --part 2 of 3

From: Doc G

I have used the Optec products for some years now. They are excellent and operate exactly as advertised. That includes the TCF-S, the IFW and the older filter slider.

------------------------------------------------------------

Subject: Filter Wheel Recommendation --part 3 of 3    Top

From: Gene Horr <genehorra_ttexas.net>

If you have an SBIG camera, then with the CFW-8 you don't need to add an extra data line. The disadvantage is that you get occasional filter errors at certain orientations (generally not an issue with SCTs), but this is allegedly fixed by a third-party cover plate.

rule

CCD 2     MAPUG-Astronomy Topical Archive   AstroDesigns   Top   MAPUG-Astronomy.net