


The selection of good tricolor filters is quite important in achieving predictable results. Filter sets currently sold as packages from several sources are typically compromised somewhat for a number of reasons, including the exposure times required to obtain a color-balanced image. Over the last few years, I have read considerable information and advice about tricolor filters based on manufacturers' data, transmission curves, and actual use by others and myself. Further important information concerning color filters may be found on Doc G's web site under "Color Filters Used for Three Color Imaging".


The promise of excellent tricolor film images with less exposure time using CMY filters is certainly appealing. Since color film uses the CMY process, it would seem quite possible that CMY tricolor filters should work well with black and white film. Without a doubt, one will obtain color when the individual CMY channels are combined. While the promise of a shortcut to images is likely fulfilled, the question of how faithfully these filters reproduce the original scene remains in my mind. Perhaps it is because the color model for this application is unperfected; in any case, the unexpected colors produced seem to require a considerable amount of fiddling to compensate for the uneven response of the film or CCD chip and the CMY filters.

I have not directly monitored those using CMY tricolor filters, though I have had inquiries about the process and results. I have examined images sent to me using current CMY color models and have tested CMY filters independently. CMY filters are not new, which raises the question: why haven't they been used before? The answer is, they have. Emulsion-based print films use the CMY process. Camcorders have used CMY in micro-lens overlays for some time (often with a variation of the yellow filter). Often, a custom DSP (for a given chip) is used to compensate for known chip anomalies. Kodak has chosen RGB overlays with good reason for many of their consumer cameras. The RGB overlay improves color fidelity with a CCD chip, which has a rather uneven response. Professional color cameras and camcorders avoid much of this by using three separate imaging chips to provide the very best color fidelity over a wide range of imaging conditions.

Of course, printers use CMYK (K is for black). Mixing all colors of print inks does not produce a very dark black, so printers use black ink. Using CMY, the combining of wavelengths makes accurate color reproduction of the original scene quite difficult in low light with a non-linear CCD chip, though achieving color balance is not so difficult. While CMY filters DO pass more photons, all the light in the blue wavelengths is contained in both the cyan and magenta filters. If we balance CMY colors against a gray card at the effective temperature of the sun (5770 K, spectral type G2), the cyan and magenta filters pass all the blue wavelengths to accurately represent the desired blue emission present in the original scene.

However, the CCD detector often needs a disproportionately long blue exposure to compensate for its poor response at the shorter wavelengths. This is where CMY color models may fall short, because the total signal of actual blue wavelengths is COMBINED with other frequencies passed by the cyan and magenta filters. The result is the re-mapping of other wavelengths to blue, which does not accurately represent the original scene. I believe the possible solution for CMY color accuracy is to have a CMY color model written for specific CCD chips. Even CCD chips with a more linear response, such as the TC series, are not likely to produce more accurate color than straight RGB filters whose transmission curves have smooth tops, gradual slopes, and good crossovers. Using a light blue 80A filter for the blue channel is also very efficient because it passes non-blue wavelengths that the CCD chip can readily record: all the blue light and a lot of red and green light. I describe in this article a process of using IR, red, and cyan filters with a resulting throughput exceeding CMY; however, this also does not produce accurate visible color, though the colors (channels) are well balanced.


My film of choice for CMY filters would likely be gas-hypered Technical Pan 2415, since this film has arguably the best dynamic range (contrast) potential (depending on the developer and paper) of any black and white film readily available to the amateur. Unfortunately, this film has a rather uneven green response. In the CMY color model, this green record is combined with other wavelengths in both the cyan and yellow records. To restore the original scene, we must use a very clever algorithm or, at the very least, fiddle with a standard CMY color model to compensate for the amount of green actually contributed. What often happens is that our algorithm produces good results on some objects and very unexpected results on others.

I have probably tried every conceivable combination of filters one can imagine to test the viability of various combinations in an effort to effectively re-map wavelengths to compensate for non-linear CCD chip and film response. I have also tried various dichroic filter sets made for CCD imagers. Some use creative ways to improve response at problematic wavelengths. See Doc G's filter information which adds considerably to this discussion.

I have tried subtractive dichroic sets (CMY), which are quite useful for camcorders (and in color film emulsions) at high light levels with custom automatic color compensation circuitry built in, but which have proved somewhat marginal (for me) with software-based traditional CMY color models. The reason is likely the nature of a subtractive color filter: it passes all light minus one of the additive colors. Thus, C (cyan) is green and blue (minus red), M (magenta) is red and blue (minus green), and Y (yellow) is red and green (minus blue). As expected, a filter combining two colors has excellent throughput, arguably attractive for the ST-7/8 with its built-in autoguider chip. Unfortunately, the nonlinear response of the popular Kodak chips in these cameras, and of the filters themselves, considerably complicates the process of reproducing the original scene.
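The channel arithmetic implied by these filter definitions can be sketched as follows. This is a minimal illustration, assuming idealized subtractive filters (C = G + B, M = R + B, Y = R + G) and a linear detector, conditions the text argues real CCDs and films do not meet; the function name is mine, not from any particular software package.

```python
import numpy as np

def cmy_to_rgb(c, m, y):
    """Recover additive RGB channels from idealized subtractive CMY frames.

    Assumes C = G + B, M = R + B, Y = R + G with perfect filters and a
    linear detector -- the idealization; real chips and filter leakage
    break these equalities and require per-chip color models.
    """
    r = (m + y - c) / 2.0
    g = (c + y - m) / 2.0
    b = (c + m - y) / 2.0
    # Clip negatives introduced by noise or filter leakage
    return tuple(np.clip(ch, 0, None) for ch in (r, g, b))
```

With perfect inputs the recovery is exact; with real data, the re-mapping errors the text describes land in whichever channel the leakage favors.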

Thus far, I have not seen a software-based modified CMY color model that addresses the fundamental problem of accurately restoring the original wavelengths while accounting for the frequency response differences among CCD chips and films operating at low light levels. I have used photometric (BVR) filters as well. After careful study of transmission curves, I believe there are no significant shortcuts to good tricolor results in substituting wavelengths that are more readily available and/or more efficiently recorded for the desirable actual visible wavelengths. The bottom line: after a considerable amount of tricolor imaging over the last several years, I have found no filter set I prefer over straight RGB filters, because the results are predictable and a better match to RGB color models. I must say many of the better-known images produced from observatory plates use BVR (blue, visible, red) filters. One could argue RGB filters produce more accurate color than BVR filters, especially when noting that the emission lines described in observatory images may not appear as the expected color.

This is not to say modest balancing of RGB color filters is not required. Still, I believe re-mapping or shifting visible wavelengths is best left for scientific endeavors such as displaying the infrared, visible, and ultraviolet wavelengths simultaneously. One can deviate a bit from true RGB filters, but the closer, the better. Knowing the thickness of each filter is important to maintaining focus between filters. Mixing a 2 mm and a 3 mm filter means the telescope must be refocused between shots, though using a micrometer, the JMI DRO (digital read out), or equivalent will minimize this inconvenience.


One example of re-mapping is the use of the pale blue (80A) Wratten filter to shorten exposures with CCD cameras having poor blue response. I do not generally prefer the results of using this pale blue (80A) filter for this application, though John Hoot has suggested its possible use in what he has termed the "Fun Set". However, John also recommends a "Fidelity Set", which I believe is much better. The leakage from other wavelengths gives somewhat unpredictable outcomes with the "Fun Set", while likely producing "colorful" results. Attaching a camera lens adapter to a CCD camera and shooting terrestrial scenes may be a good way to illustrate why color balance, S/N ratios, kernel filters, and more are skewed by these filters. For example, non-linear CCD chip response in the visible wavelengths often requires long blue exposures to produce an adequate S/N ratio. Using an 80A passes so much non-blue light that the results are quite unpredictable. One can easily see why this is so by viewing a MacBeth ColorChecker or other color chart through the filter.

John Hoot's "Fun Set" consists of the #23A red, #56 green, and the #80 blue. On a Pictor 216 XT (TC chip) his exposure ratios are close to 1:1:1. John Hoot's Fidelity Set consists of the #23A red, #58 green, and #38 blue. John feels this set gives the best results but needs considerably longer exposure times with the green and blue filters. He reports this filter set gives reasonable color balance and good signal to noise (S/N) ratios.

I am pleased that John uses the terms "Fun Set" and "Fidelity Set" to distinguish color filter sets that re-map colors from those actually representing specific frequencies. Since color balance is related to color temperature, I would suggest that a spectral class G2 star is quite a suitable calibration source, though reasonably bright stars of this particular type are not particularly plentiful (I can only think of four). I use the light of the full moon well above the horizon and image a neutral gray test card (from a camera store), noting the light intensities recorded, to determine exposure ratios. John reports a similar tactic, except that he images the moon without the gray card.
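For a CCD, the gray-card calibration above reduces to simple arithmetic on the recorded intensities. A sketch, assuming equal-length test exposures and an approximately linear detector over the range used (a simplification; the article stresses that real chips and film are not linear at low light levels); the function name and sample counts are hypothetical.

```python
def exposure_ratios(mean_counts, ref="green"):
    """Scale exposure times so each filter records equal signal from a
    neutral gray card.

    mean_counts: dict mapping filter name -> mean ADU recorded through
    that filter with equal-length test exposures.  Assumes a roughly
    linear detector response over the range used.
    """
    ref_signal = mean_counts[ref]
    return {f: ref_signal / counts for f, counts in mean_counts.items()}

# e.g. equal 60 s test frames of a gray card in moonlight:
ratios = exposure_ratios({"red": 1200.0, "green": 1000.0, "blue": 400.0})
# blue then needs 2.5x the green exposure; red needs slightly less
```

The same ratios then carry over to deep-sky exposures near the zenith; away from it, extinction skews the blue ratio further, as noted later in the article.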

Of course, all that non-blue leakage is misrepresented as blue in the RGB color model. Another way to approach a 1:1:1 ratio for the popular 0400 chip is to use filters at 770 nm, 650 nm, and 550 nm. These are likely to produce visibly balanced star colors. However, when using these filters and re-mapping their frequencies to an RGB color model, the result is wonderfully blue images, because all that is really green is remapped to blue. Imagine a terrestrial experiment and the resulting blue grass. We can also synthesize a third color from two others, with mixed results. It is difficult to use programs such as Photoshop to compensate for the lack of adequate color information with predictable results. If we don't care about color accuracy, we can use an image-processing program to do just about anything, with mixed results. However, armed with accurate color information, predictable results are not overly difficult and are arguably worth the longer exposures required. In the case of RGB, we want signals as close as possible to the actual RGB wavelengths they represent, I think.
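One crude form of the third-color synthesis mentioned above is simply averaging the two recorded channels. This sketch is my own illustration, not a method from the article, and it shows exactly why results are "mixed": any real green emission that is not predictable from red and blue is invented rather than recorded.

```python
import numpy as np

def synthesize_green(red, blue):
    """Crude synthetic green channel as the mean of red and blue.

    A shortcut sometimes used when only two color frames exist; genuine
    green signal that does not track red and blue cannot be recovered
    this way, so emission-line objects will render unpredictably.
    """
    return (np.asarray(red, dtype=float) + np.asarray(blue, dtype=float)) / 2.0
```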


Since many amateurs do not have convenient access to a densitometer (found in a good photo lab) and related equipment, it is useful to provide a simplified method for approximating exposure ratios with minimal equipment. Approximate exposure ratios may be determined by photographing a gray step card in moonlight. The moon should be quite near the meridian to minimize atmospheric extinction. The exposure should be long enough to be representative of the film's response characteristics during long exposures. In many cases, a five-minute exposure adequately represents the results of longer exposures. A good camera lens may be used to make this part easier, though I have used a telescope for this purpose, with the target set at 50-100 feet. Using the imaging scope removes any potential frequency attenuation or aberrations in a camera lens, but is likely a bit more difficult to accomplish.

Next, ask the lab for prints without any correction. Explaining to the lab what you are doing may be helpful. The resulting prints will show gray steps that now likely differ in shade from each other and from the test card. Match up the shades across the prints. If each shade represents 1/2 stop, one can then extrapolate the exposure lengths required to produce the same gray shade through each filter. Check your results by again photographing the gray step card at the calculated exposure ratios.
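The extrapolation step is just powers of two: each half-stop step corresponds to a factor of sqrt(2) in exposure. A small sketch of that arithmetic, under the stated 1/2-stop-per-step assumption and ignoring reciprocity failure, which matters for film at these exposure lengths; the function name is illustrative.

```python
def exposure_for_match(base_exposure_s, half_stops):
    """Exposure needed to reproduce the reference gray shade, given how
    many half-stop steps darker (+) or lighter (-) a filter's print
    landed relative to the reference print.

    Each half-stop is a factor of 2**0.5 in exposure; reciprocity
    failure of the film is ignored here.
    """
    return base_exposure_s * 2 ** (half_stops / 2.0)

# A blue-filter print matching two half-steps darker than the reference:
blue_exposure = exposure_for_match(300.0, 2)  # 600 s
```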


The use of a MacBeth Color Checker will give you a good idea whether your balanced filters can really reproduce the original scene. Again, it is a good idea to use exposure times of five minutes or longer to better represent long exposure characteristics. If you find you cannot reproduce these colors quite accurately, your tricolor images will produce unexpected results as well. If you are using a monitor for comparing colors, be advised that results are likely to vary considerably from monitor to monitor.


Reproducing true color from individual additive colors is best done using sinusoidal bandpasses with overlapping curves. The weighted center of each curve should be quite close to the actual wavelength represented. Dichroic filters are quite efficient but have transmission curves with rather steep cutoffs and flat tops. These steep slopes and flat tops can accentuate less desirable results when using typical image processing techniques such as background and range adjustments (levels), saturation enhancement, etc., limiting the ultimate potential for color fidelity. We cannot forget that most CCD cameras are sensitive to the infrared leakage of most color filters. An infrared-blocking (hot mirror) filter should be used to stop undesirable infrared leakage.

Traditional (Wratten) colored filters are absorptive. Their transmission curves have smooth tops, gradual slopes, and good crossovers, as compared to dichroic filters, which function much like interference filters, reflecting undesirable wavelengths and passing the desirable ones. Overlapping transmission curves are important because the overlap (mixing of colors) is what produces even color representation in the RGB color model.

Fundamentally, I believe traditional colored filters have an edge at producing good results without the aid of dedicated circuitry, though in practice, a CCD image without an adequate S/N ratio in one color arguably produces poorer overall results. If the initial exposure ratio used is not quite right, we will still get a fairly good color image; however, we will need to adjust the levels to partially compensate. We cannot compensate for what was not recorded. I must say no exposure ratio or filter set is perfect for all imaging conditions. I use much longer blue exposures for objects away from the zenith due to atmospheric extinction, which absorbs blue and some green light.


All current models of adding a monochrome image to a tricolor image are essentially based on a 50-year-old television process, used to this day in our color television sets, which exploits the eye's inability to discern detail in color. As details become very small, all the eye can discern are changes in brightness. Beyond a certain level of detail, color cannot be distinguished, and the human eye, in effect, becomes colorblind.

The color television signal is composed of the luminance (higher resolution black and white) and chrominance (low-resolution color). Essentially, substituting a higher resolution luminance image for the one contained in a lower resolution color image produces what appears as a higher resolution color image. As a result, the essential color information is retained while displaying the details in the luminance signal. If we increase the luminance signal enough, we can mask over irregularities in color balance, color filter choices, and the color model used. This method has been used in the MX5-C CCD camera for quite some time.
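The luminance substitution described above can be sketched in a few lines. This is a minimal illustration, not the MX5-C's actual algorithm: it uses a plain channel sum as the luminance proxy (real implementations often weight the channels, e.g. per Rec. 601), and the function name and epsilon guard are my own.

```python
import numpy as np

def lrgb_combine(lum, r, g, b, eps=1e-6):
    """Replace the implied luminance of an RGB image with a separate,
    higher-S/N monochrome frame while preserving the color ratios.

    Uses r+g+b as the luminance proxy; all arrays share one shape.
    eps avoids division by zero in empty (dark-sky) pixels.
    """
    old_lum = r + g + b + eps     # luminance implied by the color data
    scale = lum / old_lum         # per-pixel luminance correction
    return r * scale, g * scale, b * scale
```

Because only the per-pixel scale changes, the chrominance (the ratios among R, G, and B) is retained, which is exactly the television trick: fine detail rides on luminance while the low-resolution color merely tints it.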

LRGB and the various synonyms for this acronym, all of which essentially combine a monochrome image with a color image, are not particularly new, though the process may be new to some in CCD imaging. Jerry Lodriguss proposed using the Photoshop CMYK color model 2-3 years ago for tricolor film imaging. New terms have even been added, such as WRGB (White RGB), MRGB (Monochrome RGB), LRGB (Luminance RGB), Quadcolor (which doesn't really use four colors), NRGB (Neutral RGB), and likely more. All are slight variations on the process described above.


I believe the RGB color model, along with RGB color filters which reject (either through absorption or reflection) undesirable frequencies, produces the best, most predictable, and most accurate CCD color results, readily adaptable to a variety of CCD chips possessing a non-linear color response. The use of the standard color television process designed to conserve bandwidth, regardless of what it is called, can play a useful role in RGB tricolor imaging, but it is not a panacea. I believe we should strive for color accuracy if we are representing images as true color. If a colorful result is desired instead, the use of filters that leak other wavelengths into the represented color channel may be considered, as can tinting a converted monochrome image. Still, we must keep in mind that the results may misrepresent the original emissions as well as mask important details. Selecting the right color filters for the film or CCD chip used, and determining a good exposure ratio for the selected set, are important steps toward good imaging results.
Michael Hart
Husen Observatory
