This is adapted from an answer I posted to a question on Quora. The original poster had asked which would be better for astrophotography, a Canon 6D Mark II or a Canon 90D.

Here's my answer with a few minor edits.


Which one is better for astrophotography, 90D or 6D Mark II?

It depends on what kind of astrophotography you’re doing. Astrophotography isn’t just one thing, but a banner term for a lot of different types of imaging. And, as with any kind of photography, there’s no one-size-fits-all solution. Actually, with AP, specialized equipment is even more important. While a typical off-the-shelf consumer-level DSLR can be used for portrait photography, landscape photography, and most other kinds of photography, and do so competently (though not necessarily ideally), the same is not true for different kinds of AP.

I like to break things down into four key categories, based on the combination of field of view (wide or narrow) and exposure (short or long):

Wide field is typically done with either a camera lens or short focal length telescope (essentially a camera lens IS a type of telescope, though with some specific features). Narrow field is nearly always going to be done with a telescope.

The field of view you get with a given lens or telescope depends on the focal length of the instrument along with the dimensions of the image sensor. Pixel count doesn't matter here, just the physical dimensions of the sensor. For example, a standard full-frame image sensor has a width of about 36 mm, while an APS-C sensor has a width of around 22–24 mm. Attached to the same lens or telescope, the full-frame camera will have a wider field of view than the APS-C.
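If you want to run the numbers yourself, here's a minimal sketch of that relationship (the sensor dimensions are nominal values; your camera's spec sheet will give the exact ones):

```python
import math

def fov_degrees(sensor_mm: float, focal_length_mm: float) -> float:
    """Angular field of view along one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

# Approximate sensor dimensions in mm
full_frame = (36.0, 24.0)   # e.g. the 6D Mark II
aps_c = (22.3, 14.9)        # e.g. the 90D (Canon APS-C)

for name, (w, h) in [("full frame", full_frame), ("APS-C", aps_c)]:
    print(f"200 mm lens, {name}: "
          f"{fov_degrees(w, 200):.1f} x {fov_degrees(h, 200):.1f} degrees")
```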

For example, if you were to use a Canon 200mm f/2.8 lens with the 90D, which has an APS-C sensor, you’d get a field of view about 6.4° wide by about 4.3° tall, while the 6D Mark II, which is full frame, gives you a field of roughly 10.3° by 6.9° with the same lens.

Narrow-field is pretty much always going to use a telescope, and the longer the focal length, the narrower the field of view. As demonstrated above, the sensor size also matters. For narrower fields, like planetary imaging, smaller sensors are generally a better option. If you were to buy a camera that’s specifically intended for planetary imaging, you will find most of those have very small sensors. For example, the sensor of the ZWO ASI290MM measures 5.6 mm by 3.2 mm. If you were to use that with the same 200 mm Canon lens mentioned above, the field of view you’d get is only about 1.6° by 0.9°.

If you want to capture an image of M31, the Andromeda Galaxy, which has an angular size about 3° wide, you need to make sure your camera and lens/telescope combination is capable of that field. On the other hand, if you use that same combination to try to capture an image of Jupiter, which at its maximum angular size is less than 1 arcminute in diameter (50.8 arcseconds, an arcsecond being 1/60th of an arcminute, which is 1/60th of a degree), then Jupiter would appear as a tiny dot on a large field of black nothingness. If you want to see an example, this link will show you an estimation of the view you’d get of M31 taken using a Canon 60Da and a 200mm f/2.8 lens: M31. Here’s the view you’d get of Jupiter using the same equipment: Jupiter. (Note: I’d originally done this with the 6D specs, but the image of Jupiter it showed was entirely blank).

Now, let’s say we exchange that lens for a telescope, say a common 8 inch f/10 Schmidt-Cassegrain like the venerable Celestron C8. Here’s the image you’d get of M31 with the DSLR: M31 in C8 with 60Da. Not a very good option for M31, but how about Jupiter: Jupiter in C8 with 60Da. Here, Jupiter is noticeably larger and you can start to make out the cloud bands. But still pretty tiny. What if we switched to the ZWO planetary imaging cam: Jupiter in C8 with ASI290. THIS is a good combination for Jupiter, or other planets.
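To put rough numbers on those simulated views, here's a sketch comparing the two C8 setups against the angular sizes quoted above. It assumes the C8's nominal 2,032 mm focal length and the sensor dimensions already mentioned:

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

C8_FOCAL_MM = 2032   # nominal focal length of an 8" f/10 Schmidt-Cassegrain

setups = {
    "C8 + APS-C DSLR (22.3 x 14.9 mm)": (22.3, 14.9),
    "C8 + ASI290 (5.6 x 3.2 mm)": (5.6, 3.2),
}
targets_deg = {"M31": 3.0, "Jupiter": 50.8 / 3600}   # angular sizes in degrees

for name, (w, h) in setups.items():
    fw, fh = fov_deg(w, C8_FOCAL_MM), fov_deg(h, C8_FOCAL_MM)
    print(f"{name}: {fw:.2f} x {fh:.2f} degrees")
    for target, size in targets_deg.items():
        # Anything over 100% of the frame width simply doesn't fit
        print(f"  {target} spans about {100 * size / fw:.1f}% of the frame width")
```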

When it comes to images that require long exposures, there are some other considerations. For these images, the objects being captured are so faint that we have to keep the camera shutter open for an extended period of time. While most conventional photography is done with shutter speeds measured in fractions of a second (daylight outdoor photography often uses a shutter speed of about 1/1000th of a second; indoors with good lighting, 1/125th of a second is pretty common; and most flash photography uses a shutter speed of 1/90th to 1/60th of a second), most deep-sky imaging (i.e. not the moon, planets, or other solar system bodies) requires exposure times measured in minutes, even hours.

The problem here is that stuff up there moves, and while it may not seem to move rapidly to the naked eye, as you narrow the field of view, that changes pretty fast.

The Earth rotates once in 24 hours. This means anything you see in the sky will trace a complete circle around the Earth in that time and end up in the same place it was 24 hours before. Actually, because the Earth is also orbiting the sun, this isn’t quite right. Due to our orbit, an object will move a little more than 360 degrees in 24 hours. The difference works out to about 4 minutes per day, but we’ll ignore that for the purpose of simple estimation.

If you divide 360 degrees by 24 hours, you find that an object will complete 1/24th of a circle, or 15 degrees, in 1 hour. If we divide that by 60 minutes, we find that the object moves 15 arcminutes in 1 minute, and dividing by 60 again shows us the object moves 15 arcseconds per second. That may not sound like much, but it adds up quickly once you compare that motion to the field of view you get through your camera/lens/telescope combination.

Let’s look at that 90D. It has a resolution of 6,960 by 4,640 pixels, which on a sensor about 22.3 mm wide works out to pixels roughly 3.2µ across. If we try shooting through that C8 I mentioned before (about 2,032 mm of focal length), each pixel covers only about 0.32 arcseconds of sky. If you are shooting a 1 second exposure, this means that the object will move almost 47 pixels from the time the shutter opens until it closes. If you have a star that’s 1 pixel in size, it will end up as a streak that’s 47 pixels long.
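Here's the same arithmetic as a sketch, assuming the 90D's roughly 3.2 µ pixel pitch and the C8's nominal 2,032 mm focal length (small rounding differences aside):

```python
PLATE_SCALE = 206.265   # 206,265 arcsec per radian, folded with the µm-to-mm conversion
SIDEREAL_RATE = 15.0    # approximate apparent sky motion, arcsec per second

def pixel_scale_arcsec(pixel_um: float, focal_mm: float) -> float:
    """How many arcseconds of sky one pixel covers."""
    return PLATE_SCALE * pixel_um / focal_mm

scale = pixel_scale_arcsec(pixel_um=3.2, focal_mm=2032)   # 90D pixels on a C8
trail_px = SIDEREAL_RATE * 1.0 / scale                    # 1 second, untracked

print(f"Pixel scale: {scale:.2f} arcsec/pixel")                    # ~0.32
print(f"Star trail after 1 s untracked: ~{trail_px:.0f} pixels")   # ~46
```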

This is how you get star trail pictures: you set the camera up on a tripod, usually aimed toward the pole, and do an exposure of a few minutes. You end up with an image that shows the path the stars appear to take as the Earth rotates.

But most of us don’t want trails, we want pictures of galaxies, clusters, and nebulae, so this is a problem.

The fix for this problem is to use a type of telescope mount that “tracks” the sky as it moves. For this, you really need what we call an equatorial mount, which is aligned with the celestial pole (north or south, depending on which hemisphere you’re in) and moves the camera/telescope so it stays pointed at the same spot in the sky.

Mounts like this are typically not cheap. The cheapest ones I’ve seen that could really be recommended for AP start in the neighborhood of $500, and those can only handle a camera and a very small telescope or relatively small lens (I’d say no more than 300 mm of focal length, if that). As the field of view gets narrower, the payload gets heavier, or the exposure time gets longer, the demands on the mount grow, requiring more and more accurate and capable mounts. Of course, this means the price tends to increase rapidly as well.

Exposure time is a major part of this. The longer the exposure time you need, the more accurate and precise the mount needs to be. So one way we can reduce the demand on the mount is to decrease exposure time. We’re still going to be talking about minutes, usually, but we can still make it easier on ourselves.

When you get down to it, the image sensor is essentially counting photons. As a photon of light reaches the photosensitive material of the image sensor, it increases the charge on the semiconductor material a tiny amount. When the exposure completes, the electronics on the sensor read out how much charge each pixel has accumulated and convert that to a data set that describes the light the sensor encountered. Most digital cameras these days use CMOS sensors, and their raw data is typically digitized at 14 bits. 14 bits allows for a numeric value between 0 and 16,383. In this case 0 would be black and 16,383 would be white, and any value in between is a shade of gray (let’s ignore color for the time being, though it works the same way). When you capture the image, the image file basically contains a value between 0 and 16,383 for each pixel, which the computer can then reconstruct into an image.
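As a toy model of that readout step (the gain value here is invented purely to show the idea):

```python
MAX_ADU = 2**14 - 1   # 16,383: the largest number a 14-bit converter can report

def to_adu(electrons: float, gain_e_per_adu: float = 2.0) -> int:
    """Convert collected charge to a digital value, clipped to the 14-bit range.
    The gain figure is made up for illustration only."""
    return min(MAX_ADU, int(electrons / gain_e_per_adu))

print(to_adu(0))         # 0      -> black
print(to_adu(10_000))    # 5000   -> a mid gray
print(to_adu(100_000))   # 16383  -> clipped: a saturated (pure white) pixel
```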


Ok, quick explanation of color: the image sensor is only sensitive to light, not to specific colors. The sensor is counting photons, essentially, and doesn’t give a rat’s rear-end what color they are. So to get a color image, we filter the light. If you place a red filter over the sensor, you’ll only be counting photons of red light. To capture a color image, we need red, green, and blue measurements to mix together. The way a color image sensor (what we refer to as a one-shot-color or OSC sensor) does this is with a grid of pixel-sized filters mounted over the pixels of the sensor. This is called a Bayer pattern. In most cameras, each 2x2 group of pixels carries one red, two green, and one blue filter, so when the light comes in, one of those pixels only sees red, two only green, and one only blue. These 4 pixels aren’t actually treated as a single pixel; instead, the camera blends each pixel’s own value with the color values of its neighbors to work out a full color for that individual pixel (a process called demosaicing). The result ends up with three values per pixel instead of one, which is why a fully processed, uncompressed image is roughly 3 times the size of the raw data. But discussing the nature of these files is a topic for another time.
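Here's a deliberately oversimplified sketch of pulling color out of an RGGB mosaic. Real demosaicing interpolates so every pixel keeps full resolution, but the slicing below shows how the filter grid is laid out (all the raw values are invented):

```python
import numpy as np

# A tiny "raw" frame with an RGGB Bayer layout; each 2x2 tile is collapsed
# into one RGB value here, which a real camera would NOT do.
raw = np.array([
    [3000, 1200, 3100, 1250],   # R  G  R  G
    [1100,  400, 1150,  420],   # G  B  G  B
    [2900, 1180, 3050, 1230],   # R  G  R  G
    [1090,  390, 1140,  410],   # G  B  G  B
], dtype=float)

r = raw[0::2, 0::2]                           # red-filtered pixels
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # average the two greens
b = raw[1::2, 1::2]                           # blue-filtered pixels

rgb = np.dstack([r, g, b])   # a tiny 2x2-pixel color image
print(rgb)
```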

Ok, so let’s say you have a camera mounted to a telescope and take a picture. Let’s just throw out some hypotheticals here and say the telescope has a focal length of 500 millimeters and an aperture of 100 mm. This gives it a focal ratio of f/5 (as the focal length is five times the aperture diameter). Let’s say now that you’re capturing an image of an object that’s roughly circular, and that with whatever camera you’re using, its image covers a spot on the sensor roughly 100 pixels wide. If you do the math to solve for area, you’ll find that this means it covers a total of about 7,854 pixels. Now, let’s imagine that you do a 1 minute exposure and in that time, the average pixel captures about 2,000 photons worth of light. If you add it all up, this means a total of about 15,708,000 photons were captured.


Now, let’s say you change to a telescope with a focal length of 1,000 mm, twice as long, but the same aperture. Your focal ratio is now f/10. You try to do another 1-minute exposure with the same camera. The doubled focal length means the field of view is cut in half (which effectively doubles the magnification). Instead of covering a spot 100 pixels wide, the object now covers a spot 200 pixels wide. If we solve for area, we find that the area of the spot is 31,416 pixels (when you double the diameter of a circle, the area goes up by a factor of four).

Now here’s where things become problematic: the amount of light received depends on the aperture of the telescope, not the focal length. Two telescopes of the same aperture will collect the same number of photons regardless of focal length (well, not 100% the same, but close enough for our purposes). So the same 15,708,000 or so photons were captured, but now they’re spread over 31,416 pixels, for an average of about 500 photons per pixel. The image is 1/4 as bright. It’s larger, but it’s much fainter. To get the same level of exposure, you need to increase exposure time by a factor of 4 - 4 minutes to get what the shorter scope got in 1. While we have to use long exposures to capture the light we need, we have to balance that with the abilities of our camera, mount, and lens or telescope. The longer the exposure time, the more likely we are to run into problems, whether that be small errors in our mount’s tracking or guiding, minor errors in alignment, or something like an airplane, satellite, or shooting star passing through the field of view during the exposure.
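Spelled out as arithmetic, using the same made-up numbers from the example above:

```python
import math

# Same aperture means the same ~15.7 million photons in both cases; the longer
# focal length just spreads them over a bigger spot on the sensor.
TOTAL_PHOTONS = 15_708_000

def photons_per_pixel(spot_diameter_px: float) -> float:
    spot_area_px = math.pi * (spot_diameter_px / 2) ** 2
    return TOTAL_PHOTONS / spot_area_px

print(f"{photons_per_pixel(100):.0f} photons/pixel at 500 mm (f/5)")    # ~2000
print(f"{photons_per_pixel(200):.0f} photons/pixel at 1000 mm (f/10)")  # ~500
```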

This is why astrophotographers tend to prefer telescopes and lenses with smaller focal ratios. We refer to a telescope with a smaller focal ratio as a “fast scope” for just this reason.

And this is where another issue comes into play: pixel size.

The pixels on the 6D MkII are about 5.67µ across (that’s 0.00567 mm). Those on the 90D are 3.2µ across (0.0032 mm). The pixels of the 6D Mk II then have roughly 32µ² of surface area while those of the 90D have only about 10µ². More surface area means more light collected. In this case, the pixels of the 6D MkII are exposed to more than 3 times as much light as the 90D’s. All other factors being equal, the 6D MkII would capture in one minute the same level of brightness the 90D would capture in over 3 minutes.

But then, not all image sensors are equally sensitive. I can’t find the figures for the 90D, but the 6D MkII has a peak Quantum Efficiency (Qe) of about 52%. This means that roughly 52% of the photons that reach the sensor are actually detected. The 90D’s Qe might well be higher, but that has to be balanced against the size of the pixels. Even if the 90D’s Qe were 60%, the 6D Mk II would still collect light faster per pixel thanks to its larger pixels.
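A back-of-the-envelope comparison (the 90D's 60% Qe here is purely hypothetical, since I couldn't find a published value):

```python
# Rough per-pixel light gathering: pixel area times quantum efficiency.
def per_pixel_rate(pixel_um: float, qe: float) -> float:
    return (pixel_um ** 2) * qe   # arbitrary units

rate_6d2 = per_pixel_rate(5.67, 0.52)   # ~16.7  (quoted Qe)
rate_90d = per_pixel_rate(3.2, 0.60)    # ~6.1   (hypothetical Qe)

print(f"6D MkII collects ~{rate_6d2 / rate_90d:.1f}x more light per pixel")   # ~2.7x
```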

You must also consider the issues with sensor noise. Astrophotography is actually more a game of signal processing than conventional photography. With AP, we are concerned with the signal-to-noise ratio (SNR), while for conventional photography, the sensor noise is usually too minor to notice in comparison to the well-lit subject and background. With AP, the target is usually very faint and we are trying hard to boost the SNR to produce a good image.

And here’s another issue to consider: sensor size versus lens/telescope light spot. This is less an issue for camera lenses: if you get a lens that’s intended to be used with a given camera, this shouldn’t be a problem. But when dealing with telescopes, you have to take into account the diameter of the light cone at the point of focus. Most telescopes, even those intended for astrophotography, will not provide full illumination to a full-frame sensor such as that in the 6D MkII. Most of these WILL work with an APS-C sensor, but not all. If you’re imaging through a telescope, this is an important consideration.
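A quick way to sanity-check this is to compare the sensor's diagonal against the fully illuminated circle the scope delivers. The 30 mm circle below is a made-up example; check the spec for your particular telescope or flattener:

```python
import math

ILLUMINATED_CIRCLE_MM = 30.0   # hypothetical fully illuminated circle

sensors = {
    "full frame (36 x 24 mm)": (36.0, 24.0),
    "APS-C (22.3 x 14.9 mm)": (22.3, 14.9),
}

for name, (w, h) in sensors.items():
    diagonal = math.hypot(w, h)   # the corners are the first place to vignette
    verdict = ("fully illuminated" if diagonal <= ILLUMINATED_CIRCLE_MM
               else "expect vignetted corners")
    print(f"{name}: diagonal {diagonal:.1f} mm -> {verdict}")
```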

Another problem you'll run into is that nearly all conventional digital cameras are produced with a built-in filter. This filter is typically installed directly over the image sensor and does two things. First, and least important, it helps protect the sensor from dust, moisture, and so on. This is a secondary function, but still somewhat important. The big issue for AP is that this filter is designed to limit longer-wavelength light.

In the electromagnetic spectrum, light toward the blue end of the spectrum has a shorter wavelength (about 400nm) than light at the red end (about 700nm). While (most of us) obviously can see the color red, as it turns out our eyes are really not well-suited to it. If your eyes were equally sensitive to all wavelengths of visible light, everything would seem a lot pinker. Digital camera sensors don’t have this problem; they stay quite sensitive well into the red and near-infrared. As such, if you do not adjust the color balance, an image taken with a digital camera will look very red-tinted.

To aid in the color balancing, the filter over the image sensor is designed to reduce the amount of red light reaching the sensor. I’ve seen various figures, but it blocks somewhere around 2/3 to 90% or more of this longer-wavelength light.

And for AP, this is a big issue, because there’s a lot of interesting stuff at longer wavelengths. In particular, this:

[Image of the Rosette Nebula] (Shamelessly taken from Wikipedia)

This is an object known as NGC2244 (actually it’s cataloged under NGC2237, NGC2238, NGC2239, NGC2244, and NGC2246, as different parts were originally believed to be different objects), or Caldwell 49 (for the nebula) and Caldwell 50 (for the star cluster), and is probably best known as the Rosette Nebula. A lot of the nebulosity in this image comes from light with a wavelength of about 656.28nm. We refer to this as Hydrogen Alpha, and it is produced when the electron in a hydrogen atom falls from its third energy level down to its second. When that happens, a photon of light with a wavelength of 656.28 nm is emitted.

But there’s a problem: that filter I spoke of will block somewhere between 2/3 and 90% of the photons at this wavelength. This means that to capture an image of this object, or many astronomical objects with a lot of red in them, you need even longer exposures still… and/or a lot of exposures.
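Roughly how that translates into exposure time, using the filter-transmission range quoted above rather than any measured values:

```python
# If the stock filter only passes a fraction of the H-alpha photons, you need
# proportionally more exposure to collect the same signal.
def exposure_multiplier(transmission: float) -> float:
    return 1.0 / transmission

for passes in (1 / 3, 0.10):   # stock filter passing ~1/3 or ~10% of H-alpha
    print(f"Filter passes {passes:.0%} -> "
          f"need ~{exposure_multiplier(passes):.0f}x the exposure")
```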

For a couple of decades now, astrophotographers have modified DSLR cameras to remove this filter or replace it with one that doesn’t block red light. But doing this is risky. Digital cameras are not designed to be modified or repaired by the user. They have lots of intricate and very sensitive parts, and it’s really easy to ruin the camera this way if you don’t know what you’re doing. It will also void any warranty you might have.

In 2005, in response to this need, Canon released the 20Da, which was a pre-modified version of their 20D. They replaced the filter over the sensor with one more transparent to red, and made a few other modifications, primarily in firmware. In 2012, they did it again, releasing the 60Da, a pre-modified 60D. This was not only a few generations newer as a camera (with a better, higher resolution sensor), but also had a better filter for red-light passage and more firmware upgrades for AP use. Nikon, which has not been anywhere near as friendly to the AP community as Canon, finally jumped on the bandwagon in 2015 with the D810A. A generation or two newer than the 60Da and built around a full-frame sensor (the 20Da and 60Da were APS-C), it was superior to the 60Da, but also a lot more expensive. About a year or so ago, Canon did it again and released the EOS Ra, which is an adaptation of the EOS R mirrorless camera.

The catch with these, however, is that without the standard red-blocking filter, images look very red-tinted. You can partly fix this through the use of a white-balance profile, which mathematically adjusts the image to balance out the colors. This is usually fine for casual use, but experts and professionals aren’t usually satisfied by it. You can also purchase clip-in filters that go inside the camera body when you detach the lens, with the lens then mounting over them. But this is an additional cost, and you need the right filter for the right camera.
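Conceptually, a white-balance profile is just a set of per-channel multipliers. Here's a crude sketch; the gain values are invented, since a real profile is measured for the specific camera and filter combination:

```python
import numpy as np

def apply_white_balance(rgb, gains=(0.6, 1.0, 1.3)):
    """Scale each color channel so a neutral subject comes out gray."""
    return np.asarray(rgb, dtype=float) * np.asarray(gains)

red_tinted_pixel = [12000, 7200, 5500]        # heavy red cast
print(apply_white_balance(red_tinted_pixel))  # ~[7200, 7200, 7150]: near gray
```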

Of course, casual astrophotographers may try to use a stock camera without modification. I’ve seen good images done this way, but not as good as those done with modified cameras. And even then, better images still are captured by cameras designed specifically for astronomical imaging, such as those made by the Santa Barbara Instrument Group (SBIG, which is part of Diffraction Limited), Finger Lakes Instrumentation (FLI), and ZWO out of China. Many of them are monochrome and can be used with filter wheels, which allow for even better imaging results in the right hands. Unfortunately, they’re not cheap and not multipurpose - they’re designed for astronomy only.

If you want to do AP, you’re probably better off not getting either camera. The best option I can recommend is to find a used, pre-modified Canon DSLR. Sites like the classified ads on Cloudy Nights and Astromart often have them for sale. An older, pre-modified model can usually be found for a few hundred dollars, often professionally modified by services that specialize in doing so (I believe a few have manufacturer-certified technicians who can do the mods without voiding the warranty… but you’d have to check to be sure). They also often come with the interface cables needed for computer control (which is extremely important for long-exposure imaging) and power adapters to plug into standard AC power or a 12 V source.

I also would STRONGLY recommend joining a local astronomical club or society in your area. Nearly every major city, and many smaller ones, in the US, Canada, and Europe has one, and most such organizations have at least a few people doing AP already. Join up, find those people, and learn from them. AP is not an easy game to play.

Good luck and clear skies!

