By Barbara Jesline

Do you know how we measure the distances to the stars?

It turns out that measuring the distance to a star is an interesting problem! Astronomers have developed several techniques to indirectly measure the vast distances between Earth and the stars and galaxies. In many cases, these methods are mathematically complex and involve extensive computer modeling.


The first technique uses triangulation (a.k.a. parallax). Parallax is the visual effect produced when, as an observer moves, nearby objects appear to shift position relative to more-distant objects. This common effect is easy to reproduce: hold your finger out at arm's length and look at your fingertip first with one eye closed, then with the other. The "motion" of your fingertip as seen against background objects is caused by the change in your viewing position, about three inches from one eye to the other.


As Earth orbits the Sun, astronomers invoke this same principle to determine the distance to nearby stars. Just like your fingertip, stars that are closer to us shift positions relative to more-distant stars, which appear fixed. By carefully measuring the angle through which the stars appear to move over the course of the year, and knowing how far Earth has moved, astronomers are able to use basic high-school geometry to calculate the star’s distance. Parallax serves as the first "inch" on the yardstick with which astronomers measure distances to objects that are even farther.
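To see why the geometry works out so simply, note that Earth's orbit provides a baseline of one astronomical unit (AU) between the Sun and Earth, and for such tiny angles the star's distance is essentially the baseline divided by the parallax angle. Here is a minimal sketch in Python (the code and names are illustrative, not from any astronomy library):

    import math

    AU_IN_KM = 1.496e8  # one astronomical unit (Sun-Earth distance), in km
    ARCSEC_IN_RADIANS = math.radians(1.0 / 3600.0)  # 1 arcsecond, in radians

    def parallax_distance_km(parallax_arcsec):
        """Distance from the parallax triangle: d = baseline / tan(p)."""
        return AU_IN_KM / math.tan(parallax_arcsec * ARCSEC_IN_RADIANS)

    # By definition, a parallax of exactly 1 arcsecond corresponds to
    # 1 parsec (about 3.086e13 km, or about 3.26 light-years):
    print(parallax_distance_km(1.0))  # -> roughly 3.09e13

Because tan(p) is essentially equal to p for such tiny angles, the triangle collapses into the one-line formula below.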


There is a simple relationship between a star's distance and its parallax angle:

d = 1/p


The distance d is measured in parsecs, and the parallax angle p is measured in arcseconds.

This simple relationship is why many astronomers prefer to measure distances in parsecs.
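As a quick sanity check of the formula, here is the same relationship in a few lines of Python, applied to a real star (the parallax value used for Proxima Centauri, roughly 0.77 arcseconds, is approximate):

    def distance_parsecs(parallax_arcsec):
        """d = 1/p, with d in parsecs and p in arcseconds."""
        return 1.0 / parallax_arcsec

    PARSEC_IN_LIGHT_YEARS = 3.26  # approximate conversion factor

    p = 0.77  # Proxima Centauri's parallax in arcseconds (approximate)
    d = distance_parsecs(p)
    print(f"{d:.2f} pc, or about {d * PARSEC_IN_LIGHT_YEARS:.1f} light-years")
    # -> about 1.30 pc, roughly 4.2 light-years

Proxima Centauri, the nearest star to the Sun, is indeed about 4.2 light-years away.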




Limitations of Distance Measurement Using Stellar Parallax

Parallax angles smaller than about 0.01 arcsec are very difficult to measure from Earth because of the blurring effects of the atmosphere. This limits ground-based telescopes to stars within about 1/0.01 = 100 parsecs. Space-based telescopes can measure angles down to about 0.001 arcsec, extending the method's reach to roughly 1,000 parsecs and greatly increasing the number of stars whose distances can be measured this way. However, most stars, even in our own galaxy, lie far beyond 1,000 parsecs; the Milky Way is about 30,000 parsecs across. The following sections describe how astronomers measure distances to more distant objects.
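To make those limits concrete, the same d = 1/p relation turns a telescope's smallest measurable angle into its largest measurable distance (the precision figures are the approximate ones quoted above):

    # Smallest measurable parallax -> largest measurable distance (d = 1/p)
    for site, smallest_p in [("ground-based", 0.01), ("space-based", 0.001)]:
        print(f"{site}: angles down to {smallest_p} arcsec "
              f"-> distances out to about {1.0 / smallest_p:.0f} parsecs")
    # ground-based: out to about 100 parsecs
    # space-based:  out to about 1000 parsecs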

For stars beyond the reach of parallax, there is no direct way to measure distance, so astronomers instead use brightness measurements. It turns out that a star's color spectrum is a good indicator of its actual brightness. This relationship between color and brightness was established using the several thousand stars close enough to Earth for their distances to be measured directly. Astronomers can therefore look at a distant star, determine its color spectrum, and from its color infer its actual brightness. By comparing that actual brightness to the apparent brightness seen from Earth (that is, by seeing how dim the star's light has become by the time it reaches us), they can determine the distance to the star.
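Astronomers usually express this brightness comparison through the standard distance-modulus formula, d = 10^((m - M + 5)/5), where m is the apparent magnitude and M is the absolute magnitude inferred from the spectrum. The paragraph above doesn't spell the formula out, so treat this sketch, and its example star, as illustrative:

    def distance_from_magnitudes(apparent_m, absolute_M):
        """Distance in parsecs from the distance modulus m - M = 5*log10(d/10)."""
        return 10.0 ** ((apparent_m - absolute_M + 5.0) / 5.0)

    # Hypothetical star: its spectrum suggests absolute magnitude M = 1.0,
    # but from Earth it appears at magnitude m = 11.0.
    print(distance_from_magnitudes(11.0, 1.0))  # -> 1000.0 parsecs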


For still more distant galaxies, astronomers rely on the exploding stars known as supernovae. Like Cepheid variable stars (pulsating stars whose brightening-and-fading cycle reveals their true brightness), a certain class of supernovae brightens and fades at a rate that reveals its true brightness, which can then be used to calculate its distance. This technique also requires careful calibration against parallax and Cepheid measurements: without knowing the precise distances to a few supernovae, there is no way to determine their absolute brightness, and the technique would not work.