Friday, October 12, 2012

How to Measure the Star Formation Rates of Galaxies

One important question that astronomers try to answer is when, in the history of the Universe, the stars in galaxies were formed. To address this question, one can either measure the amount of mass in stars in each galaxy and analyze it as a function of the age of the Universe, or one can measure the on-going rate of star formation as a function of the age of the Universe. In this post I would like to focus on star formation rates and how astronomers can measure them.

The current rate at which a galaxy is forming stars describes how many new stars have been born within a given period of time. When a stellar population forms from a molecular cloud, one can (theoretically) count the number of stars at each particular mass and do this over a range of masses, so that one (theoretically) knows the distribution of stars as a function of stellar mass. This is called the stellar initial mass function. Within this newly formed population, the most massive stars are the hottest and burn through their fuel the quickest, which means they live the shortest lives. The hottest stars are also the brightest. So if we can measure how many of these brightest stars there are, we can determine the total number of stars that formed using the initial mass function.
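To make that extrapolation step concrete, here is a minimal Python sketch, assuming a simple Salpeter (1955) power-law initial mass function between 0.1 and 100 solar masses; the observed star count is an illustrative number, not a real measurement.

```python
# A minimal sketch, assuming a Salpeter (1955) initial mass function,
# dN/dM ~ M^(-2.35), between 0.1 and 100 solar masses. The observed
# star count below is an illustrative number, not a real measurement.

ALPHA = 2.35                 # Salpeter IMF slope
M_MIN, M_MAX = 0.1, 100.0    # IMF mass limits in solar masses

def power_law_integral(m_lo, m_hi, exponent):
    """Analytic integral of M**(-exponent) dM over [m_lo, m_hi]."""
    p = 1.0 - exponent
    return (m_hi**p - m_lo**p) / p

# Relative number of stars above 10 M_sun, and relative total mass
# (the mass integrand M * M^-ALPHA lowers the effective exponent by 1).
n_massive_per_norm = power_law_integral(10.0, M_MAX, ALPHA)
mass_per_norm = power_law_integral(M_MIN, M_MAX, ALPHA - 1.0)

# Suppose we counted 500 stars above 10 M_sun in a young population:
n_observed = 500
norm = n_observed / n_massive_per_norm       # IMF normalization
total_mass_formed = norm * mass_per_norm     # in solar masses
print(f"Inferred total mass formed: {total_mass_formed:.0f} M_sun")
```

A handful of bright, massive stars thus anchors the whole distribution: in this toy example, 500 stars above 10 solar masses imply roughly 90,000 solar masses of stars formed in total, most of it in faint low-mass stars we never count directly.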

Astronomers measure star formation rates using observations across a variety of wavelength ranges, from the X-ray to the radio. All of these so-called star formation indicators probe the most massive stars.

[Image: The Pillars of Creation, a site of star formation in the Milky Way. Credit: NASA, ESA, STScI, J. Hester and P. Scowen (Arizona State University)]
Young, massive stars shine particularly brightly in the ultraviolet wavelength range of the electromagnetic spectrum. Naturally, this is the first wavelength range to consider for measuring the number of massive stars. Unfortunately, because stars form within and from clouds of gas and dust, the light they emit is at least partially absorbed by the gas and dust around them; we say the starlight is attenuated. Consequently, the UV emission we still measure reflects only a portion of the stars that have been formed.
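For illustration, a widely used conversion from UV light to a star formation rate is the Kennicutt (1998) calibration; the sketch below applies it, keeping in mind that without a dust correction the result is only a lower limit. The input luminosity is an illustrative value.

```python
# A minimal sketch of the Kennicutt (1998) UV calibration (Salpeter IMF):
# SFR [M_sun/yr] = 1.4e-28 * L_nu(UV) [erg/s/Hz], valid roughly over
# 1500-2800 Angstrom. Without a dust correction this is a lower limit,
# because part of the UV light never escapes the galaxy.

def sfr_from_uv(l_nu_uv):
    """UV luminosity density (erg/s/Hz) -> uncorrected SFR (M_sun/yr)."""
    return 1.4e-28 * l_nu_uv

# Illustrative value: roughly a Milky Way-like star formation rate.
print(sfr_from_uv(7e27))   # ~1 M_sun/yr
```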

Now there are two choices: one can either try to correct for the dust attenuation or try to find the "missing" part of the UV light. Let's start with the first, the correction for dust. Here, again, you have several options. First, you can measure the slope of the galaxy spectrum in the UV and compare it to the slope one would theoretically expect the spectrum to have; the difference between these two values tells you by how much the spectrum was attenuated (reddened). Alternatively, you can fit theoretical galaxy spectra to the entire spectral energy distribution of the galaxy and obtain the amount of dust reddening from the best fit. However, there are several reasons why a galaxy's spectral energy distribution can have a given shape, and dust is only one of them; you can check this previous post to learn more about this issue. Finally, you can measure the strength of spectral lines, in particular hydrogen recombination lines such as Hydrogen alpha and Hydrogen beta, and forbidden oxygen lines ([OII] and [OIII]). These emission lines occur because the most massive, hottest stars heat and ionize the gas in their vicinity; as the gas recombines and de-excites, light is emitted at particular wavelengths, which we then observe as emission lines. This line emission is itself also affected by dust attenuation.
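As a concrete example of the emission-line route, here is a rough sketch of a dust correction from the so-called Balmer decrement, the observed ratio of the Hydrogen alpha to Hydrogen beta lines, compared to its theoretically expected value of about 2.86. The attenuation-curve coefficients assume the Calzetti et al. (2000) law, and the line fluxes are made-up illustrative values.

```python
import math

# A rough sketch of a dust correction from the Balmer decrement.
# Case B recombination predicts an intrinsic Halpha/Hbeta ratio of
# about 2.86; a larger observed ratio implies reddening. The curve
# coefficients assume the Calzetti et al. (2000) attenuation law;
# the line fluxes below are made-up illustrative values.

K_HA, K_HB = 2.53, 3.61      # attenuation curve at Halpha, Hbeta
INTRINSIC_RATIO = 2.86       # Case B recombination, T ~ 10^4 K

def ebv_from_balmer(f_ha, f_hb):
    """Observed Halpha and Hbeta fluxes -> color excess E(B-V) in mag."""
    observed_ratio = f_ha / f_hb
    return 2.5 / (K_HB - K_HA) * math.log10(observed_ratio / INTRINSIC_RATIO)

def dust_corrected_halpha(f_ha, f_hb):
    """Undo the attenuation at Halpha: F_corr = F_obs * 10^(0.4*k*E(B-V))."""
    ebv = ebv_from_balmer(f_ha, f_hb)
    return f_ha * 10 ** (0.4 * K_HA * ebv)

f_ha, f_hb = 4.0e-15, 1.0e-15            # erg/s/cm^2, illustrative
print(ebv_from_balmer(f_ha, f_hb))        # E(B-V) ~ 0.34 mag
print(dust_corrected_halpha(f_ha, f_hb))  # ~2.2x brighter than observed
```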

If you went down the other route, you would need to find the missing light. Well, this UV light was absorbed by the dust around the newly formed stars; the dust heats up and re-radiates the energy in the infrared portion of the electromagnetic spectrum. Clearly, this measurement only recovers the part of the UV light that was absorbed, i.e. exactly the bit that you missed in the UV. So the combination of the uncorrected UV and the IR measurement gives you the total star formation rate of the galaxy.
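Putting this together, a simple recipe is to convert the uncorrected UV and the IR luminosities to star formation rates separately (both calibrations below are from Kennicutt 1998, assuming a Salpeter IMF) and add them; the example galaxy luminosities are illustrative.

```python
# A minimal sketch of the UV + IR combination described above, using the
# Kennicutt (1998) calibrations (Salpeter IMF): the uncorrected UV traces
# the starlight that escaped, the IR traces the part absorbed by dust,
# so the two rates can simply be added. Input values are illustrative.

L_SUN = 3.846e33   # solar luminosity in erg/s

def sfr_total(l_nu_uv, l_ir):
    """l_nu_uv in erg/s/Hz; l_ir = total 8-1000 micron luminosity in erg/s."""
    sfr_uv = 1.4e-28 * l_nu_uv    # escaped (unattenuated) UV light
    sfr_ir = 4.5e-44 * l_ir       # dust-reprocessed UV light
    return sfr_uv + sfr_ir

# Example: a dusty galaxy with L_IR = 1e11 L_sun
print(sfr_total(7e27, 1e11 * L_SUN))   # ~18 M_sun/yr, mostly from the IR
```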

Unfortunately, the measurement of the IR emission also has disadvantages. For example, dust can also be heated, to some extent, by evolved, older stellar populations. Furthermore, dust re-emits the reprocessed light over a range of wavelengths, depending on the size of the dust grains. To account for all of this emission, a wide wavelength range in the IR needs to be covered; if the spectral energy distribution is too sparsely sampled in this region, some portion of the emission might be missed. Moreover, at high redshift we are only able to detect the most luminous IR galaxies, but not those more like the Milky Way, because they are too faint in the infrared.

I also mentioned at the beginning that X-ray and radio emission can be used to determine star formation rates. However, these indicators are more uncertain because active galactic nuclei often dominate the emission at these wavelengths, so not all of what we measure comes from stars.

Astronomers therefore combine various star formation rate measurements and cross-calibrate the different indicators against one another. However, depending on the redshift of the galaxies in question, not all of these methods are feasible or even accessible to us. In future posts on CANDELS science you will find out which methods CANDELS members use to get a handle on star formation rates and what they have learnt from the measurements.
