I've been working on lots of things lately and haven't been taking the time to post. Today's task is obtaining calibrated optical magnitudes for brown dwarfs that I've been monitoring with relative photometry in an f814w filter with a bandpass from about 700-950nm. I have already finished analyzing and interpreting the relative photometry, which was my main scientific objective since this type of photometry offers the best sensitivity for detecting new variable objects -- but the calibrated magnitudes are also of interest, especially because for many of the objects I measured, no published optical magnitudes exist.
The first step is to obtain a photometric calibration: that is, a means of converting the total number of ADUs measured for a given star on a given image (e.g., using aperture photometry) into a calibrated magnitude for the star. For a simple calibration, only a single constant is necessary. For example, the magnitude of a star is:
m = zeropoint - 2.5*log(ADU/t)
Where the log is base 10; t is the exposure time in seconds; and zeropoint, the single constant of the calibration, is the photometric zero point magnitude: that is, the magnitude a star would have if it produced a flux of only 1 ADU/sec with the telescope, detector, and filter being calibrated. Note that in most observing situations, objects with a flux of 1 ADU/sec are well below the detection threshold: hence, one cannot actually image an object whose brightness equals the zeropoint magnitude. Because I don't like this property, I prefer to use a different constant in my calibrations: the ADU/sec for a tenth magnitude star, or TMCS (Tenth Magnitude Counts per Second). The magnitude formula is then:
m = 10.0 - 2.5*log(ADU/(t*TMCS))
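To make the two conventions concrete, here is a minimal Python sketch (written for this post, not taken from my pipeline; the TMCS value is invented for illustration):

```python
import math

def mag_from_zeropoint(adu, exptime, zeropoint):
    """Calibrated magnitude from total counts, exposure time (s), and zero point."""
    return zeropoint - 2.5 * math.log10(adu / exptime)

def mag_from_tmcs(adu, exptime, tmcs):
    """Calibrated magnitude using TMCS: the ADU/sec produced by a 10th-magnitude star."""
    return 10.0 - 2.5 * math.log10(adu / (exptime * tmcs))

# The two constants are equivalent: zeropoint = 10 + 2.5*log10(TMCS)
tmcs = 5000.0                                # hypothetical ADU/sec for a 10th-mag star
zp = 10.0 + 2.5 * math.log10(tmcs)
print(mag_from_tmcs(2.0e5, 100.0, tmcs))     # 200,000 ADU in 100 s -> ~11.0 mag
print(mag_from_zeropoint(2.0e5, 100.0, zp))  # same magnitude either way
```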
Suppose then that I have several images (all in the same filter) of a calibration field containing several standard stars of known magnitudes. From each star on each image I can calculate a photometric calibration (whether I choose to use the zero point or TMCS is unimportant). The interesting question is how to average together the different calibrations to find a final, master calibration -- and how to calculate the uncertainty of this final calibration.
If uncertainties on the individual magnitudes of the stars are available, one can take them at face value and proceed as follows:
1. Find the mean and standard error of the ADUs measured for each given star across the different images.
2. Use these values to calculate a separate photometric calibration for each standard star, including an uncertainty estimate that takes into account the contributions from the published uncertainty on the magnitude and the standard error of the measured ADUs.
3. To arrive at the final calibration, take a mean over the calibrations obtained for each individual star, weighted by the inverse squares of the uncertainties calculated in the previous step. Calculate the uncertainty of this final calibration accordingly.
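For concreteness, here is a minimal numpy sketch of this face-value procedure, in the TMCS convention and assuming a common exposure time for all images (illustrative only, not my pipeline code):

```python
import numpy as np

def calibrate_with_mag_errors(adu, exptime, mags, mag_errs):
    """Inverse-variance weighted TMCS calibration from published magnitudes.

    adu      : (nstars, nimages) array of total counts per star per image
    exptime  : exposure time in seconds (assumed the same for every image)
    mags     : published magnitudes of the standard stars (length nstars)
    mag_errs : published magnitude uncertainties (length nstars)
    """
    rates = adu / exptime
    mean_rate = rates.mean(axis=1)
    sem_rate = rates.std(axis=1, ddof=1) / np.sqrt(rates.shape[1])

    # Per-star TMCS: the count rate a 10th-magnitude star would produce
    tmcs = mean_rate * 10.0 ** (0.4 * (mags - 10.0))

    # Combine the ADU scatter with the published magnitude uncertainty
    frac_err = np.sqrt((sem_rate / mean_rate) ** 2 +
                       (0.4 * np.log(10.0) * mag_errs) ** 2)
    sigma = tmcs * frac_err

    # Weighted mean over the stars, with its formal uncertainty
    w = 1.0 / sigma ** 2
    return np.sum(w * tmcs) / np.sum(w), 1.0 / np.sqrt(np.sum(w))
```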
I could not adopt this method with all of my standard star fields, however, because in some of them the published magnitudes of the stars did not have uncertainties. Even where uncertainties exist, I prefer not to take them at face value, since it is difficult to be sure that any given standard star is not a low-level variable, in which case the published uncertainty is likely to be an underestimate.
I could use a very simple method: simply find the unweighted mean and the standard error over all of the individual calibration measurements, with no prior averaging over any star or image. However, this could produce badly erroneous results in the following scenario. Suppose there are only two standard stars, but one of them is an unknown variable that has changed brightness markedly since it was identified as a standard. The calibrations derived from the two stars are therefore quite inconsistent, and there is no way to tell which is closer to the true value. Suppose further that there are many images: say, 18. The standard error will then be the standard deviation of 36 measurements divided by the square root of 36 -- that is, one-sixth of the standard deviation. However, the uncertainty on the calibration is really about half the difference between the mean calibrations from the two stars, which will be comparable to the standard deviation: six times larger than the standard error. Thus the error will be severely underestimated.
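A tiny simulation makes the danger concrete. I assume (arbitrarily) a 1% per-measurement scatter and a 20% offset between the calibrations implied by the two stars:

```python
import numpy as np

rng = np.random.default_rng(42)
n_images = 18
noise = 0.01     # 1% per-measurement scatter, in relative calibration units

# Star B has drifted since it was calibrated, so its derived
# calibrations sit ~20% away from star A's.
cal_a = 1.00 + noise * rng.standard_normal(n_images)
cal_b = 0.80 + noise * rng.standard_normal(n_images)
all_cal = np.concatenate([cal_a, cal_b])

naive_sem = all_cal.std(ddof=1) / np.sqrt(all_cal.size)
honest = 0.5 * abs(cal_a.mean() - cal_b.mean())
print(f"naive standard error of 36 measurements: {naive_sem:.3f}")  # ~0.017
print(f"half the star-to-star discrepancy:       {honest:.3f}")     # ~0.100
```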
My way of resolving this problem and producing a final uncertainty measurement with contributions from both the uncertainties of the stellar magnitudes and those of the ADU measurements on the images was, I think, rather clever. Here is how it works:
1. Calculate an independent calibration from each star on each image.
2. For each individual star, find the mean and standard error of its calibration across all images.
3. Construct an average of these single-star calibrations, weighted by the inverse squares of their standard errors, to arrive at a final calibration and uncertainty.
Thus far I have obtained a good calibration, which weights bright stars with consistent measurements on my images most strongly. However, its uncertainty is underestimated because it takes into account only the scatter in measurements on my images, not the uncertainties of the published magnitudes that were input to the calculation. I proceed to estimate this contribution and then calculate the final uncertainty as follows:
4. For each individual image, find the mean and standard error of the calibrations obtained for all stars in that image.
5. Find the average of these individual-image standard errors.
6. Add (in quadrature, of course) this average standard error to the previously calculated uncertainty on the final calibration.
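Here is a compact numpy sketch of the whole scheme, again in the TMCS convention with a common exposure time assumed (an illustration of the steps above, not my actual code):

```python
import numpy as np

def calibrate_robust(adu, exptime, mags):
    """TMCS calibration whose uncertainty includes both the image-to-image
    scatter and the (possibly unstated) errors in the published magnitudes.

    adu : (nstars, nimages) counts; exptime : seconds; mags : published magnitudes.
    """
    nstars, nimages = adu.shape
    # Step 1: an independent calibration from each star on each image
    cal = (adu / exptime) * 10.0 ** (0.4 * (mags[:, None] - 10.0))

    # Steps 2-3: per-star means and standard errors, then a weighted mean
    star_mean = cal.mean(axis=1)
    star_sem = cal.std(axis=1, ddof=1) / np.sqrt(nimages)
    w = 1.0 / star_sem ** 2
    final = np.sum(w * star_mean) / np.sum(w)
    sigma_images = 1.0 / np.sqrt(np.sum(w))

    # Steps 4-5: per-image scatter across stars, which reflects errors
    # in the published magnitudes; then its average over the images
    sigma_mags = np.mean(cal.std(axis=0, ddof=1) / np.sqrt(nstars))

    # Step 6: add the two contributions in quadrature
    return final, np.hypot(sigma_images, sigma_mags)
```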
This method will produce an accurate uncertainty in the case noted above of two stars yielding discrepant calibrations. Its only disadvantage is that in calculating the individual-image standard errors and folding in their average value to the final uncertainty, it effectively double-counts the random noise in the measurements from calibration images. Thus, where the published magnitudes of the standard stars are highly precise and accurate, the final uncertainty will be overestimated. However, in most cases the measurement noise will be less than the true uncertainty in the published magnitudes, so this overestimation of the uncertainty will not be severe. Furthermore, it is usually much better to overestimate than to underestimate the uncertainty of one's measurements, since the latter can lead to serious scientific misinterpretation of results that are in fact due to nothing more than measurement errors.
As a final note: a good photometric calibration for ground-based data should also include a determination of how the measured flux varies with airmass. A linear fit of the calibration against airmass, with an associated uncertainty, is usually sufficient.
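As a sketch of that last step, one can fit the per-image zero points (or TMCS values) against airmass with a simple linear model; the numbers below are invented:

```python
import numpy as np

# Hypothetical per-image zero points and the airmasses at which they were measured
airmass = np.array([1.05, 1.20, 1.45, 1.80, 2.10])
zeropoint = np.array([25.31, 25.28, 25.22, 25.15, 25.09])   # mag, made up

# Linear model: zeropoint(X) = zp0 - k*X, where k is the extinction in mag/airmass
coeffs, cov = np.polyfit(airmass, zeropoint, 1, cov=True)
k, zp0 = -coeffs[0], coeffs[1]
k_err = np.sqrt(cov[0, 0])
print(f"extinction k = {k:.3f} +/- {k_err:.3f} mag per airmass")
```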
Friday, August 1, 2014
Tuesday, July 15, 2014
Non-variable brown dwarfs, etc.
I have spent most of the day making figures presenting time series photometry of brown dwarfs that are not variable. This is photometry taken with the Spitzer Space Telescope in the course of the Weather on Other Worlds program. The figures are important for the paper we are about to publish presenting the main results of this survey. However, constructing them was boring, repetitive work and doesn't give me much to write about here. One interesting thing that did come out of the figures was a graphic illustration of just how much better Spitzer/IRAC photometry of late T dwarfs is in the 4.5 micron channel than in the 3.6 micron channel. For most objects, the 4.5 micron photometry was comparable to, though not quite as precise as, the 3.6 micron photometry; for the late T dwarfs, 4.5 microns was far better. This, of course, is because the late T dwarfs are much brighter at 4.5 microns relative to 3.6 microns, and so there is far less noise in the data at the longer wavelength.
Since there is little to write about my own scientific activities for the day, I will mention a few interesting papers I have read recently.
Many of us, of course, are thinking about new science that will be enabled by results from the GAIA mission. One somewhat neglected area is occultation science. GAIA will produce an exquisite astrometric catalog that should make catalog-related systematics in measured positions of asteroids and Kuiper Belt Objects (KBOs) a thing of the past. Thus, provided ground-based observers keep observing these objects, we should soon be able to calculate considerably improved orbits for them. At the same time, GAIA will tell us where the stars are with greater accuracy -- thus improving our ability to predict occultations in two very significant ways. I found a paper that presents an interesting study of some of these issues: Tanga and Delbo (2007).
Two further points on this topic: by removing systematic catalog biases, GAIA will increase the potential benefit of highly intensive observations of known objects. The accumulation of hundreds to thousands of well-timed observations, even with relatively small telescopes, could enable us to enter a new era in terms of the accuracy with which the orbits of asteroids and Kuiper Belt Objects are known. Lastly, occultations by small known Kuiper Belt Objects are extremely interesting because the albedo of such objects is highly uncertain, and occultation timing could allow us to measure it. This feeds directly into our estimates of the total mass of the Kuiper Belt.
On a completely different topic: I did some reading recently about dynamical measurements of the most massive stars known. I was curious because for some time I have felt an inconsistency between statements of the theoretical (and even observational) upper limits to the stellar mass function on the one hand, and measured masses of O stars on the other. The former set the limits in the range of 100-200 solar masses. The latter (actual measurements of O stars) I understood to be usually coming out at less than 50 solar masses. I read two papers that measured the masses of some of the most massive stellar binaries known: Schnurr et al. (2008) and Niemela et al. (2008). The former, in particular, resolved my confusion. The most massive main-sequence stars are not O stars; they are actually emission-line stars with Wolf-Rayet classifications. Even though most Wolf-Rayet stars are evolved objects, a small minority are instead the most massive hydrogen-fusing stars known. Their emission lines, and the resulting Wolf-Rayet classification, are due to their intense stellar winds. This was a very interesting and satisfying realization for me, and it reflects what I think is a major discovery in astronomy of which I had been unaware.
Thursday, July 10, 2014
An Image of Messier 7
I was observing at Cerro Tololo Inter-American Observatory across my wife's birthday this spring. My main science target was a specific field of main-belt asteroids, which could be observed all night except for about an hour after sunset and an hour and a half before dawn. I tried to think of observations for that last hour and a half that would be of potential scientific value, and also produce a beautiful picture to give to my wife. I decided to target the open cluster Messier 7 and the rich starfields of the Milky Way behind it. Over the weekend of July 4, I finally got around to processing the data. It turned out to be quite a challenge because of the huge DECam images, the rich starfield including many severely saturated bright stars, and my desire to create images that were cosmetically as well as scientifically excellent.
I started with standard processing: dark subtraction and flatfielding of the data (itself non-trivial with DECam). The next step was sky subtraction. I have a good way of doing this for DECam images of ordinary star fields. I fit the sky background of each image with a 2-dimensional polynomial, typically of order two or three. I do the fitting iteratively, using customizable creeping-mean rejection of outlying points above and below the fit. In dense starfields, the creeping mean can be set purposely asymmetrical: e.g., to reject the brightest 20% of pixels and the faintest 10%. A setting can usually be found that effectively rejects all the stars and accurately fits the sky. With DECam, the upper and lower halves of each image have to be fit independently, because there is a bias offset between them that does not appear to be consistent from one image to the next.
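For illustration, here is a simplified numpy stand-in for that fitting step. It uses straightforward asymmetric percentile rejection on each pass rather than my actual creeping-mean code, but the idea -- iteratively fit a low-order 2-D polynomial while discarding more bright outliers than faint ones -- is the same:

```python
import numpy as np

def fit_sky_poly(img, order=2, n_iter=5, rej_high=0.20, rej_low=0.10):
    """Iterative 2-D polynomial sky fit with asymmetric outlier rejection."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    x = xx.ravel() / nx
    y = yy.ravel() / ny
    z = img.ravel().astype(float)

    # Design matrix of all terms x^i * y^j with i + j <= order
    terms = [(x ** i) * (y ** j)
             for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(terms)

    keep = np.ones(z.size, dtype=bool)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[keep], z[keep], rcond=None)
        resid = z - A @ coef
        # Reject the brightest rej_high and faintest rej_low fractions
        lo, hi = np.percentile(resid[keep], [100 * rej_low, 100 * (1 - rej_high)])
        keep &= (resid > lo) & (resid < hi)

    return (A @ coef).reshape(ny, nx)   # smooth sky model, same shape as img
```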
But the starfields of the Milky Way behind Messier 7 are beyond dense. There is no sky between the stars to measure. My polynomial fits were hopeless.
I had anticipated this problem, however. I made sure that I took several images of a much less dense field of photometric standard stars near the time I observed Messier 7. These were 30 second exposures vs. the successive 1, 10, and 100 second exposures I used for Messier 7, but still I believed they would give the sky background I needed. I extracted the background as follows.
First, I applied my regular program for subtracting iterative polynomial fits to the sky background on the standard star images. Then I subtracted the new, background-cleaned version of each standard star image from the original version. The result was an image consisting of only the smooth, polynomial model obtained for the sky background. I obtained 5 to 6 of these images in each of the g, r, and i filters. Then I normalized them all to have a mean of 1.0 on the DECam detector that I have adopted as my reference (detector N04), and median-stacked these normalized sky-model frames to produce a master sky-model frame for each filter.
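In code, the combination step looks roughly like this (a sketch: I assume the per-exposure sky models and their mean levels on the reference detector N04 have already been extracted):

```python
import numpy as np

def master_sky_model(sky_models, ref_means):
    """Median-combine per-exposure sky models into a master sky-model frame.

    sky_models : list of 2-D arrays for one detector, each being
                 (original image) - (sky-subtracted image), i.e. the smooth
                 polynomial background model for that exposure
    ref_means  : list of scalars, the mean sky level of the same exposure's
                 model on the reference detector N04, used for normalization
    """
    normed = [model / ref for model, ref in zip(sky_models, ref_means)]
    return np.median(np.stack(normed), axis=0)
```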
I then identified tiny areas of very dark dust cloud in the Messier 7 images, which I believed preserved something close to the true sky background. I measured the average pixel brightness in these regions and in the corresponding regions on the master sky-model frames. The backgrounds on the 100-second images of Messier 7 were typically 250 ADU in the g filter, 700 in the r filter, and 1300 in the i filter -- thus, I could approximate the true sky background by multiplying the normalized master sky-model frames by approximately 250, 700, and 1300 respectively. However, I did not trust my dark regions to be absolutely dark, and I did not want to produce negative pixel values anywhere. Thus, I decided to scale the sky-model images to match 90%, rather than 100%, of the measured brightness in the dark-clouded regions of the science images. This is equivalent to assuming that even in the dark, dusty regions, 10% of the light comes from unresolved stars or nebulosity and only 90% is true telluric sky background. After scaling the sky-model images, I of course subtracted them from the corresponding images of Messier 7. I did not try to correct the 1-second exposures this way, only the 10 and 100 second exposures. The sky brightness values measured for the 10 vs. 100-second exposures did differ from one another by about a factor of 10. While not in itself a proof that my method was working, this did constitute an encouraging sanity check.
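A sketch of the scaling and subtraction, assuming a boolean mask marking the dark dust-cloud regions is available:

```python
import numpy as np

def subtract_scaled_sky(science, master_model, dark_mask, fraction=0.90):
    """Scale the master sky-model frame to a science image and subtract it.

    science      : 2-D Messier 7 science image (one detector)
    master_model : master sky-model frame for the same filter and detector
    dark_mask    : boolean mask of the tiny dark dust-cloud regions believed
                   to show (nearly) pure sky
    fraction     : assume only this fraction of the dark-region brightness is
                   true sky, to avoid driving any pixels negative
    """
    sky_level = np.mean(science[dark_mask])
    model_level = np.mean(master_model[dark_mask])
    scale = fraction * sky_level / model_level
    return science - scale * master_model
```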
I encountered a problem with the bias offsets between the upper and lower halves of each image. These were not consistent between the standard star images used to calculate the sky background and the Messier 7 images. Thus, the sky-subtracted Messier 7 images had discontinuities in mean brightness between the upper and lower halves of individual DECam detector images. Reasoning that it was impossible to accurately measure or correct the bias offsets for the science frames, I simply wrote a program to remove the mean offsets between the upper and lower image halves in the master sky frames. Thus, any bias offset remaining in the Messier 7 images would at least be intrinsic to those images and not exacerbated by the sky subtraction. In fact, any such biases turned out to be negligible, at least for the 100 second exposures.
I had 5 to 6 images of Messier 7 at each exposure in each filter. I obtained an independent astrometric solution for each detector using stars from the 2MASS catalog (of which there were on average about 2600 yielding accurate measurements on each detector). I then resampled the individual detector images onto a single astrometric grid spanning 32000x28000 pixels. The individual resampled images were median-stacked, with the gaps between the images masked. I had used a well-optimized dithering strategy, so no internal gaps remained near Messier 7 after the 5-6 images in each set had been stacked.
Bright, severely saturated stars in DECam images produce long horizontal streaks due to charge-bleeding. Where one of these streaks extends to the edge of a detector, it affects the region below it by depressing the pixel values by a large and spatially varying additive constant. I removed this effect by masking all negative pixels prior to the final stacks. Even though this widened the effective gaps between detectors that had bright stars on them, it still did not create internal gaps in the final stacked images.
The starfields in the 100-second stacked images were impressive, but the long charge-bleed streaks from the bright stars were unsightly. I attempted to remove them by replacing all saturated pixels in the 100-second stacked images with 10.0 times the corresponding pixel values in the 10-second exposure stacks. This helped, but the resampling had left some pixels on the boundaries of saturated streaks with values below the saturation threshold. Thus, uncorrected outlines remained wherever there had been saturated bleed-streaks. Additionally, in many cases the pixels immediately adjacent to the bleed streaks were seriously affected even though no charge had bled into them directly. I dealt with these problems through a two-step approach. First, I wrote a program to change the brightness of pixels adjacent to saturated pixels, setting them, too, above the saturation threshold so they would be sure to be identified as deviant and then replaced. I applied this program to the images prior to the astrometric resampling. Second, I changed my astrometric resampling program so that if any of the four pixels adjacent to a given location was above the saturation threshold, it would assign the pixel value at that location in the resampled image using nearest-neighbor resampling rather than bilinear interpolation. This preserved the hard edges of the bleed-streaks, preventing the streak-boundary issues previously caused by the resampling. The median-stacking could still, in some cases, yield values of only half the true saturation threshold for pixels at the edges of the bleed streaks. This last problem was easily rectified by setting the nominal saturation threshold for pixel replacement to just under half the actual saturation threshold.
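The replacement logic, reduced to its essentials, looks something like the sketch below. In my actual processing the neighbor-flagging was done before the astrometric resampling and the hard streak edges were preserved by nearest-neighbor resampling; here I simply show the thresholding, neighbor growth, and substitution on the final stacks, with an assumed saturation level:

```python
import numpy as np

def replace_bleed_streaks(stack100, stack10, sat_level=65000.0):
    """Replace saturated bleed-streak pixels in the 100 s stack with scaled
    pixels from the 10 s stack."""
    # Median-stacking can leave streak-edge pixels at only about half the true
    # saturation value, so the replacement threshold is set just under half.
    threshold = 0.49 * sat_level
    bad = stack100 > threshold

    # Also flag the four neighbors of every flagged pixel, since pixels
    # adjacent to a bleed streak are often badly affected as well.
    grown = bad.copy()
    grown[1:, :] |= bad[:-1, :]
    grown[:-1, :] |= bad[1:, :]
    grown[:, 1:] |= bad[:, :-1]
    grown[:, :-1] |= bad[:, 1:]

    repaired = stack100.copy()
    repaired[grown] = 10.0 * stack10[grown]   # exposure-time ratio 100/10
    return repaired
```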
This finally produced acceptable results. The saturated bleed-streaks from the 100-second images were effectively removed without noticeable residuals. Much shorter bleed streaks remained around the very brightest stars, because these had been saturated even on the 10-second exposures. I tried to correct them using data from the 1-second exposures, but those data were too noisy and the background control was insufficient for such short, bias-dominated images. This is not necessarily hopeless -- for example, these exposures might be so short that polynomial-fitting sky subtraction could work on them directly despite the richness of the starfield. However, I decided my results were sufficiently good already -- in fact, the 10-second bleed streaks seem aesthetically desirable in some cases -- and I should not spend any more time on this problem. I combined the stacked images in GIMP, using the color assignment i=R, r=G, and g=B. The GIMP processing took hours longer than I expected due to GIMP's inefficiency on very large images. However, at last an excellent-looking RGB image was produced. I saved it as a JPEG and cropped away the ragged edges to yield a rectangular image.
Besides its potential public appeal, the FITS files that went into this final image can be used to extract accurate photometry and astrometry for millions of stars. The scientific potential of such an extraction includes the identification of faint, high proper-motion objects, including previously unknown, low-mass or ejected members of the open cluster Messier 7 itself. I have no time to obtain photometry/astrometry on these images now, however, nor to process the u-band data that I also took at the same time (which may in fact have the greatest scientific value). For the present, I already have a beautiful picture of Messier 7, and one with future scientific potential. Here is a woefully down-sampled version of it:
Thursday, July 3, 2014
Finishing up a photometric survey
Today I have finished reducing all of my photometric monitoring data for a survey of 12 T dwarfs in the red optical. For every object, I have several hours of photometry on at least two different nights. This is the first survey of its kind, because prior to this T dwarfs have mostly been observed in the near infrared. They are very faint in the red optical, but can have higher variability amplitudes. At the beginning of this survey, I expected that none of the T dwarfs would be variable with amplitude above 10%. This expectation turns out to be wrong. One of them has a variability amplitude of nearly 20%.
I have been working on this project for several weeks, carefully optimizing the photometric aperture and sky subtraction method for each T dwarf, using the RMS scatter of the photometry of field stars of similar brightness as the figure of merit. Optimal apertures tend to be smaller for fainter objects and for nights of better seeing, while brighter objects or nights of bad seeing require larger photometric apertures. The optimal type of sky subtraction for the images as a whole can vary from no sky subtraction, to sky subtraction based on a single median-stack of normalized images from the entire night, to sky subtraction based on multiple sky frames such as I described in my previous post. It has been a surprise to me that the multi-frame sky subtraction did not always produce the best photometry. There may be a number of reasons for this. The shot noise is higher in sky frames created from an average of fewer images, and this puts the multi-frame approach at a disadvantage. The masking of stars that I employ in this approach can (if the data are sparse or affected by clouds) introduce image artifacts that affect the photometry. However, I must say that although I think I understand the big picture of why the optimal aperture changes from night to night and from object to object, I do not feel that I have a similar understanding of why the optimal type of image-based sky subtraction varies.
Whatever type of image-based sky subtraction I have used, I always additionally subtract the mean in an annulus around each object for which I obtain photometry (as is, of course, standard practice for aperture photometry in astronomy).
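For reference, the annulus-based local sky in simple aperture photometry amounts to something like this (a whole-pixel sketch with no partial-pixel weighting; the function and its arguments are illustrative):

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap, r_in, r_out):
    """Aperture photometry with the mean sky in an annulus subtracted.

    x0, y0      : centroid of the object (pixels)
    r_ap        : aperture radius
    r_in, r_out : inner and outer radii of the sky annulus
    """
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(xx - x0, yy - y0)

    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)

    sky = np.mean(img[in_annulus])            # local sky level per pixel
    return np.sum(img[in_aperture] - sky)     # sky-subtracted flux in ADU
```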
One of the last problems I had to deal with was the satellite streak in this image:
It wiped out two of the 54 stars I was using as photometric references on this frame. My relative photometry algorithm is designed to deal with outliers, but this streak is so intense that it not only made the two stars behind it positive outliers -- it made all the other stars in the image negative outliers, since the outlier identification uses the ratio of each star to the summed fluxes of all the others. So the satellite really wreaked havoc on this frame even though only two stars were directly affected. I could have thrown out the frame, but that would have been a more serious loss of data than rejecting the two stars. However, since I use a consistent set of reference stars across all nights, rejecting the two stars meant redoing the photometry on other nights when the same object was observed. This is all done and finalized now, however.
What I am planning to do next is obtain absolute photometry for each object, just to have a reference-point for comparing fluxes. I am also planning to stack photometric monitoring data from different T dwarfs on different nights as a function of airmass, to look for a systematic dependence of the relative brightness on airmass. There should be a weak dependence due to the increased effect of telluric water vapor absorption on T dwarf photometry relative to stars (because the T dwarf flux is heavily weighted toward the red edge of the f814w filter I used for the observations, and that includes the telluric water feature near 950 nm). The airmass effect is not visible in the photometry of individual T dwarfs, but if I can measure it in the aggregate over all the nights, it will give me a measure of the mean precipitable water vapor in the atmosphere during these observations.
Friday, March 21, 2014
Precision sky subtraction
I am preparing for a trip to Chile to observe extremely faint asteroids using DECam, and to visit a collaborator. In parallel with this, I am working on processing data from my March 6-13 observing run on the Kitt Peak 2.1m telescope. My main objective in processing this data now is to get astrometry on the faint asteroids we observed, because this is highly time-sensitive. At the same time, it is most efficient to do some processing on the additional data while I have the files open and everything is fresh in my mind. The additional data, which was actually the main objective of the Kitt Peak observations, consists of images for photometric monitoring of brown dwarfs in the red optical (the f814w filter, which is similar to the I-band but wider).
Optimal photometry of faint objects in the red optical requires accurate sky subtraction. This is because the strong interference fringes produced in silicon detectors by night-sky emission lines at these wavelengths produce a bright and spatially varying sky background. I have some tricks for accurate background subtraction that will become apparent as I describe the sequence of operations.
Observing: Every image should be taken in a different position from any other. The telescope pointing should not be altered very much, just a small fraction of the field of view, but the pointings should be dithered in a non-redundant manner, as far as is possible. The data set I worked with today had 63 images, each a 200 second exposure in the f814w filter. The dithering was good: it was almost entirely non-redundant.
Processing: I begin by trimming off the overscan region (which is useless on the CCD I was using) and also trimming off edge regions extending 10-20 pixels into the valid data region, where the sky background and flatframes for this CCD behave strangely. Then I follow the usual steps of zero subtraction and flatfielding. After that it gets interesting.
The first step is to do an approximate sky subtraction, which is required to prepare the final, much more accurate sky subtraction. To do this, I normalize all of the processed science images to have a sky background of 1.0, and then do a median-stack of them. Due to the non-redundant dithering, the stars mostly vanish in this stack, leaving me with a fairly clean image representing the average sky background through the night. Call this the master sky image.
I then perform sky subtraction on each science image by creating a version of the master sky image that is scaled to have the same sky background, and then subtracting that from the science image. This sky subtraction is fairly effective, producing images that are substantially cleaned of fringing, etc.
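A minimal sketch of this first-pass sky subtraction, using the per-image median as the sky-level estimate (my pipeline's actual normalization may differ in detail):

```python
import numpy as np

def first_pass_sky_subtract(images):
    """Build one master sky image for the night and subtract a scaled copy
    of it from every science frame.

    images : list of processed (zero-subtracted, flatfielded) 2-D arrays
    """
    levels = [np.median(im) for im in images]       # per-image sky estimate
    master_sky = np.median(
        np.stack([im / lev for im, lev in zip(images, levels)]), axis=0)
    cleaned = [im - lev * master_sky for im, lev in zip(images, levels)]
    return cleaned, master_sky
```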
I then pad, shift, rotate, and median-stack all of the science images to create a master image that has much lower noise and shows much fainter objects than each individual science image. Here is an example of my master science image at this point:
Very faint stars and galaxies can be seen, but there are dark halos around bright stars. This is because weak residuals of the bright stars were still present in the master sky image: despite the non-redundant dithering, these stars did not completely vanish from the median stack used to create the sky image. The master science image above also has dark and noisy borders, and the edges of some individual frames can still be seen. This shows that the master sky image, besides retaining weak residuals of bright stars, was also imperfect in other ways. Such imperfection is not surprising, because my processing thus far has assumed that the sky background remained exactly the same, except for its average brightness, throughout the entire night. In fact the sky background is composed of different components: fringing from night-sky emission lines; moonlight in the sky; scattered moonlight in the dome, etc. The relative contributions of these different components change over time, and therefore using a single sky frame for the whole night is a rather crude approximation.
I can solve this problem by dividing up the science images into several sets, and producing different sky frames for different parts of the night. However, if I do this, the problem of the bright stars not entirely vanishing from the median stacks used to make the sky frames will get much worse. For example, a median stack of ten images will have much stronger residuals from bright stars, even if every image was taken at a distinct pointing, than a median stack of 63 images.
What is needed is to mask the stars, especially the bright ones. But it is not sufficient simply to mask the bright cores of the stars: those vanish in the median combine anyway. It is the faint halos around the bright stars that cause the most trouble. Thus, it is necessary to mask the stellar halos -- extending out even to where they are so faint that they may not be detectable on individual frames.
I deal with this by creating a star mask based on the master image, in a stepwise fashion. For this data set, I masked all pixels within:
-A 25-pixel radius of any pixel with brightness above 50000 ADU, or
-A 20-pixel radius of any pixel with a brightness above 10000 ADU, or
-A 12-pixel radius of any pixel with a brightness above 1000 ADU, or
-A 6-pixel radius of any pixel with a brightness above 100 ADU, or
-An individual brightness above 20 ADU
All of these cuts are performed on the low-noise master image. I should say that the noisy edges of the master image, where only a few science images contribute data, are trimmed off before the mask is constructed.
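A sketch of the stepwise mask construction, using scipy's binary dilation with circular footprints as a stand-in for whatever growth scheme one prefers (the thresholds and radii are the cuts listed above):

```python
import numpy as np
from scipy import ndimage

def build_star_mask(master, levels=((50000, 25), (10000, 20), (1000, 12),
                                    (100, 6), (20, 0))):
    """Stepwise star mask from the low-noise master image.

    Each (threshold, radius) pair masks every pixel within `radius` pixels of
    any pixel brighter than `threshold`; radius 0 masks just the pixel itself.
    """
    mask = np.zeros(master.shape, dtype=bool)
    for threshold, radius in levels:
        bright = master > threshold
        if radius > 0:
            # circular structuring element of the requested radius
            yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            disk = (xx ** 2 + yy ** 2) <= radius ** 2
            bright = ndimage.binary_dilation(bright, structure=disk)
        mask |= bright
    return mask
```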
It remains to apply this mask to the individual science images, each of which is taken at a different pointing. To handle this, I made my shifting and rotation routine save a log file describing exactly what shifts and rotations were applied to each image. Thus, I can mask each science image by simply creating a version of the master mask-image with the reverse shift and rotation appropriate to that particular science image applied. This produces a set of images that are unaltered from the original, individual science frames, except that all of the stars are masked -- including even the objects too faint to be detected on individual science frames, and the faint, extended halos of bright stars.
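Ignoring the padding and cropping bookkeeping, applying the mask to one frame amounts to inverting the logged transform, roughly as follows (I assume the frame was shifted by (dy, dx) pixels and rotated by `angle` degrees when it was registered to the master; order-0 interpolation keeps the mask effectively boolean):

```python
import numpy as np
from scipy import ndimage

def mask_science_frame(science, master_mask, dy, dx, angle):
    """Mask one science frame using the inverse of its logged registration."""
    m = master_mask.astype(float)
    m = ndimage.rotate(m, -angle, reshape=False, order=0)   # undo the rotation
    m = ndimage.shift(m, (-dy, -dx), order=0)               # undo the shift
    masked = science.astype(float).copy()
    masked[m > 0.5] = np.nan    # or any flag value the sky-stacking code ignores
    return masked
```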
I normalize these masked images to have a mean sky background of 1.0, and stack them in subsets to make multiple master sky images that follow the changes in the sky background throughout the night. For the example of this data set, I created five master sky images from masked, normalized versions of science images 1-13, 14-26, 27-39, 40-52, and 53-63, respectively. Because of the masking, these master sky images had no residual star images at all. The non-redundant dithering prevented blank holes from appearing in these master sky images, by ensuring that there was no pixel region that was masked on all the frames used to produce a given master sky image. Thus, the master sky images had valid data everywhere, an outcome that would have been unlikely if the observers had not taken pains to make the dithers non-redundant. The fringing in these sky images looks cool, so I show them here. Note the subtle changes in the contrast of the fringes and in the overall distribution of the background; in particular the contrast of the fringes is higher in the later images because the Moon had set:
I subtracted scaled versions of these master sky frames from the corresponding science images. The resulting final sky subtraction was considerably better than the preliminary version, as evidenced by the new master science image made from the data with improved sky subtraction. I show both the old and new master images together here, for comparison:
Old:
New:
While some edge effects remain, they are greatly reduced, and the dark halos around bright stars have completely vanished. Neither fringing nor any other sky-related effect remains to introduce a spurious signal into our time-series photometry of the faint T dwarf that was the science target of these observations.
Monday, January 27, 2014
Extensions on Magellan Megacam images
I am working on astrometric registration of Magellan Megacam images. My first task was to read the multi-extension FITS files. The image files I have possess 36 extensions, each corresponding to one of Megacam's 36 detectors. The order of the extensions follows four columns of nine detectors each, starting at the southwest corner of the mosaic. Thus extension 2 is for the detector immediately north of extension 1, and so on up through extension 9, which is the ninth detector of the first column and hence is at the north edge of the mosaic. Then extension 10 is for the first detector of the second column, and is immediately to the east of extension 1. This is a very sensible way of arranging the detectors and extensions.
One thing that confused me at first is that although the third and fourth columns have the detectors ordered exactly as you would expect based on the above, each detector image in these columns is upside-down relative to the corresponding images in the first and second columns. One can understand how this might come about due to spatial constraints on the positioning of readout electronics.
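The bookkeeping reduces to simple integer arithmetic. Here is a small sketch of the mapping as I understand it (the function name is mine, and the column/row conventions are just the ones described above):

```python
def megacam_detector_position(ext):
    """Column, row, and flip state of a Megacam detector from its extension number.

    Extensions 1-36 run up four columns of nine detectors each, starting at the
    southwest corner: extension 1 is the southwest detector, 9 is at the north
    end of the first column, 10 starts the second column, and so on.
    """
    col = (ext - 1) // 9          # 0 = westernmost column
    row = (ext - 1) % 9           # 0 = southernmost detector in its column
    flipped = col >= 2            # third and fourth columns read out upside-down
    return col, row, flipped
```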
Monday, January 20, 2014
Interesting papers on astro-ph
I frequently lead the 'astronomy pizza' discussions of interesting recent papers on astro-ph that we hold at Stony Brook every Monday, and I want to start posting brief summaries of some of the papers we discussed. Here are today's:
ArXiv:1401.2966 Discovery of a young asteroid cluster associated with P/2012 F5 (Gibbs) http://arxiv.org/abs/1401.2966

This is a truly remarkable paper. P/2012 F5 (Gibbs) is an active asteroid. It may be a main-belt comet or have suffered a recent impact. It is of interest to look for collisional families associated with active asteroids for two reasons. First, if such an object is in fact a bona fide main-belt comet, one possible explanation for the presence of volatiles on its surface is a recent (few Myr ago) collisional fragmentation that exposed previously shielded volatiles in its interior. Second, if instead an asteroid is active due to a very recent collisional ejection of dust (months or less before the first observation of activity), there is still reason to think it might be a member of a young collisional family, because mutual collisions will (for a while) be much more common in such families than in the asteroid belt at large. These authors identify a collisional family of 9 asteroids associated with the active asteroid P/2012 F5 (Gibbs), which has an orbital semimajor axis of 3 AU and a diameter of about 2 km. They perform careful back-integrations of the asteroids' orbits and arrive at an age of about 1.5 Myr for the cluster -- very young in the context of the main asteroid belt. The largest asteroid in this cluster is considerably larger than the 2 km size of P/2012 F5 (Gibbs) itself, and the minimum diameter for the parent body that was shattered to make the collisional family is 10 km.
arXiv 1401.4000 A new cold sub-Saturnian candidate planet orbiting GJ 221 http://arxiv.org/abs/1401.4000

This detection was based on a re-analysis of radial velocity data previously indicating two shorter-period planets. The shorter-period planets are confirmed, and a new sub-Saturn mass planet with a period of about 500 days is tentatively detected. This result would be interesting if confirmed, but with 99% confidence intervals spanning almost a factor of three in mass and RV amplitude, it seems very tentative indeed at present.
arXiv 1401.4388 HESS J1640-465 - an exceptionally luminous TeV gamma-ray SNR http://arxiv.org/abs/1401.4388

HESS is an array of atmospheric Cherenkov telescopes in Namibia, for the detection of extremely high-energy gamma-rays. Of the 70 HESS sources found so far (most seemingly associated with star-forming regions), HESS J1640-465 -- given its estimated distance of 8-13 kpc -- may be the most luminous source of TeV gamma-rays in the Milky Way Galaxy. The source is intrinsically extended, and the spectrum from lower-energy experiments connects smoothly with that of HESS. Previously the X-ray through gamma-ray emission from this source was attributed to a pulsar-wind nebula (PWN), in which the high-energy photons would be produced by inverse Compton scattering from highly relativistic electrons in the pulsar wind. The current results suggest instead the acceleration of hadrons across a shock in the supernova remnant G338.3-0.0. The main evidence for this appears to be the absence of a break in the gamma-ray spectrum that should appear at the point where the inverse Compton/synchrotron loss time of the parent electron population is similar to the age of the source. Instead, “the featureless gamma-ray spectrum over almost six decades in energy is challenging for any leptonic model”. Also, the emission is coincident with the shell of G338.3-0.0, and is more extended than a pulsar-wind nebula would be expected to be: it is a much better match for a supernova remnant shell.
Other papers that I thought looked interesting, but didn't have time to read or discuss:
ArXiv 1401.2931 Distribution of Electric Currents in Solar Active Regions

arXiv 1401.2901 A high resolution spectroscopic atlas of M subdwarfs - Effective temperature and metallicity

arXiv 1401.3199 High frequency A-type pulsators discovered using SuperWASP

arXiv 1401.3692 The GTC exoplanet transit spectroscopy survey I: OSIRIS transmission spectroscopy of the short period planet WASP-43b