Tuesday, July 15, 2014

Non-variable brown dwarfs, etc.

I have spent most of the day making figures presenting time series photometry of brown dwarfs that are not variable.  The photometry was obtained with the Spitzer Space Telescope in the course of the Weather on Other Worlds program.  The figures are important for the paper we are about to publish presenting the main results of this survey.  However, constructing them was boring, repetitive work and didn't give me much to write about here.  One interesting thing that did come out of the figures was a graphic illustration of just how much better Spitzer/IRAC photometry of late T dwarfs is in the 4.5 micron channel than in the 3.6 micron channel.  For most objects, 4.5 micron photometry was comparable to, but not quite as precise as, that at 3.6 microns.  For the late T dwarfs, 4.5 microns was far better.  This, of course, is because the late T dwarfs are much brighter at 4.5 microns relative to 3.6 microns, and so there is far less noise in the data at the longer wavelength.

Since there is little to write about my own scientific activities for the day, I will mention a few interesting papers I have read recently. 

Many of us, of course, are thinking about new science that will be enabled by results from the GAIA mission.  One somewhat neglected area is occultation science.  GAIA will produce an exquisite astrometric catalog that should make catalog-related systematics in measured positions of asteroids and Kuiper Belt Objects (KBOs) a thing of the past.  Thus, provided ground-based observers keep observing these objects, we should soon be able to calculate considerably improved orbits for them.  At the same time, GAIA will tell us where the stars are with greater accuracy -- thus improving our ability to predict occultations in two very significant ways.  I found a paper that presents an interesting study of some of these issues: Tanga and Delbo (2007).

Two further points on this topic: by removing systematic catalog biases, GAIA will increase the potential benefit of highly intensive observations of known objects.  The accumulation of hundreds to thousands of well-timed observations, even with relatively small telescopes, could enable us to enter a new era in terms of the accuracy with which the orbits of asteroids and Kuiper Belt Objects are known.  Lastly, occultations by small known Kuiper Belt Objects are extremely interesting because the albedo of such objects is highly uncertain, and occultation timing could allow us to measure it.  This feeds directly into our estimates of the total mass of the Kuiper Belt.

On a completely different topic: I did some reading recently about dynamical measurements of the most massive stars known.  I was curious because for some time I have felt an inconsistency between statements of the theoretical (and even observational) upper limits to the stellar mass function on the one hand, and measured masses of O stars on the other.  The former set the limits in the range of 100-200 solar masses.  The latter (actual measurements of O stars) I understood to be usually coming out at less than 50 solar masses.  I read two papers that measured the masses of some of the most massive stellar binaries known: Schnurr et al. (2008) and Niemela et al. (2008).  The former, in particular, resolved my confusion.  The most massive main-sequence stars are not O stars; they are actually emission-line stars with Wolf-Rayet classifications.  Even though most Wolf-Rayet stars are evolved objects, a small minority are instead the most massive hydrogen-fusing stars known.  Their emission lines and resulting Wolf-Rayet classification are due to their intense stellar winds.  This was a very interesting and satisfying realization for me, and I think it reflects a major discovery in astronomy that I was unaware of at the time.

Thursday, July 10, 2014

An Image of Messier 7

I was observing at Cerro Tololo Inter-American Observatory across my wife's birthday this spring.  My main science target was a specific field of main-belt asteroids, which could be observed all night except for about an hour after sunset and an hour and a half before dawn.  I tried to think of observations for that last hour and a half that would be of potential scientific value, and also produce a beautiful picture to give to my wife.  I decided to target the open cluster Messier 7 and the rich starfields of the Milky Way behind it.  Over the weekend of July 4, I finally got around to processing the data.  It turned out to be quite a challenge because of the huge DECam images, the rich starfield including many severely saturated bright stars, and my desire to create images that were cosmetically as well as scientifically excellent.

I started with standard processing: dark subtraction and flatfielding of the data (itself non-trivial with DECam).  The next step was sky subtraction.  I have a good way of doing this for DECam images of ordinary star fields.  I fit the sky background of each image with a 2-dimensional polynomial, typically of order two or three.  I do the fitting iteratively, using customizable creeping-mean rejection of outlying points above and below the fit.  In dense starfields, the creeping mean can be set purposely asymmetrical: e.g., to reject the brightest 20% of pixels and the faintest 10%.  A setting can usually be found that effectively rejects all the stars and accurately fits the sky.  With DECam, the upper and lower halves of each image have to be fit independently, because there is a bias offset between them that does not appear to be consistent from one image to the next.
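The iterative fit-and-reject loop can be sketched as follows.  This is a minimal illustration, with simple percentile-based rejection standing in for my full creeping-mean scheme; the function name and parameters are illustrative, not my actual code.

```python
import numpy as np

def fit_sky_polynomial(image, order=2, n_iter=5, rej_high=0.20, rej_low=0.10):
    """Fit a 2-D polynomial sky model with asymmetric outlier rejection.

    Sketch of the iterative approach described above: at each iteration,
    reject the brightest rej_high and faintest rej_low fractions of the
    residuals, so stars (positive outliers) are culled aggressively.
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Design matrix of polynomial terms x^i * y^j with i + j <= order.
    terms = [(xx.ravel() ** i) * (yy.ravel() ** j)
             for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(terms).astype(float)
    z = image.ravel().astype(float)
    mask = np.ones(z.size, dtype=bool)
    for _ in range(n_iter):
        coeffs, *_ = np.linalg.lstsq(A[mask], z[mask], rcond=None)
        resid = z - A @ coeffs
        # Asymmetric rejection: e.g. drop the brightest 20% and faintest 10%.
        lo, hi = np.percentile(resid[mask],
                               [100.0 * rej_low, 100.0 * (1.0 - rej_high)])
        mask = (resid > lo) & (resid < hi)
    return (A @ coeffs).reshape(ny, nx)
```

In a real DECam reduction the upper and lower halves of each detector would be fit independently, as noted above.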

But the starfields of the Milky Way behind Messier 7 are beyond dense.  There is no sky between the stars to measure.  My polynomial fits were hopeless.

I had anticipated this problem, however.  I made sure that I took several images of a much less dense field of photometric standard stars near the time I observed Messier 7.  These were 30 second exposures vs. the successive 1, 10, and 100 second exposures I used for Messier 7, but still I believed they would give the sky background I needed.  I extracted the background as follows.

First, I applied my regular program, which subtracts iterative polynomial fits to the sky background, to the standard star images.  Then I subtracted the new, background-cleaned version of each standard star image from the original version.  The result was an image consisting of only the smooth, polynomial model obtained for the sky background.  I obtained 5 to 6 of these images in each of the g, r, and i filters.  Then I normalized them all to have a mean of 1.0 on the DECam detector that I have adopted as my reference (detector N04), and median-stacked these normalized sky-model frames to produce a master sky-model frame for each filter.
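The construction of a master sky-model frame can be sketched like this.  It is a simplified version assuming each frame is a single 2-D array covering the same footprint; a generic reference region stands in for the reference detector N04.

```python
import numpy as np

def master_sky_model(originals, cleaned, ref_region):
    """Build a master sky-model frame from matched original and
    background-cleaned images, as described above.

    Each sky model is the difference (original - cleaned); each is
    normalized to unit mean in ref_region before median-stacking.
    """
    models = [o - c for o, c in zip(originals, cleaned)]
    normed = [m / np.mean(m[ref_region]) for m in models]
    return np.median(normed, axis=0)
```

The median stack suppresses any residual stellar contamination that survived the polynomial fitting in individual frames.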

I then identified tiny areas of very dark dust cloud in the Messier 7 images, which I believed preserved something close to the true sky background.  I measured the average pixel brightness in these regions and in the corresponding regions on the master sky-model frames.  The backgrounds on the 100-second images of Messier 7 were typically 250 ADU in the g filter, 700 in the r filter, and 1300 in the i filter -- thus, I could approximate the true sky background by multiplying the normalized master sky-model frames by approximately 250, 700, and 1300 respectively.  However, I did not trust my dark regions to be absolutely dark, and I did not want to produce negative pixel values anywhere.  Thus, I decided to scale the sky-model images to match 90%, rather than 100%, of the measured brightness in the dark-clouded regions of the science images.  This is equivalent to assuming that even in the dark, dusty regions, 10% of the light comes from unresolved stars or nebulosity and only 90% is true telluric sky background.  After scaling the sky model images, I of course subtracted them from the corresponding images of Messier 7.  I did not try to correct the 1 second exposures this way, only the 10 and 100 second exposures.  The sky brightness values measured for the 10 vs. 100-second exposures did differ from one another by about a factor of 10. While not in itself a proof that my method was working, this did constitute an encouraging sanity check.
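The scaling-and-subtraction step amounts to the following minimal sketch.  The dark dust-cloud region is represented by a numpy index, and the 90% assumption is the dark_fraction parameter; both are illustrative placeholders.

```python
import numpy as np

def subtract_scaled_sky(science, master_sky, dark_region, dark_fraction=0.9):
    """Scale the unit-normalized master sky model so it matches a chosen
    fraction (here 90%) of the science frame's brightness in the dark
    dust-cloud region, then subtract it, as described above.
    """
    target = dark_fraction * np.mean(science[dark_region])
    scale = target / np.mean(master_sky[dark_region])
    return science - scale * master_sky
```

Using a fraction below 1.0 guarantees the subtraction cannot drive the dark regions negative, at the cost of leaving a small residual background.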

I encountered a problem with the bias offsets between the upper and lower halves of each image.  These were not consistent between the standard star images used to calculate the sky background and the Messier 7 images.  Thus, the sky-subtracted Messier 7 images had discontinuities in mean brightness between the upper and lower halves of individual DECam detector images.  Reasoning that it was impossible to accurately measure or correct the bias offsets for the science frames, I simply wrote a program to remove the mean offsets between the upper and lower image halves in the master sky frames.  Thus, any bias offset remaining in the Messier 7 images would at least be intrinsic to those images and not exacerbated by the sky subtraction.  In fact, any such biases turned out to be negligible at least for the 100 second exposures.

I had 5 to 6 images of Messier 7 at each exposure in each filter.  I obtained an independent astrometric solution for each detector using stars from the 2MASS catalog (of which there were on average about 2600 yielding accurate measurements on each detector).  I then resampled the individual detector images onto a single astrometric grid spanning 32000x28000 pixels.  The individual resampled images were median-stacked, with the gaps between the images masked.  I had used a well-optimized dithering strategy, so no internal gaps remained near Messier 7 after the 5-6 images in each set had been stacked.

Bright, severely saturated stars in DECam images produce long horizontal streaks due to charge-bleeding.  Where one of these streaks extends to the edge of a detector, it affects the region below it by depressing the pixel values by a large and spatially varying additive constant.  I removed this effect by masking all negative pixels prior to the final stacks.  Even though this widened the effective gaps between detectors that had bright stars on them, it still did not create internal gaps in the final stacked images.

The starfields in the 100 second stacked images were impressive, but the long charge-bleed streaks from the bright stars were unsightly.  I attempted to remove them by replacing all saturated pixels in the 100 second stacked images with 10.0 times the corresponding pixel values in the 10-second exposure stacks.  This helped, but the resampling had caused the boundaries of saturated streaks not necessarily to have pixel values above the saturation threshold.  Thus, uncorrected outlines remained wherever there had been saturated bleed-streaks.  Additionally, in many cases the pixels immediately adjacent to the bleed streaks were seriously affected even though no charge had bled into them directly.  I dealt with these problems through a two-step approach.  First, I wrote a program to change the brightness of pixels adjacent to saturated pixels by setting them, too, above the saturation threshold, so they would be sure to be identified as deviant and then replaced.  I applied this program to the images prior to the astrometric resampling.  Second, I changed my astrometric resampling program so that if any of the four pixels adjacent to a given location was above the saturation threshold, it would assign the pixel value at that location in the resampled image using nearest-neighbor resampling rather than bilinear interpolation.  This preserved the hard edges of the bleed-streaks, preventing the streak-boundary issues previously caused by the resampling.  The median-stacking could in some cases yield values of half the true saturation threshold for pixels at the edge of the bleed streaks.  This last problem was easily rectified by setting the nominal saturation threshold for pixel replacement to just under half the actual saturation threshold.
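The hybrid resampling logic, for a single output pixel, can be sketched as follows.  This is illustrative only; my actual resampling code operates on whole images and handles edge cases, and the function name is hypothetical.

```python
import numpy as np

def resample_pixel(image, x, y, sat_thresh):
    """Sample `image` at fractional position (x, y): bilinear interpolation
    normally, but nearest-neighbor if any of the four surrounding pixels
    is saturated, so bleed-streak edges stay hard.

    Assumes (x, y) lies at least one pixel inside the array boundary.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    corners = image[[y0, y0, y1, y1], [x0, x1, x0, x1]]
    if np.any(corners >= sat_thresh):
        # Nearest neighbor: the saturated value is passed through unblended.
        return image[int(round(y)), int(round(x))]
    fx, fy = x - x0, y - y0
    return (corners[0] * (1 - fx) * (1 - fy) + corners[1] * fx * (1 - fy)
            + corners[2] * (1 - fx) * fy + corners[3] * fx * fy)
```

Because bilinear interpolation blends a saturated pixel with its neighbors, it would otherwise produce edge pixels below the saturation threshold that escape replacement; the nearest-neighbor fallback avoids exactly that.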

This finally produced acceptable results.  The saturated bleed-streaks from the 100-second images were effectively removed without noticeable residuals.  Much shorter bleed streaks remained around the very brightest stars, because these had been saturated even on the 10-second exposures.  I tried to correct them using data from the 1-second exposures, but those short, bias-dominated images were too noisy and their background control was insufficient.  This is not necessarily hopeless -- for example, these exposures might be so short that polynomial-fitting sky subtraction could work on them directly despite the richness of the starfield.  However, I decided my results were sufficiently good already -- in fact, the 10-second bleed streaks seem aesthetically desirable in some cases -- and I should not spend any more time on this problem.  I stacked the images in GIMP, using the color assignment i=R, r=G, and g=B.  The GIMP processing took hours longer than I expected due to GIMP's inefficiency on very large images.  However, at last an excellent-looking RGB image was produced.  I saved it as a JPEG and cropped away the ragged edges to yield a rectangular image.

Besides the final image's potential public appeal, the FITS files that went into it can be used to extract accurate photometry and astrometry for millions of stars.  Scientific potential for such an extraction includes the identification of faint, high proper-motion objects, including previously unknown, low-mass or ejected members of the open cluster Messier 7 itself.  I have no time to obtain photometry/astrometry on these images now, however, nor to process the u-band data that I also took at the same time (which may in fact have the greatest scientific value).  At present, however, I already have a beautiful picture of Messier 7, and one with future scientific potential.  Here is a woefully down-sampled version of it:

Thursday, July 3, 2014

Finishing up a photometric survey

Today I have finished reducing all of my photometric monitoring data for a survey of 12 T dwarfs in the red optical.  For every object, I have several hours of photometry on at least two different nights.  This is the first survey of its kind, because prior to this, T dwarfs had mostly been observed in the near infrared.  They are very faint in the red optical, but can have higher variability amplitudes.  At the beginning of this survey, I expected that none of the T dwarfs would be variable with amplitude above 10%.  This expectation turns out to be wrong.  One of them has a variability amplitude of nearly 20%.

I have been working on this project for several weeks, carefully optimizing the photometric aperture and sky subtraction method for each T dwarf, using the RMS scatter of photometry of field stars of similar brightness.  Optimal apertures tend to be smaller for fainter objects and for nights of better seeing, while brighter objects or nights of bad seeing require larger photometric apertures.  The optimal type of sky subtraction for the images as a whole can vary from no sky subtraction, to sky subtraction based on a single median-stack of normalized images from the entire night, to sky subtraction based on multiple sky frames such as I described in my previous post.  It has been a surprise to me that the multi-frame sky subtraction did not always produce the best photometry.  There may be a number of reasons for this.  The shot noise is higher in sky frames created from an average of fewer images, and this puts the multi-frame approach at a disadvantage.  The masking of stars that I employ in this approach can (if the data are sparse or affected by clouds) introduce image artifacts that affect the photometry.  However, I must say that although I think I understand the big picture of why the optimal aperture changes from night to night and from object to object, I do not feel that I have a similar understanding of why the optimal type of image-based sky subtraction varies.
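The aperture optimization amounts to something like the following sketch, with a hypothetical data layout: a dictionary mapping aperture radius to an array of field-star light curves.

```python
import numpy as np

def best_aperture(fluxes_by_aperture):
    """Pick the aperture radius that minimizes the mean fractional RMS
    scatter of field-star light curves, as described above.

    fluxes_by_aperture maps radius -> array of shape (n_stars, n_epochs).
    """
    def mean_frac_rms(fluxes):
        # Normalize each star's light curve to its own mean flux,
        # then average the RMS scatter over all the field stars.
        norm = fluxes / np.mean(fluxes, axis=1, keepdims=True)
        return np.mean(np.std(norm, axis=1))
    return min(fluxes_by_aperture,
               key=lambda r: mean_frac_rms(fluxes_by_aperture[r]))
```

Using field stars of similar brightness to the target, rather than the target itself, avoids tuning the aperture to the target's intrinsic variability.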

Whatever type of image-based sky subtraction I have used, I always additionally subtract the mean in an annulus around each object for which I obtain photometry (as is, of course, standard practice for aperture photometry in astronomy).

One of the last problems I had to deal with was the satellite streak in this image:


It wiped out two of the 54 stars I was using as photometric references on this frame.  My relative photometry algorithm is designed to deal with outliers, but this streak is so intense that it not only made the two stars that are behind it positive outliers -- it made all the other stars in the image negative outliers, since the outlier identification uses the ratio of each star to the summed fluxes of all the others.  So the satellite really wreaked havoc on this frame even though only two stars were directly affected.  I could have thrown out the frame, but that was a more serious loss of data than rejecting the two stars.  However, since I use a consistent set of reference stars across all nights, the rejection of two stars meant the re-doing of photometry on other nights where the same object was observed.  This is all done and finalized now, however.
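The ratio-based outlier identification can be sketched as follows.  The function, the MAD-based threshold, and the data layout are all hypothetical illustrations; my actual algorithm differs in details.

```python
import numpy as np

def flag_outlier_refs(fluxes, n_sigma=5.0):
    """Flag reference-star measurements whose flux, relative to the summed
    flux of all the *other* reference stars, is discrepant on a given frame
    compared to the ensemble of frames.

    fluxes has shape (n_frames, n_stars); returns a boolean array of the
    same shape marking the flagged measurements.
    """
    total = fluxes.sum(axis=1, keepdims=True)
    ratio = fluxes / (total - fluxes)  # each star vs. the sum of the others
    med = np.median(ratio, axis=0)
    mad = np.median(np.abs(ratio - med), axis=0)
    sigma = 1.4826 * mad  # MAD -> Gaussian-equivalent sigma
    return np.abs(ratio - med) > n_sigma * sigma
```

Note that on a streaked frame this scheme flags not just the contaminated stars but every star, since the inflated summed flux drives all the other ratios low -- exactly the havoc described above.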

What I am planning to do next is obtain absolute photometry for each object, just to have a reference-point for comparing fluxes.  I am also planning to stack photometric monitoring data from different T dwarfs on different nights as a function of airmass, to look for a systematic dependence of the relative brightness on airmass.  There should be a weak dependence due to the increased effect of telluric water vapor absorption on T dwarf photometry relative to stars (because the T dwarf flux is heavily weighted toward the red edge of the f814w filter I used for the observations, and that includes the telluric water feature near 950 nm).  The airmass effect is not visible in the photometry of individual T dwarfs, but if I can measure it in the aggregate over all the nights, it will give me a measure of the mean precipitable water vapor in the atmosphere during these observations.