
    Preprocessing Solar Images while Preserving their Latent Structure

    Telescopes such as the Atmospheric Imaging Assembly aboard the Solar Dynamics Observatory, a NASA satellite, collect massive streams of high-resolution images of the Sun through multiple wavelength filters. Reconstructing pixel-by-pixel thermal properties based on these images can be framed as an ill-posed inverse problem with Poisson noise, but this reconstruction is computationally expensive and there is disagreement among researchers about what regularization or prior assumptions are most appropriate. This article presents an image segmentation framework for preprocessing such images in order to reduce the data volume while preserving as much thermal information as possible for later downstream analyses. The resulting segmented images reflect thermal properties but do not depend on solving the ill-posed inverse problem. This allows users to avoid the Poisson inverse problem altogether or to tackle it on each of ∼10 segments rather than on each of ∼10^7 pixels, reducing computing time by a factor of ∼10^6. We employ a parametric class of dissimilarities that can be expressed as cosine dissimilarity functions or Hellinger distances between nonlinearly transformed vectors of multi-passband observations in each pixel. We develop a decision-theoretic framework for choosing the dissimilarity that minimizes the expected loss that arises when estimating identifiable thermal properties based on segmented images rather than on a pixel-by-pixel basis. We also examine the efficacy of different dissimilarities for recovering clusters in the underlying thermal properties. The expected losses are computed under scientifically motivated prior distributions. Two simulation studies guide our choices of dissimilarity function. We illustrate our method by segmenting images of a coronal hole observed on 26 February 2015.
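    A minimal sketch of this family of dissimilarities, assuming an elementwise power transform as the nonlinear map (the paper's exact parametrisation, and all names and numbers below, are illustrative assumptions, not the authors' code):

        import numpy as np

        def cosine_dissimilarity(x, y):
            # 1 minus the cosine similarity of two non-negative vectors.
            return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

        def hellinger_distance(p, q):
            # Hellinger distance between two vectors normalised to sum to 1.
            p, q = p / p.sum(), q / q.sum()
            return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

        def dissimilarity(x, y, power=0.5, kind="cosine"):
            # Apply the (assumed) nonlinear transform, then compare; varying
            # `power` traces out a parametric class of dissimilarities.
            xt, yt = x ** power, y ** power
            if kind == "cosine":
                return cosine_dissimilarity(xt, yt)
            return hellinger_distance(xt, yt)

        # Toy example: two pixels observed through six wavelength filters.
        pix_a = np.array([12.0, 40.0, 95.0, 60.0, 22.0, 5.0])
        pix_b = np.array([10.0, 35.0, 90.0, 70.0, 30.0, 8.0])
        print(dissimilarity(pix_a, pix_b, kind="cosine"))
        print(dissimilarity(pix_a, pix_b, kind="hellinger"))

    In a segmentation run, such a dissimilarity would drive the grouping of the ∼10^7 pixels into the ∼10 thermally similar segments described above.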

    Influence of Coronal Abundance Variations

    The PI of this project was Jeff Scargle of NASA/Ames. Co-Is were Alanna Connors of Eureka Scientific/Wellesley, and myself. Part of the work was subcontracted to Eureka Scientific via SAO, with Vinay Kashyap as PI. This project was originally assigned grant number NCC2-1206, which was later changed to NCC2-1350 for administrative reasons. The goal of the project was to obtain, derive, and develop statistical and data analysis tools that would be of use in the analyses of the high-resolution, high-sensitivity data that are becoming available with new instruments. This is envisioned as a cross-disciplinary effort with a number of collaborators, including some at SAO (Aneta Siemiginowska, Peter Freeman) and at the Harvard Statistics department (David van Dyk, Rostislav Protassov, Xiao-Li Meng, Epaminondas Sourlas, et al.).

    We have developed a new tool to reliably measure the metallicities of thermal plasma. It is unfeasible to obtain high-resolution grating spectra for most stars, and one must make the best possible determination based on lower-resolution, CCD-type spectra. It has been noticed that most analyses of such spectra result in measured metallicities significantly lower than those from analyses of high-resolution grating data, where available (see, e.g., Brickhouse et al. 2000, ApJ, 530, 387). Such results have led to the proposal of the existence of so-called Metal Abundance Deficient, or "MAD", stars (e.g., Drake, J.J., 1996, Cool Stars 9, ASP Conf. Ser. 109, 203). We find, however, that many of these analyses may be systematically underestimating the metallicities; using a newly developed method to correctly treat the low-counts regime at the high-energy tail of the stellar spectra (van Dyk et al. 2001, ApJ, 548, 224), we have found that the metallicities of these stars are generally comparable to their photospheric values. The results were reported at the AAS (Sourlas, Yu, van Dyk, Kashyap, and Drake, 2000, AAS Meeting 196, BAAS, 32, #54.02) and at the conference on Statistical Challenges in Modern Astronomy (Sourlas, van Dyk, Kashyap, Drake, and Pease, 2003, SCMA III, Eds. E.D. Feigelson and G.J. Babu, New York: Springer, pp. 489-490).

    We also described the limitations of one of the most egregiously misused and misapplied statistical tests in the astrophysical literature, the F-test for verifying model components (Protassov, van Dyk, Connors, Kashyap, and Siemiginowska, 2002, ApJ, 571, 545). A search through the ApJ archives turned up 170 papers in the 5 previous years that used the F-test explicitly in some form or other, with the vast majority of them not using it correctly. Indeed, looking at just 4 issues of the ApJ in 2001, we found 13 instances of its use, of which nine were demonstrably incorrect. Clearly, it is difficult to overstate the importance of this issue.

    We also worked on speeding up the Bayes Blocks and Sparse Bayes Blocks algorithms to make them more tractable for large searches, and supported statistics students and postdocs in both explicit physics-model-based algorithms (spectra with tens of thousands of atomic lines) and "model-free" -- i.e., non-parametric or semi-parametric -- algorithms. Work on using more of the latter is just beginning, while the use of multi-scale methods for Poisson imaging has come to fruition: "An Image Restoration Technique with Error Estimates", by D. Esch, A. Connors, M. Karovska, and D. van Dyk, was published in ApJ (Esch et al. 2004, ApJ, 610, 1213). The code has been delivered to M. Karovska for the CXC, and is available for beta-testing upon request.

    The other large project we worked on was the self-consistent modeling of logN-logS curves in the Poisson limit. logN-logS curves are a fundamental tool in the study of source populations, luminosity functions, and cosmological parameters. However, their determination is hampered by statistical effects such as the Eddington bias, incompleteness due to detection efficiency, and faint-source flux fluctuations. We have developed a new and powerful method using the full Poisson machinery that allows us to model the logN-logS distribution of X-ray sources in a self-consistent manner. Because we properly account for all of the above statistical effects, our modeling is valid over the full range of the data, and not just for strong sources, as is normally done. Using a Bayesian approach, modeling the fluxes with known functional forms such as simple or broken power laws, and conditioning the expected photon counts on the fluxes, the background contamination, the effective area, detector vignetting, and the detection probability, we can delve deeply into the low-counts regime and extend the usefulness of medium-sensitivity surveys such as ChaMP by orders of magnitude. The built-in flexibility of the algorithm also allows a simultaneous analysis of multiple datasets. We have applied this analysis to a set of Chandra observations (Sourlas, Kashyap, Zezas, van Dyk, 2004, HEAD #8, #16.32).
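    The logN-logS construction lends itself to a compact illustration. The sketch below is a hedged toy version, not the project's code: a flux-marginalised Poisson log-likelihood for the counts of detected sources under a single power law, N(>S) = K (S/S0)^-alpha, with invented effective-area, exposure, background, and detection-probability models.

        import numpy as np
        from scipy import stats

        def expected_counts(flux, eff_area=300.0, exposure=1e4,
                            vignetting=0.9, background=2.0):
            # Expected photon counts for a source of given flux (toy units):
            # flux x effective area x exposure x vignetting, plus background.
            return flux * eff_area * exposure * vignetting + background

        def detect_prob(lam):
            # Toy detection probability rising smoothly with expected counts.
            return 1.0 - np.exp(-lam / 10.0)

        def loglike(observed_counts, alpha, K, S0=1e-6):
            # For each detected source, integrate the Poisson likelihood over
            # the power-law flux density dN/dS = K*alpha*(S/S0)^-(alpha+1)/S0,
            # weighted by the detection probability (the incompleteness term).
            # Unnormalised: a full treatment would also normalise over the
            # detectable population.
            flux = np.logspace(-8.0, -4.0, 400)
            dens = K * alpha * (flux / S0) ** (-(alpha + 1.0)) / S0
            lam = expected_counts(flux)
            weights = dens * detect_prob(lam)
            ll = 0.0
            for n in observed_counts:
                integrand = stats.poisson.pmf(n, lam) * weights
                # left Riemann sum over the log-spaced flux grid
                ll += np.log(np.sum(integrand[:-1] * np.diff(flux)))
            return ll

        counts = [3, 7, 15, 42]  # toy counts for four detected sources
        print(loglike(counts, alpha=1.5, K=100.0))

    Embedding a likelihood of this form in a Bayesian sampler, with priors on alpha and K, is what lets a full-Poisson treatment reach fluxes well below the nominal detection threshold.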

    Analyze and study the impacts of different packages on static and dynamic IR Drop analysis on different Infineon designs.

    Dynamic voltage drop depends on the switching activity of the logic, unlike static IR drop, and is hence a vector-dependent quantity. In this paper we highlight the methodology for extracting and modeling the package, along with chip-package static IR drop and dynamic IR drop analysis scenarios. A structured approach to analyzing the impact of package parasitics on the die is presented, with an emphasis on covering the different corners in which IR analysis is impacted and on how the approach can be implemented in the design cycle. Finally, the impact of the package on the chip is studied by considering histogram plots obtained from the dynamic IR numbers; using these numbers and plots, the impact of different packages on the chip is assessed. Results are acquired from industrial designs in a 65 nm process. DOI: 10.17762/ijritcc2321-8169.15055
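    To make the static/dynamic distinction concrete, here is a hedged toy example (not the paper's flow; all grid values are invented): static IR drop on a three-node power rail fed from a package bump, solved by nodal analysis G v = i.

        import numpy as np

        # Series resistances (ohms): bump->n1, n1->n2, n2->n3.
        R = np.array([0.02, 0.05, 0.05])
        # Average cell currents (A) drawn at nodes n1..n3. A static analysis
        # uses averages; a dynamic analysis would instead drive each node with
        # per-cycle current waveforms derived from the switching vectors.
        I_load = np.array([0.8, 1.2, 0.5])
        VDD = 1.0

        g = 1.0 / R
        # Conductance matrix for the chain, with the bump node held at VDD.
        G = np.array([[g[0] + g[1], -g[1],        0.0 ],
                      [-g[1],        g[1] + g[2], -g[2]],
                      [0.0,         -g[2],         g[2]]])
        rhs = -I_load.astype(float)
        rhs[0] += g[0] * VDD      # current injected from the VDD source

        v = np.linalg.solve(G, rhs)
        print("static IR drop per node (mV):", (VDD - v) * 1e3)

    Repeating the same solve per clock cycle with vector-derived currents, then collecting per-node worst cases, yields the kind of dynamic-drop histograms the paper compares across packages.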

    Detecting Unspecified Structure in Low-Count Images

    Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.
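    A hedged sketch of the calibration logic follows. The statistic here is a stand-in, and the Markov-inequality bound is one generic way to obtain an upper bound on a p-value from few null simulations; whether it matches the paper's exact construction is an assumption.

        import numpy as np

        rng = np.random.default_rng(0)

        def test_statistic(image):
            # Stand-in for the posterior tail probability under the full
            # model; a real analysis would run the Bayesian multiscale fit.
            return float(np.mean(image > image.mean() + 2.0 * image.std()))

        # Observed image (simulated here; replace with real data) and its T.
        null_rate = 3.0
        observed = rng.poisson(null_rate, size=(64, 64))
        t_obs = test_statistic(observed)

        # A modest number of null simulations: uniform background + Poisson noise.
        t_null = np.array([test_statistic(rng.poisson(null_rate, size=(64, 64)))
                           for _ in range(50)])

        # A direct Monte Carlo p-value needs many simulations to resolve a
        # small tail; Markov's inequality bounds it using only the null mean:
        #   P(T >= t_obs | null) <= E[T | null] / t_obs.
        p_mc = float(np.mean(t_null >= t_obs))
        p_upper = min(1.0, float(t_null.mean()) / t_obs)
        print(f"Monte Carlo p ~ {p_mc:.3f}; upper bound <= {p_upper:.3f}")

    The appeal of such a bound is that estimating a mean under the null is far cheaper than resolving a small tail probability, which is what permits strong conclusions under a limited simulation budget.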