1,191 research outputs found

    Nonparametric Transient Classification using Adaptive Wavelets

    Full text link
    Classifying transients based on multi-band light curves is a challenging but crucial problem in the era of Gaia and LSST, since the sheer volume of transients will make spectroscopic classification unfeasible. Here we present a nonparametric classifier that uses the transient's light curve measurements to predict its class given training data. It implements two novel components: the first is the use of the BAGIDIS wavelet methodology - a characterization of functional data using hierarchical wavelet coefficients. The second novelty is the introduction of a ranked probability classifier on the wavelet coefficients that handles both the heteroscedasticity of the data and the potential non-representativity of the training set. The ranked classifier is simple and quick to implement, while a major advantage of the BAGIDIS wavelets is that they are translation invariant; hence they do not need the light curves to be aligned to extract features. Further, BAGIDIS is nonparametric, so it can be used for blind searches for new objects. We demonstrate the effectiveness of our ranked wavelet classifier against the well-tested Supernova Photometric Classification Challenge dataset, in which the challenge is to correctly classify light curves as Type Ia or non-Ia supernovae. We train our ranked probability classifier on the spectroscopically confirmed subsample (which is not representative) and show that it gives good results for all supernovae with observed light curve timespans greater than 100 days (roughly 55% of the dataset). For such data, we obtain a Ia efficiency of 80.5% and a purity of 82.4%, yielding a highly competitive score of 0.49 whilst implementing a truly "model-blind" approach to supernova classification. Consequently this approach may be particularly suitable for the classification of astronomical transients in the era of large synoptic sky surveys. Comment: 14 pages, 8 figures. Published in MNRAS
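
    The abstract describes the pipeline only at a high level; as a rough illustration of the idea (wavelet coefficients as translation-tolerant light-curve features, followed by a simple classifier over them), here is a minimal sketch. It is not the authors' BAGIDIS/ranked-probability method: it substitutes an ordinary discrete Haar transform from PyWavelets and a k-nearest-neighbour vote, assumes every light curve has been resampled onto a common time grid, and all function names and parameters are hypothetical.

```python
# Illustrative sketch only: a plain multi-level Haar transform plus a simple
# nearest-neighbour vote stand in for the BAGIDIS decomposition and the ranked
# probability classifier described in the abstract. Assumes all light curves
# share the same time grid; names and defaults are hypothetical.
import numpy as np
import pywt  # PyWavelets


def wavelet_features(flux, wavelet="haar", level=3):
    """Concatenate the multi-level wavelet coefficients of one light curve."""
    coeffs = pywt.wavedec(np.asarray(flux, dtype=float), wavelet, level=level)
    return np.concatenate(coeffs)


def classify(test_flux, train_fluxes, train_labels, k=5):
    """Majority vote among the k training curves with the closest features."""
    f = wavelet_features(test_flux)
    feats = np.vstack([wavelet_features(c) for c in train_fluxes])
    nearest = np.argsort(np.linalg.norm(feats - f, axis=1))[:k]
    labels, counts = np.unique(np.asarray(train_labels)[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```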

    Towards the Future of Supernova Cosmology

    Full text link
    For future surveys, spectroscopic follow-up for all supernovae will be extremely difficult. However, one can use light curve fitters to obtain the probability that an object is a Type Ia. One may consider applying a probability cut to the data, but we show that the resulting non-Ia contamination can lead to biases in the estimation of cosmological parameters. A different method, which allows the use of the full dataset and results in unbiased cosmological parameter estimation, is Bayesian Estimation Applied to Multiple Species (BEAMS). BEAMS is a Bayesian approach to the problem which includes the uncertainty in the types in the evaluation of the posterior. Here we outline the theory of BEAMS and demonstrate its effectiveness using both simulated datasets and SDSS-II data. We also show that it is possible to use BEAMS if the data are correlated, by introducing a numerical marginalisation over the types of the objects. This is largely a pedagogical introduction to BEAMS, with references to the main BEAMS papers. Comment: Replaced under married name Lochner (formerly Knights). 3 pages, 2 figures. To appear in the Proceedings of the 13th Marcel Grossmann Meeting (MG13), Stockholm, Sweden, 1-7 July 2012
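
    For readers new to BEAMS, the posterior described here can be written schematically as below, with P_i the light-curve-fitter probability that object i is a Type Ia, D_i its data, and theta the cosmological parameters; the notation is chosen for this note rather than taken from the paper.

```latex
% Schematic BEAMS posterior: each object's type probability P_i weights the
% Ia and non-Ia likelihoods, so uncertain types are marginalised rather than cut.
P(\theta \mid D) \;\propto\; \pi(\theta)\,
  \prod_{i=1}^{N} \Big[ P_i\, \mathcal{L}_{\mathrm{Ia}}(D_i \mid \theta)
  \;+\; (1 - P_i)\, \mathcal{L}_{\mathrm{non\text{-}Ia}}(D_i \mid \theta) \Big]
```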

    Extending BEAMS to incorporate correlated systematic uncertainties

    Get PDF
    New supernova surveys such as the Dark Energy Survey, Pan-STARRS and the LSST will produce an unprecedented number of photometric supernova candidates, most with no spectroscopic data. Avoiding biases in cosmological parameters due to the resulting inevitable contamination from non-Ia supernovae can be achieved with the BEAMS formalism, allowing for fully photometric supernova cosmology studies. Here we extend BEAMS to deal with the case in which the supernovae are correlated by systematic uncertainties. The analytical form of the full BEAMS posterior requires evaluating 2^N terms, where N is the number of supernova candidates. This 'exponential catastrophe' is computationally unfeasible even for N of order 100. We circumvent the exponential catastrophe by marginalising numerically instead of analytically over the possible supernova types: we augment the cosmological parameters with nuisance parameters describing the covariance matrix and the types of all the supernovae, \tau_i, that we include in our MCMC analysis. We show that this method deals well even with large, unknown systematic uncertainties without a major increase in computational time, whereas ignoring the correlations can lead to significant biases and incorrect credible contours. We then compare the numerical marginalisation technique with a perturbative expansion of the posterior, based on the insight that future surveys will have exquisite light curves and hence the probability that a given candidate is a Type Ia will be close to unity or zero for most objects. Although this perturbative approach changes computation of the posterior from a 2^N problem into an N^2 or N^3 one, we show that it leads to biases in general through a small number of misclassifications, implying that numerical marginalisation is superior. Comment: Resubmitted under married name Lochner (formerly Knights). Version 3: major changes, including a large-scale analysis with thousands of MCMC chains. Matches version published in JCAP. 23 pages, 8 figures
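
    As a complement to the description above, the sketch below shows the bare mechanics of sampling the type indicators tau_i as discrete nuisance parameters next to a correlated Gaussian likelihood, instead of summing the 2^N analytic terms. The likelihood model (distance-modulus residuals with a fixed covariance), the type prior and all names are simplifying assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of the numerical-marginalisation idea: the types tau_i are kept
# as discrete nuisance parameters and updated inside the MCMC. The Gaussian
# likelihood, fixed covariance and type prior are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal


def log_post(tau, mu_obs, mu_ia, mu_nonia, cov, p_ia):
    """Log-posterior of the type vector tau (1 = Ia) given per-object model means."""
    mean = np.where(tau == 1, mu_ia, mu_nonia)           # model mean per object
    log_like = multivariate_normal.logpdf(mu_obs, mean=mean, cov=cov)
    # Prior on the types from the light-curve-fitter probabilities p_ia.
    log_prior = np.sum(np.where(tau == 1, np.log(p_ia), np.log1p(-p_ia)))
    return log_like + log_prior


def sweep_types(tau, mu_obs, mu_ia, mu_nonia, cov, p_ia, rng=None):
    """One Metropolis sweep flipping each tau_i in turn (cosmology held fixed)."""
    rng = rng or np.random.default_rng()
    logp = log_post(tau, mu_obs, mu_ia, mu_nonia, cov, p_ia)
    for i in range(len(tau)):
        prop = tau.copy()
        prop[i] = 1 - prop[i]
        lp = log_post(prop, mu_obs, mu_ia, mu_nonia, cov, p_ia)
        if np.log(rng.uniform()) < lp - logp:
            tau, logp = prop, lp
    return tau, logp
```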

    Experimental Study of the Influence of Tool Geometry by Optimizing Helix Angle in the Peripheral Milling Operation using Taguchi-based Grey Relational Analysis

    Full text link
    Tool selection is a critical part of the manufacturing process. Tool geometry plays a vital role in the art of machining to produce a part that meets the quality requirements. The tool parameters that play major roles are the tool material, tool geometry, tool size and tool coating. Of these, selecting the right tool geometry plays a major role by reducing cutting forces, induced stresses, energy consumption and temperature, all of which leads to reduced distortion, whereas selecting the wrong tool geometry results in increased tool cost and lost production. However, these geometric features are often neglected during machining considerations and the procurement of tools. The objective of this study is therefore to analyze the contribution of tool geometry in the peripheral milling operation and to find the optimized helix angle that gives the minimum cutting force (useful in thin-wall machining), thereby ensuring perpendicularity and the best surface finish, and to reduce chatter vibration and deflection by optimizing machining parameters such as spindle speed, feed per tooth and side cut. The experiments were conducted on a CNC milling machine on aluminium alloy 2014 using solid carbide end mills of 10 mm diameter with various helix angles, keeping all other geometric features constant. The Taguchi method is used for the design of experiments. The optimum level of the parameters has been identified using Grey relational analysis (GRA), and the percentage contribution of each parameter is identified using ANOVA
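
    The abstract names the optimisation machinery (Taguchi design, Grey relational analysis, ANOVA) without spelling it out; the short sketch below shows how a grey relational grade is typically computed from a table of experimental responses. The data values, the smaller-the-better criteria and the distinguishing coefficient zeta = 0.5 are assumptions for illustration, not the paper's measurements.

```python
# Illustrative Grey relational analysis (GRA) on a toy response table:
# normalise each response, take deviations from the ideal sequence, form grey
# relational coefficients, and average them into one grade per experimental run.
import numpy as np


def grey_relational_grade(responses, smaller_is_better, zeta=0.5):
    """responses: (n_runs, n_responses) array; returns one grade per run."""
    x = np.asarray(responses, dtype=float)
    lo, hi = x.min(axis=0), x.max(axis=0)
    # Normalise each response column to [0, 1] according to its quality criterion.
    norm = np.where(smaller_is_better, (hi - x) / (hi - lo), (x - lo) / (hi - lo))
    delta = 1.0 - norm                      # deviation from the ideal sequence
    gamma = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return gamma.mean(axis=1)               # grey relational grade per run


# Toy example: cutting force [N] and surface roughness Ra [um], both smaller-is-better.
runs = [[210.0, 1.20], [185.0, 0.95], [240.0, 1.45]]
grades = grey_relational_grade(runs, smaller_is_better=[True, True])
print(grades.argmax())  # index of the experimental run with the best overall grade
```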

    Astrophysical S_{17}(0) factor from a measurement of d(7Be,8B)n reaction at E_{c.m.} = 4.5 MeV

    Full text link
    Angular distribution measurements of the 2H(7Be,7Be)2H and 2H(7Be,8B)n reactions at E_{c.m.} ~ 4.5 MeV were performed to extract the astrophysical S_{17}(0) factor using the asymptotic normalization coefficient (ANC) method. For this purpose a pure, low-emittance 7Be beam was separated from the primary 7Li beam by a recoil mass spectrometer operated in a novel mode. A beam stopper at 0° allowed the use of a higher 7Be beam intensity. Measurement of the elastic scattering in the entrance channel using kinematic coincidence facilitated the determination of the optical model parameters needed for the analysis of the transfer data. The present measurement significantly reduces errors in the extracted 7Be(p,γ) cross section using the ANC method. We get S_{17}(0) = 20.7 ± 2.4 eV b. Comment: 15 pages including 3 eps figures, one figure removed and discussions updated. Version to appear in Physical Review

    Static Quark Potential from the Polyakov Sum over Surfaces

    Full text link
    Using the Polyakov string ansatz for the rectangular Wilson loop we calculate the static potential in the semiclassical approximation. Our results lead to a well defined sum over surfaces in the range 1<d<25. Comment: 17 pages, (with a TeX error on the title page corrected - nothing else changed)

    Results from the Supernova Photometric Classification Challenge

    Get PDF
    We report results from the Supernova Photometric Classification Challenge (SNPCC), a publicly released mix of simulated supernovae (SNe), with types (Ia, Ibc, and II) selected in proportion to their expected rate. The simulation was realized in the griz filters of the Dark Energy Survey (DES) with realistic observing conditions (sky noise, point-spread function and atmospheric transparency) based on years of recorded conditions at the DES site. Simulations of non-Ia type SNe are based on spectroscopically confirmed light curves that include unpublished non-Ia samples donated from the Carnegie Supernova Project (CSP), the Supernova Legacy Survey (SNLS), and the Sloan Digital Sky Survey-II (SDSS-II). A spectroscopically confirmed subset was provided for training. We challenged scientists to run their classification algorithms and report a type and photo-z for each SN. Participants from 10 groups contributed 13 entries for the sample that included a host-galaxy photo-z for each SN, and 9 entries for the sample that had no redshift information. Several different classification strategies resulted in similar performance, and for all entries the performance was significantly better for the training subset than for the unconfirmed sample. For the spectroscopically unconfirmed subset, the entry with the highest average figure of merit for classifying SNe Ia has an efficiency of 0.96 and an SN Ia purity of 0.79. As a public resource for the future development of photometric SN classification and photo-z estimators, we have released updated simulations with improvements based on our experience from the SNPCC, added samples corresponding to the Large Synoptic Survey Telescope (LSST) and the SDSS, and provided the answer keys so that developers can evaluate their own analysis. Comment: accepted by PASP
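
    For context on how the entries were ranked, the figure of merit referred to above combines the Ia efficiency with a purity-like term that penalises false positives by a factor W (W = 3 was used in the challenge); schematically, in notation chosen for this note:

```latex
% Schematic Type Ia figure of merit: efficiency times a false-positive-penalised
% ("pseudo") purity, with penalty factor W on non-Ia contaminants.
\mathrm{FoM}_{\mathrm{Ia}}
  = \frac{N_{\mathrm{Ia}}^{\mathrm{true}}}{N_{\mathrm{Ia}}^{\mathrm{total}}}
    \times
    \frac{N_{\mathrm{Ia}}^{\mathrm{true}}}
         {N_{\mathrm{Ia}}^{\mathrm{true}} + W\,N_{\mathrm{Ia}}^{\mathrm{false}}}
```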

    Retrospective Study on Assessment of Factors Associated with Hysterectomized Patients

    Get PDF
    Objective: The study was carried out to determine the age pattern, indications, risk factors, co-morbid conditions, type of surgery, and the associated complications of hysterectomized patients. Methods: A cross-sectional, retrospective study was done over a period of 8 months. A total of 150 hysterectomies were documented. Results: All data collected were analyzed; 111 hysterectomies (74%) were done by the vaginal route and 39 (26%) were done abdominally. 50.60% of the hysterectomized patients were in the age group 40-50 years and 32% were in the 50-60 years group. Of the 111 vaginal hysterectomies (VH), 79 were laparoscopic-assisted VH. Fibroid (33.33%) and dysfunctional uterine bleeding (DUB) (17%) were the most common indications. Complications were injuries to the bladder in 5 patients, wound sepsis in 4, chronic cervicitis in 2, menopausal symptoms in 1, thrombocytosis in 1, bilateral mild hydronephrosis in 1, and a right ovarian cyst in 1 patient. The study also recorded the common co-morbid medical conditions in women undergoing hysterectomy: diabetes mellitus, 30.66%; hypertension, 23.33%; hypothyroidism, 19.33%; dyslipidemia, 12%. Conclusion: The present study shows that the most common method performed is VH. Fibroid and DUB are very common indications for undergoing hysterectomy, and most of the hysterectomized patients were in the age group of 40-50 years. Laparoscopic hysterectomy may be an alternative to abdominal hysterectomy for those patients in whom a VH is not indicated. All women should be carefully evaluated before surgery, and the route decided accordingly. Keywords: Hysterectomy, Fibroid, Dysfunctional uterine bleeding, Complications

    Work disability and state benefit claims in early rheumatoid arthritis: the ERAN cohort

    Get PDF
    Objective. RA is an important cause of work disability. This study aimed to identify predictive factors for work disability and state benefit claims in a cohort with early RA. Methods. The Early RA Network (ERAN) inception cohort recruited participants from 22 centres. At baseline, and during each annual visit, participants (n = 1235) reported employment status and benefit claims and how both were influenced by RA. Survival analysis was used to derive adjusted hazard ratios (aHRs) and 95% CIs for the associations between baseline factors and the time until loss of employment due to RA or a state benefit claim due to RA. Results. At baseline, 47% of participants were employed and 17% reported claiming benefits due to RA. During follow-up, loss of employment due to RA was reported by 10% (49/475) of the participants and 20% (179/905) began to claim benefits. Independent predictors of earlier work disability were bodily pain (aHR 2.45, 95% CI 1.47, 4.08, P = 0.001) and low vitality (aHR 1.84, 95% CI 1.18, 2.85, P = 0.007). Disability (aHR 1.28, 95% CI 1.02, 1.61, P = 0.033), DAS28 (aHR 1.48, 95% CI 1.05, 2.09, P = 0.026) and extra-articular disease (aHR 1.77, 95% CI 1.17, 2.70, P = 0.007) predicted earlier benefit claims. Conclusion. Work disability and benefit claims due to RA were predicted by different baseline factors. Pain and low vitality predicted work disability. Baseline disability, extra-articular disease manifestations and disease activity predicted new benefit claims due to RA. Future research on interventions targeting these factors could investigate job retention and financial independence
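
    To make the statistical approach concrete, the sketch below fits a Cox proportional-hazards model of time to RA-related job loss, the kind of survival analysis from which adjusted hazard ratios are read off. The lifelines library, the toy data and the column names are assumptions for illustration; the ERAN analysis adjusted for more covariates than shown here.

```python
# Minimal Cox proportional-hazards sketch: time-to-event data with baseline
# covariates, where exp(coef) gives the adjusted hazard ratio per covariate.
# Toy data and column names are hypothetical, not the ERAN cohort variables.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "years_to_job_loss": [1.2, 3.5, 0.8, 4.0, 2.1, 2.8, 1.9, 3.1],  # follow-up time
    "job_loss":          [1,   0,   1,   1,   0,   0,   1,   0],    # 1 = event observed
    "bodily_pain":       [6.5, 3.0, 7.2, 4.1, 2.5, 5.0, 5.8, 3.9],  # baseline covariates
    "vitality":          [2.1, 5.5, 1.8, 4.4, 6.0, 3.0, 2.6, 4.8],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_job_loss", event_col="job_loss")
cph.print_summary()  # the exp(coef) column is the adjusted hazard ratio (aHR)
```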

    Recognition of Interaction Interface Residues in Low-Resolution Structures of Protein Assemblies Solely from the Positions of Cα Atoms

    Get PDF
    Background: The number of available structures of large multi-protein assemblies is quite small. Such structures provide phenomenal insights into the organization, mechanism of formation and functional properties of the assembly, hence detailed analysis of such structures is highly rewarding. However, the common problem in such analyses is the low resolution of these structures. In recent times, a number of attempts to combine low-resolution cryo-EM data with higher-resolution structures determined using X-ray analysis or NMR, or generated using comparative modeling, have been reported. Even in such attempts, the best result one arrives at is a very coarse idea of the assembly structure in terms of a trace of the Cα atoms, modeled with modest accuracy. Methodology/Principal Findings: In this paper we first present an objective approach to identify potentially solvent-exposed and buried residues solely from the positions of Cα atoms and the amino acid sequence, using residue-type-dependent thresholds for the accessible surface areas of Cα atoms. We extend the method further to recognize potential protein-protein interface residues. Conclusion/Significance: Our approach to identify buried and exposed residues solely from the positions of Cα atoms resulted in an accuracy of 84%, sensitivity of 83-89% and specificity of 67-94%, while recognition of interfacial residues corresponded to an accuracy of 94%, sensitivity of 70-96% and specificity of 58-94%. Interestingly, detailed analysis of cases of mismatch between recognition of interface residues from Cα positions and from all-atom models suggested that recognition of interfacial residues using Cα atoms only corresponds better with the intuitive notion of what an interfacial residue is. Our method should be useful in the objective analysis of structures of protein assemblies when only Cα positions are available, as, for example, in the integration of cryo-EM data with high-resolution structures of the components of the assembly
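
    To make this kind of criterion concrete, the sketch below flags buried residues and candidate interface residues from Cα coordinates alone. It is only a stand-in: the paper uses residue-type-dependent thresholds on Cα accessible surface area, whereas here a simple Cα neighbour count and an inter-chain distance cutoff (with made-up threshold values) illustrate the idea.

```python
# Illustrative proxy for Calpha-only burial and interface detection: a residue is
# called "buried" if many other Calpha atoms lie within a cutoff sphere, and an
# "interface candidate" if its Calpha is close to the partner chain. The cutoff
# and threshold values are assumptions, not the paper's calibrated thresholds.
import numpy as np
from scipy.spatial import cKDTree


def burial_from_calpha(coords, radius=10.0, buried_if_more_than=18):
    """coords: (N, 3) array of Calpha positions; returns a boolean 'buried' mask."""
    coords = np.asarray(coords, dtype=float)
    tree = cKDTree(coords)
    # Number of other Calpha atoms within `radius` angstroms of each residue.
    neighbour_counts = np.array([len(tree.query_ball_point(c, radius)) - 1
                                 for c in coords])
    return neighbour_counts > buried_if_more_than


def interface_candidates(coords_a, coords_b, contact_cutoff=8.0):
    """Residues of chain A whose Calpha lies within `contact_cutoff` of chain B."""
    tree_b = cKDTree(np.asarray(coords_b, dtype=float))
    dists, _ = tree_b.query(np.asarray(coords_a, dtype=float))
    return dists < contact_cutoff
```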