
    Neural network-based classification of X-ray fluorescence spectra of artists' pigments: an approach leveraging a synthetic dataset created using the fundamental parameters method

    X-ray fluorescence (XRF) spectroscopy is an analytical technique used to identify chemical elements that has found widespread use in the cultural heritage sector to characterise artists' materials, including the pigments in paintings. It generates a spectrum with characteristic emission lines relating to the elements present, which is interpreted by an expert to understand the materials therein. Convolutional neural networks (CNNs) are an effective method for automating such classification tasks—an increasingly important feature as XRF datasets continue to grow in size—but they require large libraries that capture the natural variation of each class for training. As an alternative to acquiring such a large library of XRF spectra of artists' materials, a physical model, the Fundamental Parameters (FP) method, was used to generate a synthetic dataset of XRF spectra representative of pigments typically encountered in Renaissance paintings that could then be used to train a neural network. The synthetic spectra generated—modelled as single layers of individual pigments—had characteristic element lines closely matching those found in real XRF spectra. However, as the method did not incorporate effects from the X-ray source, the synthetic spectra lacked the continuum and Rayleigh and Compton scatter peaks. Nevertheless, the network trained on the synthetic dataset achieved 100% accuracy when tested on synthetic XRF data. Whilst this initial network only attained 55% accuracy when tested on real XRF spectra obtained from reference samples, applying transfer learning using a small quantity of such real XRF spectra increased the accuracy to 96%.
Due to these promising results, the network was also tested on select data acquired during macro XRF (MA-XRF) scanning of a painting to challenge the model with noisier spectra. Although only tested on spectra from relatively simple paint passages, the results obtained suggest that the FP method can be used to create accurate synthetic XRF spectra of individual artists' pigments, free from X-ray tube effects, on which a classification model can be trained for application to real XRF data, and that the method has potential to be extended to deal with more complex paint mixtures and stratigraphies.
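The single-layer synthetic spectra described above can be illustrated with a toy sketch: characteristic emission lines modelled as Gaussian peaks on an energy axis, with no continuum or scatter contributions. This is not the authors' FP implementation; the pigment choice and relative intensities are hypothetical, and the line energies are approximate tabulated values.

```python
import math

def synthetic_xrf_spectrum(lines, n_channels=2048, ev_per_channel=10.0, fwhm_ev=160.0):
    """Toy synthetic XRF spectrum: Gaussian peaks at characteristic line
    energies, with no continuum or scatter peaks (mirroring the FP-modelled
    spectra described above).  `lines` maps line energy (eV) -> relative
    intensity; `fwhm_ev` approximates detector resolution."""
    sigma = fwhm_ev / (2 * math.sqrt(2 * math.log(2)))  # convert FWHM to Gaussian sigma
    spectrum = [0.0] * n_channels
    for ch in range(n_channels):
        energy = ch * ev_per_channel
        for line_ev, intensity in lines.items():
            spectrum[ch] += intensity * math.exp(-0.5 * ((energy - line_ev) / sigma) ** 2)
    return spectrum

# Hypothetical example: vermilion (HgS) -- Hg L-alpha ~9989 eV, S K-alpha ~2308 eV.
spec = synthetic_xrf_spectrum({9989.0: 1.0, 2308.0: 0.4})
peak_channel = max(range(len(spec)), key=spec.__getitem__)
```

A CNN classifier would then be trained on many such spectra, one class per pigment, before transfer learning on a small set of measured spectra.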

    Prognostic microRNA signatures derived from The Cancer Genome Atlas for head and neck squamous cell carcinomas

    Identification of novel prognostic biomarkers typically requires a large dataset which provides sufficient statistical power for discovery research. To this end, we took advantage of the high‐throughput data from The Cancer Genome Atlas (TCGA) to identify a set of prognostic biomarkers in head and neck squamous cell carcinomas (HNSCC), including oropharyngeal squamous cell carcinoma (OPSCC) and other subtypes. In this study, we analyzed miRNA‐seq data obtained from TCGA patients to identify prognostic biomarkers for OPSCC. The identified miRNAs were further tested in an independent cohort. miRNA‐seq data from TCGA were also analyzed to identify prognostic miRNAs in oral cavity squamous cell carcinoma (OSCC) and laryngeal squamous cell carcinoma (LSCC). Our study found that miR‐193b‐3p and miR‐455‐5p were positively associated with survival, and miR‐92a‐3p and miR‐497‐5p were negatively associated with survival, in OPSCC. A combined expression signature of these four miRNAs was prognostic of overall survival in OPSCC and, more importantly, was validated in an independent OPSCC cohort. Furthermore, we identified four miRNAs each in OSCC and LSCC that were prognostic of survival, and the combined signatures were specific to these subtypes of HNSCC. A robust 4‐miRNA prognostic signature in OPSCC, as well as prognostic signatures in other subtypes of HNSCC, was developed using sequencing data from TCGA as the primary source. This demonstrates the power of TCGA as a resource for developing prognostic tools to improve individualized patient care.
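A combined multi-miRNA signature of this kind is, at its core, a linear risk score over the component expression values. In the sketch below the coefficient signs follow the reported directions of association (protective miRNAs negative, adverse miRNAs positive), but the magnitudes and the median-split cutoff are illustrative stand-ins, not the study's fitted weights.

```python
# Hypothetical coefficients: sign encodes the reported direction of
# association; magnitudes are illustrative, not fitted model weights.
COEFS = {
    "miR-193b-3p": -1.0,   # higher expression -> better survival
    "miR-455-5p":  -1.0,
    "miR-92a-3p":  +1.0,   # higher expression -> worse survival
    "miR-497-5p":  +1.0,
}

def risk_score(expression):
    """Linear combined-signature score: higher = worse predicted outcome."""
    return sum(COEFS[m] * expression[m] for m in COEFS)

def stratify(patients, cutoff=0.0):
    """Split patients into high/low risk groups around a cutoff
    (the cohort's median score is a common choice)."""
    return {pid: ("high" if risk_score(expr) > cutoff else "low")
            for pid, expr in patients.items()}
```

The resulting high/low groups would then be compared with a log-rank test or Cox model to assess prognostic value.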

    Myeloablative vs Reduced-Intensity Conditioning Allogeneic Hematopoietic Cell Transplantation for Chronic Myeloid Leukemia

    Allogeneic hematopoietic cell transplantation (allo-HCT) is a potentially curative treatment for chronic myeloid leukemia (CML). The optimal conditioning intensity for allo-HCT for CML in the era of tyrosine kinase inhibitors (TKIs) is unknown. Using the Center for International Blood and Marrow Transplant Research database, we sought to determine whether reduced-intensity/nonmyeloablative conditioning (RIC) allo-HCT and myeloablative conditioning (MAC) result in similar outcomes in CML patients. We evaluated 1395 CML allo-HCT recipients between the ages of 18 and 60 years. The disease status at transplant was divided into the following categories: chronic phase 1, chronic phase 2 or greater, and accelerated phase. Patients in blast phase at transplant and alternative donor transplants were excluded. The primary outcome was overall survival (OS) after allo-HCT. MAC (n = 1204) and RIC allo-HCT recipients (n = 191) from 2007 to 2014 were included. Patient, disease, and transplantation characteristics were similar, with a few exceptions. Multivariable analysis showed no significant difference in OS between the MAC and RIC groups. In addition, leukemia-free survival and nonrelapse mortality did not differ significantly between the 2 groups. Compared with MAC, the RIC group had a higher risk of early relapse after allo-HCT (hazard ratio [HR], 1.85; P = .001). The cumulative incidence of chronic graft-versus-host disease (cGVHD) was lower with RIC than with MAC (HR, 0.77; P = .02). RIC provides similar survival and lower cGVHD compared with MAC and therefore may be a reasonable alternative to MAC for CML patients in the TKI era.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg² field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg² with ÎŽ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg² region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey.
    The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world. (Comment: 57 pages, 32 color figures; version with high-resolution figures available from https://www.lsst.org/overvie)
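The quoted single-visit (r ~ 24.5) and coadded (r ~ 27.5) depths are consistent with the idealised √N scaling of stacked equal exposures, which can be sketched as follows (an idealisation that ignores systematics and varying conditions):

```python
import math

def coadded_depth(single_visit_depth, n_visits):
    """Point-source depth of a coadd of n_visits equal exposures.
    Stacking N images improves S/N by sqrt(N), i.e. by
    2.5 * log10(sqrt(N)) = 1.25 * log10(N) magnitudes."""
    return single_visit_depth + 1.25 * math.log10(n_visits)
```

Under this idealisation, roughly 10^(3.0/1.25) ≈ 250 visits in a band would bridge the 3-magnitude gap between the single-visit and coadded depths; in practice the achieved gain is somewhat smaller.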

    Exome Sequencing in Suspected Monogenic Dyslipidemias

    BACKGROUND: Exome sequencing is a promising tool for gene mapping in Mendelian disorders. We utilized this technique in an attempt to identify novel genes underlying monogenic dyslipidemias. METHODS AND RESULTS: We performed exome sequencing on 213 selected family members from 41 kindreds with suspected Mendelian inheritance of extreme levels of low-density lipoprotein (LDL) cholesterol (after candidate gene sequencing excluded known genetic causes in the high-LDL-cholesterol families) or high-density lipoprotein (HDL) cholesterol. We used standard analytic approaches to identify candidate variants and also assigned a polygenic score to each individual to account for their burden of common genetic variants known to influence lipid levels. In nine families, we identified likely pathogenic variants in known lipid genes (ABCA1, APOB, APOE, LDLR, LIPA, and PCSK9); however, we were unable to identify obvious genetic etiologies in the remaining 32 families despite follow-up analyses. We identified three factors that limited novel gene discovery: (1) imperfect sequencing coverage across the exome hid potentially causal variants; (2) large numbers of shared rare alleles within families obfuscated causal variant identification; and (3) individuals from 15% of families carried a significant burden of common lipid-related alleles, suggesting that complex inheritance can masquerade as monogenic disease. CONCLUSIONS: We identified the genetic basis of disease in nine of 41 families; however, none of these represented novel gene discoveries. Our results highlight the promise and limitations of exome sequencing as a discovery technique in suspected monogenic dyslipidemias. Consideration of the confounders identified may inform the design of future exome sequencing studies.
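The polygenic-score adjustment described above is, at its core, a weighted sum of effect-allele dosages. A minimal sketch follows; the variant IDs and weights are hypothetical placeholders, not the study's actual score.

```python
def polygenic_score(dosages, weights):
    """Weighted sum of common-variant dosages (0/1/2 copies of the
    effect allele).  `weights` are per-variant effect sizes (e.g. GWAS
    betas for the lipid trait); missing genotypes count as dosage 0
    in this simplified sketch (imputation would be used in practice)."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

# Hypothetical variants and effect sizes, for illustration only.
WEIGHTS = {"rsA": 0.5, "rsB": -0.2}
score = polygenic_score({"rsA": 2, "rsB": 1}, WEIGHTS)
```

An individual whose extreme lipid level is explained by an unusually high (or low) score is more likely to have a polygenic than a monogenic cause, which is the masquerading effect noted in factor (3).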

    Irish Cardiac Society - Proceedings of the annual general meeting held 20th & 21st November 1992 in Dublin Castle


    Distribution and medical impact of loss-of-function variants in the Finnish founder population.

    Exome sequencing studies in complex diseases are challenged by allelic heterogeneity, the large number and modest effect sizes of associated variants, and the presence of large numbers of neutral variants, even in phenotypically relevant genes. Isolated populations with recent bottlenecks offer advantages for studying rare variants in complex diseases, as deleterious variants are present at higher frequencies while rare neutral variation is substantially reduced. To explore the potential of the Finnish founder population for studying low-frequency (0.5–5%) variants in complex diseases, we compared exome sequence data on 3,000 Finns to the same number of non-Finnish Europeans and discovered that, despite having fewer variable sites overall, the average Finn has more low-frequency loss-of-function variants and complete gene knockouts. We then used several well-characterized Finnish population cohorts to study the phenotypic effects of 83 enriched loss-of-function variants across 60 phenotypes in 36,262 Finns. Using a deep set of quantitative traits collected on these cohorts, we found five associations (P < 5×10⁻⁞), including splice variants in LPA that lowered plasma lipoprotein(a) levels (P = 1.5×10⁻ÂčÂč⁷). By accessing the national medical records of these participants, we evaluated the LPA finding via Mendelian randomization and confirmed that these splice variants confer protection from cardiovascular disease (OR = 0.84, P = 3×10⁻⁎), demonstrating for the first time that very low LPA levels in humans are cardioprotective, with potential therapeutic implications for cardiovascular diseases. More generally, this study demonstrates the substantial advantages of studying the role of rare variation in complex phenotypes in founder populations like the Finns, which combine a unique population genetic history with data from large population cohorts and centralized research access to National Health Registers.
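The reported protective effect (OR = 0.84) is the kind of estimate obtained from a 2×2 table of carrier status against disease status. A generic sketch with hypothetical counts, not the study's data:

```python
import math

def odds_ratio(cases_carrier, cases_noncarrier, controls_carrier, controls_noncarrier):
    """Odds ratio for disease given carrier status, with a Wald 95%
    confidence interval computed on the log scale."""
    or_ = (cases_carrier * controls_noncarrier) / (cases_noncarrier * controls_carrier)
    # Standard error of log(OR): sqrt of summed reciprocal cell counts.
    se = math.sqrt(sum(1.0 / n for n in (cases_carrier, cases_noncarrier,
                                         controls_carrier, controls_noncarrier)))
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```

An OR below 1 with a confidence interval excluding 1 indicates that carriers are protected; in a Mendelian randomization framing, because the splice variants are randomly inherited, such an association supports a causal protective effect of lifelong low LPA levels.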
    • 
