273 research outputs found

    Data trend mining for predictive systems design

    Get PDF
    The goal of this research is to propose a data-mining-based design framework that can be used to solve complex systems design problems in a timely and efficient manner, with the main focus being product family design problems. Traditional data acquisition techniques employed in the product design community have relied primarily on customer survey data or focus group feedback as a means of integrating customer preference information into the product design process. This reliance on direct customer interaction can be costly and time consuming and may therefore limit the overall size and complexity of the customer preference data. Furthermore, since survey data typically represent stated customer preferences (customer responses to hypothetical product designs rather than actual purchasing decisions), design engineers may not know the true customer preferences for specific product attributes, a challenge that could ultimately result in misguided product designs. By analyzing large-scale time series consumer data, new products can be designed that anticipate emerging product preference trends in the market space. The proposed data trend mining algorithm will enable design engineers to determine how to characterize attributes based on their relevance to the overall product design. A cell phone case study is used to demonstrate product design problems involving new product concept generation, and an aerodynamic particle separator case study is presented for product design problems requiring attribute relevance characterization and product family clustering. Finally, it is shown that the proposed trend mining methodology can be expanded beyond product design problems to systems-of-systems design problems such as military systems simulations.
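    As a rough illustration of the kind of trend characterization described above (not the paper's algorithm), the sketch below fits a linear trend to each attribute's preference share over time and labels attributes as emerging, stable, or declining; the attribute names, data, and slope threshold are all hypothetical.

```python
import numpy as np

# Toy sketch of attribute trend characterization (hypothetical data):
# fit a least-squares slope to each attribute's preference share over
# time and label the attribute by the direction of its trend.
rng = np.random.default_rng(0)
months = np.arange(24)  # two years of monthly market observations
preferences = {
    "screen_size":  0.50 + 0.010 * months + rng.normal(0, 0.02, 24),
    "battery_life": 0.40 + 0.002 * months + rng.normal(0, 0.02, 24),
    "keypad":       0.60 - 0.012 * months + rng.normal(0, 0.02, 24),
}

for attribute, share in preferences.items():
    slope = np.polyfit(months, share, 1)[0]  # preference change per month
    if slope > 0.005:
        label = "emerging"    # rising relevance: candidate for new designs
    elif slope < -0.005:
        label = "declining"   # fading relevance: de-emphasize
    else:
        label = "stable"
    print(f"{attribute:13s} slope={slope:+.4f} -> {label}")
```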

    On the Ground Validation of Online Diagnosis with Twitter and Medical Records

    Full text link
    Social media has been considered as a data source for tracking disease. However, most analyses are based on models that prioritize strong correlation with population-level disease rates over determining whether or not specific individual users are actually sick. Taking a different approach, we develop a novel system for social-media-based disease detection at the individual level using a sample of professionally diagnosed individuals. Specifically, we develop a system for making an accurate influenza diagnosis based on an individual's publicly available Twitter data. We find that about half (17/35 = 48.57%) of the users in our sample who were sick explicitly discuss their disease on Twitter. By developing a meta-classifier that combines text analysis, anomaly detection, and social network analysis, we are able to diagnose an individual with greater than 99% accuracy even if she does not discuss her health. Comment: Presented at WWW 2014 (WWW'14 Companion), April 7-11, 2014, Seoul, Korea.
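    The abstract names three signal sources fused by a meta-classifier. Below is a minimal sketch of that general pattern, assuming scikit-learn; the feature groups, column splits, and base learners are synthetic stand-ins, not the authors' implementation.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins for the three signal groups named in the abstract:
# columns 0-9 ~ text features, 10-14 ~ anomaly scores, 15-19 ~ network.
rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 20))
y = (X[:, 0] + X[:, 10] + X[:, 15] + rng.normal(0, 0.5, n) > 0).astype(int)

def group_model(cols):
    # Restrict one base learner to a single feature group.
    return make_pipeline(
        ColumnTransformer([("sel", "passthrough", cols)]),
        LogisticRegression(max_iter=1000),
    )

meta = StackingClassifier(
    estimators=[
        ("text",    group_model(list(range(0, 10)))),
        ("anomaly", group_model(list(range(10, 15)))),
        ("network", group_model(list(range(15, 20)))),
    ],
    final_estimator=LogisticRegression(),  # fuses the three signals
)
print("5-fold CV accuracy:", cross_val_score(meta, X, y, cv=5).mean())
```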

    Constructing information experience: a grounded theory portrait of academic information management

    Get PDF
    Purpose: This paper aims to discuss what it means to consider the information experience of academic information management from a constructivist grounded theory perspective. Using a doctoral study in progress as a case illustration, the authors demonstrate how information experience research applies a wide lens to achieve a holistic view of information management phenomena. By unifying a range of elements, and understanding information and its management to be inseparable from the totality of human experience, an information experience perspective offers a fresh approach to answering today's research questions.

    Design/methodology/approach: The case illustration is a constructivist grounded theory study using interactive interviews, an original form of semi-structured qualitative interviews combined with card-sorting exercises (Conrad and Tucker, 2019), to deepen participants' reflections and externalize their information experiences. The constructivist variant of grounded theory offers an inductive, exploratory approach to address the highly contextualized information experiences of student-researchers in managing academic information.

    Findings: Preliminary results are reported in the form of three interpretative categories that outline the key aspects of the information experience for student-researchers. By presenting these initial results, the study demonstrates how the constructivist grounded theory methodology can illuminate multiple truths and bring a focus on interpretive practices to the understanding of information management experiences.

    Research limitations/implications: This new approach offers holistic insights into academic information management phenomena as contextual, fluid, and informed by meaning-making and adaptive practices. Limitations include the small sample size customary to qualitative research, within one situated perspective on the academic information management experience.

    Originality/value: The study demonstrates the theoretical and methodological contributions of constructivist information experience research in illuminating information management in an academic setting.

    Data-driven optimization of dynamic reconfigurable systems of systems.

    Get PDF
    This report documents the results of a Strategic Partnership (aka University Collaboration) LDRD program between Sandia National Laboratories and the University of Illinois at Urbana-Champaign. The project is titled 'Data-Driven Optimization of Dynamic Reconfigurable Systems of Systems' and was conducted during FY 2009 and FY 2010. The purpose of this study was to determine and implement ways to incorporate real-time data mining and information discovery into existing Systems of Systems (SoS) modeling capabilities. Current SoS modeling is typically conducted in an iterative manner in which replications are carried out in order to quantify variation in the simulation results. The expense of many replications for large simulations, especially when considering the need for optimization, sensitivity analysis, and uncertainty quantification, can be prohibitive. In addition, extracting useful information from the resulting large datasets is a challenging task. This work demonstrates methods of identifying trends and other forms of information in datasets that can be used in a wide range of applications, such as quantifying the strength of various inputs on outputs, identifying the sources of variation in the simulation, and potentially steering an optimization process for improved efficiency.
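    A minimal sketch of the kind of input-strength analysis the abstract mentions, assuming independent inputs and synthetic replication data (the factor names and response model are hypothetical, not from the report): squared correlation approximates the fraction of output variance attributable to each input.

```python
import numpy as np

# Illustrative sketch: rank simulation inputs by how strongly they
# drive an output across replications (hypothetical data).
rng = np.random.default_rng(2)
n_reps = 500                                  # simulation replications
inputs = rng.uniform(size=(n_reps, 4))        # hypothetical input factors
output = (3.0 * inputs[:, 0]                  # strong driver
          + 0.5 * inputs[:, 2]                # weak driver
          + rng.normal(0, 0.2, n_reps))       # simulation noise

names = ["sensor_range", "comms_delay", "fleet_size", "fuel_rate"]
for i, name in enumerate(names):
    # Squared correlation ~ fraction of output variance explained by
    # this input alone (valid here because inputs are independent).
    r2 = np.corrcoef(inputs[:, i], output)[0, 1] ** 2
    print(f"{name:13s} R^2 = {r2:.3f}")
```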

    Mutability and mutational spectrum of chromosome transmission fidelity genes

    Get PDF
    It has been more than two decades since the original chromosome transmission fidelity (Ctf) screen of Saccharomyces cerevisiae was published. Since that time the spectrum of mutations known to cause Ctf and, more generally, chromosome instability (CIN) has expanded dramatically as a result of systematic screens across yeast mutant arrays. Here we describe a comprehensive summary of the original Ctf genetic screen and the cloning of the remaining complementation groups as efforts to expand our knowledge of the CIN gene repertoire and its mutability in a model eukaryote. At the time of the original screen, it was impossible to predict either the genes and processes that would be overrepresented in a pool of random mutants displaying a Ctf phenotype or what the entire set of genes potentially mutable to Ctf would be. We show that in a collection of 136 randomly selected Ctf mutants, >65% of mutants map to 13 genes, 12 of which are involved in sister chromatid cohesion and/or kinetochore function. Extensive screening of systematic mutant collections has shown that ~350 genes with functions as diverse as RNA processing and proteasomal activity mutate to cause a Ctf phenotype and at least 692 genes are required for faithful chromosome segregation. The enrichment of random Ctf alleles in only 13 of ~350 possible Ctf genes suggests that these genes are more easily mutable to cause genome instability than the others. These observations inform our understanding of recurring CIN mutations in human cancers, where presumably random mutations are responsible for initiating the frequently observed CIN phenotype of tumors.
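    The mutability inference can be checked with a quick Monte Carlo calculation (illustrative, not from the paper): if all ~350 Ctf genes were equally mutable, the 13 most frequently hit genes would essentially never capture 65% of 136 random mutants.

```python
import numpy as np

# Back-of-the-envelope check of the enrichment argument: under uniform
# mutability, how often do the 13 most-hit of 350 genes capture >=65%
# of 136 random mutants?
rng = np.random.default_rng(3)
n_genes, n_mutants, trials = 350, 136, 10_000
hits, shares = 0, []
for _ in range(trials):
    counts = np.bincount(rng.integers(0, n_genes, n_mutants),
                         minlength=n_genes)
    share = np.sort(counts)[-13:].sum() / n_mutants  # top-13 share
    shares.append(share)
    if share >= 0.65:
        hits += 1
print(f"typical top-13 share under uniform mutability: {np.mean(shares):.2f}")
print(f"P(top-13 share >= 0.65) ~ {hits / trials:.4f}")
# The uniform-mutability share is far below 0.65, supporting the
# paper's inference that these 13 genes are unusually mutable to Ctf.
```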

    Stellar kinematics and metallicities in the ultra-faint dwarf galaxy Reticulum II

    Get PDF
    Based on data obtained from the ESO Science Archive Facility under request number 157689.

    Sediment routing and basin evolution in Proterozoic to Mesozoic east Gondwana: A case study from southern Australia

    Get PDF
    Sedimentary rocks along the southern margin of Australia host an important record of the break-up history of east Gondwana, as well as fragments of a deeper geological history, which collectively help inform the geological evolution of a vast and largely underexplored region. New drilling through Cenozoic cover has allowed examination of the Cretaceous rift-related Madura Shelf sequence (Bight Basin) and identification of two new stratigraphic units beneath the shelf: the possibly Proterozoic Shanes Dam Conglomerate and the interpreted Palaeozoic southern Officer Basin unit, the Decoration Sandstone. Recognition of these new units indicates an earlier basinal history than previously known. Lithostratigraphy of the new drillcore has been integrated with that published from onshore and offshore cores to present isopach maps of sedimentary cover on the Madura Shelf. New palynological data demonstrate a progression from more localised freshwater-brackish fluvio-lacustrine clastics in the Early Cretaceous (Foraminisporis wonthaggiensis; Valanginian to Barremian) to widespread topography-blanketing, fully marine, glauconitic mudrocks in the mid-Cretaceous (Endoceratium ludbrookiae; Albian). Geochronology and Hf-isotope geochemistry show that detrital zircon populations from the Madura Shelf are comparable to those from the southern Officer Basin, as well as to Cenozoic shoreline and palaeovalley sediments in the region. The detrital zircon population from the Shanes Dam Conglomerate is defined by a unimodal ~1400 Ma peak, which correlates with the directly underlying crystalline basement of the Madura Province. Peak ages of ~1150 Ma and ~1650 Ma dominate the age spectra of all other samples, indicating a stable sediment reservoir through much of the Phanerozoic, with sediments largely sourced from the Albany-Fraser Orogen and Musgrave Province (directly and via multiple recycling events). The Madura Shelf detrital zircon population differs from published data for the Upper Cretaceous Ceduna Delta to the east, indicating significant differences in sediment provenance and routing between the Ceduna Sub-basin and the central Bight Basin.
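    Detrital zircon age spectra of this kind are conventionally compared via kernel density estimates of the measured ages. Below is a minimal sketch assuming SciPy; only the peak positions (~1150 Ma, ~1400 Ma, ~1650 Ma) come from the abstract, while the sample sizes and spreads are invented.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Sketch of the standard kernel-density comparison of detrital zircon
# age spectra (synthetic ages; peak positions from the abstract).
rng = np.random.default_rng(4)
shanes_dam = rng.normal(1400, 30, 120)            # unimodal ~1400 Ma
madura_shelf = np.concatenate([rng.normal(1150, 40, 80),
                               rng.normal(1650, 50, 60)])

ages = np.linspace(900, 1900, 500)                # Ma, evaluation grid
for name, sample in [("Shanes Dam", shanes_dam),
                     ("Madura Shelf", madura_shelf)]:
    kde = gaussian_kde(sample)
    peak = ages[np.argmax(kde(ages))]             # dominant age mode
    print(f"{name:13s} dominant age peak ~ {peak:.0f} Ma")
```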

    Multi-messenger observations of a binary neutron star merger

    Get PDF
    On 2017 August 17 a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 ± 8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M☉. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient's position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process that is distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.
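    As a back-of-the-envelope illustration of the scales quoted above (an inference not drawn in the abstract itself), the ~1.7 s gamma-ray delay is a vanishingly small fraction of the light-travel time from ~40 Mpc:

```python
# Rough arithmetic with the abstract's numbers (illustrative only).
MPC_M = 3.0857e22   # metres per megaparsec
C = 2.998e8         # speed of light, m/s

travel_time_s = 40 * MPC_M / C      # ~4.1e15 s, i.e. ~130 million years
fraction = 1.7 / travel_time_s      # delay as a fraction of travel time
print(f"light-travel time from 40 Mpc: {travel_time_s:.1e} s")
print(f"1.7 s delay / travel time    : {fraction:.1e}")  # ~4e-16
```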