
    Integrated controls and health monitoring for chemical transfer propulsion

    NASA is reviewing various propulsion technologies for exploring space. The requirements are examined for one enabling propulsion technology: Integrated Controls and Health Monitoring (ICHM) for Chemical Transfer Propulsion (CTP). Functional requirements for a CTP-ICHM system are proposed from tentative mission scenarios, vehicle configurations, CTP specifications, and technical feasibility. These CTP-ICHM requirements go beyond traditional reliable operation and emergency shutoff control to include: (1) enhanced mission flexibility; (2) continuously variable throttling; (3) tank-head start control; (4) automated prestart and post-shutoff engine checks; (5) monitoring of space-exposure degradation; and (6) product evolution flexibility. Technology development plans are also discussed.

    Robust sound event detection in bioacoustic sensor networks

    Bioacoustic sensors, sometimes known as autonomous recording units (ARUs), can record sounds of wildlife over long periods of time in scalable and minimally invasive ways. Deriving per-species abundance estimates from these sensors requires detection, classification, and quantification of animal vocalizations as individual acoustic events. Yet variability in ambient noise, both over time and across sensors, hinders the reliability of current automated systems for sound event detection (SED), such as convolutional neural networks (CNNs) in the time-frequency domain. In this article, we develop, benchmark, and combine several machine listening techniques to improve the generalizability of SED models across heterogeneous acoustic environments. As a case study, we consider the problem of detecting avian flight calls from a ten-hour recording of nocturnal bird migration, captured by a network of six ARUs in the presence of heterogeneous background noise. Starting from a CNN yielding state-of-the-art accuracy on this task, we introduce two noise adaptation techniques, respectively integrating short-term (60 milliseconds) and long-term (30 minutes) context. First, we apply per-channel energy normalization (PCEN) in the time-frequency domain, which applies short-term automatic gain control to every subband of the mel-frequency spectrogram. Second, we replace the last dense layer in the network with a context-adaptive neural network (CA-NN) layer. Combining the two yields state-of-the-art results that are unmatched by artificial data augmentation alone. We release a pre-trained version of our best-performing system under the name BirdVoxDetect, a ready-to-use detector of avian flight calls in field recordings.
    Comment: 32 pages, in English. Submitted to PLOS ONE in February 2019; revised August 2019; published October 2019.
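    PCEN is available in the librosa library as librosa.pcen. The sketch below shows how it applies per-subband automatic gain control to a mel-frequency spectrogram; the file name and parameter values are illustrative assumptions, not the exact settings behind BirdVoxDetect.

    ```python
    import librosa

    # Load a field recording (the file name is a placeholder).
    y, sr = librosa.load("field_recording.wav", sr=22050)

    # Mel-frequency spectrogram; power=1.0 keeps linear magnitudes,
    # which is the input PCEN expects.
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                       hop_length=512, power=1.0)

    # Per-channel energy normalization: short-term automatic gain control
    # applied independently to each mel subband. time_constant=0.06 mirrors
    # the ~60 ms short-term context mentioned in the abstract; the other
    # parameters are librosa defaults.
    S_pcen = librosa.pcen(S, sr=sr, hop_length=512, time_constant=0.060,
                          gain=0.98, bias=2.0, power=0.5)
    ```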

    Computing server power modeling in a data center: survey, taxonomy and performance evaluation

    Data centers are large-scale, energy-hungry infrastructures serving ever-increasing computational demands as the world becomes more connected through smart cities. The emergence of advanced technologies such as cloud-based services, the internet of things (IoT), and big data analytics has fueled the growth of global data centers, leading to high energy consumption. This upsurge in data center energy consumption not only incurs surging operational and maintenance costs but also has an adverse effect on the environment. Dynamic power management in a data center environment requires cognizance of the correlation between system- and hardware-level performance counters and power consumption. Power consumption modeling captures this correlation and is crucial in designing energy-efficient optimization strategies based on resource utilization. Several power models have been proposed and used in the literature. However, these models have been evaluated using different benchmarking applications, power measurement techniques, and error calculation formulas on different machines. In this work, we present a taxonomy and evaluation of 24 software-based power models using a unified environment, benchmarking applications, power measurement technique, and error formula, with the aim of achieving an objective comparison. We use different server architectures to assess the impact of heterogeneity on the models' comparison. The performance analysis of these models is elaborated in the paper.
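    To make the surveyed model class concrete, here is a minimal sketch of the simplest kind of software-based power model: a linear regression from performance counters to measured wall power. The counter choice and data points are invented for illustration and do not come from the paper's evaluation.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_percentage_error

    # Hypothetical samples: (CPU utilization %, memory utilization %)
    # paired with power measured at the wall in watts.
    X = np.array([[10, 20], [35, 30], [60, 45], [85, 70], [100, 90]], dtype=float)
    y = np.array([95.0, 130.0, 170.0, 215.0, 250.0])

    model = LinearRegression().fit(X, y)

    # Estimate power for an unseen utilization level; a real evaluation
    # would use held-out benchmark runs rather than the training points.
    print(model.predict([[50.0, 40.0]]))
    print(mean_absolute_percentage_error(y, model.predict(X)))
    ```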

    Is the Stack Distance Between Test Case and Method Correlated With Test Effectiveness?

    Mutation testing is a means to assess the effectiveness of a test suite, and its outcome is considered more meaningful than code coverage metrics. However, despite several optimizations, mutation testing requires significant computational effort and has not been widely adopted in industry. Therefore, we study in this paper whether test effectiveness can be approximated using a more lightweight approach. We hypothesize that a test case is more likely to detect faults in methods that are close to it on the call stack than in methods that it accesses indirectly through many other methods. Based on this hypothesis, we propose the minimal stack distance between test case and method as a new test measure, which expresses how close any test case comes to a given method, and study its correlation with test effectiveness. We conducted an empirical study with 21 open-source projects, comprising 1.8 million LOC in total, and show that a correlation exists between stack distance and test effectiveness. The correlation reaches a strength of up to 0.58. We further show that a classifier using the minimal stack distance along with additional easily computable measures can predict the mutation testing result of a method with 92.9% precision and 93.4% recall. Hence, such a classifier can be considered as a lightweight alternative to mutation testing or as a less costly preceding step.
    Comment: EASE 2019.
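    As an illustration of the measure itself (not the paper's tooling), the sketch below computes a minimal stack distance from recorded call stacks, where each stack lists frames from the test method (index 0) down to the innermost call; all names are hypothetical.

    ```python
    # Minimal stack distance: the smallest depth at which a method ever
    # appears on any call stack recorded while the test executes.
    def minimal_stack_distance(call_stacks, method):
        distances = [stack.index(method)
                     for stack in call_stacks
                     if method in stack]
        return min(distances) if distances else None  # None: never reached

    stacks = [
        ["testFoo", "Service.process", "Dao.load"],
        ["testFoo", "Util.format"],
    ]
    print(minimal_stack_distance(stacks, "Dao.load"))    # 2
    print(minimal_stack_distance(stacks, "Util.format")) # 1
    ```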

    Fixation of genetic variation and optimization of gene expression: The speed of evolution in isolated lizard populations undergoing Reverse Island Syndrome

    The ecological theory of island biogeography suggests that mainland populations should be more genetically divergent from those on large, distant islands than from those on small, close islets. Some island populations do not evolve in a linear way; instead, divergence occurs more rapidly because they undergo a series of phenotypic changes, jointly known as the Island Syndrome. A special case is the Reversed Island Syndrome (RIS), in which populations show drastic changes in body shape, skin colouration, age of sexual maturity, aggressiveness, and food intake rates. Populations showing the RIS have been observed on recently emerged islets near the mainland, which makes them useful models for studying rapid evolutionary change. We investigated the timing and mode of evolution of lizard populations adapted through selection on small islets. For our analyses, we used an ad hoc model system of three populations: wild-type lizards from the mainland and insular lizards from a big island (Capri, Italy), both Podarcis siculus siculus populations not affected by the syndrome, and a lizard population from an islet (Scopolo) undergoing the RIS (called P. s. coerulea because of its melanism). The split time of the big (Capri) and small (Scopolo) islands was determined using geological events, such as sea-level rises. To infer molecular evolution, we compared five complete mitochondrial genomes for each population to reconstruct the phylogeography and estimate the divergence time between island and mainland lizards. We found a lower mitochondrial mutation rate in Scopolo lizards despite the phenotypic changes achieved in approximately 8,000 years. Furthermore, transcriptome analyses showed significant differential gene expression between islet and mainland lizard populations, suggesting a key role for plasticity in these unpredictable environments.

    Phylodynamics of H5N1 Highly Pathogenic Avian Influenza in Europe, 2005-2010: Potential for Molecular Surveillance of New Outbreaks.

    Previous Bayesian phylogeographic studies of H5N1 highly pathogenic avian influenza viruses (HPAIVs) explored the origin and spread of the epidemic from China into Russia, indicating that HPAIV circulated in Russia prior to its detection there in 2005. In this study, we extend this research to explore the evolution and spread of HPAIV within Europe during the 2005-2010 epidemic, using all available sequences of the hemagglutinin (HA) and neuraminidase (NA) gene regions that were collected in Europe and Russia during the outbreak. We use discrete-trait phylodynamic models within a Bayesian statistical framework to explore the evolution of HPAIV. Our results indicate that the genetic diversity and effective population size of HPAIV peaked between mid-2005 and early 2006, followed by a drastic decline in 2007, which coincides with the end of the epidemic in Europe. Our results also suggest that domestic birds were the most likely source of the spread of the virus from Russia into Europe. Additionally, estimates of viral dispersal routes indicate that Russia, Romania, and Germany were key epicenters of these outbreaks. Our study quantifies the dynamics of a major European HPAIV pandemic and substantiates the ability of phylodynamic models to improve molecular surveillance of novel AIVs.

    ALOJA: A benchmarking and predictive platform for big data performance analysis

    The main goals of the ALOJA research project from BSC-MSR are to explore and automate the characterization of the cost-effectiveness of Big Data deployments. The project's development over its first year has resulted in an open-source benchmarking platform, an online public repository of results with over 42,000 Hadoop job runs, and web-based analytic tools to gather insights about systems' cost-performance. This article describes the evolution of the project's focus and research lines over a year of continuously benchmarking Hadoop under different configuration and deployment options, presents results, and discusses the technical and market-based motivations for such changes. During this time, ALOJA's target has evolved from low-level profiling of the Hadoop runtime, through extensive benchmarking and aggregate evaluation of a large body of results, to currently leveraging Predictive Analytics (PA) techniques. Modeling benchmark executions allows us to estimate the results of new or untested configurations or hardware set-ups automatically by learning from past observations, saving benchmarking time and costs.
    This work is partially supported by the BSC-Microsoft Research Centre, the Spanish Ministry of Education (TIN2012-34557), the MINECO Severo Ochoa Research program (SEV-2011-0067), and the Generalitat de Catalunya (2014-SGR-1051).
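    To illustrate the predictive-analytics step, the sketch below fits a regression model over benchmark configurations to estimate the execution time of an untested set-up. The column names, values, and choice of a random forest are assumptions for the example, not ALOJA's actual schema or model.

    ```python
    import pandas as pd
    from sklearn.compose import make_column_transformer
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical Hadoop benchmark runs: configuration knobs and the
    # measured execution time of each run.
    runs = pd.DataFrame({
        "disk": ["HDD", "SSD", "SSD", "HDD"],
        "network": ["1Gb", "1Gb", "10Gb", "10Gb"],
        "mappers": [4, 8, 8, 16],
        "exec_time_s": [1450.0, 910.0, 760.0, 980.0],
    })

    model = make_pipeline(
        make_column_transformer(
            (OneHotEncoder(), ["disk", "network"]),
            remainder="passthrough",  # numeric knobs pass through unchanged
        ),
        RandomForestRegressor(n_estimators=100, random_state=0),
    )
    model.fit(runs[["disk", "network", "mappers"]], runs["exec_time_s"])

    # Estimate an untested configuration without running the benchmark.
    untested = pd.DataFrame({"disk": ["SSD"], "network": ["10Gb"], "mappers": [16]})
    print(model.predict(untested))
    ```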