
    An investigation of pulsar searching techniques with the Fast Folding Algorithm

    Here we present an in-depth study of the behaviour of the Fast Folding Algorithm, an alternative pulsar searching technique to the Fast Fourier Transform. Weaknesses in the Fast Fourier Transform, including a susceptibility to red noise, leave it insensitive to pulsars with long rotational periods (P > 1 s). This sensitivity gap has the potential to bias our understanding of the period distribution of the pulsar population. The Fast Folding Algorithm, a time-domain pulsar searching technique, has the potential to overcome some of these biases. Modern distributed-computing frameworks now allow this algorithm to be applied to all-sky blind pulsar surveys for the first time. However, many aspects of the behaviour of this search technique remain poorly understood, including its responsiveness to variations in pulse shape and to the presence of red noise. Using ffancy, a custom CPU-based implementation of the Fast Folding Algorithm, we have conducted an in-depth study of its behaviour both in an ideal white-noise regime and in a trial on observational data from the HTRU-S Low Latitude pulsar survey, including a comparison with the behaviour of the Fast Fourier Transform. We both confirm and expand upon earlier studies demonstrating that the Fast Folding Algorithm outperforms the Fast Fourier Transform under ideal white-noise conditions, and we demonstrate a significant improvement in sensitivity to long-period pulsars in real observational data. Comment: 19 pages, 15 figures, 3 tables
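The core of the technique described above — folding the time series at many trial periods and keeping the period whose folded profile is strongest — can be sketched as follows. This is a brute-force illustration, not the optimised ffancy implementation; the injected signal, trial-period range and 32-bin profile are invented for the example:

```python
import numpy as np

def fold(series, period, n_bins):
    """Fold a time series at an integer trial period (in samples)
    into n_bins phase bins and return the mean profile."""
    n = (len(series) // period) * period
    phase_bin = (np.arange(n) % period) * n_bins // period
    totals = np.bincount(phase_bin, weights=series[:n], minlength=n_bins)
    counts = np.bincount(phase_bin, minlength=n_bins)
    return totals / np.maximum(counts, 1)

# Inject a narrow pulse every 229 samples into white noise (invented values).
rng = np.random.default_rng(0)
n_samples, true_period = 100_000, 229
series = rng.normal(0.0, 1.0, n_samples)
series[::true_period] += 5.0

# Search trial periods, keeping the one with the strongest folded profile.
best = max(range(200, 260), key=lambda p: np.ptp(fold(series, p, 32)))
print(best)  # recovers the injected 229-sample period
```

A real FFA avoids re-folding from scratch at every trial period by recursively combining partial folds, which is what makes a blind search over many periods tractable.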

    Ant colony optimization for object-oriented unit test generation

    Generating useful unit tests for object-oriented programs is difficult for traditional optimization methods. One must not only identify values to be used as inputs, but also synthesize a program which creates the required state in the program under test. Many existing Automated Test Generation (ATG) approaches combine search with performance-enhancing heuristics. We present Tiered Ant Colony Optimization (Taco) for generating unit tests for object-oriented programs. The algorithm is formed of three tiers of ACO, each of which tackles a distinct task: goal prioritization, test program synthesis, and data generation for the synthesized program. Test program synthesis allows the creation of complex objects and the exploration of program state, which is the breakthrough that has allowed the successful application of ACO to object-oriented test generation. Taco brings the mature search ecosystem of ACO to bear on ATG for complex object-oriented programs, providing a viable alternative to current approaches. To demonstrate the effectiveness of Taco, we have developed a proof-of-concept tool which successfully generated tests for an average of 54% of the methods in 170 Java classes, a result competitive with the industry-standard tool Randoop.
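The pheromone-guided search that ACO performs can be illustrated on a toy problem. This is a generic elitist ACO sketch on a bit-string task, not Taco's three-tier algorithm; the target string and all parameters are invented:

```python
import random

def aco_bitstring(target, n_ants=20, n_iters=40, rho=0.3, seed=1):
    """Toy Ant Colony Optimization: ants build bit strings guided by
    per-position pheromone; each iteration the pheromone evaporates and
    the best-so-far solution reinforces its own choices (elitist ACO)."""
    rng = random.Random(seed)
    n = len(target)
    tau = [[1.0, 1.0] for _ in range(n)]  # pheromone for bit 0 / bit 1
    best, best_fit = None, -1
    for _ in range(n_iters):
        for _ in range(n_ants):
            sol = [0 if rng.random() < t0 / (t0 + t1) else 1
                   for t0, t1 in tau]
            fit = sum(s == t for s, t in zip(sol, target))
            if fit > best_fit:
                best, best_fit = sol, fit
        for i, b in enumerate(best):
            tau[i][0] *= 1 - rho          # evaporation on both trails
            tau[i][1] *= 1 - rho
            tau[i][b] += best_fit / n     # deposit on the best ant's choices
    return best, best_fit

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # stand-in for a coverage goal
sol, fit = aco_bitstring(target)
print(fit, "/", len(target))
```

In Taco the "solution components" an ant assembles are far richer than bits — method-call sequences that construct objects and the data fed to them — but the evaporate/deposit feedback loop is the same mechanism.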

    Getting ahead of the arms race: hothousing the coevolution of VirusTotal with a Packer

    Malware detection is in a coevolutionary arms race where the attackers and defenders are constantly seeking advantage. This arms race is asymmetric: detection is harder and more expensive than evasion, since white hats must be conservative to avoid false positives when searching for malicious behaviour. Most of the time, black hats need only make incremental changes to evade detection. On occasion, white hats make a disruptive move and find a new technique that forces black hats to work harder; examples include system calls, signatures and machine learning. We seek to redress this imbalance. We present a method, called Hothouse, that combines simulation and search to accelerate the white hat’s ability to counter the black hat’s incremental moves, thereby forcing black hats to perform disruptive moves more often. To realise Hothouse, we evolve EEE, an entropy-based polymorphic packer for Windows executables. Playing the role of a black hat, EEE uses evolutionary computation to disrupt the creation of malware signatures. We enter EEE into the detection arms race with VirusTotal, the most prominent cloud service for running anti-virus tools on software. During our 6-month study, we continually improved EEE in response to VirusTotal, eventually learning a packer whose packed malware sees its median detection rate fall from an initial 51.8% to 19.6%. We report both how well VirusTotal learns to detect EEE-packed binaries and how well VirusTotal forgets in order to reduce false positives; VirusTotal’s tools learn and forget fast, in about 3 days. We also show where VirusTotal focuses its detection efforts by analysing EEE’s variants.
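The evolve-and-test loop described above can be sketched as a minimal (1+1) evolutionary search. Everything below is a deliberately simplified stand-in: a toy scoring function plays the role of VirusTotal, and a per-section entropy vector stands in for EEE's much richer packer configuration:

```python
import random

def detector(entropies):
    """Stand-in for an AV ensemble (the real study queries VirusTotal):
    flags sections whose entropy looks suspiciously high, a common
    packing signature; returns the fraction of sections flagged."""
    return sum(e > 6.5 for e in entropies) / len(entropies)

def evolve_packer(n_sections=8, n_gens=200, seed=2):
    """(1+1) evolutionary loop in the spirit of Hothouse: mutate the
    packer's per-section target entropies and keep any mutant the
    detector scores no worse. EEE's real search space is far richer."""
    rng = random.Random(seed)
    cfg = [7.8] * n_sections         # naive packer: near-max entropy everywhere
    score = detector(cfg)
    for _ in range(n_gens):
        mutant = [max(0.0, e + rng.gauss(0.0, 0.5)) for e in cfg]
        if detector(mutant) <= score:  # lower detection rate wins; ties drift
            cfg, score = mutant, detector(mutant)
    return score

print(evolve_packer())  # detection score is driven down from the initial 1.0
```

The asymmetry the abstract describes is visible even here: each mutation is cheap for the "black hat" loop, while the defender's signature (the fixed 6.5 threshold) stays static until someone makes a disruptive change to it.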

    The arms race: adversarial search defeats entropy used to detect malware

    Malware creators have been getting their way for too long now. String-based similarity measures can leverage ground truth in a scalable way and can operate at a level of abstraction that is difficult to combat from the code level. At the string level, information theory and, specifically, entropy play an important role in detecting patterns altered by concealment strategies such as polymorphism or encryption. Controlling the entropy levels in different parts of a disk-resident executable allows an analyst to detect malware, or a black hat to evade detection. This paper embodies these two perspectives in two scalable entropy-based tools: EnTS and EEE. EnTS, the detection tool, shows the effectiveness of detecting entropy patterns, achieving 100% precision with 82% accuracy, and outperforms VirusTotal for accuracy on combined Kaggle and VirusShare malware. EEE, the evasion tool, shows the effectiveness of entropy as a concealment strategy, attacking state-of-the-art binary-level detectors: it learns their detection patterns in up to 8 generations of its search process, raising their false-negative rate from the 0–9% range up to 90–98.7%.
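The entropy signal both tools build on is straightforward to compute: slide a window over the bytes of an executable and take the Shannon entropy of each window. A minimal sketch of this kind of entropy time series follows; the window size and toy input are invented, and the real EnTS does considerably more with the resulting series:

```python
import math
from collections import Counter

def shannon_entropy(window):
    """Shannon entropy of a byte window, in bits per byte (0 to 8)."""
    n = len(window)
    return -sum(c / n * math.log2(c / n) for c in Counter(window).values())

def entropy_series(data, window=256, step=256):
    """Entropy time series over successive byte windows; high, flat
    stretches are characteristic of packed or encrypted regions."""
    return [shannon_entropy(data[i:i + window])
            for i in range(0, len(data) - window + 1, step)]

low = b"the quick brown fox jumps over the lazy dog " * 12  # text-like bytes
high = bytes(range(256)) * 2                                # all 256 values
series = entropy_series(low + high)
print([round(e, 2) for e in series])  # low text windows first, then 8.0
```

A plain text segment sits well under 5 bits per byte, while a window cycling through all 256 byte values reaches the 8-bit maximum — exactly the contrast a packer must manage and a detector can hunt for.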

    Visualisation Tools for Multi-Perspective, Cross-Sector, Long-Term Infrastructure Performance Evaluation

    Across different infrastructure sectors there are systems that help to monitor the current and near-future operation and performance of a particular system. Whilst Supervisory Control and Data Acquisition (SCADA) systems are critical to maintaining acceptable levels of functionality, they do not provide insights over the longer timescales across which strategic investment decisions play out. To understand how individual or multiple, interdependent infrastructure sectors perform over longer timescales, capacity/demand modelling is required. However, the outputs of such models are often a complex, high-dimensional result set, and this complexity is further compounded when cross-sector evaluation is required. To maximise the utility of such models, tools are required that can process and present key outputs. In this paper we describe the development of prototype tools for infrastructure performance evaluation in relation to different strategic decisions and the complex outputs generated from capacity and demand models of five infrastructure sectors (energy, water, waste water, solid waste, transport) investigated within the UK Infrastructure Transitions Research Consortium (ITRC). By constructing tools that expose various dimensions of the model outputs, a user is able to take greater control over the knowledge discovery process.

    Are the distributions of Fast Radio Burst properties consistent with a cosmological population?

    High time resolution radio surveys over the last few years have discovered a population of millisecond-duration transient bursts called Fast Radio Bursts (FRBs), which remain of unknown origin. FRBs exhibit dispersion consistent with propagation through a cold plasma, and dispersion measures indicative of an origin at cosmological distances. In this paper we perform Monte Carlo simulations of a cosmological population of FRBs, based on assumptions consistent with observations of their energy distribution, their spatial density as a function of redshift, and the properties of the interstellar and intergalactic media. We examine whether the dispersion measures, fluences, inferred redshifts, signal-to-noise ratios and effective widths of known FRBs are consistent with a cosmological population. Statistical analyses indicate that at least 50 events at Parkes are required to distinguish between a constant co-moving FRB density and an FRB density that evolves with redshift like the cosmological star formation rate density. Comment: 11 pages, 7 figures, 3 tables
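The cold-plasma dispersion referred to above has a simple quantitative signature: the burst's arrival time sweeps with frequency as the inverse square, scaled by the dispersion measure (DM). A quick calculation with the standard dispersion constant shows why large DMs are so conspicuous; the DM value and band edges below are illustrative choices, roughly matching a 1.2–1.6 GHz survey band:

```python
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Arrival-time delay of the low-frequency band edge relative to the
    high edge for a cold plasma: ~4.149 ms x DM x (f_lo^-2 - f_hi^-2),
    with DM in pc cm^-3 and frequencies in GHz."""
    return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)

# A DM of 1000 pc cm^-3 (indicative of a cosmological origin) swept
# across a 1.182-1.582 GHz band takes over a second:
print(round(dispersion_delay_ms(1000.0, 1.182, 1.582), 1), "ms")  # 1311.9 ms
```

It is this characteristic frequency-squared sweep, far exceeding what the Milky Way's electron column can supply along most sightlines, that makes the measured DMs "indicative of an origin at cosmological distances".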

    Risk factors for failure of outpatient parenteral antibiotic therapy (OPAT) in infective endocarditis

    Objectives: To identify risk factors for failure of outpatient parenteral antibiotic therapy (OPAT) in infective endocarditis (IE). Patients and methods: We identified IE cases managed at a single centre over 12 years from a prospectively maintained database. ‘OPAT failure’ was defined as unplanned readmission or antibiotic switch due to adverse drug reaction or antibiotic resistance. We analysed patient- and disease-related risk factors for OPAT failure by univariate and multivariate logistic regression. We also retrospectively collected follow-up data on adverse disease outcome (defined as IE-related death or relapse) and performed Kaplan–Meier survival analysis up to 36 months following OPAT. Results: We identified 80 episodes of OPAT in IE. Failure occurred in 25/80 episodes (31.3%). On multivariate analysis, cardiac or renal failure [pooled OR 7.39 (95% CI 1.84–29.66), P = 0.005] and teicoplanin therapy [OR 8.69 (95% CI 2.01–37.47), P = 0.004] were independently associated with increased OPAT failure. OPAT failure with teicoplanin occurred despite therapeutic plasma levels. OPAT failure predicted adverse disease outcome up to 36 months (P = 0.016, log-rank test). Conclusions: These data caution against selecting patients with endocarditis for OPAT in the presence of cardiac or renal failure, and suggest teicoplanin therapy may be associated with suboptimal OPAT outcomes. Alternative regimens to teicoplanin in the OPAT setting should be further investigated.
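Odds ratios with confidence intervals of the kind reported above come from logistic regression; for a single binary risk factor the calculation reduces to the standard 2×2-table formula with a Wald interval, sketched below. The counts here are invented for illustration and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table:
    a failures / b non-failures with the risk factor present,
    c failures / d non-failures without it."""
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(odds_ratio) - z * se_log)
    hi = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lo, hi

# Invented counts for illustration (NOT the study's data): 12/8 episodes
# with the risk factor present versus 13/47 without it.
or_, lo, hi = odds_ratio_ci(12, 8, 13, 47)
print(round(or_, 2), (round(lo, 2), round(hi, 2)))
```

The wide intervals typical of a cohort this size (n = 80) are visible in the paper's own figures of 1.84–29.66 and 2.01–37.47: small cell counts inflate the standard error of the log odds ratio.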

    Fermion Doubling and a Natural Solution of the Strong CP Problem

    We suggest fermion doubling for all quarks and leptons. It is a generalization of the neutrino doubling of the seesaw mechanism. The new quarks and leptons are SU(2) singlets and carry the electromagnetic charges of their lighter counterparts. An SU(3) anomaly-free global symmetry or a discrete symmetry can be introduced to restrict the Yukawa couplings. The form of the mass matrix belongs to that of Nelson and Barr, even though our model does not satisfy Barr's criterion. The weak CP violation of the Kobayashi–Maskawa form is obtained through the spontaneous breaking of CP symmetry at a high energy scale. The strong CP problem is solved through a specific form of the mass matrix. At low energy, the particle content is the same as in the standard model. For a model with a global symmetry, there additionally exists a massless majoron. Comment: SNUTP 93-68, 19 pages, 1 TeX figure, ReVTeX 3.
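The mass-matrix mechanism the abstract invokes can be made concrete with a schematic of the Nelson–Barr texture (an illustration of that general class, not the paper's exact matrix):

```latex
% With the doubled fermions, the full quark mass matrix takes the block form
\[
\mathcal{M} \;=\;
\begin{pmatrix}
  m & 0 \\
  B & \mu
\end{pmatrix},
\qquad
\arg\det\mathcal{M} \;=\; \arg\bigl(\det m \,\det \mu\bigr) \;=\; 0 ,
\]
% where the light-sector and heavy-sector blocks m and mu are real because
% CP is broken only spontaneously, and the CP-violating phases reside
% entirely in the off-diagonal block B. The determinant is then real, so
% the strong CP phase vanishes at tree level, while the mixing induced by
% B feeds a Kobayashi--Maskawa phase into the CKM matrix.
```

The zero block is what the "specific form of the mass matrix" buys: spontaneous CP breaking at a high scale generates the complex entries of B without contributing to the argument of the determinant.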