815 research outputs found

    Surface Phonons and Other Localized Excitations

    Full text link
    The diatomic linear chain of masses coupled by harmonic springs is a textbook model for vibrational normal modes (phonons) in crystals. In addition to propagating acoustic and optic branches, this model is known to support a "gap mode" localized at the surface, provided the atom at the surface has a light rather than heavy mass. An elementary argument is given which explains this mode and provides values for its frequency and localization length. By reinterpreting this mode in different ways, we obtain the frequencies and localization lengths of three other interesting modes: (1) the surface vibrational mode of a light mass impurity at the surface of a monatomic chain; (2) the localized vibrational mode of a stacking fault in a diatomic chain; and (3) the localized vibrational mode of a light mass impurity in a monatomic chain.
    Comment: 5 pages with 4 embedded postscript figures. This paper will appear in the American Journal of Physics.
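
    For reference, the bulk dispersion relation behind this discussion is the standard textbook result for a chain of alternating masses m < M connected by springs of constant K (the notation here is ours; the abstract itself quotes no formulas):

        \omega_{\pm}^{2}(k) = K\left(\frac{1}{m} + \frac{1}{M}\right) \pm K\sqrt{\left(\frac{1}{m} + \frac{1}{M}\right)^{2} - \frac{4\sin^{2}(ka)}{mM}}

    The acoustic branch tops out at \omega = \sqrt{2K/M} and the optic branch starts at \omega = \sqrt{2K/m}, so the surface "gap mode" described above lies between these two frequencies.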

    Dynamic Analysis of Executables to Detect and Characterize Malware

    Full text link
    Ensuring the integrity of systems that process sensitive information and control many aspects of everyday life is essential. We examine the use of machine learning algorithms to detect malware using the system calls generated by executables, which mitigates attempts at obfuscation because the behavior is monitored rather than the bytes of the executable. We examine several machine learning techniques for detecting malware, including random forests, deep learning techniques, and liquid state machines. The experiments examine the effects of concept drift on each algorithm to understand how well the algorithms generalize to novel malware samples, testing them on data that was collected after the training data. The results suggest that each of the examined machine learning algorithms is a viable solution for detecting malware, achieving between 90% and 95% class-averaged accuracy (CAA). In real-world scenarios, the performance evaluation on an operational network may not match the performance achieved in training. Namely, the CAA may be about the same, but the values for precision and recall over the malware can change significantly. We structure experiments to highlight these caveats and offer insights into expected performance in operational environments. In addition, we use the induced models to gain a better understanding of what differentiates the malware samples from the goodware, which can further be used as a forensics tool to understand what the malware (or goodware) was doing, providing directions for investigation and remediation.
    Comment: 9 pages, 6 Tables, 4 Figures
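
    To make the pipeline concrete, here is a minimal sketch in Python of behavior-based classification over system-call traces. Everything here (the bag-of-n-grams features, the toy traces and labels, the choice of scikit-learn) is an illustrative assumption, not the paper's implementation; it only shows the general shape of the approach.

        # Classify executables by the system calls they emit, not their bytes,
        # so byte-level obfuscation does not change the features.
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.metrics import balanced_accuracy_score

        # Each trace is the system-call sequence of one executable (toy data).
        train_traces = ["open read write close", "fork exec wait", "open mmap close"]
        train_labels = [0, 1, 0]                    # 0 = goodware, 1 = malware
        test_traces = ["fork exec exec wait", "open read close"]
        test_labels = [1, 0]

        # Bag of call unigrams and bigrams as features.
        vec = CountVectorizer(ngram_range=(1, 2))
        X_train = vec.fit_transform(train_traces)
        X_test = vec.transform(test_traces)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train, train_labels)

        # Class-averaged accuracy (CAA) is the mean of per-class recalls,
        # i.e. scikit-learn's balanced accuracy.
        print(balanced_accuracy_score(test_labels, clf.predict(X_test)))

    Testing on traces collected after the training data, as the paper does, is what exposes the concept-drift effects described above.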

    Self-Updating Models with Error Remediation

    Full text link
    Many environments currently employ machine learning models for data processing and analytics that were built using a limited number of training data points. Once deployed, the models are exposed to significant amounts of previously-unseen data, not all of which is representative of the original, limited training data. However, updating these deployed models can be difficult due to logistical, bandwidth, time, hardware, and/or data sensitivity constraints. We propose a framework, Self-Updating Models with Error Remediation (SUMER), in which a deployed model updates itself as new data becomes available. SUMER uses techniques from semi-supervised learning and noise remediation to iteratively retrain a deployed model, using intelligently-chosen predictions from the model as the labels for new training iterations. A key component of SUMER is the notion of error remediation, since self-labeled data can be susceptible to the propagation of errors. We investigate the use of SUMER across various data sets and iterations. We find that self-updating models (SUMs) generally perform better than models that do not attempt to self-update when presented with additional previously-unseen data. This performance gap is accentuated in cases where there are only limited amounts of initial training data. We also find that the performance of SUMER is generally better than the performance of SUMs, demonstrating a benefit in applying error remediation. Consequently, SUMER can autonomously enhance the operational capabilities of existing data processing systems by intelligently updating models in dynamic environments.
    Comment: 17 pages, 13 figures, published in the proceedings of the Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II conference at the SPIE Defense + Commercial Sensing 2020 symposium
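
    The core loop can be sketched in a few lines of Python. This is a generic self-training illustration with a confidence threshold standing in for error remediation; it is our assumption-laden sketch, not the SUMER implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def self_update(model, X_old, y_old, X_new, threshold=0.9):
            # One self-update pass: pseudo-label the new batch, keep only
            # high-confidence predictions (the error-remediation stand-in),
            # and refit on the enlarged training pool.
            proba = model.predict_proba(X_new)
            keep = proba.max(axis=1) >= threshold
            X = np.vstack([X_old, X_new[keep]])
            y = np.concatenate([y_old, proba.argmax(axis=1)[keep]])
            return model.fit(X, y), X, y

        rng = np.random.default_rng(0)
        X0 = rng.normal(size=(200, 5))
        y0 = (X0[:, 0] > 0).astype(int)             # toy labeling rule
        model = LogisticRegression().fit(X0, y0)

        # Previously-unseen, unlabeled data arrives; the model updates itself.
        model, X0, y0 = self_update(model, X0, y0, rng.normal(size=(100, 5)))

    Without the confidence gate, mislabeled self-labels would feed straight back into training, which is exactly the error-propagation problem the remediation step targets.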

    Tracking Cyber Adversaries with Adaptive Indicators of Compromise

    Full text link
    A forensics investigation after a breach often uncovers network and host indicators of compromise (IOCs) that can be deployed to sensors to allow early detection of the adversary in the future. Over time, the adversary will change tactics, techniques, and procedures (TTPs), which will also change the data generated. If the IOCs are not kept up to date with the adversary's new TTPs, the adversary will no longer be detected once all of the IOCs become invalid. Tracking the Known (TTK) is the problem of keeping IOCs, in this case regular expressions (regexes), up to date with a dynamic adversary. Our framework solves the TTK problem in an automated, cyclic fashion to bracket a previously discovered adversary. This tracking is accomplished through a data-driven approach of self-adapting a given model based on its own detection capabilities. In our initial experiments, we found that the true positive rate (TPR) of the adaptive solution degrades much less over time than that of the naive solution, suggesting that self-updating the model allows the continued detection of positives (i.e., adversaries). The cost for this performance is in the false positive rate (FPR), which increases over time for the adaptive solution but remains constant for the naive solution. However, the difference in overall detection performance, as measured by the area under the curve (AUC), between the two methods is negligible. This result suggests that self-updating the model over time should be done in practice to continue to detect known, evolving adversaries.
    Comment: This paper was presented at the 4th Annual Conf. on Computational Science & Computational Intelligence (CSCI'17), held Dec 14-16, 2017 in Las Vegas, Nevada, USA
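
    A toy version of the adapt-the-IOC loop in Python: when a known-bad sample evades the current regexes, generalize it into a new one. The generalization rule (digit runs become \d+) and all names here are our illustrative assumptions, not the framework's actual logic.

        import re

        # Current IOC set: regexes describing known adversary infrastructure.
        iocs = [re.compile(r"evil-c2-\d+\.example\.com")]

        def detected(sample):
            return any(p.search(sample) for p in iocs)

        def adapt(missed_sample):
            # Derive a broader regex from a missed true positive by
            # escaping it literally and generalizing any digit runs.
            pattern = re.sub(r"\d+", r"\\d+", re.escape(missed_sample))
            iocs.append(re.compile(pattern))

        # The adversary changes TTPs: a new naming scheme evades the old IOC.
        new_hit = "c2-evil-42.example.net"
        if not detected(new_hit):
            adapt(new_hit)
        assert detected("c2-evil-99.example.net")

    Broadening regexes this way is what trades a rising FPR for a sustained TPR, matching the cost the abstract reports for the adaptive solution.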

    Voltage tuning of vibrational mode energies in single-molecule junctions

    Full text link
    Vibrational modes of molecules are fundamental properties determined by intramolecular bonding, atomic masses, and molecular geometry, and often serve as important channels for dissipation in nanoscale processes. Although single-molecule junctions have been employed to manipulate the electronic structure and related functional properties of molecules, electrical control of vibrational mode energies has remained elusive. Here we use simultaneous transport and surface-enhanced Raman spectroscopy measurements to demonstrate large, reversible, voltage-driven shifts of the vibrational mode energies of C60 molecules in gold junctions. C60 mode energies are found to vary approximately quadratically with bias, but in a manner inconsistent with a simple vibrational Stark effect. Our theoretical model suggests instead that the mode shifts are a signature of bias-driven addition of electronic charge to the molecule. These results imply that voltage-controlled tuning of vibrational modes is a general phenomenon at metal-molecule interfaces and is a means of achieving significant shifts in vibrational energies relative to a pure Stark effect.
    Comment: 23 pages, 4 figures + 12 pages, 7 figures supporting material
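
    In the simplest reading of the reported dependence (our notation; the abstract quotes no formula), the measured mode energies follow

        \hbar\omega(V) \approx \hbar\omega_{0} + \beta V^{2}

    where V is the junction bias and \beta a mode-dependent coefficient; the finding is that the observed quadratic shifts are inconsistent with a pure vibrational Stark mechanism and instead track bias-driven charging of the molecule.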

    Dose, exposure time, and resolution in Serial X-ray Crystallography

    Full text link
    The resolution of X-ray diffraction microscopy is limited by the maximum dose that can be delivered prior to sample damage. In the proposed Serial Crystallography method, the damage problem is addressed by distributing the total dose over many identical hydrated macromolecules running continuously in a single-file train across a continuous X-ray beam; resolution is then limited only by the available molecular and X-ray fluxes and by molecular alignment. Orientation of the diffracting molecules is achieved by laser alignment. We evaluate the incident X-ray fluence (energy/area) required to obtain a given resolution from (1) an analytical model giving the count rate at the maximum scattering angle for a model protein, (2) explicit simulation of diffraction patterns for a GroEL-GroES protein complex, and (3) the frequency cut-off of the transfer function following iterative solution of the phase problem and reconstruction of an electron density map in the projection approximation. These calculations include counting shot noise and multiple starts of the phasing algorithm. The results indicate the counting time and the number of proteins needed within the beam at any instant for a given resolution and X-ray flux. We confirm an inverse fourth-power dependence of exposure time on resolution, with important implications for all coherent X-ray imaging. We find that multiple single-file protein beams will be needed for sub-nanometer resolution on current third-generation synchrotrons, but not on fourth-generation designs, where reconstruction of secondary protein structure at a resolution of 0.7 nm should be possible with short exposures.
    Comment: 19 pages, 7 figures, 1 table
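
    The scaling result quoted above can be written explicitly (our notation): if d is the target resolution length, the required incident fluence, and hence the exposure time t at fixed flux, grows as

        t \propto d^{-4}

    so, for example, improving the resolution from 1.4 nm to 0.7 nm costs a factor of 2^4 = 16 in exposure time.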

    Written education materials for stroke patients and their carers: perspectives and practices of health professionals

    Get PDF
    Inadequacies in the provision of written education materials to stroke patients and their carers have been reported. In this study, 20 stroke team health professionals were surveyed regarding their use of and perspectives on written education materials. Seventy percent of participants provided materials to 25% or fewer of their stroke patients, and 90% believed that patients and carers are only occasionally or rarely provided with sufficient written information. Health professionals were uncertain which team members provided written information and identified the need to improve the quality of the materials used. Stroke teams should implement a system that facilitates the routine provision of quality written materials to patients and carers, communication among team members, and documentation and verbal reinforcement of the information provided.