
    Design and analysis of target-sensitive real-time systems

    A significant number of real-time control applications include computational activities whose results must be delivered at precise instants, rather than within a deadline. The performance of such systems degrades significantly if outputs are generated before or after the desired target time. This work presents a general methodology for designing and analyzing target-sensitive applications in which the timing parameters of the computational activities are tightly coupled with the physical characteristics of the system to be controlled. For clarity, the proposed methodology is illustrated through a sample case study that shows how to derive and verify real-time constraints from the mission requirements. Software implementation issues involved in mapping the computational activities onto tasks running on a real-time kernel are also discussed, in order to identify the kernel mechanisms needed to enforce timing constraints and to analyze the feasibility of the application. A set of experiments is finally presented to validate the proposed methodology.
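    The core idea of a target-sensitive activity (deliver the output *at* an instant, not merely *by* a deadline) can be sketched in a few lines. This is an illustrative user-space Python sketch, not the paper's kernel-level mechanisms; the function name and the 50 ms target are hypothetical.

```python
import time

def run_at_target(job, target_time):
    """Compute `job` ahead of time, then release its output as close as
    possible to the absolute target instant (monotonic-clock seconds)."""
    result = job()                            # compute before the target
    delay = target_time - time.monotonic()
    if delay > 0:
        time.sleep(delay)                     # hold the output until the target
    error = time.monotonic() - target_time    # signed output-time error (s)
    return result, error

# Hypothetical use: deliver a control output 50 ms from now.
target = time.monotonic() + 0.05
value, err = run_at_target(lambda: 2 + 2, target)
```

    A real-time kernel would replace the sleep with an absolute-time primitive and bound `err` by the timer resolution; here it only illustrates why releasing early and buffering the result recasts a target-time constraint as an ordinary deadline plus a delayed output.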

    Long Term Predictive Modeling on Big Spatio-Temporal Data

    In the era of massive data, one of the most promising research fields involves the analysis of large-scale spatio-temporal databases to discover exciting, previously unknown, and potentially useful patterns from data collected over time and space. A modeling process in this domain must take temporal and spatial correlations into account, but as the dimensionality of the time and space measurements increases, the number of elements potentially contributing to a target grows sharply, making the target's long-term behavior complex, chaotic, highly dynamic, and hard to predict. Therefore, two different considerations are addressed in this work: one is how to identify the most relevant and meaningful features from the original spatio-temporal feature space; the other is how to model complex space-time dynamics with sensitive dependence on initial and boundary conditions. First, identifying strongly related features and removing irrelevant or less important features with respect to a target feature in large-scale spatio-temporal data sets is a critical and challenging issue in many fields, including tracing the evolutionary history of crime hot spots, uncovering weather patterns, predicting floods, earthquakes, and hurricanes, and determining global warming trends. The optimal sub-feature-set that contains all the valuable information is called the Markov boundary. Unfortunately, existing feature selection methods often focus on identifying a single Markov boundary, when real-world data could have many feature subsets that are equally good boundaries. In our work, we design a new multiple-Markov-boundary-based predictive model, Galaxy, to identify the precursors to heavy precipitation event clusters and predict heavy rainfall with a long lead time.
    We applied Galaxy to an extremely high-dimensional meteorological data set and determined 15 Markov boundaries related to heavy rainfall events in the Des Moines River Basin in Iowa. Our model identified cold surges along the coast of Asia as an essential precursor to surface weather over the United States, a finding later corroborated by climate experts. Second, chaotic behavior exists in many nonlinear spatio-temporal systems, such as climate dynamics, weather prediction, and the space-time dynamics of virus spread. A reliable solution for these systems must handle their complex space-time dynamics and sensitive dependence on initial and boundary conditions. Deep neural networks' hierarchical feature learning capabilities in both the spatial and temporal domains are helpful for modeling nonlinear spatio-temporal dynamics. However, sensitive dependence on initial and boundary conditions remains challenging for theoretical research and many critical applications. This study proposes a new recurrent architecture, error trajectory tracing, and an accompanying training regime, Horizon Forcing, for prediction in chaotic systems. These methods have been validated on real-world spatio-temporal data sets, including one meteorological dataset, three classic chaotic systems, and four real-world time series prediction tasks with chaotic characteristics. Experimental results show that each proposed model outperforms the current baseline approaches.
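    To make the Markov-boundary idea concrete, here is a toy sketch of the classic greedy "grow" phase of Markov-blanket discovery (in the spirit of IAMB-style algorithms), not the Galaxy model itself: repeatedly add the feature with the largest conditional mutual information with the target given the features already selected, stopping below a threshold. All names, data, and the threshold are illustrative assumptions.

```python
import numpy as np

def cmi_bits(x, y, z_cols):
    """Conditional mutual information I(x; y | z) in bits for discrete
    data, estimated by plug-in counting."""
    n = len(x)
    z = np.zeros(n, dtype=int)
    for c in z_cols.T:                    # encode the joint z configuration
        z = z * (int(c.max()) + 1) + c
    total = 0.0
    for zv in np.unique(z):
        m = z == zv
        pz = m.mean()
        for xv in np.unique(x[m]):
            for yv in np.unique(y[m]):
                pxyz = np.mean(m & (x == xv) & (y == yv))
                if pxyz > 0:
                    pxz = np.mean(m & (x == xv))
                    pyz = np.mean(m & (y == yv))
                    total += pxyz * np.log2(pxyz * pz / (pxz * pyz))
    return total

def grow_boundary(X, t, thresh=0.02):
    """Greedy grow phase: add the feature with the highest conditional
    mutual information with the target given those already selected."""
    selected = []
    while True:
        gains = {j: cmi_bits(X[:, j], t, X[:, selected])
                 for j in range(X.shape[1]) if j not in selected}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] < thresh:
            return selected
        selected.append(best)

# Toy data: t depends on features 0 and 1; feature 2 is irrelevant noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 3))
flip = rng.random(5000) < 0.05
t = ((X[:, 0] & X[:, 1]) ^ flip).astype(int)
boundary = sorted(grow_boundary(X, t))
```

    A full Markov-boundary algorithm adds a "shrink" phase to remove false positives, and the multiple-boundary setting the abstract describes requires enumerating equally good subsets rather than returning a single one.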

    Development of optical microchip sensor for biomolecule detection

    Optical sensors play vital roles in many applications in today's world. Photonic technologies used to design and engineer optical sensing platforms can provide distinctive advantages over conventional detection techniques. For instance, when compared to electronic and magnetic sensing systems, optical sensors require physically smaller equipment and can deliver more analytical information (e.g. spectroscopic signatures). In addition, demand for low-cost and portable bio-analyte detection is growing in healthcare and environmental applications. Among other factors, achieving reliable results in terms of selectivity and sensitivity is key for the detection of bio-analytes with analytical relevance. Commonly used bio-analytical techniques (e.g. high performance liquid chromatography) have been appropriately designed for qualitative and quantitative analysis. However, the requirement for expensive equipment and the complexity of procedures (e.g. biomolecule labelling, calibrations, etc.) restrict the broad applicability and growth of these techniques in the field of biosensing. Optical sensors tackle these problems because they enable selective and sensitive detection of analytes of interest with label-free, real-time, and cost-effective processes. Among them, optical interferometry is increasingly popular due to its label-free detection, simple optical platforms, and low-cost design. An ideal substrate with high surface area as well as biological/chemical stability against degradation can enable the development of advanced analytical tools with broad applicability. Nanoporous anodic alumina has recently been envisaged as a powerful platform for developing label-free optical sensors in combination with different optical techniques.
    This thesis presents a highly sensitive, label-free biosensor design combining nanoporous anodic alumina (NAA) photonic structures and reflectometric interference spectroscopy (RIfS) for biomedical, food, and agricultural applications. NAA is a suitable optical sensing platform due to its optical properties, high surface area, straightforward, scalable, and cost-competitive fabrication process, and chemical and mechanical stability in biological environments. Our biosensor enables real-time screening of any adsorption and desorption event occurring inside the NAA pores. A selection of bio-analytes was detected using this platform, which offers unique features in terms of simplicity and accuracy. The most relevant components of this thesis are categorised as follows:
    1. Self-ordered NAA fabrication and detection of an enzymatic analyte as a biomarker for cancer diagnosis: fabrication of NAA photonic films using two-step electrochemical anodization and chemical functionalisation, and detection of trace levels of the analyte enzyme with quantification by selective digestion. The NAA photonic film with the enzyme is a promising combination for a real-time point-of-care monitoring system for early stages of disease.
    2. NAA rugate filters used to establish the binding affinity between blood proteins and drugs: design, fabrication, and optimisation of NAA anodization parameters using a sinusoidal pulse anodization approach (i.e. anodization offset and anodization period) to produce rugate filter photonic crystals that provide two comparative sensing parameters, establishing a highly sensitive and selective device for drug-binding assessments linked to treating a wide range of medical conditions.
    3. NAA bilayers and food bioactive compound detection: design, fabrication, and optimisation of NAA anodization parameters (i.e. anodization time and number of anodization steps) to obtain NAA bilayered photonic structures that display the effective response of NAA geometry with different types of nano-pore engineering. The photonic properties of the NAA bilayer were studied at each layer of the nano-structure under specific binding of human serum albumin and quercetin as target agents.
    4. Single nucleotide polymorphism (SNP) detection: the design and implementation of a Ligation-Rolling Circle Amplification assay to detect a single nucleotide polymorphism associated with insecticide resistance in a pest beetle species, Tribolium castaneum. This proof-of-concept SNP detection assay has the potential to provide a method compatible with a biosensor platform such as NAA, a first step towards the development of a genotyping biosensor and a real-world application of insecticide resistance monitoring.
    The results presented in this thesis are expected to enable innovative developments in NAA sensing technology that could result in highly sensitive and selective detection systems for a broad range of bio-analytes. Thesis (Ph.D.) (Research by Publication) -- University of Adelaide, School of Chemical Engineering, 201
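    RIfS sensing of the kind described here commonly tracks the effective optical thickness (EOT = 2nL, twice the refractive index times the pore depth) of the porous film, which shows up as the fringe frequency of the reflectance spectrum and is typically extracted by a Fourier transform over wavenumber. The sketch below is a minimal, idealized illustration of that extraction; the film parameters, wavelength range, and two-beam fringe model are assumptions, not values from this thesis.

```python
import numpy as np

n_eff, L = 1.7, 4e-6            # assumed effective index and film thickness (m)
eot_true = 2 * n_eff * L        # effective optical thickness = 2 n L

# Sample the spectrum on a uniform grid in wavenumber k = 1/lambda,
# where ideal two-beam interference gives R ~ 1 + cos(2*pi * EOT * k).
k = np.linspace(1 / 900e-9, 1 / 400e-9, 4096)
R = 1 + np.cos(2 * np.pi * eot_true * k)

# The FFT of R(k) peaks at a "frequency" equal to the EOT (in meters).
spectrum = np.abs(np.fft.rfft(R - R.mean()))
freqs = np.fft.rfftfreq(k.size, d=k[1] - k[0])
eot_est = freqs[np.argmax(spectrum)]
```

    Binding events inside the pores change n_eff, and therefore shift the FFT peak; monitoring that shift in real time is the sensing readout.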

    Characterization of a photoluminescence-based fiber optic sensor system

    2011 Fall. Includes bibliographical references. Measuring multiple analyte concentrations is essential for a wide range of environmental applications, which are important for the pursuit of public safety and health. Target analytes are often toxic chemical compounds found in groundwater or soil. However, in-situ measurement of such analytes still faces various challenges, including rapid response for near-real-time monitoring, simultaneous measurement of multiple analytes in a complex target environment, and high sensitivity for low analyte concentrations without sample pretreatment. This thesis presents a low-cost, robust, multichannel fiber optic photoluminescence (PL)-based sensor system using a time-division multiplexing architecture for multiplex biosensor arrays for in-situ measurements in environmental applications. The system was designed around an indirect sensing scheme with a pH- or oxygen-sensitive dye molecule working as the transducer, which is easily adaptable to various enzymes for detecting different analytes. A characterization of the multi-channel fiber optic PL-based sensor system was carried out in this thesis. Experiments were designed to investigate the system's performance with only the transducer, thus providing reference figures of merit, such as sensitivity and limit of detection, for further experiments or applications with the addition of various biosensors. A pH-sensitive dye, fluoresceinamine (FLA), used as the transducer, is immobilized in a poly(vinyl alcohol) (PVA) matrix for the characterization. The system exhibits a sensitivity of 8.66×10⁵ M⁻¹ as the Stern-Volmer constant, K_SV, in the H⁺ concentration measurement range of 0.002 - 891 μM (pH of 3.05 - 8.69). A mathematical model is introduced to describe the Stern-Volmer equation's non-idealities, namely fluorophore fractional accessibility and back reflection.
    Channel-to-channel uniformity is characterized with the modified Stern-Volmer model. Combining the FLA with appropriate enzymatic biosensors, the system is capable of 1,2-dichloroethane (DCA) and ethylene dibromide (EDB) detection. The calculated limit of detection (LOD) of the system can be as low as 0.08 μg/L for DCA and 0.14 μg/L for EDB. The performance of fused fiber couplers and bifurcated fiber assemblies was investigated for use in fiber optic PL-based sensor systems. Complex tradeoffs among back reflection noise, coupling efficiency, and split ratio were analyzed with theoretical and experimental data. A series of experiments and simulations was carried out to compare the two types of fiber assemblies in PL-based sensor systems in terms of excess loss, split ratio, back reflection, and coupling efficiency. A noise source analysis of three existing PL-intensity-based fiber optic enzymatic biosensor systems is provided to reveal the power distribution of different noise components. The three systems are a single-channel system with a spectrometer as the detection device, a lab-developed multi-channel system, and a commercial prototype multi-channel system, the latter two using a photomultiplier tube (PMT) as the detection device. The thesis discusses the design differences of all three systems and some circuit design alterations attempted for performance improvements.
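    The Stern-Volmer constant quoted above comes from the linear quenching relation I0/I = 1 + K_SV·[Q]. As a minimal sketch of how such a constant is fitted from intensity-ratio data (the synthetic concentrations and 1% noise level below are assumptions, not the thesis data):

```python
import numpy as np

# Stern-Volmer relation for quenching: I0 / I = 1 + K_SV * [Q]
K_SV_TRUE = 8.66e5                         # M^-1, the value reported above
Q = np.geomspace(2e-9, 8.91e-4, 12)        # quencher (H+) concentration, M
rng = np.random.default_rng(0)
I_ratio = 1 + K_SV_TRUE * Q                # ideal I0/I at each concentration
I_ratio *= 1 + rng.normal(0, 0.01, Q.size)  # 1% multiplicative noise

# Least-squares slope of (I0/I - 1) versus [Q], constrained through origin:
y = I_ratio - 1
K_SV_fit = float((Q @ y) / (Q @ Q))
```

    The abstract's "non-idealities" enter exactly where this sketch is too simple: fractional accessibility makes the plot sub-linear (only part of the fluorophore population is quenchable), and back reflection adds a constant offset to the measured intensity.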

    Real-time high-performance computing for embedded control systems

    The real-time control systems industry is moving towards consolidating multiple computing systems into fewer, more powerful ones, aiming for a reduction in size, weight, and power. The increasing demand for higher performance in other critical domains, like autonomous driving, has recently led the industry to include embedded GPUs for the implementation of advanced functionalities. The highly parallel architecture of GPUs could also be leveraged in the control systems industry to develop more advanced, energy-efficient, and scalable control systems. However, the closed-source and non-deterministic nature of GPUs complicates the resource provisioning analysis required for the implementation of critical real-time systems. On the other hand, there is no indication of GPUs being integrated into the traditional development cycle of control systems, which is oriented towards a model-based design approach. Recently, some model-based design tool vendors have extended their development frameworks with GPU code generation capabilities targeting hybrid computing platforms, so that the model-based design environment now enables the concurrent analysis of more complex and diverse functions by simulation, and automates deployment to the final target. However, there is no indication of whether these tools are well-suited for the design and development of time-sensitive systems. Motivated by these challenges, in this thesis we contribute to the state of the art of real-time control systems towards the adoption of embedded GPUs by providing tools to facilitate the resource provisioning analysis and the integration into the model-based design development cycle. First, we present a methodology and an automated tool to extract the properties of GPU memory allocators. This tool allows the computation of the real amount of memory used by GPU applications, facilitating a correct resource provisioning analysis.
    Then, we present a library which allows the characterization of the use of dynamic memory in GPU applications. We use this library to characterize GPU benchmarks and we identify memory allocation patterns that could be modified to improve performance and memory consumption when targeting embedded GPUs. Based on these results, we present a tool to optimize the use of dynamic memory in legacy GPU applications executed on embedded platforms. This tool allows us to minimize the memory consumption and memory management overhead of GPU applications without rewriting them. Afterwards, we analyze the timing of control algorithms executed in embedded GPUs and we identify techniques to achieve an acceptable real-time behavior. Finally, we evaluate model-based design tools in terms of integration with GPU hardware and GPU code generation, and we propose improvements for the model-based generated GPU code. Then, we present a source-to-source transformation tool to automatically apply the proposed improvements.
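    The gap between requested and reserved memory that motivates the allocator-property extraction above can be illustrated with a toy interposer. This is a hypothetical Python sketch, not the thesis tool: real GPU allocators (e.g. CUDA's) have more complex, partly undocumented granularity and pooling behavior, which is exactly why the properties must be extracted experimentally.

```python
class AllocTracker:
    """Toy interposer that records allocation requests and reports both
    live and peak bytes actually reserved under a rounding granularity."""

    def __init__(self, granularity):
        self.granularity = granularity
        self.live = {}           # handle -> reserved (padded) size
        self.reserved = 0
        self.peak_reserved = 0
        self.next_id = 0

    def malloc(self, size):
        # Round the request up to the allocator's granularity.
        padded = -(-size // self.granularity) * self.granularity
        handle = self.next_id
        self.next_id += 1
        self.live[handle] = padded
        self.reserved += padded
        self.peak_reserved = max(self.peak_reserved, self.reserved)
        return handle

    def free(self, handle):
        self.reserved -= self.live.pop(handle)

tracker = AllocTracker(granularity=512)   # hypothetical 512-byte granularity
a = tracker.malloc(100)    # reserves 512 bytes, not 100
b = tracker.malloc(1000)   # reserves 1024 bytes
tracker.free(a)
```

    Provisioning against the sum of *requested* sizes (1100 bytes here) would under-estimate the true peak footprint (1536 bytes); the thesis tools recover the real figure for actual GPU allocators.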

    Keeping Context In Mind: Automating Mobile App Access Control with User Interface Inspection

    Recent studies observe that the app foreground is the most striking component influencing access control decisions on mobile platforms, as users tend to deny permission requests lacking visible evidence. However, none of the existing permission models provides a systematic approach that can automatically answer the question: is the resource access indicated by the app foreground? In this work, we present the design, implementation, and evaluation of COSMOS, a context-aware mediation system that bridges the semantic gap between foreground interaction and background access in order to protect system integrity and user privacy. Specifically, COSMOS learns from a large set of apps with similar functionalities and user interfaces to construct generic models that detect outliers at runtime. It can be further customized to satisfy specific user privacy preferences by continuously evolving with user decisions. Experiments show that COSMOS achieves both high precision and high recall in detecting malicious requests. We also demonstrate the effectiveness of COSMOS in capturing specific user preferences using decisions collected from 24 users, and illustrate that COSMOS can be easily deployed on smartphones as a real-time guard with very low performance overhead. Comment: Accepted for publication in IEEE INFOCOM'201

    CSI Neural Network: Using Side-channels to Recover Your Artificial Neural Network Information

    Machine learning has become mainstream across industries, and numerous examples have proved its validity for security applications. In this work, we investigate how to reverse engineer a neural network using only power side-channel information. To this end, we consider a multilayer perceptron as the machine learning architecture of choice and assume a non-invasive, eavesdropping attacker capable of measuring only passive side-channel leakages such as power consumption, electromagnetic radiation, and reaction time. We conduct all experiments on real data and common neural network architectures in order to properly assess the applicability and extendability of these attacks. Practical results are shown on an ARM Cortex-M3 microcontroller. Our experiments show that a side-channel attacker is capable of obtaining the following information: the activation functions used in the architecture, the number of layers and the neurons in each layer, the number of output classes, and the weights of the neural network. Thus, the attacker can effectively reverse engineer the network using side-channel information. Next, we show that once the attacker knows the neural network architecture, he or she can also recover the inputs to the network from only a single-shot measurement. Finally, we discuss several mitigations that could thwart such attacks. Comment: 15 pages, 16 figure
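    The weight-recovery step in attacks of this kind typically uses correlation power analysis: for every candidate weight, predict the leakage of the multiplier under a Hamming-weight model and correlate it with the measured traces. The following is a simplified simulation of that idea, not the paper's measurement setup; the secret weight, fixed-point format, trace count, and noise level are all assumptions.

```python
import numpy as np

POPCOUNT = np.array([bin(i).count("1") for i in range(256)])

def hw16(v):
    """Hamming weight of 16-bit values (a common power-leakage model)."""
    v = v.astype(np.uint32)
    return POPCOUNT[v & 0xFF] + POPCOUNT[(v >> 8) & 0xFF]

rng = np.random.default_rng(1)
secret_weight = 23                                # hypothetical 8-bit weight
inputs = rng.integers(0, 1 << 16, size=5000)      # known 16-bit activations
leak = hw16((inputs * secret_weight) & 0xFFFF)    # simulated multiplier leakage
traces = leak + rng.normal(0, 0.5, size=leak.size)  # measurement noise

def corr(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0

# Rank every candidate weight by correlation with the measured traces.
scores = [corr(traces, hw16((inputs * w) & 0xFFFF)) for w in range(256)]
recovered = int(np.argmax(scores))
```

    The correct candidate produces predictions that track the traces most closely, so it maximizes the correlation; against real hardware the same ranking is computed per time sample over measured power traces.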

    Analysis of LIGO data for gravitational waves from binary neutron stars

    We report on a search for gravitational waves from coalescing compact binary systems in the Milky Way and the Magellanic Clouds. The analysis uses data taken by two of the three LIGO interferometers during the first LIGO science run and illustrates a method of setting upper limits on inspiral event rates using interferometer data. The analysis pipeline is described with particular attention to data selection and coincidence between the two interferometers. We establish an observational upper limit of $\mathcal{R} < 1.7 \times 10^{2}$ per year per Milky Way Equivalent Galaxy (MWEG), with 90% confidence, on the coalescence rate of binary systems in which each component has a mass in the range 1--3 $M_\odot$. Comment: 17 pages, 9 figure
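    The counting core of such a rate upper limit is a Poisson bound: find the largest event-count mean consistent, at the chosen confidence level, with the number of candidates observed, then divide by the observation time and the number of galaxy equivalents surveyed. This sketch shows only that bare frequentist counting step; the actual analysis also folds in detection efficiency and background estimates.

```python
import math

def poisson_upper_limit(n_obs, cl=0.90):
    """Frequentist upper limit: the smallest Poisson mean mu for which
    observing <= n_obs events has probability at most 1 - cl."""
    def cdf(mu):
        return sum(math.exp(-mu) * mu ** k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 100.0
    for _ in range(200):                 # bisection on the monotone CDF
        mid = (lo + hi) / 2
        if cdf(mid) > 1 - cl:
            lo = mid                     # CDF too high: need a larger mean
        else:
            hi = mid
    return hi

mu90 = poisson_upper_limit(0)   # zero observed events -> mu90 = ln(10)
# Dividing mu90 by (observation time x number of MWEGs effectively
# surveyed) converts the event-count limit into a rate per year per MWEG.
```

    For zero observed events this gives about 2.3 expected events at 90% confidence, the familiar starting point for rate limits from a null search.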

    The potential of programmable logic in the middle: cache bleaching

    Consolidating hard real-time systems onto modern multi-core Systems-on-Chip (SoC) is an open challenge. The extensive sharing of hardware resources in the memory hierarchy raises important unpredictability concerns. The problem is exacerbated as more computationally demanding workloads are expected to be handled with real-time guarantees in next-generation Cyber-Physical Systems (CPS). A large body of work has approached the problem by proposing novel hardware re-designs and by proposing software-only solutions to mitigate performance interference. Starting from the observation that unpredictability arises from a lack of fine-grained control over the behavior of shared hardware components, we outline a promising new resource management approach. We demonstrate that it is possible to introduce Programmable Logic In-the-Middle (PLIM) between a traditional multi-core processor and main memory. This provides the unique capability of manipulating individual memory transactions. We propose a proof-of-concept system implementation of PLIM modules on a commercial multi-core SoC. The PLIM approach is then leveraged to solve long-standing issues with cache coloring: thanks to PLIM, colored sparse addresses can be re-compacted in main memory. This is the base principle behind the technique we call Cache Bleaching. We evaluate our design on real applications and propose hypervisor-level adaptations to showcase the potential of the PLIM approach. Accepted manuscrip
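    The addressing arithmetic behind cache coloring, and why colored allocations end up sparse in DRAM, can be sketched with toy cache parameters (illustrative values, not the SoC used in the paper):

```python
# Toy cache geometry for illustration (not a specific SoC):
PAGE_SIZE = 4096                   # bytes per page
LINE_SIZE = 64                     # bytes per cache line
NUM_SETS = 2048                    # e.g. a 2 MiB, 16-way set-associative cache

SETS_PER_PAGE = PAGE_SIZE // LINE_SIZE    # sets a single page maps onto: 64
NUM_COLORS = NUM_SETS // SETS_PER_PAGE    # distinct page colors: 32

def page_color(phys_addr):
    """Color = physical page number modulo the number of colors, i.e. the
    address bits that select the cache set but lie above the page offset."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

def colored_to_compact(page_index_within_color, color):
    """With classic coloring, the i-th page of a given color sits at a
    sparse address NUM_COLORS pages apart; a PLIM-style remap can place
    it back at a contiguous ('bleached') DRAM address."""
    sparse = (page_index_within_color * NUM_COLORS + color) * PAGE_SIZE
    compact = page_index_within_color * PAGE_SIZE   # hypothetical remap target
    return sparse, compact
```

    Restricting a core to one color therefore leaves it only 1/NUM_COLORS of physical memory usable contiguously; re-compacting those sparse colored addresses in main memory, as sketched by `colored_to_compact`, is the intuition behind the bleaching technique described above.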