
    Review of selection criteria for sensor and actuator configurations suitable for internal combustion engines

    This literature review considers the problem of finding a suitable configuration of sensors and actuators for the control of an internal combustion engine. It surveys the methods, algorithms, processes, metrics, applications, research groups and patents relevant to this topic. Several formal metrics have been proposed, but their practical use remains limited. Maximal information criteria are theoretically optimal for selecting sensors, but hard to apply to a system as complex and nonlinear as an engine. Thus, we reviewed methods applied to neighboring fields, including nonlinear systems and non-minimum phase systems. Furthermore, the closed-loop nature of control means that information is not the only consideration: speed, stability and robustness also have to be considered. The optimal use of sensor information also requires the use of models, observers, state estimators or virtual sensors, and practical acceptance of these remains limited. Simple control metrics such as the condition number are popular, mostly because they need fewer assumptions than closed-loop metrics, which require a full plant, disturbance and goal model. Overall, no clear consensus can be found on the choice of metrics to define optimal control configurations, with physical measures, linear algebra metrics and modern control metrics all being used. Genetic algorithms and multi-criteria optimisation were identified as the most widely used methods for optimal sensor selection, although the dimensionality and complexity of formulating the problem remain a challenge. This review presents a number of successful approaches for specific application domains, some of which may be applicable to diesel engines and other automotive applications. For a thorough treatment, nonlinear dynamics and uncertainties need to be considered together, which requires sophisticated (non-Gaussian) stochastic models to establish the value of a control architecture.
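The condition number metric mentioned in the abstract can be illustrated with a short sketch; the gain matrix below is hypothetical, not taken from the review:

```python
import numpy as np

# Hypothetical 2x2 steady-state gain matrix of an engine plant:
# rows = outputs (e.g. manifold pressure, EGR rate),
# columns = actuators (e.g. VGT vane position, EGR valve).
G = np.array([[1.0, 0.8],
              [0.9, 1.0]])

# Condition number: ratio of the largest to the smallest singular value.
sigma = np.linalg.svd(G, compute_uv=False)
cond = sigma[0] / sigma[-1]
```

A condition number near 1 indicates well-decoupled actuator/sensor pairings; a large value signals near-collinear actuator effects, i.e. a poorly conditioned configuration for decentralised control.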

    Machine learning for smart building applications: Review and taxonomy

    © 2019 Association for Computing Machinery. The use of machine learning (ML) in smart building applications is reviewed in this article. We split existing solutions into two main classes: occupant-centric versus energy/device-centric. The first class groups solutions that use ML for aspects related to the occupants, including (1) occupancy estimation and identification, (2) activity recognition, and (3) estimating preferences and behavior. The second class groups solutions that use ML to estimate aspects related either to energy or to devices. They are divided into three categories: (1) energy profiling and demand estimation, (2) appliance profiling and fault detection, and (3) inference on sensors. Solutions in each category are presented, discussed, and compared; open perspectives and research trends are discussed as well. Compared to related state-of-the-art survey papers, the contribution herein is a comprehensive and holistic review from the ML perspective rather than of the architectural and technical aspects of existing building management systems. This is achieved by considering all types of ML tools, buildings, and several categories of applications, and by structuring the taxonomy accordingly. The article ends with a summary discussion of the presented works, with a focus on lessons learned, challenges, and open and future directions of research in this field.

    A Wireless Sensor Network Based Personnel Positioning Scheme in Coal Mines with Blind Areas

    This paper proposes a novel personnel positioning scheme for a tunnel network with blind areas which, compared with most existing schemes, offers both low cost and high precision. Based on the data models of tunnel networks, measurement networks and mobile miners, the global positioning method is divided into four steps: (1) calculate the real-time personnel location in local areas using a location engine, and send it to the upper computer through the gateway; (2) correct any localization errors resulting from underground tunnel environmental interference; (3) determine the global three-dimensional position by coordinate transformation; (4) estimate the personnel locations in the blind areas. A prototype system constructed to verify the positioning performance shows that the proposed positioning system has good reliability, scalability, and positioning performance. In particular, the static localization error of the positioning system is less than 2.4 m in the underground tunnel environment and the moving estimation error is below 4.5 m in the corridor environment. The system was operated continuously for over three months without any failures.
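Step (3), the coordinate transformation from a position along a tunnel segment to global three-dimensional coordinates, can be sketched as follows; the segment geometry record and its fields are hypothetical, since the abstract does not give the exact transformation:

```python
import math

def local_to_global(s, offset, tunnel):
    """Map a 1-D position `s` along a tunnel axis, plus a lateral
    `offset`, into global 3-D coordinates. `tunnel` is a hypothetical
    record of the segment geometry: global origin (x, y, z),
    horizontal heading and inclination, both in radians."""
    x0, y0, z0 = tunnel["origin"]
    heading, incline = tunnel["heading"], tunnel["incline"]
    horiz = s * math.cos(incline)  # horizontal projection of travel
    x = x0 + horiz * math.cos(heading) - offset * math.sin(heading)
    y = y0 + horiz * math.sin(heading) + offset * math.cos(heading)
    z = z0 + s * math.sin(incline)
    return (x, y, z)
```

In a real deployment the segment geometry would come from the tunnel network data model, and the input `s` from the location engine after error correction (step 2).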

    Smart Wireless Sensor Networks

    The recent development of communication and sensor technology has resulted in the growth of a new, attractive and challenging area: wireless sensor networks (WSNs). A wireless sensor network, which consists of a large number of sensor nodes, is deployed in environmental fields to serve various applications. Equipped with wireless communication and intelligent computation capabilities, these nodes become smart sensors which not only perceive ambient physical parameters but are also able to process information, cooperate with each other and self-organize into the network. These new features help the sensor nodes, as well as the network, to operate more efficiently in terms of both data acquisition and energy consumption. The special purposes of the applications require the design and operation of WSNs to differ from conventional networks such as the Internet. The network design must take into account the objectives of specific applications. The nature of the deployment environment must be considered. The limited resources of sensor nodes, such as memory, computational ability, communication bandwidth and energy source, are the main challenges in network design. A smart wireless sensor network must be able to deal with these constraints as well as to guarantee the connectivity, coverage, reliability and security of the network's operation for a maximized lifetime. This book discusses various aspects of designing such smart wireless sensor networks. Main topics include: design methodologies, network protocols and algorithms, quality of service management, coverage optimization, time synchronization and security techniques for sensor networks.

    Epilepsy seizure prediction on EEG using common spatial pattern and convolutional neural network

    Epilepsy seizure prediction paves the way for timely warnings that allow patients to take more active and effective intervention measures. Compared to seizure detection, which only identifies the inter-ictal state and the ictal state, far less research has been conducted on seizure prediction, because the high similarity between the pre-ictal state and the inter-ictal state makes them challenging to distinguish. In this paper, a novel solution for seizure prediction is proposed using the common spatial pattern (CSP) and a convolutional neural network (CNN). Firstly, artificial pre-ictal EEG signals are generated from the original ones by combining segmented pre-ictal signals, to solve the trial imbalance problem between the two states. Secondly, a feature extractor employing wavelet packet decomposition and CSP is designed to extract the distinguishing features in both the time domain and the frequency domain; it improves overall accuracy while reducing the training time. Finally, a shallow CNN is applied to discriminate between the pre-ictal state and the inter-ictal state. The proposed solution is evaluated on data from 23 patients in the Boston Children's Hospital-MIT scalp EEG dataset using leave-one-out cross-validation, and it achieves a sensitivity of 92.2% and a false prediction rate of 0.12/h. Experimental results demonstrate that the proposed approach outperforms most state-of-the-art methods.
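The CSP stage of such a pipeline can be sketched as follows. This is an illustrative implementation of the standard CSP formulation (average class covariances followed by a generalised eigendecomposition), not the authors' code, and the wavelet packet decomposition step is omitted:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=4):
    """Common Spatial Pattern filters for two classes of EEG trials
    (e.g. pre-ictal vs inter-ictal). trials_*: arrays of shape
    (n_trials, n_channels, n_samples). Returns a
    (n_filters, n_channels) projection matrix whose rows maximise
    variance for one class while minimising it for the other."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalised eigenvalue problem: ca w = lambda (ca + cb) w,
    # eigenvalues returned in ascending order.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    # Take filters from both ends of the eigenvalue spectrum.
    half = n_filters // 2
    picks = np.concatenate([order[:half], order[-half:]])
    return vecs[:, picks].T
```

The log-variance of each filtered trial is the usual feature fed to the downstream classifier (here, the shallow CNN).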

    Spatio-temporal traffic anomaly detection for urban networks

    Urban road networks are often affected by disruptions such as accidents and roadworks, giving rise to congestion and delays, which can, in turn, create a wide range of negative impacts on the economy, environment, safety and security. Accurate detection of the onset of traffic anomalies, specifically Recurrent Congestion (RC) and Non-recurrent Congestion (NRC), in traffic networks is an important ITS function that facilitates proactive intervention measures to reduce the severity of congestion. A substantial body of literature is dedicated to models with varying levels of complexity that attempt to identify such anomalies. Given the complexity of the problem, however, comparatively little effort has been dedicated to the development of methods that detect traffic anomalies using spatio-temporal features. Driven both by recent advances in deep learning techniques and by the development of Traffic Incident Management Systems (TIMS), the aim of this research is to develop novel traffic anomaly detection models that can incorporate both spatial and temporal traffic information to detect traffic anomalies at a network level. This thesis first reviews the state of the art in traffic anomaly detection techniques, including the existing methods and emerging machine learning and deep learning methods, before identifying the gaps in the current understanding of traffic anomalies and their detection. One of the problems in adapting deep learning models to traffic anomaly detection is the translation of time series traffic data from multiple locations into the format necessary for the deep learning model to learn the spatial and temporal features effectively.
To address this challenging problem and build a systematic traffic anomaly detection method at a network level, this thesis proposes a methodological framework consisting of (a) the translation layer (which is designed to translate the time series traffic data from multiple locations over the road network into a desired format with spatial and temporal features), (b) detection methods and (c) localisation. This methodological framework is subsequently tested for early RC detection and NRC detection. Three translation layers, namely the connectivity matrix, geographical grid translation and spatio-temporal translation, are presented and evaluated for both RC and NRC detection. The early RC detection approach is a deep learning based method that combines Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks. The NRC detection, on the other hand, involves only the application of the CNN. The performance of the proposed approach is compared against other conventional congestion detection methods, using a comprehensive evaluation framework that includes metrics such as detection rates and false positive rates, and sensitivity analyses of time windows as well as prediction horizons. The conventional congestion detection methods used for the comparison include the Multilayer Perceptron, Random Forest and Gradient Boost Classifier, all of which are commonly used in the literature. Real-world traffic data from the City of Bath are used for the comparative analysis of RC detection, while traffic data in conjunction with incident data extracted from Central London are used for NRC detection. The results show that while the connectivity matrix may be capable of extracting features of a small network, the increased sparsity of the matrix in a large network reduces its effectiveness in feature learning compared to geographical grid translation.
The results also indicate that the proposed deep learning method demonstrates superior detection accuracy compared to alternative methods and that it can detect recurrent congestion as early as one hour ahead with acceptable accuracy. The proposed method is capable of being implemented within a real-world ITS system making use of traffic sensor data, thereby providing a practically useful tool for road network managers to manage traffic proactively. In addition, the results demonstrate that a deep learning-based approach may improve the accuracy of incident detection and locate traffic anomalies precisely, especially in a large urban network. Finally, the framework is further tested for robustness in terms of network topology, sensor faults and missing data. The robustness analysis demonstrates that the proposed traffic anomaly detection approaches are transferable to different sizes of road networks, and that they are robust in the presence of sensor faults and missing data.
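The geographical grid translation layer can be sketched as follows; this is an illustrative rasterisation of sensor readings with a hypothetical grid size and bounds, since the abstract does not give the layer's exact design:

```python
import numpy as np

def grid_translate(readings, bounds, shape=(8, 8)):
    """Place each sensor's reading (e.g. speed) into a 2-D grid cell
    according to its coordinates, so a CNN can learn spatial features.
    `readings` is a list of (x, y, value) tuples; `bounds` is
    (xmin, xmax, ymin, ymax). Cells holding several sensors keep
    the mean value; empty cells stay zero."""
    xmin, xmax, ymin, ymax = bounds
    grid = np.zeros(shape)
    counts = np.zeros(shape)
    for x, y, v in readings:
        i = min(int((y - ymin) / (ymax - ymin) * shape[0]), shape[0] - 1)
        j = min(int((x - xmin) / (xmax - xmin) * shape[1]), shape[1] - 1)
        grid[i, j] += v
        counts[i, j] += 1
    return np.divide(grid, counts, out=np.zeros(shape), where=counts > 0)
```

Stacking such grids over consecutive time steps yields a spatio-temporal tensor suitable as input to a CNN or CNN-LSTM detector.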

    High-Performance Modelling and Simulation for Big Data Applications

    This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, modelling and simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential to predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford a better discernment of the domain at hand, their representation becomes increasingly demanding of computational and data resources. High Performance Computing, on the other hand, typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. A seamless interaction of High Performance Computing with modelling and simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest for these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.

    Energy aware performance evaluation of WSNs

    Distributed sensor networks have been discussed for more than 30 years, but the vision of Wireless Sensor Networks (WSNs) has been brought into reality only by rapid advancements in the areas of sensor design, information technologies, and wireless networks, which have paved the way for the proliferation of WSNs. The unique characteristics of sensor networks introduce new challenges, amongst which prolonging the sensor lifetime is the most important. Energy-efficient solutions are required for each aspect of WSN design to deliver the potential advantages of WSNs; hence, in both existing and future WSN solutions, energy efficiency is a grand challenge. The main contribution of this thesis is an approach that considers the collaborative nature of WSNs and their correlation characteristics, providing a tool which treats issues from the physical layer to the application layer together, as entities of a framework that facilitates the performance evaluation of WSNs. Unlike existing evaluation tools, the simulation approach adopted provides a clear separation of concerns amongst the software architecture of the applications, the hardware configuration and the WSN deployment. The reuse of models across projects and organizations is also promoted, while realistic WSN lifetime estimations and performance evaluations become possible in attempts to improve performance and maximize the lifetime of the network. In this study, simulations are carried out with careful assumptions for the various layers, taking into account the real-time characteristics of WSNs. The sensitivity of WSN systems is mainly due to their fragile nature where energy consumption is concerned. The case studies presented demonstrate the importance of the various parameters considered in this study. Simulation-based studies are presented, taking into account realistic settings for each layer of the protocol stack. The physical environment is considered as well.
The performance of the layered protocol stack in realistic settings reveals several important interactions between different layers. These interactions are especially important for the design of WSNs in terms of maximizing the lifetime of the network.
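A first-order version of the lifetime estimation that such a simulation framework refines can be sketched as follows; all energy figures are hypothetical, and the thesis models each protocol layer in far more detail:

```python
def estimate_lifetime_hours(battery_joules, duty_cycle_s, costs):
    """First-order node lifetime estimate. `costs` maps an activity
    name to its energy per duty cycle in joules, e.g. sensing,
    processing, radio transmit/receive, sleep. The node dies when
    the battery budget is exhausted."""
    per_cycle = sum(costs.values())          # energy per duty cycle
    cycles = battery_joules / per_cycle      # number of cycles until depletion
    return cycles * duty_cycle_s / 3600.0    # lifetime in hours

# Hypothetical per-cycle costs for a node waking once a minute:
costs = {"sense": 0.01, "cpu": 0.005, "radio": 0.03, "sleep": 0.005}
hours = estimate_lifetime_hours(10_000.0, 60.0, costs)
```

Such a back-of-the-envelope figure shows why the radio usually dominates the budget, and why cross-layer interactions (e.g. MAC duty cycling versus routing overhead) matter for lifetime.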

    Application of Hierarchical Temporal Memory to Anomaly Detection of Vital Signs for Ambient Assisted Living

    This thesis presents the development of a framework for the anomaly detection of vital signs in an Ambient Assisted Living (AAL) health monitoring scenario. It is driven by the spatiotemporal reasoning over vital signs that Cortical Learning Algorithms (CLA), based on Hierarchical Temporal Memory (HTM) theory, undertake in an AAL health monitoring scenario to detect anomalous data points preceding cardiac arrest. The thesis begins with a literature review of the existing Ambient Intelligence (AmI) paradigm, AAL technologies and anomaly detection algorithms used in health monitoring scenarios. The research revealed the significance of temporal and spatial reasoning in vital signs monitoring, as the spatiotemporal patterns of vital signs provide a basis to detect irregularities in the health status of elderly people. HTM theory is yet to be adequately deployed in an AAL health monitoring scenario; hence HTM theory and the network and core operations of the CLA are explored. Because the standard implementation of HTM theory comprises a single-level hierarchy, multiple vital signs, and specifically the correlations between them, are not sufficiently considered. This insufficiency is of particular significance considering that vital signs are correlated in time and space and are used in health monitoring applications for diagnosis and prognosis tasks. This research proposes a novel framework consisting of multi-level HTM networks. The lower level consists of four models allocated to the four vital signs, Systolic Blood Pressure (SBP), Diastolic Blood Pressure (DBP), Heart Rate (HR) and peripheral capillary oxygen saturation (SpO2), in order to learn the spatiotemporal patterns of each vital sign. Additionally, a higher level is introduced to learn the spatiotemporal patterns of the anomalous data points detected from the four vital signs.
The proposed hierarchical organisation improves the model's performance by using a semantically improved representation of the sensed data, because patterns learned at each level of the hierarchy are reused when combined in novel ways at higher levels. To investigate and evaluate the performance of the proposed framework, several data selection techniques are studied and, accordingly, the records of a total of 247 elderly patients are extracted from the MIMIC-III clinical database. The performance of the proposed framework is evaluated and compared against several state-of-the-art anomaly detection algorithms using both online and traditional metrics. The proposed framework achieved an 83% NAB score, which outperforms the HTM and k-NN algorithms by 15%, the HBOS and INFLO SVD by 16% and the k-NN PCA by 21%, while the SVM scored 34%. The results show that multiple HTM networks can achieve better performance when dealing with multi-dimensional data, i.e. data collected from more than one source/sensor.
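The two-level arrangement can be illustrated with a toy stand-in; this is not an HTM implementation, and the thresholds and voting rule are hypothetical:

```python
def two_level_flags(scores, lower_thresh=0.7, upper_min=2):
    """Toy stand-in for a two-level detector: each lower-level model
    emits an anomaly score in [0, 1] for one vital sign (SBP, DBP,
    HR, SpO2); the higher level raises an alarm only when several
    vital signs are anomalous at the same time step."""
    alarms = []
    for step in scores:  # step: dict mapping vital sign -> score
        flagged = sum(1 for s in step.values() if s >= lower_thresh)
        alarms.append(flagged >= upper_min)
    return alarms
```

In the thesis the higher level is itself an HTM network learning the spatiotemporal pattern of lower-level anomaly indications, rather than a fixed voting rule.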
