
    Structure-Aware Reliability Analysis of Large-Scale Linear Sensor Systems

    A linear sensor system is a system in which the sensor measurements have a linear relationship to the source variables that cannot be measured directly. Linear sensor systems are widely deployed in advanced manufacturing processes, wireless transportation systems, electrical grid systems, and oil and gas pipeline systems to monitor and control various physical phenomena critical to the smooth functioning of such systems. The source variables, which capture these complex physical phenomena, are estimated from the sensor measurements. Two critical parameters to consider when modeling any linear sensor system are the degree of redundancy and the reliability. The degree of redundancy is the minimum number of sensor failures that a system can withstand without compromising the identifiability of any source variable. The reliability of a sensor system is a probabilistic evaluation of its ability to tolerate sensor failures. Unfortunately, existing approaches to computing the degree of redundancy and estimating the reliability are limited in scope because they cannot solve problems at large scale. In this research, we establish a new knowledge base for computing the degree of redundancy and estimating the reliability of large-scale linear sensor systems. We first introduce a heuristic convex optimization algorithm that uses techniques from compressed sensing to find highly reliable approximate values for the degree of redundancy. Because linear sensor systems deployed in practical applications are often distributed, many of these systems embed certain structures. In our second approach, we study these structural properties in detail using the matroid theory concepts of connectivity and duality, and propose decomposition theorems that break the degree-of-redundancy problem into subproblems over smaller subsystems. We solve these subproblems using mixed integer programming to obtain the degree of redundancy of the overall system. We further extend these decomposition theorems to divide the reliability evaluation problem into smaller subproblems. Finally, we estimate the reliability of the linear sensor system by solving these subproblems with mixed integer programming embedded within a recursive variance reduction framework, a technique commonly used in the network reliability literature. We implement and test the developed algorithms on a wide range of standard test instances that simulate real-life applications of linear sensor systems. Our computational studies show that the proposed algorithms are significantly faster than existing ones, and the variance of our reliability estimate is significantly lower than that of previous estimates.
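    As a concrete illustration of the degree-of-redundancy definition above, small systems can be checked by exhaustive enumeration. The sketch below is a brute-force check of the definition, not the heuristic or decomposition algorithms developed in this research (which exist precisely because enumeration does not scale); the matrix A is a hypothetical 4-sensor, 2-variable example.

```python
import itertools
import numpy as np

def degree_of_redundancy(A):
    """Largest k such that A keeps full column rank after removing ANY k rows,
    i.e. the number of sensor failures the system is guaranteed to tolerate."""
    m, n = A.shape
    for k in range(m + 1):
        for removed in itertools.combinations(range(m), k):
            keep = [i for i in range(m) if i not in removed]
            if np.linalg.matrix_rank(A[keep, :]) < n:
                return k - 1
    return m  # only reachable when n == 0

# hypothetical system: 4 sensors observing 2 source variables
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
print(degree_of_redundancy(A))  # -> 2: any two sensors may fail
```

Any two rows of this A are linearly independent, so the system tolerates any two sensor failures but not three.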

    Sensor Placement Algorithms for Process Efficiency Maximization

    Although the sensor placement problem has been studied for process plants, that work has focused on minimizing the number of sensors, minimizing the cost of the sensor network, maximizing the reliability, or minimizing the estimation errors. No work has been reported in the existing literature on a sensor network design (SND) algorithm for maximizing the efficiency of the process. The SND problem for maximizing efficiency requires consideration of the closed-loop system, unlike the open-loop systems considered in previous works. In addition, work on the SND problem for a large fossil energy plant, such as an integrated gasification combined cycle (IGCC) power plant with CO2 capture, is rare. The objective of this research is to develop an SND algorithm for maximizing plant performance, using criteria such as efficiency, in the case of an estimator-based control system. The developed algorithm will be particularly useful for sensor placement in IGCC plants at the grassroots level, where the number, type, and location of sensors are yet to be identified. The same algorithm can be further enhanced for use in retrofits, where the objectives could be to upgrade (add more sensors) and relocate existing sensors. The algorithms are developed by considering the presence of an optimal Kalman Filter (KF) that estimates unmeasured variables from noisy measurements, given the process model and a set of measured variables. The designed algorithms determine the location and type of the sensors under constraints on budget and estimation accuracy. In this work, three SND algorithms are developed: (a) a steady-state SND algorithm, (b) a dynamic model-based SND algorithm, and (c) a nonlinear model-based SND algorithm. These algorithms are applied to an acid gas removal (AGR) unit of an IGCC power plant with CO2 capture. The AGR process involves extensive heat and mass integration and is therefore well suited for studying the proposed algorithms in the presence of complex interactions between process variables.
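    A minimal sketch of the kind of evaluation that underlies estimator-based sensor network design: for each candidate sensor subset, compute the steady-state Kalman filter error covariance and keep the cheapest subset within budget. This assumes a linear system, a steady-state filter obtained by iterating the Riccati recursion, and exhaustive search; the dissertation's algorithms and the AGR model are far more elaborate, and all matrices below are hypothetical.

```python
import itertools
import numpy as np

def steady_state_cov(A, C, Q, R, iters=500):
    """Approximate steady-state error covariance of a Kalman filter by
    iterating the discrete-time Riccati recursion."""
    P, n = Q.copy(), A.shape[0]
    for _ in range(iters):
        Pp = A @ P @ A.T + Q                               # time update
        K = Pp @ C.T @ np.linalg.inv(C @ Pp @ C.T + R)     # Kalman gain
        P = (np.eye(n) - K @ C) @ Pp                       # measurement update
    return P

def best_sensor_set(A, C_all, Q, r, budget):
    """Exhaustively pick at most `budget` sensors (rows of C_all) that
    minimize the trace of the steady-state estimation error covariance."""
    best, best_cost = None, np.inf
    for k in range(1, budget + 1):
        for subset in itertools.combinations(range(C_all.shape[0]), k):
            C = C_all[list(subset), :]
            cost = np.trace(steady_state_cov(A, C, Q, r * np.eye(k)))
            if cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost

A = np.array([[0.9, 0.1],
              [0.0, 0.8]])          # hypothetical stable process model
Q = 0.01 * np.eye(2)               # process noise covariance
C_all = np.array([[1.0, 0.0],      # candidate sensor 1: measures x1
                  [0.0, 1.0],      # candidate sensor 2: measures x2
                  [1.0, 1.0]])     # candidate sensor 3: measures x1 + x2
subset, cost = best_sensor_set(A, C_all, Q, r=0.1, budget=2)
print(subset, round(cost, 4))
```

Exhaustive search is only viable for a handful of candidates, which is why practical SND algorithms rely on structure and optimization formulations instead.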

    Doctor of Philosophy

    A safe and secure transportation system is critical to protecting those who use it. Safety is increasingly demanded of the transportation system, and transportation facilities and services will need to adapt to provide it. This dissertation provides innovative methodologies to identify current shortcomings and theoretical frameworks to enhance the safety and security of the transportation network. It is designed to provide multilevel enhanced safety and security within the transportation network through methodologies to identify, monitor, and control the major hazards associated with it. The risks specifically addressed are: (1) enhancing nuclear materials sensor networks to better deter and interdict smugglers; (2) using game theory as an interdiction model to design better sensor networks and forensically track smugglers; (3) incorporating safety into regional transportation planning to give decision-makers a basis for choosing among safety design alternatives; and (4) using a simplified car-following model that incorporates errors to predict the situation-dependent safety effects of distracted driving in an ITS infrastructure and deploy life-saving countermeasures.
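    The idea behind risk (4) can be sketched with a toy stimulus-response car-following simulation in which distraction is modeled purely as a longer reaction delay. All parameter values below are hypothetical illustrations, not taken from the dissertation.

```python
def simulate_following(reaction_delay_steps, steps=600, dt=0.1):
    """Simple stimulus-response car-following: the follower reacts to the
    leader's speed change only after a reaction delay (distraction = longer
    delay). Returns the minimum gap observed; <= 0 indicates a collision."""
    lead_v, foll_v = 20.0, 20.0  # speeds in m/s
    gap = 30.0                   # initial spacing in m
    sensitivity = 0.8            # follower's response gain, 1/s
    history = [0.0] * reaction_delay_steps  # buffer of delayed stimuli
    min_gap = gap
    for t in range(steps):
        lead_v = 10.0 if t * dt > 5.0 else 20.0  # leader brakes at t = 5 s
        history.append(lead_v - foll_v)          # current stimulus enters
        stimulus = history.pop(0)                # delayed stimulus acts now
        foll_v = max(0.0, foll_v + sensitivity * stimulus * dt)
        gap += (lead_v - foll_v) * dt
        min_gap = min(min_gap, gap)
    return min_gap

attentive = simulate_following(reaction_delay_steps=10)   # 1.0 s reaction
distracted = simulate_following(reaction_delay_steps=30)  # 3.0 s reaction
print(round(attentive, 1), round(distracted, 1))
```

With these numbers the attentive driver keeps a positive gap while the distracted one does not, illustrating how a reaction-delay error term translates directly into a predicted safety outcome.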

    Doctor of Philosophy

    This dissertation aims to develop an innovative and improved paradigm for real-time large-scale traffic system estimation and mobility optimization. To fully utilize heterogeneous data sources in a complex spatial environment, it proposes an integrated and unified estimation-optimization framework capable of interpreting different types of traffic measurements into various decision-making processes. With a particular emphasis on the end-to-end travel time prediction problem, this dissertation proposes an information-theoretic sensor location model that aims to maximize information gains from a set of point, point-to-point, and probe sensors in a traffic network. After thoroughly examining a number of possible measures of information gain, this dissertation selects a path travel time prediction uncertainty criterion to construct a joint sensor location and travel time estimation/prediction framework. To better measure the quality of service of transportation systems, this dissertation investigates path travel time reliability from two perspectives: variability and robustness. Based on calibrated travel disutility functions, the path travel time variability in this research is represented by its standard deviation in addition to the mean travel time. To handle the nonlinear and nonadditive cost functions introduced by the quadratic form of the standard deviation term, a novel Lagrangian substitution approach is introduced to estimate the lower bound of the most reliable path solution by solving a sequence of standard shortest path problems. To accommodate asymmetric and heavy-tailed travel time distributions, this dissertation proposes Lagrangian relaxation based iterative search algorithms for finding the absolute and percentile robust shortest paths. Moreover, this research develops a sampling-based method to dynamically construct a proxy objective function from travel time observations over multiple days.
Comprehensive numerical experiments with real-world travel time measurements show that 10-20 iterations of standard shortest path algorithms for the reformulated models can achieve a small relative duality gap of about 2-6% for both reliability measure models. This broadly defined research addresses a number of theoretically challenging and practically important issues in building next-generation Advanced Traveler Information Systems, and is expected to offer a rich foundation for the modeling and algorithmic development of sensor network design, traffic forecasting, and personalized navigation.
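    The substitution idea, replacing the nonadditive mean-plus-standard-deviation objective with an additive surrogate (mean plus a multiplier times variance) so that ordinary shortest path algorithms apply, can be sketched as follows. This is a simplified illustration on a toy network with a hand-picked multiplier sweep, not the dissertation's implementation or its bounding argument.

```python
import heapq
import math

def dijkstra(adj, src, dst, weight):
    """Standard shortest path; adj[u] = list of (v, edge_data) pairs."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, e in adj.get(u, []):
            nd = d + weight(e)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

def reliable_path(adj, src, dst, beta, lambdas):
    """Heuristic for  min  mean + beta * std  via standard shortest paths on
    the additive surrogate  mean + lam * variance  for several values of lam."""
    lookup = {(u, v): e for u in adj for v, e in adj[u]}
    def true_objective(path):
        mean = var = 0.0
        for u, v in zip(path, path[1:]):
            m, s2 = lookup[(u, v)]
            mean, var = mean + m, var + s2
        return mean + beta * math.sqrt(var)
    best, best_obj = None, math.inf
    for lam in lambdas:
        p = dijkstra(adj, src, dst, lambda e: e[0] + lam * e[1])
        obj = true_objective(p)
        if obj < best_obj:
            best, best_obj = p, obj
    return best, best_obj

# edge data: (mean travel time, travel time variance) -- hypothetical values
adj = {
    'A': [('B', (10.0, 1.0)), ('C', (8.0, 25.0))],
    'B': [('D', (10.0, 1.0))],
    'C': [('D', (8.0, 25.0))],
}
path, obj = reliable_path(adj, 'A', 'D', beta=2.0, lambdas=[0.0, 0.1, 0.5, 1.0])
print(path, round(obj, 2))  # -> ['A', 'B', 'D'] 22.83
```

The mean-fastest route A-C-D loses to the slower but far less variable route A-B-D once the standard deviation is penalized, which is exactly the trade-off a reliability-aware traveler information system must capture.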

    MULTI-SCALE SCHEDULING TECHNIQUES FOR SIGNAL PROCESSING SYSTEMS

    A variety of hardware platforms for signal processing have emerged, from distributed systems such as Wireless Sensor Networks (WSNs) to parallel systems such as multicore programmable digital signal processors (PDSPs), multicore general purpose processors (GPPs), and graphics processing units (GPUs), to heterogeneous combinations of parallel and distributed devices. When a signal processing application is implemented on one of these platforms, the performance depends critically on the scheduling techniques, which in general allocate computation and communication resources among competing processing tasks in the application to optimize performance metrics such as power consumption, throughput, latency, and accuracy. Signal processing systems implemented on such platforms typically involve multiple levels of processing and communication hierarchy: network-level, chip-level, and processor-level in a structural context, and application-level, subsystem-level, component-level, and operation- or instruction-level in a behavioral context. In this thesis, we target scheduling issues that carefully address and integrate scheduling considerations at different levels of these structural and behavioral hierarchies. The core contributions of the thesis include the following. Considering both the network level and the chip level, we have proposed an adaptive scheduling algorithm for WSNs designed for event detection. Our algorithm exploits discrepancies among the detection accuracies of individual sensors, derived from a collaborative training process, to allow each sensor to operate more energy efficiently while the network satisfies given constraints on overall detection accuracy. Considering the chip level and the processor level, we incorporated both temperature and process variations to develop new scheduling methods for throughput maximization on multicore processors.
In particular, we studied how to process a large number of threads with high speed and without violating a given maximum temperature constraint. We targeted our methods to multicore processors in which the cores may operate at different frequencies and different levels of leakage. We develop speed selection and thread assignment schedulers based on the notion of a core's steady state temperature. Considering the application-level, component-level and operation-level, we developed a new dataflow based design flow within the targeted dataflow interchange format (TDIF) design tool. Our new multiprocessor system-on-chip (MPSoC)-oriented design flow, called TDIF-PPG, is geared towards analysis and mapping of embedded DSP applications on MPSoCs. An important feature of TDIF-PPG is its capability to integrate graph level parallelism and actor level parallelism into the application mapping process. Here, graph level parallelism is exposed by the dataflow graph application representation in TDIF, and actor level parallelism is modeled by a novel model for multiprocessor dataflow graph implementation that we call the Parallel Processing Group (PPG) model. Building on the contribution above, we formulated a new type of parallel task scheduling problem called Parallel Actor Scheduling (PAS) for chip-level MPSoC mapping of DSP systems that are represented as synchronous dataflow (SDF) graphs. In contrast to traditional SDF-based scheduling techniques, which focus on exploiting graph level (inter-actor) parallelism, the PAS problem targets the integrated exploitation of both intra- and inter-actor parallelism for platforms in which individual actors can be parallelized across multiple processing units. We address a special case of the PAS problem in which all of the actors in the DSP application or subsystem being optimized can be parallelized. 
For this special case, we develop and experimentally evaluate a two-phase scheduling framework with three workflows: particle swarm optimization with a mixed integer programming formulation, particle swarm optimization with a simulated annealing engine, and particle swarm optimization with a fast heuristic based on list scheduling. We then extend our scheduling framework to support the general PAS problem, in which not all actors can be parallelized.
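    As a rough illustration of the list scheduling idea used by the third workflow, a basic heuristic for precedence-constrained tasks on homogeneous cores might look like the sketch below. This is a generic textbook list scheduler, not the PAS framework itself; task names, durations, and the longest-task-first priority rule are hypothetical choices.

```python
def list_schedule(tasks, deps, num_cores):
    """Greedy list scheduling: visit tasks in a precedence-respecting order,
    placing each on the core where it can finish earliest.
    tasks: {name: duration}; deps: {name: set of predecessor names}."""
    # topological ordering (Kahn's algorithm), longest ready task first
    indeg = {t: len(deps.get(t, ())) for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]
    order = []
    while ready:
        ready.sort(key=lambda t: -tasks[t])
        t = ready.pop(0)
        order.append(t)
        for s in tasks:
            if t in deps.get(s, ()):
                indeg[s] -= 1
                if indeg[s] == 0:
                    ready.append(s)
    core_free = [0.0] * num_cores
    finish, assignment = {}, {}
    for t in order:
        earliest = max([finish[p] for p in deps.get(t, ())], default=0.0)
        c = min(range(num_cores), key=lambda i: max(core_free[i], earliest))
        start = max(core_free[c], earliest)
        finish[t] = start + tasks[t]
        core_free[c] = finish[t]
        assignment[t] = c
    return assignment, max(finish.values())

tasks = {'a': 2.0, 'b': 3.0, 'c': 2.0, 'd': 1.0}
deps = {'c': {'a'}, 'd': {'a', 'b'}}
assignment, makespan = list_schedule(tasks, deps, num_cores=2)
print(assignment, makespan)  # makespan 4.0 on two cores
```

Such heuristics are fast enough to sit inside an outer search loop (here, particle swarm optimization), which is the design point the two-phase framework exploits.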

    Euclidean distance geometry and applications

    Euclidean distance geometry is the study of Euclidean geometry based on the concept of distance. It is useful in several applications where the input data consist of an incomplete set of distances and the output is a set of points in Euclidean space realizing the given distances. We survey some of the theory of Euclidean distance geometry and some of its most important applications: molecular conformation, localization of sensor networks, and statics.
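    The easiest instance of the realization problem is the one with complete, exact distance data, which classical multidimensional scaling solves in closed form: double-center the squared distance matrix to recover a Gram matrix, then factor it. The applications surveyed involve the much harder incomplete-distance case; this is just the textbook baseline.

```python
import numpy as np

def realize_from_distances(D, dim):
    """Classical MDS: recover a point configuration (up to rigid motion)
    from a complete matrix of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    G = -0.5 * J @ (D ** 2) @ J           # Gram matrix of centered points
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]       # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# unit square: build its distance matrix, then realize it again
X = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
Y = realize_from_distances(D, dim=2)
D2 = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
print(np.allclose(D, D2))  # True: all pairwise distances are reproduced
```

The recovered points differ from the originals by a rotation, reflection, or translation, which is the inherent ambiguity of any distance-only realization.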

    Soft Sensors for Process Monitoring of Complex Processes

    Soft sensors are an essential component of process systems engineering schemes. While soft sensor design research is important, investigation into the relationships between soft sensors and other areas of advanced monitoring and control is crucial as well. This dissertation presents two new techniques that enhance the performance of fault detection and sensor network design by integration with soft sensor technology. In addition, a chapter is devoted to the proper implementation of one of the most often used soft sensors. The performance advantages of these techniques are illustrated with several case studies. First, a new approach for fault detection that involves soft sensors for process monitoring is developed. The methodology presented here deals directly with the state estimates that need to be monitored. The advantage of such an approach is that the nonlinear effect of abnormal process conditions on the state variables can be directly observed. The presented technique provides a general framework for using soft sensor design and computing the statistics that represent normal operating conditions. Second, a method is described for determining the optimal placement of multiple sensors for processes described by a class of nonlinear dynamic systems. This approach is based upon maximizing a criterion, namely the determinant, applied to the empirical observability gramian in order to optimize certain properties of the process state estimates. The determinant directly accounts for redundancy of information; however, the resulting optimization problem is nontrivial to solve, as it is a mixed integer nonlinear programming problem. This work also presents a decomposition of the optimization problem such that the formulated sensor placement problem can be solved quickly and accurately on a desktop PC.
Many comparative studies, often based on simulation results, between the Extended Kalman Filter (EKF) and other estimation methodologies such as Moving Horizon Estimation or the Unscented Kalman Filter have been published over the last few years. However, the results returned by the EKF are affected by the algorithm used for its implementation, and some implementations may lead to inaccurate results. To address this point, this work provides a comparison of several different algorithms for implementing the EKF.
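    The determinant criterion on the observability gramian can be illustrated with a small linear-system sketch that uses greedy selection in place of the mixed integer programming decomposition described above. The matrices and the finite-horizon gramian below are hypothetical simplifications; the dissertation works with the empirical gramian of a nonlinear model.

```python
import numpy as np

def obs_gramian(A, C, N=20):
    """Finite-horizon observability gramian  sum_k (A^k)^T C^T C A^k."""
    n = A.shape[0]
    W, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(N):
        W += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return W

def greedy_placement(A, C_all, budget, eps=1e-9):
    """Greedily add the candidate sensor (row of C_all) that most increases
    the log-determinant of the observability gramian."""
    n, chosen = A.shape[0], []
    for _ in range(budget):
        best, best_val = None, -np.inf
        for i in range(C_all.shape[0]):
            if i in chosen:
                continue
            W = obs_gramian(A, C_all[chosen + [i], :])
            _, logdet = np.linalg.slogdet(W + eps * np.eye(n))
            if logdet > best_val:
                best, best_val = i, logdet
        chosen.append(best)
    return chosen

A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
C_all = np.array([[1.0, 0.0],    # sees x1, and x2 through the coupling
                  [0.0, 1.0],    # sees x2 only: x1 stays unobservable
                  [1.0, 1.0]])   # a left eigenvector of A: rank-1 information
chosen = greedy_placement(A, C_all, budget=2)
print(chosen)  # sensor 0 is picked first: alone it renders the system observable
```

The determinant rewards sensor 0, whose gramian has full rank, over sensors 1 and 2, whose individual gramians are singular, which is precisely the redundancy-of-information effect the criterion is meant to capture.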

    Experimental Designs, Meta-Modeling, and Meta-Learning for Mixed-Factor Systems with Large Decision Spaces

    Many Air Force studies require a design and analysis process that can accommodate the computational challenges associated with complex systems, simulations, and real-world decisions. For systems with large decision spaces and a mixture of continuous, discrete, and categorical factors, nearly orthogonal-and-balanced (NOAB) designs can be used as efficient, representative subsets of all possible design points for system evaluation, where meta-models are then fitted to act as surrogates for system outputs. The mixed-integer linear programming (MILP) formulations used to construct first-order NOAB designs are extended to solve for low correlation between second-order model terms (i.e., two-way interactions and quadratics). The resulting second-order approaches are shown to improve design performance measures for second-order model parameter estimation and prediction variance, as well as protection from bias due to model misspecification with respect to second-order terms. Further extensions are developed to construct batch sequential NOAB designs, giving experimenters more flexibility by creating multiple stages of design points using different NOAB approaches, where simultaneous construction of stages is shown to outperform design augmentation overall. To reduce cost and add analytical rigor, meta-learning frameworks are developed for accurate and efficient selection of first-order NOAB designs as well as of meta-models that approximate mixed-factor systems.
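    One standard ingredient behind near-orthogonality criteria of this kind is the maximum absolute pairwise correlation between design columns, which is zero for a fully orthogonal design. The sketch below is a generic evaluation check on a tiny two-level design, not the MILP construction described above.

```python
import numpy as np

def max_pairwise_correlation(design):
    """Maximum absolute pairwise correlation between design columns,
    a common near-orthogonality measure for candidate designs."""
    X = np.asarray(design, dtype=float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each column
    R = (X.T @ X) / X.shape[0]                 # correlation matrix
    np.fill_diagonal(R, 0.0)                   # ignore self-correlations
    return np.abs(R).max()

# a tiny 4-run, 3-factor two-level design (a regular fractional factorial)
D = [[-1, -1, -1],
     [-1,  1,  1],
     [ 1, -1,  1],
     [ 1,  1, -1]]
print(max_pairwise_correlation(D))  # 0.0: the columns are fully orthogonal
```

NOAB construction generalizes this idea to mixed continuous, discrete, and categorical factors, where exact orthogonality is usually unattainable and the measure is minimized rather than driven to zero.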