11 research outputs found

    Data-Driven Optimal Sensor Placement for High-Dimensional System Using Annealing Machine

    We propose a novel method for solving the optimal sensor placement problem for a high-dimensional system using an annealing machine. The sensor points are calculated as a maximum clique problem on a graph whose edge weights are determined by the proper orthogonal decomposition (POD) modes obtained from data, exploiting the fact that a high-dimensional system usually has a low-dimensional representation. Since the maximum clique problem is equivalent to the independent set problem on the complement graph, the independent set problem is solved using the Fujitsu Digital Annealer. As a demonstration of the proposed method, the pressure distribution induced by the Kármán vortex street behind a square cylinder is reconstructed from the pressure data at the calculated sensor points. The pressure distribution is measured by the pressure-sensitive paint (PSP) technique, an optical flow diagnostic method. The root mean square errors (RMSEs) between the pressure measured by a pressure transducer and the pressures reconstructed at the same location (by the proposed method and by an existing greedy method) are compared. As a result, the proposed method achieves a similar RMSE using approximately one fifth of the number of sensor points required by the existing method. This method is of great importance as a novel approach to the optimal sensor placement problem and as a new engineering application of an annealing machine.
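The pipeline above (POD modes → weighted graph → maximum clique, equivalently an independent set on the complement graph) can be sketched with a simple greedy stand-in for the annealing step. The similarity threshold and the greedy rule below are illustrative assumptions, not the paper's actual edge weighting or QUBO encoding.

```python
import numpy as np

def select_sensors_maxclique(pod_modes, threshold=0.5):
    """Greedy stand-in for the paper's pipeline: candidate points whose
    POD-mode rows are mutually dissimilar form a clique in the graph;
    equivalently, an independent set in the complement graph (which the
    paper hands to the Fujitsu Digital Annealer).

    pod_modes: (n_points, n_modes) array of leading POD mode values.
    threshold: similarity below which two points are considered compatible
               (an illustrative assumption, not the paper's edge weighting).
    """
    rows = pod_modes / (np.linalg.norm(pod_modes, axis=1, keepdims=True) + 1e-12)
    sim = np.abs(rows @ rows.T)           # pairwise mode similarity in [0, 1]
    adj = sim < threshold                 # edge = "dissimilar enough"
    np.fill_diagonal(adj, False)
    # Greedy clique: visit strong candidates first, keep those compatible
    # with everything chosen so far.
    order = np.argsort(-np.linalg.norm(pod_modes, axis=1))
    chosen = []
    for v in order:
        if all(adj[v, u] for u in chosen):
            chosen.append(int(v))
    return chosen
```

In the paper the clique/independent-set problem is encoded for the annealing machine rather than solved greedily; the loop above only illustrates the graph construction.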

    Quadrature Strategies for Constructing Polynomial Approximations

    Finding suitable points for multivariate polynomial interpolation and approximation is a challenging task. Yet, despite this challenge, there has been tremendous research dedicated to this singular cause. In this paper, we begin by reviewing classical methods for finding suitable quadrature points for polynomial approximation in both the univariate and multivariate settings. Then, we categorize recent advances into those that propose a new sampling approach and those centered on an optimization strategy. The sampling approaches yield a favorable discretization of the domain, while the optimization methods pick a subset of the discretized samples that minimizes certain objectives. While not all strategies follow this two-stage approach, most do. Sampling techniques covered include subsampled quadratures, Christoffel, induced and Monte Carlo methods. Optimization methods discussed range from linear programming ideas and Newton's method to greedy procedures from numerical linear algebra. Our exposition is aided by examples that implement some of the aforementioned strategies.
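The two-stage pattern the survey describes (sample a fine discretisation, then optimise a subset of it) can be sketched as follows. The greedy row-pivoting rule is a simple stand-in for the QR and greedy routines from numerical linear algebra that the survey covers; all names here are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

def select_points(degree, n_candidates=200, seed=0):
    """Two-stage sketch: (1) sample a fine discretisation of [-1, 1];
    (2) pick degree+1 well-conditioned rows of the Vandermonde matrix by
    greedy pivoting, a stand-in for the QR/greedy optimisation routines
    the survey discusses."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, n_candidates)        # stage 1: sampling
    V = legendre.legvander(pts, degree)               # (n_candidates, degree+1)
    R = V.copy()
    chosen = []
    for _ in range(degree + 1):                       # stage 2: optimisation
        i = int(np.argmax(np.linalg.norm(R, axis=1))) # largest residual row
        chosen.append(i)
        q = R[i] / np.linalg.norm(R[i])
        R = R - np.outer(R @ q, q)                    # project q out of all rows
    return pts[chosen]

# Fit a degree-5 Legendre expansion of f(x) = exp(x) using only the 6 points.
x = select_points(5)
coeffs = legendre.legfit(x, np.exp(x), 5)
```

With six selected points and six coefficients, the least-squares fit reduces to interpolation at a well-conditioned subset of the candidate samples.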

    Sensor selection via convex optimization

    Abstract: We consider the problem of choosing a set of k sensor measurements, from a set of m possible or potential sensor measurements, that minimizes the error in estimating some parameters. Solving this problem by evaluating the performance for each of the possible choices of sensor measurements is not practical unless m and k are small. In this paper, we describe a heuristic, based on convex optimization, for approximately solving this problem. Our heuristic gives a subset selection as well as a bound on the best performance that can be achieved by any selection of sensor measurements. There is no guarantee that the gap between the performance of the chosen subset and the performance bound is always small; but numerical experiments suggest that the gap is small in many cases. Our heuristic method requires on the order of m^3 operations; for m = 1000 possible sensors, we can carry out sensor selection in a few seconds on a 2-GHz personal computer.
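As a rough baseline for the problem the abstract poses, here is a greedy D-optimal (log-det) selection. This is explicitly a stand-in for comparison, not the paper's convex relaxation heuristic, and the ridge parameter is an assumption for numerical stability.

```python
import numpy as np

def greedy_sensor_selection(A, k, eps=1e-6):
    """Greedy D-optimal baseline: pick k of the m candidate measurement
    vectors (rows of A) to maximise log det(sum_i a_i a_i^T).  A stand-in
    for comparison only; the paper's heuristic instead relaxes the Boolean
    selection to a convex program and rounds the solution."""
    m, n = A.shape
    M = eps * np.eye(n)          # small ridge keeps the matrix invertible
    chosen = []
    for _ in range(k):
        Minv = np.linalg.inv(M)
        # Marginal gain of sensor i is log(1 + a_i^T M^{-1} a_i), which is
        # monotone in the quadratic form, so maximising that suffices.
        gains = np.einsum('ij,jk,ik->i', A, Minv, A)
        if chosen:
            gains[chosen] = -np.inf
        best = int(np.argmax(gains))
        chosen.append(best)
        M = M + np.outer(A[best], A[best])
    return chosen
```

The paper's relaxation additionally yields a bound on the best achievable performance, which a greedy baseline like this does not provide.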

    Node selection in energy-harvesting wireless sensor networks

    ABSTRACT: The use of energy harvesting wireless sensor networks is an emerging wireless communication technology with a wide range of applications, such as environment monitoring. Maximizing the number of samples collected by the sink from the sensors is a key approach to minimizing measurement uncertainties in these applications. The system considered in this work consists of an uplink scenario with energy-harvesting sensors communicating with a non-energy-harvesting sink, equipped with multiple antennas, that receives the data forwarded by the sensors. Using a zero-forcing (ZF) receiver, the data collector (i.e., the sink) selects the largest possible set of transmitting sensor nodes to maximize the received quantity of information while satisfying their signal-to-noise ratio quality-of-service (QoS) constraints. This work presents efficient and simple node selection algorithms for energy-harvesting wireless sensor networks that maximize the number of selected sensors. The problem, formulated as an integer non-linear program, is proved to be NP-hard. Although the optimal number of sensors can be found by exhaustive search, this approach is difficult to implement in practice due to its prohibitive complexity. Thus, two low-complexity and efficient heuristic algorithms are proposed to make node selection decisions. Simulation results show the good performance of the proposed algorithms and illustrate their adaptability and efficiency in the energy-harvesting context.
A fairness study is also performed to evaluate the fairness of the developed algorithms and to improve their performance in this respect. The numerical results show the efficiency of the proposed fairness improvements. Index Terms: energy harvesting (EH), node selection, wireless sensor networks, zero-forcing (ZF) receiver, NP-hardness, fairness.
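The selection problem described above can be sketched with a simple greedy rule: add nodes one at a time and keep the enlarged set only if every selected node still meets its SNR target under zero-forcing. The strongest-channel-first ordering is an illustrative heuristic, not one of the thesis's two algorithms.

```python
import numpy as np

def greedy_node_selection(H, p, snr_min, noise=1.0):
    """Greedy sketch of ZF-constrained node selection.  Under zero-forcing,
    the post-processing SNR of node i in the selected set S is
    p_i / (noise * [(H_S^H H_S)^{-1}]_{ii}).

    H: (n_antennas, n_nodes) complex channel matrix; p: transmit powers."""
    n_ant = H.shape[0]
    order = np.argsort(-np.linalg.norm(H, axis=0))   # strongest channels first
    selected = []
    for i in order:
        trial = selected + [int(i)]
        if len(trial) > n_ant:                        # ZF needs |S| <= antennas
            break
        Hs = H[:, trial]
        G = np.linalg.inv(Hs.conj().T @ Hs)           # ZF noise enhancement
        snrs = p[trial] / (noise * np.real(np.diag(G)))
        if np.all(snrs >= snr_min):
            selected = trial
    return selected
```

Each added node enlarges the ZF noise-enhancement terms of the others, which is why the feasibility of the whole set must be rechecked at every step.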

    Uncertainty Minimization in Robotic 3D Mapping Systems Operating in Dynamic Large-Scale Environments

    This dissertation research is motivated by the potential and promise of 3D sensing technologies in safety and security applications. With a specific focus on unmanned robotic mapping to aid the clean-up of hazardous environments, under-vehicle inspection, automatic runway/pavement inspection and the modeling of urban environments, we develop modular, multi-sensor, multi-modality robotic 3D imaging prototypes using localization/navigation hardware, laser range scanners and video cameras. While deploying our multi-modality complementary approach to pose and structure recovery in dynamic real-world operating conditions, we observe several data fusion issues that state-of-the-art methodologies are not able to handle. Different bounds on the noise models of heterogeneous sensors, the dynamism of the operating conditions and the interaction of the sensing mechanisms with the environment introduce situations where sensors can intermittently degrade to accuracy levels below their design specification. This observation necessitates methods for integrating multi-sensor data that account for sensor conflict, performance degradation and potential failure during operation. This dissertation contributes to the data fusion literature a fault-diagnosis framework inspired by information complexity theory. We implement the framework as opportunistic sensing intelligence that is able to evolve a belief policy on the sensors within the multi-agent 3D mapping systems so as to survive and counter concerns of failure in challenging operating conditions. The implementation of the information-theoretic framework, in addition to eliminating failed or non-functional sensors and avoiding catastrophic fusion, is able to minimize uncertainty during autonomous operation by adaptively deciding to fuse or choose believable sensors.
We demonstrate our framework through experiments in multi-sensor robot state localization in large-scale dynamic environments and vision-based 3D inference. Our modular hardware and software design of robotic imaging prototypes, together with the opportunistic sensing intelligence, provides significant improvements towards autonomous, accurate, photo-realistic 3D mapping and remote visualization of scenes for the motivating applications.

    Informative Path Planning and Sensor Scheduling for Persistent Monitoring Tasks

    In this thesis we consider two combinatorial optimization problems that relate to the field of persistent monitoring. In the first part, we extend the classic problem of finding the maximum weight Hamiltonian cycle in a graph to the case where the objective is a submodular function of the edges. We consider a greedy algorithm and a 2-matching based algorithm, and we show that they have approximation factors of 1/(2+κ) and max{2/(3(2+κ)), (2/3)(1-κ)} respectively, where κ is the curvature of the submodular function. Both algorithms require a number of calls to the submodular function that is cubic in the number of vertices in the graph. We then present a method to solve a multi-objective optimization consisting of both additive edge costs and submodular edge rewards. We provide simulation results to empirically evaluate the performance of the algorithms. Finally, we demonstrate an application in monitoring an environment using an autonomous mobile sensor, where the sensing reward is related to the entropy reduction given a set of measurements. In the second part, we study the problem of selecting sensors to obtain the most accurate state estimate of a linear system. The estimator is taken to be a Kalman filter, and we attempt to optimize the a posteriori error covariance. For a finite time horizon, we show that, under certain restrictive conditions, the problem can be phrased as a submodular function optimization and that a greedy approach yields a 1-1/(e^(1-1/e))-approximation. Next, for an infinite time horizon, we characterize the exact conditions for the existence of a schedule with bounded estimation error covariance. We then present a scheduling algorithm that guarantees that the error covariance will be bounded and that the error will die out exponentially for any detectable LTI system. Simulations are provided to compare the performance of the algorithm against other known techniques.
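The second part's mechanism can be illustrated for a single time step: greedily pick the sensors whose Kalman measurement updates most reduce the trace of the posterior error covariance. The thesis analyses when such a greedy rule carries a guarantee; the code below is just the standard Kalman mechanics, with names chosen for illustration.

```python
import numpy as np

def greedy_kalman_sensors(P_prior, C, R_diag, k):
    """Greedily pick k sensor rows of C to minimise the trace of the Kalman
    a posteriori error covariance for one measurement step.

    P_prior: (n, n) prior covariance; C: (m, n) sensor rows;
    R_diag: (m,) measurement noise variances."""
    m = C.shape[0]
    chosen, P = [], P_prior.copy()
    for _ in range(k):
        best, best_trace = None, np.inf
        for i in range(m):
            if i in chosen:
                continue
            c = C[i:i + 1]                        # (1, n) measurement row
            S = c @ P @ c.T + R_diag[i]           # innovation variance
            P_post = P - (P @ c.T) @ (c @ P) / S  # covariance after update
            if np.trace(P_post) < best_trace:
                best, best_trace = i, np.trace(P_post)
        chosen.append(best)
        c = C[best:best + 1]                      # condition on the winner
        P = P - (P @ c.T) @ (c @ P) / (c @ P @ c.T + R_diag[best])
    return chosen, P
```

Because each selected measurement shrinks the covariance, later candidates are evaluated against the already-conditioned P, which is what makes the objective behave like a diminishing-returns (submodular-like) function under the thesis's conditions.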

    Cooperative positioning for heterogeneous wireless networks

    Doctorate in Electrical Engineering. Future market trends are heading towards positioning-based services, placing a new perspective on the way we obtain and exploit positioning information. On one hand, innovations in information technology and wireless communication systems have enabled the development of numerous location-based applications such as vehicle navigation and tracking, sensor network applications, home automation, asset management, security and context-aware location services. On the other hand, wireless networks themselves may benefit from localization information to improve the performance of different network layers. Location-based routing, synchronization and interference cancellation are prime examples of applications where location information can be useful. Typical positioning solutions rely on the measurement and exploitation of distance-dependent signal metrics, such as the received signal strength, time of arrival or angle of arrival. They are cheaper and easier to implement than dedicated positioning systems based on fingerprinting, but at the cost of accuracy. Therefore, intelligent localization algorithms and signal processing techniques have to be applied to mitigate the lack of accuracy in distance estimates. Cooperation between nodes is used in cases where conventional positioning techniques do not perform well due to a lack of existing infrastructure or an obstructed indoor environment. The objective is to concentrate on a hybrid architecture where some nodes have points of attachment to an infrastructure and are simultaneously interconnected via short-range ad hoc links. The availability of more capable handsets enables more innovative scenarios that take advantage of multiple radio access networks, as well as peer-to-peer links, for positioning. Link selection is used to optimize the tradeoff between the power consumption of participating nodes and the quality of target localization.
The Geometric Dilution of Precision and the Cramér-Rao Lower Bound can be used as criteria for choosing the appropriate set of anchor nodes and corresponding measurements before attempting the location estimation itself. This work analyzes the existing solutions for node selection to improve localization performance, and proposes a novel method based on utility functions. The proposed method is then extended to mobile and heterogeneous environments. Simulations have been carried out, as well as evaluation with real measurement data. In addition, some specific cases have been considered, such as localization in ill-conditioned scenarios and the use of negative information. The proposed approaches have been shown to enhance estimation accuracy while significantly reducing complexity, power consumption and signalling overhead.
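The GDOP criterion mentioned above can be made concrete for 2-D range-based positioning: GDOP is computed from the unit line-of-sight vectors to the anchors, and anchor subsets can be compared by it. The greedy seeding and selection rule below are illustrative; the thesis's utility-function method is more elaborate.

```python
import numpy as np

def gdop(anchors, target):
    """Geometric Dilution of Precision for 2-D range-based positioning:
    sqrt(trace((H^T H)^{-1})) with H the unit line-of-sight vectors."""
    d = anchors - target
    H = d / np.linalg.norm(d, axis=1, keepdims=True)
    return float(np.sqrt(np.trace(np.linalg.inv(H.T @ H))))

def select_anchors(anchors, target, k):
    """Greedy sketch of GDOP-driven anchor selection: seed with the two
    nearest anchors, then add whichever anchor most reduces GDOP until
    k are chosen."""
    dists = np.linalg.norm(anchors - target, axis=1)
    chosen = list(np.argsort(dists)[:2])          # 2-D needs at least 2
    while len(chosen) < k:
        rest = [i for i in range(len(anchors)) if i not in chosen]
        best = min(rest, key=lambda i: gdop(anchors[chosen + [i]], target))
        chosen.append(int(best))
    return chosen
```

Low GDOP corresponds to anchors that surround the target with well-spread bearings, which is exactly the geometric quality the selection criterion rewards.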

    Monitoring and control of transport networks using parsimonious models

    The growing number of vehicles on the roads, coupled with inefficient road operations, has generated traffic congestion. Traffic congestion increases trip times and indirectly contributes to poor quality of life and environmental pollution. Therefore, alleviating traffic congestion, especially in urban networks, is crucial and requires efficient traffic management and control. Recently, macroscopic operational schemes have become the preferred method for monitoring and mitigating traffic congestion due to their simplicity in modeling complex large-scale cities and their low computational effort. The schemes are based on a parsimonious model known as the Macroscopic or Network Fundamental Diagram (MFD or NFD), which provides an aggregated model of urban traffic dynamics linking network circulating flow and average density. This thesis deals with open problems associated with two main applications of the NFD in transportation networks, namely: 1) traffic monitoring and 2) traffic flow control. The two parts of the thesis concentrate on each application separately. The implementation of the NFD in a perimeter control strategy requires an accurate estimation of the NFD, whose measurements come from sensors located at appropriate locations in the network. The first part of the thesis elaborates a new approach to sensor selection for the development of an operational or sparse-measurement NFD, with fewer sensors and associated measurements. An information-theoretic framework is proposed for optimal sensor selection across a transport network to assist efficient model selection and construction of the sparse-measurement NFD. For the optimal sensor selection, a generalised set covering integer programming (GIP) formulation is developed. Under this framework, several tools to assess GIP solutions are utilised.
First, the correlation between variables is introduced as a ''distance'' metric, rather than spatial distance, to provide sufficient coverage and information accuracy. Second, the optimal cost of the GIP problem is used to determine the minimum number of sensors. Third, the relative entropy, or Kullback-Leibler divergence, is used to measure the dissimilarity between probability mass functions corresponding to different solutions of the GIP program. The proposed framework is evaluated with one week of experimental loop-detector data from a central business district with fifty-eight sensors. Results reveal that the sparse-measurement diagrams obtained from the selected models adequately preserve the shape and main features of the full-measurement diagram. Specifically, a coverage level of 24% of the network demonstrated the effectiveness of the GIP framework. Simulation results also show the Kullback-Leibler divergence to be the more generic and reliable metric of information loss. Such a framework can be of great importance for cost-effective sensor installation and maintenance while improving the estimation of the NFD for better monitoring and control strategies. The second part of the thesis discusses the traffic flow control problem involving the distribution of a single input flow, ordered by a perimeter control strategy, towards a number of gated links at the periphery of the network. It is often assumed that the input flow ordered by a perimeter control strategy should be equally distributed to a number of candidate junctions. There has not been considerable research into the limited storage capacity and different geometric characteristics of gated links, or into equity properties for drivers. A control scheme for the multi-gated perimeter flow control (MGC) problem is developed. The scheme determines optimally distributed input flows for a number of gates located at the periphery of a protected network area. A parsimonious model is employed to describe the traffic dynamics of the protected network.
To describe traffic dynamics outside of the protected area, the basic state-space model is augmented with additional state variables for the queues at store-and-forward origin links of the periphery. The perimeter traffic flow control problem is formulated as a convex optimal control problem with constrained control and state variables. For the application of the proposed scheme in real time, the optimal control problem may be embedded in a rolling-horizon scheme using the current state of the whole system as the initial state, as well as predicted demand flows at origin/entrance links. This part also offers flow allocation policies for a single-region network without considering entrance link dynamics, namely a capacity-based flow allocation policy and an optimisation-based flow allocation policy. Simulation results are carried out for a protected network area of downtown San Francisco with fifteen gates of different geometric characteristics. The results demonstrate the efficiency and equity properties of the MGC approach in better managing excessive queues outside of the protected network area and optimally distributing the input flows. The MGC outperforms the other approaches in terms of serving more trips in the protected network as well as shorter queues at gated links. Such a framework is of particular interest to city managers because the optimal flow distribution may influence network throughput and hence serve the maximum number of network users.
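The information-loss metric used in the first part is the Kullback-Leibler divergence between the distribution seen by a sparse sensor subset and the full-measurement distribution. A minimal sketch, with made-up toy histogram counts:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two histograms on the
    same bins (normalised internally).  Used here to score how much a
    sparse-measurement NFD loses relative to the full-measurement one."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy histograms of, say, network average density: the full 58-sensor
# measurement versus two candidate sensor subsets.  The lower-KL subset
# is the better sparse model.  (Counts below are illustrative only.)
full = np.array([5.0, 20.0, 40.0, 25.0, 10.0])
subset_a = np.array([6.0, 18.0, 42.0, 24.0, 10.0])
subset_b = np.array([20.0, 20.0, 20.0, 20.0, 20.0])
```

Here `subset_a` tracks the full-network histogram closely while `subset_b` flattens it, so the divergence ranks `subset_a` as the better sparse-measurement model.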

    Variable selection for wind turbine condition monitoring and fault detection system

    With the fast growth in wind energy, the performance and reliability of wind power generation systems have become a major issue in achieving cost-effective generation. Integration of a condition monitoring system (CMS) into the wind turbine has been considered the most viable solution, as it enhances maintenance scheduling and achieves a more reliable system. However, an effective CMS requires a large number of sensors and a high sampling frequency, resulting in a large amount of generated data. This has become a burden for the CMS and the fault detection system. This thesis focuses on the development of variable selection algorithms, such that the dimensionality of the monitoring data can be reduced while useful information relevant to later fault diagnosis and prognosis is preserved. The research starts with a background and review of the current status of CMS in wind energy. Then, simulation of wind turbine systems is carried out in order to generate useful monitoring data covering both healthy and faulty conditions. Variable selection algorithms based on multivariate principal component analysis are proposed at the system level. The proposed method is then further extended by introducing an additional criterion during the selection process, where the retained variables are targeted to a specific fault. Further analyses of the retained variables are carried out, and it is shown that fault features are present in the dataset with reduced dimensionality. Two detection algorithms are then proposed utilising the datasets obtained from the selection algorithm. The algorithms allow accurate detection, identification and severity estimation of anomalies from simulation data and from supervisory control and data acquisition data of an operational wind farm. Finally, an experimental wind turbine test rig is designed and constructed.
Experimental monitoring data under healthy and faulty conditions is obtained to further validate the proposed detection algorithms.
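PCA-based variable selection of the kind described above can be sketched in a few lines: retain enough principal components to explain most of the variance, then keep the variables with the largest loading energy on those components. The ranking criterion below is illustrative; the thesis adds fault-targeted criteria on top of the system-level selection.

```python
import numpy as np

def pca_variable_selection(X, n_keep, var_target=0.95):
    """Keep the n_keep columns of the (samples x variables) monitoring
    matrix X that carry the most variance on the principal components
    explaining var_target of the total variance."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = s**2 / np.sum(s**2)
    n_pc = int(np.searchsorted(np.cumsum(var_ratio), var_target)) + 1
    # Variable score = variance it carries on the retained components.
    scores = np.sum((Vt[:n_pc].T * s[:n_pc])**2, axis=1)
    return list(np.argsort(-scores)[:n_keep])
```

The retained variables then feed the downstream fault detection algorithms in place of the full high-dimensional sensor set.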

    Resource-aware plan recognition in instrumented environments

    This thesis addresses the problem of plan recognition in instrumented environments, which is to infer an agent's plans by observing its behavior. In instrumented environments such observations are made by physical sensors. This introduces specific challenges, of which the following two are considered in this thesis: (1) Physical sensors often observe state information instead of actions. As classical plan recognition approaches usually can only deal with action observations, this requires a cumbersome and error-prone inference of executed actions from observed states. (2) Due to the limited physical resources of the environment it is often not possible to run all sensors at the same time, so sensor selection techniques have to be applied. Current plan recognition approaches are not able to support the environment in selecting relevant subsets of sensors. This thesis proposes a two-stage approach to solve the problems described above. First, a DBN-based plan recognition approach is presented which allows for the explicit representation and consideration of state knowledge. Second, a POMDP-based utility model for observation sources is presented which can be used with generic utility-based sensor selection algorithms. Further contributions include a software toolkit that realizes plan recognition and sensor selection in instrumented environments, and an empirical evaluation of the validity and performance of the proposed models.
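A generic utility-based sensor selection of the kind the thesis plugs its POMDP-derived utilities into can be sketched very simply: activate sensors in order of best utility-per-cost until the resource budget is spent. The tuple format and greedy rule here are illustrative assumptions, not the toolkit's actual API.

```python
def select_sensors_by_utility(sensors, budget):
    """Greedy utility-per-cost sensor activation under a resource budget.

    sensors: list of (name, utility, cost) tuples, where utility would come
    from a model such as the thesis's POMDP-based one (values here are
    placeholders for illustration)."""
    active, spent = [], 0.0
    # Sort by cost/utility ascending, i.e. best utility-per-cost first.
    for name, utility, cost in sorted(sensors, key=lambda s: s[2] / max(s[1], 1e-9)):
        if spent + cost <= budget:
            active.append(name)
            spent += cost
    return active
```

For example, with a camera of utility 5 and cost 3, a microphone of utility 4 and cost 2, and an RFID reader of utility 1 and cost 1, a budget of 5 activates the microphone and camera but not the reader.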