
    Asymptotically Efficient Quasi-Newton Type Identification with Quantized Observations Under Bounded Persistent Excitations

    This paper is concerned with the optimal identification of dynamical systems in which only quantized output observations are available, under the assumptions of fixed thresholds and bounded persistent excitations. Based on a time-varying projection, a weighted Quasi-Newton type projection (WQNP) algorithm is proposed. Under mild conditions on the weight coefficients, the algorithm is proved to be mean-square and almost surely convergent, with a convergence rate that can reach the reciprocal of the number of observations, the same order as the optimal estimate under accurate measurements. Furthermore, inspired by the structure of the Cramér-Rao lower bound, an information-based identification (IBID) algorithm is constructed by adaptively designing the weight coefficients of the WQNP algorithm; these weight coefficients depend on the parameter estimates themselves, which is the essential difficulty in the analysis of the algorithm. Beyond its convergence properties, the paper demonstrates that the IBID algorithm asymptotically attains the Cramér-Rao lower bound, and hence is asymptotically efficient. Numerical examples illustrate the effectiveness of the information-based identification algorithm.
    Comment: 16 pages, 3 figures, submitted to Automatica
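As a toy illustration of identification from quantized observations (a minimal sketch, not the paper's WQNP or IBID algorithm: the constant excitation, known Gaussian noise distribution, and all parameter values below are assumptions), a scalar parameter can be recovered by inverting the empirical frequency of threshold crossings:

```python
import random
from statistics import NormalDist

def identify_from_binary(theta=2.0, C=2.5, sigma=1.0, n=200_000, seed=0):
    """Estimate theta from binary observations s_k = 1{theta + d_k <= C},
    where d_k ~ N(0, sigma^2): since P(s = 1) = Phi((C - theta)/sigma),
    inverting the empirical frequency gives a consistent estimate."""
    rng = random.Random(seed)
    hits = sum((theta + rng.gauss(0.0, sigma)) <= C for _ in range(n))
    p_hat = hits / n                      # empirical P(s = 1)
    return C - sigma * NormalDist().inv_cdf(p_hat)

theta_hat = identify_from_binary()        # close to the true theta = 2.0
```

The estimation error here shrinks at the 1/sqrt(n) rate of the empirical frequency, consistent with the reciprocal-of-observations rate for the squared error mentioned in the abstract.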

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained, processing environment is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors, operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of having a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval per snapshot is demonstrated. This exposes an underlying "conservation of bits" principle: the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with N one-bit sensors per Nyquist-interval, Θ(log N) Nyquist-intervals, and total network bitrate R_net = Θ((log N)^2) (per-sensor bitrate Θ((log N)/N)), the maximum pointwise distortion goes to zero as D = O((log N)^2/N), i.e., D = O(R_net 2^(−β√R_net)). This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms.
For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed are always finite.
    Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to the IEEE Transactions on Information Theory
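A minimal amplitude-axis sketch of the density-precision tradeoff (a toy illustration, not the paper's distributed coding and interpolation scheme; the uniform dither law and all values are assumptions): many one-bit sensors with independent dither can stand in for a single high-precision ADC, since averaging their sign bits recovers the field amplitude.

```python
import random

def one_bit_sensors(x, n_sensors, seed=0):
    """Each sensor reports sign(x + dither) with dither ~ U(-1, 1); for
    x in (-1, 1) the +/-1 bit has mean exactly x, so the average of
    n_sensors independent bits estimates the amplitude to O(1/sqrt(n))."""
    rng = random.Random(seed)
    bits = (1.0 if x + rng.uniform(-1.0, 1.0) > 0 else -1.0
            for _ in range(n_sensors))
    return sum(bits) / n_sensors

x_amplitude = 0.3                                # field amplitude in (-1, 1)
dense_estimate = one_bit_sensors(x_amplitude, 100_000)
```

Trading sensor density for ADC precision along these lines is the "conservation of bits" intuition, though the paper's exponential distortion-rate behavior additionally needs distributed coding rather than plain averaging.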

    Large deviations of stochastic systems and applications

    This dissertation focuses on large deviations of stochastic systems with applications to optimal control and system identification. It encompasses the analysis of two-time-scale Markov processes and of system identification with regular and quantized data. First, we develop large deviations principles for systems driven by continuous-time Markov chains with two time scales and for related optimal control problems. A distinct feature of our setup is that the Markov chain under consideration is time-dependent, or inhomogeneous. The use of the two-time-scale formulation stems from the effort to reduce computational complexity in a wide variety of applications in control, optimization, and systems theory. Starting with a rapidly fluctuating Markovian system, under irreducibility conditions, large deviations upper and lower bounds are established first for a fixed terminal time and then for time-varying dynamic systems. The results are then applied to certain dynamic systems and LQ control problems. Second, we study large deviations for identification systems. Traditional system identification concentrates on convergence and convergence rates of estimates in mean squares, in distribution, or in a strong sense. For system diagnosis and complexity analysis, however, it is essential to understand the probabilities of identification errors over a finite data window. This work investigates identification errors in a large deviations framework. By considering both space complexity in terms of quantization levels and time complexity with respect to data window sizes, this study provides a new perspective on the fundamental relationship between probabilistic errors and the resources that represent data sizes in computer algorithms, sample sizes in statistical analysis, channel bandwidths in communications, etc.
This relationship is derived by establishing the large deviations principle for quantized identification, which links binary-valued data at one end and regular sensors at the other. Under some mild conditions, we obtain large deviations upper and lower bounds. Our results accommodate independent and identically distributed noise sequences, as well as more general classes of mixing-type noise sequences. Numerical examples are provided to illustrate the theoretical results.
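The finite-data-window error probabilities studied above can be probed empirically (a Monte Carlo toy, not the dissertation's bounds; the binary-sensor estimator, the i.i.d. standard Gaussian noise, and all values are assumptions): the probability of exceeding a fixed error tolerance falls roughly exponentially in the window size, which is the large-deviations effect.

```python
import random
from statistics import NormalDist

def error_prob(n, eps=0.3, trials=2000, seed=1):
    """Empirical P(|theta_hat| > eps) over a window of n binary observations
    for true theta = 0, threshold C = 0, N(0, 1) noise (so P(s = 1) = 1/2);
    theta_hat = -inv_cdf(p_hat) inverts the empirical crossing frequency."""
    inv = NormalDist().inv_cdf
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        p_hat = sum(rng.random() < 0.5 for _ in range(n)) / n
        p_hat = min(max(p_hat, 1 / (2 * n)), 1 - 1 / (2 * n))  # keep inv_cdf finite
        errors += abs(inv(p_hat)) > eps
    return errors / trials

# error probability drops sharply as the data window n grows
probs = [error_prob(n) for n in (20, 60, 180)]
```

Plotting log(probs) against n would reveal the (approximately linear) decay whose slope is the large-deviations rate.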

    Bibliographic Review on Distributed Kalman Filtering

    In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographical review on distributed Kalman filtering (DKF) is provided. The paper contains a classification of the different approaches and methods applied to DKF. The applications of DKF are also discussed and explained separately. A comparison of the different approaches is briefly carried out. Contemporary research directions are also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
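As a minimal sketch of the fusion principle behind many DKF schemes (a static scalar state, two nodes, and no dynamics or consensus iterations; all numbers are illustrative assumptions), local estimates combine exactly by adding information pairs, which is why information-form filters suit distributed settings:

```python
import random

def local_information(measurements, r):
    """Information-form summary (Y, y) of scalar measurements z_k = x + v_k
    with v_k ~ N(0, r): Y accumulates inverse-variance weight, y the
    weighted measurement sum; the local estimate is y / Y."""
    Y = len(measurements) / r
    y = sum(measurements) / r
    return Y, y

rng = random.Random(0)
x_true = 4.0
node_a = [x_true + rng.gauss(0, 0.5) for _ in range(500)]   # accurate node
node_b = [x_true + rng.gauss(0, 1.0) for _ in range(500)]   # noisier node

Ya, ya = local_information(node_a, 0.25)      # r = sigma^2 = 0.5^2
Yb, yb = local_information(node_b, 1.00)
x_fused = (ya + yb) / (Ya + Yb)               # fusion = sum of information pairs
```

The fused estimate matches what a centralized filter with all 1000 measurements would compute, which is the benchmark most DKF approaches try to approach under communication constraints.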

    Reliable Inference from Unreliable Agents

    Distributed inference using multiple sensors has been an active area of research since the emergence of wireless sensor networks (WSNs). Several researchers have addressed the design issues to ensure optimal inference performance in such networks. The central goal of this thesis is to analyze distributed inference systems with potentially unreliable components and to design strategies that ensure reliable inference in such systems. The inference process can be one of detection, estimation, or classification, and the components/agents in the system can be sensors and/or humans. The system components can be unreliable for a variety of reasons: faulty sensors, security attacks causing sensors to send falsified information, or unskilled human workers sending imperfect information. This thesis first quantifies the effect of such unreliable agents on the inference performance of the network and then designs schemes that ensure reliable overall inference. In the first part of this thesis, we study the case when only sensors are present in the system, referred to as sensor networks. For sensor networks, the presence of malicious sensors, referred to as Byzantines, is considered. Byzantines are sensors that inject false information into the system. In such systems, the effect of Byzantines on the overall inference performance is characterized in terms of the optimal attack strategies. Game-theoretic formulations are explored to analyze two-player interactions. Next, Byzantine mitigation schemes are designed that address the problem from the system's perspective. These mitigation schemes are of two kinds: Byzantine identification schemes and Byzantine tolerant schemes. Using learning-based techniques, Byzantine identification schemes are designed that learn the identity of the Byzantines in the network and use this information to improve system performance.
When such schemes are not possible, Byzantine tolerant schemes using error-correcting codes are developed that tolerate the effect of Byzantines and maintain good performance in the network. Error-correcting codes help correct the erroneous information from these Byzantines and thereby counter their attack. The second line of research in this thesis considers humans-only networks, referred to as human networks. A similar research strategy is adopted for human networks: the effect of unskilled humans sharing beliefs with a central observer, called the CEO, is analyzed, and the loss in performance due to the presence of such unskilled humans is characterized. This problem falls in the family of problems in the information theory literature referred to as the CEO problem, but for belief sharing. The asymptotic behavior of the minimum achievable mean squared error distortion at the CEO is studied in the limit when the number of agents L and the sum rate R tend to infinity. An intermediate regime of performance between the exponential behavior in discrete CEO problems and the 1/R behavior in Gaussian CEO problems is established. This result can be summarized as follows: sharing beliefs (uniform) is fundamentally easier in terms of convergence rate than sharing measurements (Gaussian), but sharing decisions is easier still (discrete). Besides theoretical analysis, experimental results are reported for experiments designed in collaboration with cognitive psychologists to understand the behavior of humans in the network. The act of fusing decisions from multiple agents is observed for humans, and the behavior is statistically modeled using hierarchical Bayesian models. The implications of such modeling on the design of large human-machine systems are discussed. Furthermore, an error-correcting-codes-based scheme is proposed to improve system performance in the presence of unreliable humans in the inference process.
For a crowdsourcing system consisting of unskilled human workers providing unreliable responses, the scheme helps in designing easy-to-perform tasks and also mitigates the effect of erroneous data. The benefits of using the proposed approach in comparison to the majority-voting-based approach are highlighted using simulated and real datasets. In the final part of the thesis, a human-machine inference framework is developed in which humans and machines interact to perform complex tasks in a faster and more efficient manner. A mathematical framework is built to understand the benefits of human-machine collaboration. Such a study is extremely important for current scenarios where humans and machines are constantly interacting with each other to perform even the simplest of tasks. While machines perform best on some tasks, humans still give better results on tasks such as identifying new patterns. By using humans and machines together, one can extract complete information about a phenomenon of interest. Such an architecture, referred to as Human-Machine Inference Networks (HuMaINs), provides promising results for the two cases of human-machine collaboration: machine as a coach and machine as a colleague. For simple systems, we demonstrate tangible performance gains from such a collaboration, which provides design modules for larger and more complex human-machine systems. However, the details of such larger systems need to be further explored.
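A toy simulation of the Byzantine threat model (plain majority-vote fusion only, not the thesis's coding-based or learning-based mitigation schemes; all sensor counts and accuracies below are assumptions) shows how sensors that flip their local decisions degrade fused detection, with performance collapsing as the Byzantine fraction approaches one half:

```python
import random

def fuse(n_sensors=101, n_byz=0, p_correct=0.8, trials=2000, seed=2):
    """Majority-vote fusion of binary local decisions; each honest sensor is
    correct with probability p_correct, while each Byzantine flips its own
    (honestly formed) decision. Returns the empirical probability that the
    fused decision is correct."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        votes = 0
        for i in range(n_sensors):
            d = 1 if rng.random() < p_correct else 0   # honest local decision
            if i < n_byz:
                d = 1 - d                              # Byzantine flips it
            votes += d
        ok += votes > n_sensors // 2
    return ok / trials

clean = fuse(n_byz=0)        # no attackers: fusion is essentially always right
attacked = fuse(n_byz=50)    # ~half Byzantine: fusion is close to a coin flip
```

The near-50% Byzantine case is the "blinding" regime characterized by optimal attack analyses; tolerating it is what motivates the identification and error-correcting-code schemes described above.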

    Energy-driven techniques for massive machine-type communications

    In the last few years, a lot of effort has been put into the development of the fifth generation of cellular networks (5G). Given the vast heterogeneity of devices coexisting in these networks, new approaches have been sought to meet all requirements (e.g., data rate, coverage, delay, etc.). Within that framework, massive machine-type communications (mMTC) emerge as a promising candidate to enable many Internet of Things applications. mMTC defines a type of system in which large sets of simple, battery-constrained devices transmit short data packets simultaneously. Unlike other 5G use cases, in mMTC a low cost and low power consumption are extensively pursued. Due to these specifications, typical human-type communications (HTC) solutions fail to provide a good service. In this dissertation, we focus on the design of energy-driven techniques for extending the lifetime of mMTC terminals. Both the uplink (UL) and downlink (DL) stages are addressed, with special attention to the traffic models and spatial distribution of the devices. More specifically, we analyze a setup where groups of randomly deployed sensors send their (possibly correlated) observations to a collector node using different multiple access schemes. Depending on their activity, information might be transmitted either on a regular or a sporadic basis. In that sense, we explore resource allocation, data compression, and device selection strategies to reduce the energy consumption in the UL. To further improve the system performance, we also study medium access control protocols and interference management techniques that take into account the large connectivity in these networks. Conversely, in the DL, we concentrate on the support of wireless powered networks through different types of energy supply mechanisms, for which proper transmission schemes are derived. Additionally, for a better representation of current 5G deployments, the presence of HTC terminals is also included.
Finally, to evaluate our proposals, we present several numerical simulations following standard guidelines. In line with that, we also compare our approaches with state-of-the-art solutions. Overall, results show that the power consumption in the UL can be reduced with still good performance, and that the battery lifetimes can be improved thanks to the DL strategies.

    Context-Aware Sensor Fusion For Securing Cyber-Physical Systems

    The goal of this dissertation is to provide detection and estimation techniques that ensure the safety and security of modern Cyber-Physical Systems (CPS) even in the presence of arbitrary sensor faults and attacks. We leverage the fact that modern CPS are equipped with various sensors that provide redundant information about the system's state. In such a setting, the system can limit its dependence on any individual sensor, thereby providing guarantees about its safety even in the presence of arbitrary faults and attacks. To address the problem of safety detection, we develop sensor fusion techniques that make use of the sensor redundancy available in modern CPS. First, we develop a multidimensional sensor fusion algorithm that outputs a bounded fusion set guaranteed to contain the true state even in the presence of attacks and faults. Furthermore, we provide two approaches for strengthening sensor fusion's worst-case guarantees: 1) incorporating historical measurements, and 2) analyzing sensor transmission schedules (e.g., in a time-triggered system using a shared bus) in order to minimize the attacker's available information and impact on the system. In addition, we modify the sensor fusion algorithm to provide guarantees even when sensors might experience transient faults in addition to attacks. Finally, we develop an attack detection technique (also in the presence of transient faults) in order to discard attacked sensors. In addition to standard plant sensors, we note that modern CPS also have access to multiple environment sensors that provide information about the system's context (e.g., a camera recognizing a nearby building). Since these context measurements are related to the system's state, they can be used for estimation and detection purposes, similar to standard measurements.
In this dissertation, we first develop a nominal context-aware filter (i.e., with no faults or attacks) for binary context measurements (e.g., a building detection). Finally, we develop a technique for incorporating context measurements into sensor fusion, thus providing guarantees about system safety even in cases where more than half of the standard sensors might be under attack.
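The bounded-fusion-set idea can be sketched in one dimension with the classic interval-intersection (Marzullo-style) rule, used here as an illustrative stand-in for the dissertation's multidimensional algorithm: if at most f of the n sensor intervals are faulty, the true state is covered by at least n − f intervals, so any point with that coverage is a candidate and the fused set is guaranteed to contain the truth.

```python
def fuse_intervals(intervals, f):
    """Return a bound [lo, hi] on all points covered by >= n - f of the n
    closed intervals; with at most f faulty sensors, the true state lies in
    some n - f correct intervals and hence inside the returned bound."""
    need = len(intervals) - f
    # sweep over endpoints; interval starts are processed before ends at ties
    events = sorted([(lo, 0, +1) for lo, _ in intervals] +
                    [(hi, 1, -1) for _, hi in intervals])
    depth, lo_out, hi_out = 0, None, None
    for x, _, step in events:
        prev, depth = depth, depth + step
        if prev < need <= depth and lo_out is None:
            lo_out = x                      # first point reaching the quorum
        if prev >= need > depth:
            hi_out = x                      # last point leaving the quorum
    return lo_out, hi_out

# three consistent sensors and one attacked outlier, at most f = 1 faulty
fused = fuse_intervals([(0, 4), (1, 5), (2, 6), (10, 11)], f=1)   # -> (2, 4)
```

Note how the outlier interval (10, 11) is simply outvoted: the fused bound (2, 4) excludes it while still containing the honest sensors' common region.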

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application, and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference.
    Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    1-Bit processing based model predictive control for fractionated satellite missions

    In this thesis, a one-bit-processing-based Model Predictive Control (OBMPC) structure is proposed for a fractionated satellite attitude control mission. Despite the appealing advantages of the MPC algorithm for constrained MIMO control applications, implementing the MPC algorithm onboard a small satellite is challenging due to the limited onboard resources. The proposed design is based on the one-bit processing concept, which takes advantage of the affine relation between the 1-bit state feedback and the multi-bit parameters to implement a multiplier-free MPC controller. As multipliers are the major power consumers in online optimization, the OBMPC structure is proven to be more efficient than the conventional MPC implementation in terms of power and circuit complexity. The system is digital in nature and is affected by quantization noise introduced by the ΔΣ modulators. Stability issues and practical design criteria are also discussed in this work. Some other aspects are considered to complete the control system. Firstly, the implementation of the OBMPC system relies on 1-bit state feedback; hence, 1-bit sensing components are needed. While a ΔΣ-modulator-based microelectromechanical systems (MEMS) gyroscope is considered in this work, it is possible to extend this concept to other sensing components. Secondly, as the proposed attitude mission is based on a wireless inter-satellite link (ISL), a state estimator is required. However, conventional state estimators would once again introduce multi-bit signals and compromise the simple, direct implementation of the OBMPC controller. Therefore, a 1-bit state estimator is also designed in this work to satisfy the requirements of the proposed fractionated attitude control mission.
The simulation of the OBMPC is based on a 2U CubeSat model in a fractionated satellite structure, in which the payload and actuators are separated from the controller and controlled via the ISL. Matlab simulations and FPGA-implementation-based performance analysis show that the OBMPC is feasible for fractionated satellite missions and is advantageous over conventional MPC controllers.
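A first-order ΔΣ modulator, the kind of one-bit front end the thesis relies on, can be sketched in a few lines (a behavioral model with illustrative values, not the thesis's hardware design). The payoff of the ±1 bit stream is that multiplying a multi-bit MPC coefficient by it reduces to an add or subtract, which is the affine relation that makes the controller multiplier-free.

```python
def delta_sigma(signal):
    """First-order delta-sigma modulator: emit the sign of the accumulated
    error, then integrate (input minus output); the running mean of the
    +/-1 bit stream tracks a slowly varying input in (-1, 1)."""
    acc, bits = 0.0, []
    for x in signal:
        bit = 1.0 if acc >= 0 else -1.0
        bits.append(bit)
        acc += x - bit          # integrator accumulates quantization error
    return bits

bits = delta_sigma([0.4] * 1000)     # constant input in (-1, 1)
recovered = sum(bits) / len(bits)    # low-pass (averaging) of the bit stream
```

Because the integrator state stays bounded, the averaged bit stream recovers the input to within O(1/n) after n samples, which is the oversampling behavior the quantization-noise analysis in the thesis builds on.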