177 research outputs found

    A network tomography approach for traffic monitoring in smart cities

    Traffic monitoring enables many of the urban planning and management activities required by a Smart City. This thesis therefore proposes a network tomography-based approach to road networks that achieves a cost-efficient, flexible, and scalable monitor deployment. Because network tomography is algebraic, the selection of monitoring intersections can be formulated over a matrix whose rows represent paths between pairs of intersections and whose columns represent links in the road network. Since the goal is a cost-efficient monitor set with minimum error and high coverage, the problem translates into an optimization over a matroid, which a greedy algorithm solves efficiently. The approach also handles noisy measurements and measurement-to-path matching. On a downtown San Francisco, CA topology, it achieves low error and 90% coverage with only 20% of nodes selected as monitors --Abstract, page iv
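    The algebraic formulation above (matrix rows are monitored paths, columns are road links) can be sketched as a small linear system; the toy network and measurements below are invented for illustration and are not the thesis's data or algorithm:

```python
import numpy as np

# Hypothetical 4-link road network observed via 3 monitored paths.
# Row i of A marks which links path i traverses; b holds the
# measured end-to-end traveling times (seconds).
A = np.array([
    [1, 1, 0, 0],   # path 1 uses links 1 and 2
    [0, 1, 1, 0],   # path 2 uses links 2 and 3
    [0, 0, 1, 1],   # path 3 uses links 3 and 4
], dtype=float)
b = np.array([30.0, 25.0, 35.0])

# Least-squares estimate of per-link times. With 3 equations and
# 4 unknowns the system is underdetermined, so this is the
# minimum-norm solution rather than the unique answer.
x, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(rank)       # 3 independent path equations
print(A @ x - b)  # ~0: the measured paths are reproduced exactly
```

With fewer measured paths than links the system stays underdetermined, which is precisely why the choice of monitoring intersections matters.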

    A Network Tomography Approach for Traffic Monitoring in Smart Cities

    Traffic monitoring is a key enabler of several planning and management activities in a Smart City. However, traditional techniques are often not cost-efficient, flexible, or scalable. This paper proposes an approach to traffic monitoring that relies neither on probe vehicles nor on vehicle localization through GPS. Instead, it exploits only a limited number of cameras placed at road intersections to measure car end-to-end traveling times. We model the problem within the theoretical framework of network tomography in order to infer the traveling times of all individual road segments in the road network. We specifically deal with the potential presence of noisy measurements and the unpredictability of vehicle paths. Moreover, we address the issue of optimally placing the monitoring cameras so as to maximize coverage while minimizing the inference error and the overall cost. We provide an extensive experimental assessment on the topology of downtown San Francisco, CA, USA, using real measurements obtained through the Google Maps APIs, and on realistic synthetic networks. Our approach provides a very low error in estimating the traveling times over 95% of all roads, even when as few as 20% of road intersections are equipped with cameras
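    The camera-placement step can be caricatured as a greedy coverage heuristic. The intersections, links, and budget below are invented for illustration; the paper's actual placement objective also weighs inference error and cost:

```python
# Toy sketch: greedily pick the intersections whose adjacent links
# add the most not-yet-covered links. The network and budget are
# hypothetical, not taken from the paper.
links_at = {          # intersection -> adjacent road links
    "A": {1, 2}, "B": {2, 3}, "C": {3, 4, 5}, "D": {5, 6}, "E": {1, 6},
}

def greedy_monitors(links_at, budget):
    covered, chosen = set(), []
    for _ in range(budget):
        # Intersection covering the most uncovered links wins the round.
        best = max(links_at, key=lambda v: len(links_at[v] - covered))
        if not links_at[best] - covered:
            break  # nothing new left to cover
        chosen.append(best)
        covered |= links_at[best]
    return chosen, covered

chosen, covered = greedy_monitors(links_at, budget=2)
print(chosen, sorted(covered))  # 2 monitors cover 5 of the 6 links
```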

    Adaptive Loss Inference Using Unicast End-to-End Measurements

    We address the problem of inferring link loss rates from unicast end-to-end measurements on the basis of network tomography. Because measurement probes incur additional traffic overhead, most tomography-based approaches perform the inference by collecting measurements only on selected paths. However, all previous approaches select paths offline, which inevitably misses many potentially identifiable links whose loss rates could be unbiasedly determined. Furthermore, if element failures exist, an appreciable number of the selected paths may become unavailable. In this paper, we propose an adaptive loss inference approach in which the paths are selected sequentially depending on the previous measurement results. In each round, we compute the loss rates of links that can be unbiasedly determined from the current measurement results and remove them from the system. Meanwhile, we locate the most likely failures based on the current measurement outcomes, to avoid selecting unavailable paths in subsequent rounds. In this way, all identifiable and potentially identifiable links can be determined unbiasedly using only 20% of all available end-to-end measurements. Extensive simulations comparing our approach with a previous classical one strongly confirm its promising performance
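    The inference above rests on the standard loss-tomography identity: a path's success rate is the product of its links' success rates, so taking logarithms yields a linear system. The toy topology and loss rates below are invented, and this sketch omits the paper's sequential path selection:

```python
import numpy as np

# Each row of R marks the links crossed by one measured path.
# Path success rate p = prod of link success rates s along the path,
# so log(p) = R @ log(s): a linear system in log(s).
R = np.array([
    [1, 1, 0],   # path 1 crosses links 1, 2
    [1, 0, 1],   # path 2 crosses links 1, 3
    [0, 1, 1],   # path 3 crosses links 2, 3
], dtype=float)
true_s = np.array([0.99, 0.95, 0.90])   # per-link success rates
p = np.exp(R @ np.log(true_s))          # observed path success rates

# R is square and full rank here, so every link is identifiable.
log_s = np.linalg.solve(R, np.log(p))
loss_rates = 1.0 - np.exp(log_s)
print(loss_rates)  # recovers the link loss rates 1%, 5%, 10%
```

When R has fewer independent rows than links, only some links are identifiable, which motivates the paper's adaptive choice of which paths to measure next.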

    Network coding for network tomography


    Practical Network Tomography

    In this thesis, we investigate methods for the practical and accurate localization of Internet performance problems. The methods we propose belong to the field of network loss tomography, that is, they infer the loss characteristics of links from end-to-end measurements. The existing versions of the problem of network loss tomography are ill-posed, hence, tomographic algorithms that attempt to solve them resort to making various assumptions, and as these assumptions do not usually hold in practice, the information provided by the algorithms might be inaccurate. We argue, therefore, for tomographic algorithms that work under weak, realistic assumptions.

    We first propose an algorithm that infers the loss rates of network links from end-to-end measurements. Inspired by previous work, we design an algorithm that gains initial information about the network by computing the variances of links' loss rates and by using these variances as an indication of the congestion level of links, i.e., the more congested the link, the higher the variance of its loss rate. Its novelty lies in the way it uses this information – to identify and characterize the maximum set of links whose loss rates can be accurately inferred from end-to-end measurements. We show that our algorithm performs significantly better than the existing alternatives, and that this advantage increases with the number of congested links in the network. Furthermore, we validate its performance by using an "Internet tomographer" that runs on a real testbed.

    Second, we show that it is feasible to perform network loss tomography in the presence of "link correlations," i.e., when the losses that occur on one link might depend on the losses that occur on other links in the network. More precisely, we formally derive the necessary and sufficient condition under which the probability that each set of links is congested is statistically identifiable from end-to-end measurements even in the presence of link correlations. In doing so, we challenge one of the popular assumptions in network loss tomography, specifically, the assumption that all links are independent. The model we propose assumes we know which links are most likely to be correlated, but it does not assume any knowledge about the nature or the degree of their correlations. In practice, we consider that all links in the same local area network or the same administrative domain are potentially correlated, because they could be sharing physical links, network equipment, or even management processes.

    Finally, we design a practical algorithm that solves "Congestion Probability Inference" even in the presence of link correlations, i.e., it infers the probability that each set of links is congested even when the losses that occur on one link might depend on the losses that occur on other links in the network. We model Congestion Probability Inference as a system of linear equations where each equation corresponds to a set of paths. Because it is infeasible to consider an equation for each set of paths in the network, our algorithm finds the maximum number of linearly independent equations by selecting particular sets of paths based on our theoretical results. On the one hand, the information provided by our algorithm is less than that provided by the existing alternatives that infer either the loss rates or the congestion statuses of links, i.e., we only learn how often each set of links is congested, as opposed to how many packets were lost at each link, or to which particular links were congested when. On the other hand, this information is more useful in practice because our algorithm works under assumptions weaker than those required by the existing alternatives, and we experimentally show that it is accurate under challenging network conditions such as non-stationary network dynamics and sparse topologies
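    The step of finding the maximum number of linearly independent equations can be caricatured with a greedy rank test; the matrix below is invented, and this is not the thesis's specific selection rule:

```python
import numpy as np

def independent_rows(M):
    """Greedily keep rows that increase the matrix rank, yielding a
    maximal set of linearly independent equations (a generic sketch,
    not the thesis's path-set selection procedure)."""
    kept, rows = [], []
    for i, row in enumerate(M):
        candidate = np.vstack(rows + [row])
        if np.linalg.matrix_rank(candidate) == len(rows) + 1:
            kept.append(i)       # this row adds new information
            rows.append(row)
    return kept

M = np.array([
    [1, 0, 1],
    [0, 1, 1],
    [1, 1, 2],   # sum of the first two rows: dependent, skipped
    [1, 1, 0],
], dtype=float)
print(independent_rows(M))  # [0, 1, 3]
```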

    From data and structure to models and controllers

    Systems and control theory deals with analyzing dynamical systems and shaping their behavior by means of control. Dynamical systems are widespread, and control theory therefore has numerous applications, ranging from the control of aircraft and spacecraft to chemical process control. During the last decades, a series of remarkable new control techniques has been developed. The majority of these techniques rely on mathematical models of the system to be controlled. However, the growing complexity of modern engineering systems complicates mathematical modeling. In this thesis, we therefore propose new methods to analyze and control dynamical systems without relying on a given system model. Models are thereby replaced by two other ingredients, namely measured data and system structure. In the first part of the thesis, we consider the problem of data-driven control, which involves the development of controllers for a dynamical system purely on the basis of data. We consider both stabilizing controllers and controllers that minimize a given cost function. Second, we focus on networked systems. A networked system is a collection of interconnected dynamical subsystems, and our aim is to reconstruct the interactions between subsystems on the basis of data. Finally, we consider the problem of assessing controllability of a dynamical system using its structure, and we provide conditions under which this is possible for a general class of structured systems
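    The data-driven setting described above can be illustrated at its simplest: identifying a linear model from measured trajectories by least squares. The system, data length, and noiseless setup below are assumptions for illustration, not the thesis's method:

```python
import numpy as np

# Minimal sketch: recover A and B of x[k+1] = A x[k] + B u[k]
# purely from recorded state/input data. The true system is known
# here only to generate the data and to check the answer.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

T = 20
U = rng.standard_normal((1, T))          # sufficiently exciting input
X = np.zeros((2, T + 1))
for k in range(T):
    X[:, k + 1] = A_true @ X[:, k] + (B_true @ U[:, k : k + 1]).ravel()

# Stacked data satisfy X_next = [A B] @ [X; U], so solve for [A B].
D = np.vstack([X[:, :T], U])
AB = X[:, 1:] @ np.linalg.pinv(D)
A_hat, B_hat = AB[:, :2], AB[:, 2:]
```

With noiseless data and a persistently exciting input, the least-squares fit recovers A and B exactly; the thesis goes further, designing controllers without ever forming such a model explicitly.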

    A Framework for Preserving Privacy and Cybersecurity in Brain-Computer Interfacing Applications

    Brain-Computer Interfaces (BCIs) comprise a rapidly evolving field of technology with the potential for far-reaching impact in domains ranging from medicine and industry to the arts, gaming, and the military. Today, these emerging BCI applications are typically still at early technology readiness levels, but because BCIs create novel technical communication channels for the human brain, they have raised privacy and security concerns. To mitigate such risks, a large body of countermeasures has been proposed in the literature, but a general framework is lacking that describes how the privacy and security of BCI applications can be protected by design, i.e., as an integral part of the early BCI design process, in a systematic manner, and with a depth of analysis suited to different contexts such as commercial BCI product development versus academic research and lab prototypes. Here we propose adopting recent systems-engineering methodologies for privacy threat modeling, risk assessment, and privacy engineering in the BCI field. These methodologies address privacy and security concerns in a more systematic and holistic way than previous approaches, and they provide reusable patterns for moving from principles to actions. We apply these methodologies to BCI systems and their data flows and derive a generic, extensible, and actionable framework for brain-privacy-preserving cybersecurity in BCI applications. This framework is designed for flexible application to the wide range of current and future BCI applications. We also propose a range of novel privacy-by-design features for BCIs, with an emphasis on features promoting BCI transparency as a prerequisite for the informational self-determination of BCI users, as well as design features for ensuring BCI user autonomy. We anticipate that our framework will contribute to the development of privacy-respecting, trustworthy BCI technologies

    A Control Systems Perspective to Condition Monitoring and Fault Diagnosis

    Modern industrial processes, engineering systems, and structures have grown significantly in complexity and scale in recent years. There is therefore an increasing demand for automated procedures that avoid faults and severe breakdowns through predictive maintenance. In this context, research into nonlinear systems analysis has attracted much interest, as linear models cannot represent some of these systems. In the field of control systems, the analysis of such systems is conducted in the frequency domain using methods of Frequency Response Analysis. Generalised Frequency Response Functions (GFRFs) and Nonlinear Output Frequency Response Functions (NOFRFs) are Frequency Response Analysis techniques used for analysing nonlinear dynamical behaviour in the frequency domain. The problem of Condition Monitoring and Fault Diagnosis has been investigated from the perspectives of modelling, signal processing, and multivariate statistical analysis; data-driven methods such as neural networks have gained significant popularity because the possible faulty conditions of complex systems are often difficult to interpret. Against this background, a new data-driven approach based on a systems perspective has recently been proposed. This approach uses the control systems methods of System Identification and Frequency Response Analysis and has previously been shown to be a promising technique. However, it raises certain practical concerns for real-world applications. Motivated by these concerns, this thesis puts forward the following contributions:
    1. The method of evaluating NOFRFs from the input-output data of a nonlinear system may suffer from numerical errors; a method is therefore developed to overcome these numerical issues effectively.
    2. Frequency Response Analysis cannot, in its current state, be applied to nonlinear systems that exhibit severe nonlinear behaviour. Although this has been argued to be theoretically possible, it has so far been impractical. The possibility and the manner in which Frequency Response Analysis can be conducted for such systems is therefore presented.
    3. A System Identification methodology is developed to overcome the issues of inadequately exciting inputs and of appropriately capturing system dynamics under general Condition Monitoring and Fault Diagnosis conditions.
    In addition, a control systems analysis approach is implemented to characterise corrosion, crack depth, and crack length on metal samples. The approach is applied to data collected using a newly proposed non-invasive Structural Health Monitoring method, RFID (Radio Frequency IDentification) wireless eddy current probing. Together, the control systems analysis approach and the RFID wireless eddy current probing method show clear potential as a new technology for non-invasive Structural Health Monitoring systems
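    As a baseline for the Frequency Response Analysis discussed above, a linear frequency response can be estimated from input-output data as H(f) = Y(f)/U(f). The first-order system below is invented for illustration; evaluating GFRFs or NOFRFs for nonlinear systems is considerably more involved:

```python
import numpy as np

# Illustrative sketch (not the thesis's NOFRF algorithm): estimate a
# discrete-time system's frequency response as H(f) = Y(f)/U(f),
# using an impulse input so the estimate is exact.
N = 1024
u = np.zeros(N)
u[0] = 1.0                          # unit impulse

# Simulate y[k+1] = a*y[k] + (1-a)*u[k], a first-order low-pass system.
a, y = 0.9, np.zeros(N)
for k in range(N - 1):
    y[k + 1] = a * y[k] + (1 - a) * u[k]

U, Y = np.fft.rfft(u), np.fft.rfft(y)
H = Y / U                           # empirical frequency response
print(round(abs(H[0]), 3))          # ≈ 1.0: unit DC gain
print(abs(H[-1]) < 0.1)             # True: gain drops near Nyquist
```

For noisy or random excitation one would average cross- and auto-spectra over segments rather than divide single FFTs, and for severely nonlinear systems the response depends on the input amplitude, which is the difficulty the thesis addresses.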