
    A network tomography approach for traffic monitoring in smart cities

    Various urban planning and management activities required by a Smart City are feasible because of traffic monitoring. The thesis therefore proposes a network tomography-based approach that can be applied to road networks to achieve a cost-efficient, flexible, and scalable monitor deployment. Owing to the algebraic nature of network tomography, the selection of monitoring intersections can be solved through matrices whose rows represent paths between two intersections and whose columns represent links in the road network. Because the goal of the algorithm is a cost-efficient, minimum-error, and high-coverage monitor set, this problem translates into an optimization problem over a matroid, which can be solved efficiently by a greedy algorithm. In addition, the approach handles noisy measurements and measurement-to-path matching. The approach achieves low error and 90% coverage with only 20% of nodes selected as monitors on a downtown San Francisco, CA topology --Abstract, page iv
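The matrix-and-matroid formulation described above lends itself to a compact sketch. The following is a hypothetical Python illustration, not the thesis implementation: candidate measurement paths are rows of a 0/1 routing matrix, and monitors are added greedily by rank gain, which is justified because linearly independent row sets form a matroid.

```python
import numpy as np

def greedy_monitor_selection(candidate_paths, budget):
    """Greedily pick measurement paths (routing-matrix rows) that
    increase matrix rank.  Linearly independent row sets form a
    matroid, so the greedy choice maximizes the final rank."""
    chosen = []
    for _ in range(budget):
        current = np.linalg.matrix_rank(np.array(chosen)) if chosen else 0
        gain, idx = max(
            (np.linalg.matrix_rank(np.array(chosen + [p])) - current, i)
            for i, p in enumerate(candidate_paths)
        )
        if gain == 0:          # no remaining path adds information
            break
        chosen.append(candidate_paths[idx])
    return chosen
```

With five candidate paths over four road links, a budget of three monitors already yields a rank-3 (75% identifiable) routing matrix; the thesis's cost and coverage terms would enter through a richer gain function.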

    A Network Tomography Approach for Traffic Monitoring in Smart Cities

    Traffic monitoring is a key enabler for several planning and management activities of a Smart City. However, traditional techniques are often not cost-efficient, flexible, or scalable. This paper proposes an approach to traffic monitoring that neither relies on probe vehicles nor requires vehicle localization through GPS. Instead, it exploits just a limited number of cameras placed at road intersections to measure car end-to-end traveling times. We model the problem within the theoretical framework of network tomography in order to infer the traveling times of all individual road segments in the road network. We specifically deal with the potential presence of noisy measurements and with the unpredictability of vehicle paths. Moreover, we address the issue of optimally placing the monitoring cameras so as to maximize coverage while minimizing the inference error and the overall cost. We provide an extensive experimental assessment on the topology of downtown San Francisco, CA, USA, using real measurements obtained through the Google Maps APIs, and on realistic synthetic networks. Our approach provides a very low error in estimating the traveling times over 95% of all roads, even when as few as 20% of road intersections are equipped with cameras.
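The core inference, recovering per-segment travel times from end-to-end camera measurements, reduces to a linear system. A minimal illustration with toy numbers (not the paper's San Francisco data):

```python
import numpy as np

# Routing matrix: each row is a measured end-to-end path, each column a
# road segment; entry 1 means the path traverses that segment.
A = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)

# Measured end-to-end travel times in seconds, consistent with
# (hypothetical) segment times of 10, 20, and 30 s.
b = np.array([30.0, 50.0, 40.0, 60.0])

# Least squares recovers the per-segment times; with noisy camera
# measurements the same call returns the minimum-squared-error estimate.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Here the system is overdetermined and consistent, so `x` recovers the segment times exactly; noise would spread the residual across the extra equations.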

    Adaptive Loss Inference Using Unicast End-to-End Measurements

    We address the problem of inferring link loss rates from unicast end-to-end measurements on the basis of network tomography. Because measurement probes incur additional traffic overhead, most tomography-based approaches perform the inference by collecting measurements only on selected paths. However, all previous approaches select paths offline, which inevitably misses many potentially identifiable links whose loss rates could be determined without bias. Furthermore, if element failures exist, an appreciable number of the selected paths may become unavailable. In this paper, we propose an adaptive loss inference approach in which paths are selected sequentially depending on the previous measurement results. In each round, we compute the loss rates of links that can be unbiasedly determined from the current measurement results and remove them from the system. Meanwhile, we locate the most probable failures based on the current measurement outcomes, to avoid selecting unavailable paths in subsequent rounds. In this way, all identifiable and potentially identifiable links can be determined without bias using only 20% of all available end-to-end measurements. Extensive simulations comparing our approach with a previous classical approach strongly confirm its promising performance.
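The standard transform behind such loss-rate inference can be shown briefly: a path's success probability is the product of its links' success probabilities, so taking logarithms yields a linear system over the links. The sketch below (the basic transform only, not the paper's adaptive path-selection scheme) recovers link loss rates on a toy identifiable topology:

```python
import numpy as np

def link_loss_rates(routing, path_success):
    """Infer per-link loss rates from end-to-end path success rates.
    Since success probabilities multiply along a path, logarithms give
    the linear system  routing @ log(link_success) = log(path_success),
    solvable for every identifiable link."""
    log_link, *_ = np.linalg.lstsq(np.asarray(routing, dtype=float),
                                   np.log(np.asarray(path_success, dtype=float)),
                                   rcond=None)
    return 1.0 - np.exp(log_link)   # loss rate = 1 - success probability
```

For three paths over three links with link success rates 0.9, 0.95, and 0.8, the observed path success rates 0.855, 0.76, and 0.72 invert to loss rates 0.1, 0.05, and 0.2.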

    Network coding for network tomography


    Practical Network Tomography

    In this thesis, we investigate methods for the practical and accurate localization of Internet performance problems. The methods we propose belong to the field of network loss tomography, that is, they infer the loss characteristics of links from end-to-end measurements. The existing versions of the problem of network loss tomography are ill-posed, hence, tomographic algorithms that attempt to solve them resort to making various assumptions, and as these assumptions do not usually hold in practice, the information provided by the algorithms might be inaccurate. We argue, therefore, for tomographic algorithms that work under weak, realistic assumptions. We first propose an algorithm that infers the loss rates of network links from end-to-end measurements. Inspired by previous work, we design an algorithm that gains initial information about the network by computing the variances of links' loss rates and by using these variances as an indication of the congestion level of links, i.e., the more congested the link, the higher the variance of its loss rate. Its novelty lies in the way it uses this information – to identify and characterize the maximum set of links whose loss rates can be accurately inferred from end-to-end measurements. We show that our algorithm performs significantly better than the existing alternatives, and that this advantage increases with the number of congested links in the network. Furthermore, we validate its performance by using an "Internet tomographer" that runs on a real testbed. Second, we show that it is feasible to perform network loss tomography in the presence of "link correlations," i.e., when the losses that occur on one link might depend on the losses that occur on other links in the network. More precisely, we formally derive the necessary and sufficient condition under which the probability that each set of links is congested is statistically identifiable from end-to-end measurements even in the presence of link correlations. 
In doing so, we challenge one of the popular assumptions in network loss tomography, specifically, the assumption that all links are independent. The model we propose assumes we know which links are most likely to be correlated, but it does not assume any knowledge about the nature or the degree of their correlations. In practice, we consider that all links in the same local area network or the same administrative domain are potentially correlated, because they could be sharing physical links, network equipment, or even management processes. Finally, we design a practical algorithm that solves "Congestion Probability Inference" even in the presence of link correlations, i.e., it infers the probability that each set of links is congested even when the losses that occur on one link might depend on the losses that occur on other links in the network. We model Congestion Probability Inference as a system of linear equations where each equation corresponds to a set of paths. Because it is infeasible to consider an equation for each set of paths in the network, our algorithm finds the maximum number of linearly independent equations by selecting particular sets of paths based on our theoretical results. On the one hand, the information provided by our algorithm is less than that provided by the existing alternatives that infer either the loss rates or the congestion statuses of links, i.e., we only learn how often each set of links is congested, as opposed to how many packets were lost at each link, or to which particular links were congested when. On the other hand, this information is more useful in practice because our algorithm works under assumptions weaker than those required by the existing alternatives, and we experimentally show that it is accurate under challenging network conditions such as non-stationary network dynamics and sparse topologies
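The relation the equations rest on, that a set of paths is simultaneously good exactly when no link in the union of those paths is congested, holds with or without link correlations, which is what makes the approach robust. A toy illustration (hypothetical helper, not the thesis algorithm) turning per-round binary path observations into congestion-probability estimates for link sets:

```python
from itertools import combinations
from statistics import mean

def congestion_probabilities(path_links, rounds):
    """For each subset S of paths, the empirical frequency with which
    every path in S is good estimates 1 - P(union of S's links is
    congested); this holds regardless of link correlations.
    `rounds` is a list of per-round tuples of path-good booleans.
    (Different subsets with the same link union overwrite each other;
    acceptable for a sketch.)"""
    estimates = {}
    for k in range(1, len(path_links) + 1):
        for subset in combinations(range(len(path_links)), k):
            links = frozenset().union(*(path_links[i] for i in subset))
            all_good = mean(all(r[i] for i in subset) for r in rounds)
            estimates[links] = 1 - all_good
    return estimates
```

For two paths sharing link `a`, four rounds of observations already yield estimates for the link sets {a,b}, {a,c}, and {a,b,c}; the thesis's contribution is choosing which of these equations form a maximal linearly independent system.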

    A Framework for Preserving Privacy and Cybersecurity in Brain-Computer Interfacing Applications

    Brain-Computer Interfaces (BCIs) comprise a rapidly evolving field of technology with the potential for far-reaching impact in domains ranging from medicine and industry to art, gaming, and the military. Today, these emerging BCI applications are typically still at early technology readiness levels, but because BCIs create novel technical communication channels for the human brain, they have raised privacy and security concerns. To mitigate such risks, a large body of countermeasures has been proposed in the literature, but a general framework is lacking that describes how the privacy and security of BCI applications can be protected by design, i.e., as an integral part of the early BCI design process, in a systematic manner, and with a depth of analysis suited to different contexts such as commercial BCI product development versus academic research and lab prototypes. Here we propose adapting recent systems-engineering methodologies for privacy threat modeling, risk assessment, and privacy engineering to the BCI field. These methodologies address privacy and security concerns in a more systematic and holistic way than previous approaches, and provide reusable patterns for moving from principles to actions. We apply these methodologies to BCI systems and data flows, and derive a generic, extensible, and actionable framework for brain-privacy-preserving cybersecurity in BCI applications. The framework is designed for flexible application to the wide range of current and future BCI applications. We also propose a range of novel privacy-by-design features for BCIs, with an emphasis on features promoting BCI transparency as a prerequisite for the informational self-determination of BCI users, as well as design features for ensuring BCI user autonomy. We anticipate that our framework will contribute to the development of privacy-respecting, trustworthy BCI technologies.

    Cognitive radar network design and applications

    PhD Thesis
    In recent years, several emerging technologies in modern radar system design have been attracting the attention of radar researchers and practitioners alike, noteworthy among which are multiple-input multiple-output (MIMO), ultra-wideband (UWB), and joint communication-radar technologies. This thesis focuses in particular on a cognitive approach to designing such modern radars. In the existing literature, these technologies have been implemented on a traditional platform in which the transmitter and receiver subsystems are discrete and do not exchange vital radar scene information. Although such radar architectures benefit from the technological advances mentioned above, their performance remains sub-optimal because dynamic radar scene information is not exchanged between the subsystems. Consequently, such systems cannot adapt their operational parameters “on the fly” in accordance with the dynamic radar environment. This thesis addresses this research gap by evaluating cognitive mechanisms that enable modern radars to adapt operational parameters such as waveform, power, and spectrum by continually learning about the radar scene through constant interaction with the environment and by exchanging this information between the radar transmitter and receiver. The cognitive feedback between the receiver and transmitter subsystems is the facilitator of intelligence for this type of architecture. In this thesis, the cognitive architecture is fused with modern radar systems such as MIMO, UWB, and joint communication-radar designs to achieve significant performance improvements in target parameter extraction. Specifically, in the context of MIMO radar, a novel cognitive waveform optimization approach is developed that facilitates enhanced target signature extraction.
    In terms of UWB radar system design, a novel cognitive illumination and target tracking algorithm for target parameter extraction in indoor scenarios is developed. A cognitive system architecture and waveform design algorithm are proposed for joint communication-radar systems. The thesis also explores cognitive dynamic systems that fuse the cognitive radar and cognitive radio paradigms for optimal resource allocation in wireless networks. In summary, the thesis provides a theoretical framework for implementing cognitive mechanisms in modern radar system design. Through such an approach, intelligent illumination strategies can be devised that adapt radar operational modes to target scene variations in real time, leading to radar systems that are better aware of their surroundings.
    Newcastle University, Newcastle upon Tyne; University of Greenwich
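The perception-action cycle at the heart of such cognitive architectures can be caricatured in a few lines. This is a deliberately simplified, hypothetical example (real cognitive radar adapts full waveforms and spectrum, not a single power level): the receiver estimates SNR each cycle and feeds it back, and the transmitter nudges its power toward a target.

```python
def cognitive_power_loop(channel_gain, target_snr, noise_power, steps=20):
    """Toy perception-action cycle: receiver estimates SNR, feeds it
    back, transmitter applies a damped power update toward target_snr.
    Returns the final transmit power and the resulting SNR."""
    power = 1.0
    for _ in range(steps):
        snr = power * channel_gain / noise_power   # receiver's estimate
        power *= (target_snr / snr) ** 0.5         # damped feedback update
    return power, power * channel_gain / noise_power
```

The damped update halves the log-domain error each cycle, so the loop settles on the target SNR within a handful of iterations; a real cognitive radar would replace the scalar update with waveform and spectrum optimization driven by the same feedback channel.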

    Physical layer security in co-operative MIMO networks - key generation and reliability evaluation

    Doctor of Philosophy, Department of Electrical and Computer Engineering, Balasubramaniam Natarajan
    Widely recognized security vulnerabilities in current wireless radio access technologies undermine the benefits of ubiquitous mobile connectivity. Security strategies typically rely on bit-level cryptographic techniques and associated protocols at various levels of the data processing stack. These solutions have drawbacks that have slowed the progress of new wireless services. Physical layer security approaches derived from an information-theoretic framework have recently been proposed, with secret key generation being the primary focus of this dissertation. Previous studies of physical layer secret key generation (PHY-SKG) indicate that a low secret key generation rate (SKGR) is the primary limitation of this approach. To overcome this drawback, we propose novel SKG schemes that increase the SKGR as well as improve the security strength of generated secret keys by exploiting multiple-input multiple-output (MIMO) and cooperative MIMO (co-op MIMO) networks. Both theoretical and numerical results indicate that relay-based co-op MIMO schemes, traditionally used to enhance LTE-A network throughput and coverage, can also increase the SKGR. Based on the proposed SKG schemes, we introduce power allocation strategies to further enhance the SKGR. Results indicate that the proposed power allocation scheme offers a 15% to 30% increase in SKGR relative to MIMO/co-op MIMO networks with equal power allocation in the low-power region, thereby improving network security. Although the co-op MIMO architecture can offer significant improvements in both performance and security, the concept of joint transmission and reception with relay nodes introduces new vulnerabilities. For example, even if the transmitted information is secured, it is difficult but essential to monitor the behavior of relay nodes. Selfish or malicious intentions of relay nodes may manifest as non-cooperation.
    Therefore, we propose relay node reliability evaluation schemes to measure and monitor the misbehavior of relay nodes. Using a power-sensing based reliability evaluation scheme, we detect selfish nodes and thereby measure the level of non-cooperation. An overall node reliability evaluation, which can serve as a guide for mobile users interested in collaborating with relay nodes, is performed at the base station. For malicious behavior, we propose a network tomography technique to derive node reliability metrics: we estimate the delay distribution of each internal link within a co-op MIMO framework and use this estimate as an indicator of reliability. The effectiveness of the proposed node reliability evaluations is demonstrated via both theoretical analysis and simulation results. The proposed PHY-SKG strategies, used in conjunction with the node reliability evaluation schemes, represent a novel cross-layer approach to enhancing the security of cooperative networks.
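The tomography step for relay reliability can be sketched under simplifying assumptions: mean per-link delays rather than the dissertation's full delay distributions, and toy numbers throughout. Paths through the relays give end-to-end delays; least squares attributes delay to individual relay links, and an abnormally slow link flags a suspect relay.

```python
import numpy as np

def relay_link_delays(routing, path_delays):
    """Estimate per-link delays from end-to-end path delays.
    Row r of `routing` marks which relay links path r traverses."""
    est, *_ = np.linalg.lstsq(np.asarray(routing, dtype=float),
                              np.asarray(path_delays, dtype=float),
                              rcond=None)
    return est

def unreliable_relays(routing, path_delays, threshold):
    """Flag links whose estimated delay exceeds a threshold; a crude
    stand-in for comparing full delay distributions."""
    return relay_link_delays(routing, path_delays) > threshold
```

With three paths over three relay links and delays of 12, 25, and 23 time units, the inferred link delays are 5, 7, and 18, so only the third relay exceeds a threshold of 10 and would be reported to the base station.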

    A Control Systems Perspective to Condition Monitoring and Fault Diagnosis

    Modern industrial processes, engineering systems, and structures have grown significantly in complexity and scale in recent years. There is therefore increasing demand for automated processes that avoid faults and severe breakdowns through predictive maintenance. In this context, research into nonlinear systems analysis has attracted much interest, since linear models cannot represent some of these systems. In the field of control systems, the analysis of such systems is conducted in the frequency domain using methods of Frequency Response Analysis. Generalised Frequency Response Functions (GFRFs) and Nonlinear Output Frequency Response Functions (NOFRFs) are Frequency Response Analysis techniques used for the analysis of nonlinear dynamical behaviour in the frequency domain. The problem of Condition Monitoring and Fault Diagnosis has been investigated from the perspectives of modelling, signal processing, and multivariate statistical analysis; data-driven methods such as neural networks have gained significant popularity because the possible faulty conditions of complex systems are often difficult to interpret. Against this background, a new data-driven approach based on a systems perspective has recently been proposed. This approach uses the control systems methods of System Identification and Frequency Response Analysis and has previously been shown to be a promising technique. However, it raises certain practical concerns for real-world applications. Motivated by these concerns, this thesis puts forward the following contributions: 1. The method of evaluating NOFRFs from input-output data of a nonlinear system may suffer from numerical errors; a method is developed to overcome these numerical issues effectively.
    2. Frequency Response Analysis cannot, in its current state, be used for systems that exhibit severe nonlinear behaviour; although this has been argued to be possible in theory, it has remained impractical. The manner in which Frequency Response Analysis can be conducted for such systems is therefore presented. 3. A System Identification methodology is developed to overcome the issues of inadequately exciting inputs and of appropriately capturing system dynamics under general Condition Monitoring and Fault Diagnosis circumstances. In addition, a control systems analysis approach is applied, for the first time, to characterising corrosion, crack depth, and crack length on metal samples. The approach is applied to data collected using a newly proposed non-invasive Structural Health Monitoring method called RFID (Radio Frequency IDentification) wireless eddy current probing. The control systems analysis approach, together with the RFID wireless eddy current probing method, shows clear potential as a new technology for non-invasive Structural Health Monitoring systems.
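The linear first step of such Frequency Response Analysis, estimating a frequency response function from an input-output record, can be sketched as follows. This is a bare-bones illustration (single record, no windowing, averaging, or regularisation), not the NOFRF evaluation method the thesis develops:

```python
import numpy as np

def estimate_frf(u, y):
    """One-shot frequency response estimate H(f) = Y(f) / U(f) from a
    single input record u and output record y.  Guard near-zero input
    bins to avoid division blow-ups."""
    U, Y = np.fft.rfft(u), np.fft.rfft(y)
    return Y / np.where(np.abs(U) < 1e-12, 1.0, U)
```

For a purely static gain (output = 2 x input), every frequency bin of the estimated FRF equals 2; for a real structure, changes in this function across inspections are what a condition-monitoring scheme would track.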