
    Using Probabilistic Graphical Models to Draw Inferences in Sensor Networks with Tracking Applications

    Sensor networks have been an active research area over the past decade due to the variety of their applications. Many studies have addressed the problems underlying their middleware services, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology for detection and surveillance in many real-world applications. Individual sensors are small, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors try to transmit at the same time, and, because each sensor's communication range is limited, the network may not have a 1-hop communication topology, so routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential, and graphical models are an intuitive choice for designing such algorithms. This thesis focuses on a classic sensor network application, the detection and tracking of targets. It develops feasible inference techniques for sensor networks based on statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended with network topology manipulation so that it can be applied to tracking under different network topology settings. Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks; the results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall-detection system was also simulated under real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect a falling human body, and the two-tier tracking algorithm was run on the ultrasonic sensors to track the location of the elderly occupants.
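
    The two-tier strategy described above (coarse global inference from binary detections, then a small dynamic cluster for refinement) can be illustrated with a minimal sketch. The function names, the centroid-based coarse step, and the inverse-range weighting are illustrative assumptions, not the thesis' actual algorithm.

    # Minimal sketch of a two-tier tracking step: a coarse estimate from binary
    # detections, then refinement using only the cluster of nearby sensors.
    # All names and the weighting scheme are illustrative assumptions.
    import numpy as np

    def coarse_estimate(sensor_xy, fired):
        """Tier 1: centroid of the sensors whose binary detector fired."""
        active = sensor_xy[fired]
        return active.mean(axis=0) if len(active) else None

    def refine_in_cluster(sensor_xy, ranges, rough_xy, cluster_size=4):
        """Tier 2: dynamically cluster the nearest sensors and fuse their
        range readings with a simple inverse-range weighted average."""
        d = np.linalg.norm(sensor_xy - rough_xy, axis=1)
        idx = np.argsort(d)[:cluster_size]          # small cluster around target
        w = 1.0 / (ranges[idx] + 1e-6)               # trust closer sensors more
        return np.average(sensor_xy[idx], axis=0, weights=w)

    # toy usage: 5x5 sensor grid, one target
    xy = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
    target = np.array([2.3, 1.7])
    ranges = np.linalg.norm(xy - target, axis=1)
    fired = ranges < 1.5                              # binary detections
    rough = coarse_estimate(xy, fired)
    print(refine_in_cluster(xy, ranges, rough))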

    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Continued progress in sensor technology has steadily expanded the number and range of low-cost, small, and portable sensors on the market, increasing the number and types of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSNs) involving hundreds or thousands of devices on limited budgets often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. It is therefore challenging to achieve good data quality and maintain error-free measurements over the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential yet challenging, for several reasons, such as the presence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to take place in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
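
    One of the simplest calibration models covered by this family of surveys is a linear gain/offset correction fitted against a co-located reference instrument. The sketch below is a hypothetical illustration of that idea only; the function names and the synthetic data are assumptions.

    # Minimal sketch of a linear gain/offset calibration against a reference
    # sensor. Names and data are illustrative assumptions.
    import numpy as np

    def fit_linear_calibration(raw, reference):
        """Least-squares fit of: reference ~ gain * raw + offset."""
        A = np.vstack([raw, np.ones_like(raw)]).T
        gain, offset = np.linalg.lstsq(A, reference, rcond=None)[0]
        return gain, offset

    def apply_calibration(raw, gain, offset):
        return gain * raw + offset

    # toy usage: a biased, noisy low-cost sensor against a trusted reference
    true_values = np.linspace(10, 30, 50)
    raw = 0.8 * true_values + 3.0 + np.random.normal(0, 0.2, 50)
    gain, offset = fit_linear_calibration(raw, true_values)
    print(apply_calibration(raw, gain, offset)[:5])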

    Robust Environmental Mapping by Mobile Sensor Networks

    Constructing a spatial map of environmental parameters is a crucial step in preventing hazardous chemical leakages and forest fires, and in estimating spatially distributed physical quantities such as terrain elevation. Although prior methods can perform such mapping tasks efficiently by dispatching a group of autonomous agents, they are unable to ensure satisfactory convergence to the underlying ground-truth distribution in a decentralized manner when any of the agents fail. Since the agents used for such mapping are typically inexpensive and prone to failure, this results in poor overall mapping performance in real-world applications, which can in certain cases endanger human safety. This paper presents a Bayesian approach for robust spatial mapping of environmental parameters that deploys a group of mobile robots equipped with short-range sensors and capable of ad hoc communication, in the presence of hardware failures. Our approach first uses a variant of the Voronoi diagram to partition the region to be mapped into disjoint subregions, each associated with at least one robot. These robots are then deployed in a decentralized manner to maximize the likelihood that at least one robot detects every target in its associated region despite a non-zero probability of failure. A suite of simulation results demonstrates the effectiveness and robustness of the proposed method compared to existing techniques. Comment: accepted to ICRA 201
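
    Two ingredients named in the abstract, a Voronoi-style partition of the workspace and a failure-aware detection probability, can be sketched as follows. The grid resolution, sensing model, and all names are illustrative assumptions rather than the paper's actual formulation.

    # Minimal sketch: discrete Voronoi partition of a region among robots, and
    # the probability that at least one robot detects a target despite a
    # per-robot failure probability. Names and models are assumptions.
    import numpy as np

    def voronoi_partition(cells_xy, robots_xy):
        """Assign each grid cell to its nearest robot (discrete Voronoi regions)."""
        d = np.linalg.norm(cells_xy[:, None, :] - robots_xy[None, :, :], axis=2)
        return d.argmin(axis=1)

    def detection_probability(p_detect, p_fail, n_robots):
        """P(at least one of n robots detects), assuming each robot fails
        independently with probability p_fail and otherwise detects with p_detect."""
        p_miss_single = p_fail + (1 - p_fail) * (1 - p_detect)
        return 1 - p_miss_single ** n_robots

    # toy usage: 10x10 cell region, 3 robots
    cells = np.array([[i, j] for i in range(10) for j in range(10)], dtype=float)
    robots = np.array([[1.0, 1.0], [8.0, 2.0], [5.0, 8.0]])
    owner = voronoi_partition(cells, robots)
    print(np.bincount(owner))                                  # cells per robot region
    print(detection_probability(p_detect=0.9, p_fail=0.1, n_robots=2))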

    Unlocking the deployment of spectrum sharing with a policy enforcement framework

    Spectrum sharing has been proposed as a promising way to increase the efficiency of spectrum usage by allowing incumbent operators (IOs) to share their allocated radio resources with licensee operators (LOs) under a set of agreed rules. The goal is to maximize a common utility, such as the sum-rate throughput, while maintaining the level of service required by the IOs. However, this is only guaranteed under the assumption that all "players" respect the agreed sharing rules. In this paper, we propose a comprehensive framework for licensed shared access (LSA) networks that discourages LO misbehavior. Our framework is built around three core functions: misbehavior detection using a dedicated sensing network, a penalization function, and behavior-driven resource allocation. To the best of our knowledge, this is the first time these components have been combined for monitoring and policing the spectrum under the LSA framework. Moreover, a novel LSA simulator is provided as an open-access tool for testing and validating the proposed techniques through extensive system-level simulations in the context of mobile network operators, with one IO and several competing LOs. The results demonstrate that violating the agreed sharing rules can lead to a substantial loss of resources for the misbehaving LOs, the amount of which is controlled by the system. Finally, we argue that including a policy enforcement function in the spectrum sharing system can benefit the LSA system, since it can guarantee compliance with the spectrum sharing rules and limit the short-term benefits arising from misbehavior.
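
    A behavior-driven allocation rule in the spirit of the framework described above might look like the sketch below, where licensees flagged for violating the sharing rules receive a penalized share of the available resource blocks. The penalty curve and all names are hypothetical assumptions, not the paper's actual functions.

    # Minimal sketch of penalization plus behavior-driven resource allocation.
    # The penalty model and names are illustrative assumptions.

    def behavior_weights(violations, penalty=0.5):
        """Each recorded violation multiplies a licensee's weight by (1 - penalty)."""
        return {lo: (1 - penalty) ** v for lo, v in violations.items()}

    def allocate_blocks(total_blocks, violations):
        """Split resource blocks among licensees proportionally to their weights."""
        w = behavior_weights(violations)
        s = sum(w.values())
        return {lo: int(total_blocks * wi / s) for lo, wi in w.items()}

    # toy usage: LO_B was caught violating the agreed rules twice
    print(allocate_blocks(100, {"LO_A": 0, "LO_B": 2, "LO_C": 0}))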

    Towards delay-aware container-based Service Function Chaining in Fog Computing

    The fifth-generation mobile network (5G) has recently attracted significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services from different business verticals (e.g., smart cities, automotive). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end users. However, most SFC research focuses only on Multi-access Edge Computing (MEC) use cases, where mobile operators aim to deploy services close to end users. Bi-directional communication between edge and cloud is not considered in MEC, yet it is highly important in a Fog environment, for example in distributed anomaly detection services. Therefore, in this paper we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored to Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of microservices. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing end-to-end (E2E) latency. Results show that the proposed approach can lower network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
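
    The kind of delay-aware placement decision such a scheduler extension makes can be sketched as follows: filter nodes with enough spare CPU for the next function in the chain, then prefer the node with the lowest latency to the previously placed function. The node data, latency figures, and function names are assumptions for illustration, not the paper's actual Kubernetes controller.

    # Minimal sketch of a delay-aware placement choice for one SFC hop.
    # Names and data are illustrative assumptions.

    def place_next_function(nodes, latency_to_prev, cpu_request):
        """nodes: {name: spare CPU cores}; latency_to_prev: {name: ms to previous hop}."""
        feasible = [n for n, cpu in nodes.items() if cpu >= cpu_request]
        if not feasible:
            return None
        return min(feasible, key=lambda n: latency_to_prev[n])

    # toy usage: place a container requesting 0.5 CPU
    nodes = {"edge-1": 0.3, "edge-2": 1.0, "cloud-1": 8.0}
    latency = {"edge-1": 2.0, "edge-2": 4.0, "cloud-1": 25.0}
    print(place_next_function(nodes, latency, cpu_request=0.5))   # -> edge-2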

    On the Optimality of Virtualized Security Function Placement in Multi-Tenant Data Centers

    Security and service protection against cyber attacks remain among the primary challenges for virtualized, multi-tenant Data Centres (DCs), for reasons that range from a lack of resource isolation to the monolithic nature of legacy middleboxes. Although security is currently considered a property of the underlying infrastructure, diverse services require protection against different threats, at timescales on par with those of service deployment and elastic resource provisioning. We address the resource allocation problem of deploying customised security services over a virtualized, multi-tenant DC. We formulate the problem as an Integer Linear Program (ILP), an instance of the NP-hard variable-size, variable-cost bin-packing problem, with the objective of maximising the residual resources after allocation. We propose a modified version of the Best Fit Decreasing (BFD) algorithm to solve the problem in polynomial time, and we show that BFD improves the objective function by up to 80% over other algorithms.
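
    For reference, the textbook Best Fit Decreasing heuristic that the paper's modified version builds on can be sketched as below: items (here, security function instances) are sorted by decreasing demand and each is packed into the feasible server left with the least slack. This is only the plain baseline heuristic under assumed names; the paper's modification and objective handling differ.

    # Minimal sketch of plain Best Fit Decreasing packing. Names are assumptions.

    def best_fit_decreasing(demands, capacities):
        """Return (item -> server index or None if unplaceable, residual capacities)."""
        placement = {}
        residual = list(capacities)
        for item, d in sorted(enumerate(demands), key=lambda x: -x[1]):
            feasible = [i for i, r in enumerate(residual) if r >= d]
            if not feasible:
                placement[item] = None
                continue
            best = min(feasible, key=lambda i: residual[i] - d)   # tightest fit
            residual[best] -= d
            placement[item] = best
        return placement, residual

    # toy usage: 5 security function instances on 3 servers
    print(best_fit_decreasing([4, 2, 7, 1, 3], [8, 8, 8]))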