371 research outputs found

    A Tabu Search WSN Deployment Method for Monitoring Geographically Irregular Distributed Events

    In this paper, we address the Wireless Sensor Network (WSN) deployment problem. We assume that the observed area is characterized by the geographical irregularity of the sensed events. Formally, each point in the deployment area is associated with a differentiated detection probability threshold that must be satisfied by our deployment method. The resulting WSN deployment problem is formulated as a multi-objective optimization problem that seeks to reduce the gap between the achieved event detection probabilities and the required thresholds while minimizing the number of deployed sensors. To overcome the computational complexity of an exact resolution, we propose an original pseudo-random approach based on the Tabu Search heuristic. Simulations show that our proposal outperforms several other approaches proposed in the literature. In the last part of the paper, we generalize the deployment problem by including a wireless network connectivity constraint, and we extend our proposal to ensure that the resulting WSN topology remains connected even when the sensor communication range is small.
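
    As an illustration of the approach this abstract outlines, here is a minimal Tabu Search sketch in Python; the grid of candidate sites, the exponential detection model, and the objective weights are our own assumptions, not the paper's.

    # Hypothetical Tabu Search sketch for threshold-aware sensor placement.
    # Assumptions (not from the paper): grid candidate sites, exponential
    # distance-decay detection, objective = threshold gap + sensor count.
    import math
    import random

    SITES = [(x, y) for x in range(10) for y in range(10)]    # candidate positions
    POINTS = [(random.random() * 9, random.random() * 9) for _ in range(50)]
    THRESH = [random.uniform(0.5, 0.9) for _ in POINTS]       # per-point requirement

    def p_detect(sensor, point):
        return math.exp(-0.5 * math.dist(sensor, point))      # assumed decay model

    def coverage(placed, point):
        miss = 1.0
        for s in placed:
            miss *= 1.0 - p_detect(s, point)
        return 1.0 - miss                                     # joint detection prob.

    def cost(placed):
        gap = sum(max(0.0, t - coverage(placed, p))
                  for p, t in zip(POINTS, THRESH))
        return gap + 0.05 * len(placed)                       # assumed weighting

    def tabu_search(iters=200, tenure=7):
        current = set(random.sample(SITES, 20))
        best, tabu = set(current), {}
        for it in range(iters):
            moves = []
            for s in random.sample(SITES, 30):                # sampled neighborhood
                cand = current ^ {s}                          # toggle one site
                if cand and tabu.get(s, -1) <= it:
                    moves.append((cost(cand), s, cand))
            if not moves:
                continue
            c, s, cand = min(moves)
            tabu[s] = it + tenure                             # forbid re-toggling
            current = cand
            if c < cost(best):
                best = set(current)
        return best

    print(len(tabu_search()), "sensors placed")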

    The Deployment in the Wireless Sensor Networks: Methodologies, Recent Works and Applications

    Wireless sensor networks (WSNs) are a continuously evolving research area with a variety of application contexts. They pose many optimization problems, particularly because sensors have limited capacity in terms of energy, processing, and memory. The deployment of sensor nodes is a critical phase that significantly affects the functioning and performance of the network. Often, the sensors constituting the network cannot be accurately positioned and end up scattered erratically. To compensate for the randomness of their placement, a large number of sensors is typically deployed, which also increases the fault tolerance of the network. In this paper, we study the positioning and placement of sensor nodes in a WSN. First, we introduce the deployment problem; then we survey the latest research on the methods proposed to solve it. Finally, we discuss related deployment issues and some interesting applications.

    An enhanced evolutionary algorithm for requested coverage in wireless sensor networks

    Wireless sensor nodes with new, application-specific sensing capabilities and requirements have changed the behaviour of wireless sensor networks and introduced new problems. Placement of the nodes in an application area is a well-known problem in the field. In addition, some applications must contend with high per-node cost while still producing a requested coverage with guaranteed connectivity. Conventional deployments and conventional models of coverage and connectivity behaviour cannot satisfy these application needs or increase network lifetime. This research therefore designed an effective node deployment evaluation parameter, developed a more efficient node deployment algorithm to reduce cost, and proposed an evolutionary algorithm to increase network lifetime while optimising deployment cost against the requested coverage scheme. The research presents the Accumulative Path Reception Rate (APRR) as a new method to evaluate node connectivity in a network. APRR, a node deployment evaluation parameter, measures the quality of the routing path from a sensing node to the sink node and is used to evaluate the quality of a network deployment strategy. Simulation results showed that the behaviour of the network closely matches the prediction of the APRR. In addition, a discrete imperialist competitive algorithm, an extension of the Imperialist Competitive Algorithm (ICA), was used to produce a network deployment plan that meets the requested event detection probability with a more efficient APRR, reducing deployment cost compared with the Multi-Objective Evolutionary Algorithm (MOEA) and the Multi-Objective Deployment Algorithm (MODA). Finally, a Repulsion Force and Bottleneck Handling (RFBH) evolutionary algorithm was proposed to obtain a higher APRR, increase network lifetime, and further reduce deployment cost. Simulation results for the lifetime and communication quality of the output network strategies confirmed the performance of the RFBH algorithm.
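
    The abstract does not give the APRR formula; one plausible reading, sketched below, accumulates per-hop packet reception rates along the route to the sink. The RSSI-to-reception-rate mapping is invented for illustration.

    # Hypothetical sketch of an APRR-style path metric: we assume the path
    # quality is the product of per-link packet reception rates to the sink.
    def link_prr(rssi_dbm: float) -> float:
        """Assumed mapping from link quality (RSSI) to packet reception rate."""
        return max(0.0, min(1.0, (rssi_dbm + 100.0) / 30.0))

    def aprr(path_rssi: list[float]) -> float:
        """Accumulate the reception rate over every hop of the route."""
        rate = 1.0
        for rssi in path_rssi:
            rate *= link_prr(rssi)
        return rate

    # A 3-hop route: stronger links keep the accumulated rate close to 1.
    print(aprr([-72.0, -80.0, -75.0]))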

    Connectivity, Coverage and Placement in Wireless Sensor Networks

    Wireless communication between sensors allows the formation of flexible sensor networks, which can be deployed rapidly over wide or inaccessible areas. However, the need to gather data from all sensors in the network imposes constraints on the distances between sensors. This survey describes the state of the art in techniques for determining the minimum density and optimal locations of relay nodes and ordinary sensors to ensure connectivity, subject to various degrees of uncertainty in the locations of the nodes.

    Softwarization of Large-Scale IoT-based Disasters Management Systems

    The Internet of Things (IoT) enables objects to interact and cooperate with each other to reach common objectives. It is very useful in large-scale disaster management systems, where humans are likely to fail when attempting search and rescue operations in high-risk sites. IoT can indeed play a critical role in all phases of large-scale disasters (i.e. preparedness, relief, and recovery). Network softwarization aims at designing, architecting, deploying, and managing network components primarily based on software programmability properties. It relies on key technologies such as cloud computing, Network Functions Virtualization (NFV), and Software Defined Networking (SDN); the key benefits are agility and cost efficiency. This thesis proposes softwarization approaches to tackle the key challenges of large-scale IoT-based disaster management systems. A first challenge faced by such systems is the dynamic formation of an optimal coalition of IoT devices for the tasks at hand; meeting this challenge is critical for cost efficiency. A second challenge is interoperability: IoT environments remain highly heterogeneous, yet the IoT devices need to interact. A third challenge is Quality of Service (QoS): disaster management applications are known to be very QoS sensitive, especially when it comes to delay. To tackle the first challenge, we propose a cloud-based architecture that enables the formation of efficient coalitions of IoT devices for search and rescue tasks. The proposed architecture enables the publication and discovery of IoT devices belonging to different cloud providers, and it comes with a coalition formation algorithm. For the second challenge, we propose an NFV- and SDN-based architecture for on-the-fly IoT gateway provisioning; the gateway functions are provisioned as Virtual Network Functions (VNFs) that are chained on the fly in the IoT domain using SDN. For the third challenge, we rely on fog computing to meet the QoS and propose algorithms that provision IoT application components in hybrid NFV-based clouds/fogs. Both stationary and mobile fog nodes are considered. For mobile fog nodes, a Tabu Search-based heuristic is proposed; it finds a near-optimal solution, and we show numerically that it is faster than the Integer Linear Programming (ILP) solution by several orders of magnitude.
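
    The coalition formation algorithm itself is not detailed in the abstract; the sketch below is a hypothetical greedy set-cover reading of the problem, with invented device names, capabilities, and costs.

    # Hypothetical greedy coalition formation sketch (not the thesis's actual
    # algorithm). Assumption: each IoT device advertises a capability set and
    # a usage cost, and a rescue task needs its capabilities covered cheaply.
    def form_coalition(devices: dict[str, tuple[set[str], float]],
                       required: set[str]) -> list[str]:
        coalition, missing = [], set(required)
        while missing:
            # Pick the device covering the most missing capabilities per cost.
            name, (caps, cost) = max(
                ((n, d) for n, d in devices.items() if n not in coalition),
                key=lambda item: len(item[1][0] & missing) / item[1][1])
            if not caps & missing:
                raise ValueError(f"no device provides {missing}")
            coalition.append(name)
            missing -= caps
        return coalition

    devices = {
        "drone1":  ({"camera", "thermal"}, 3.0),
        "robot2":  ({"lidar", "arm"}, 5.0),
        "sensorA": ({"gas"}, 1.0),
    }
    print(form_coalition(devices, {"camera", "gas", "lidar"}))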

    EVA: Emergency Vehicle Allocation

    Emergency medicine plays a critical role in the development of a community, where the goal is to provide medical assistance in the shortest possible time. Consequently, the systems that support emergency operations need to be robust, efficient, and effective when managing the limited resources at their disposal. To achieve this, operators analyse historical data in search of patterns in past occurrences that could help predict future call volume. This is a time-consuming and very complex task that could be solved by machine learning solutions, which have performed well in the context of time series forecasting. Only after future demand is known can the distribution of available assets be optimized to support high-density zones. The current work aims to propose an integrated system capable of supporting decision-making in emergency operations in a real-time environment by allocating a set of available units within a service area based on hourly call volume predictions. The suggested system architecture employs a microservices approach along with event-based communications to enable real-time interactions between every component. This dissertation focuses on the call volume forecasting and allocation optimization components. A combination of traditional time series and deep learning models was used to model historical data from Virginia Beach emergency calls between 2010 and 2018, combined with several other features such as weather-related information. Deep learning solutions offered better error metrics, with WaveNet achieving an MAE of 0.04. Regarding the optimization of emergency vehicle locations, the proposed solution is based on a Linear Programming problem that minimizes the number of vehicles in each station, with a neighbour mechanism, entitled EVALP-NM, to add a buffer to stations near a high-density zone. This solution was compared against a Genetic Algorithm, which performed significantly worse in terms of execution time and outcomes. The performance of EVALP-NM was tested in simulations with different settings for the number of zones, stations, and ambulances.
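
    A minimal sketch of an allocation model in the spirit of EVALP-NM, using the PuLP library; the actual decision variables, constraints, and neighbour mechanism are not given in the abstract, so the fleet bound, coverage sets, and one-vehicle buffer below are assumptions.

    # Hypothetical sketch of an EVALP-NM-style allocation model. Assumptions:
    # integer vehicle counts per station, a fleet bound, per-zone demand
    # served by nearby stations, and a +1 buffer on stations neighbouring a
    # high-density zone.
    import pulp

    stations = ["s1", "s2", "s3"]
    zones = {"z1": 2, "z2": 1}                      # predicted hourly demand
    covers = {"z1": ["s1", "s2"], "z2": ["s2", "s3"]}
    near_hot = {"s2"}                               # neighbours of a hot zone
    FLEET = 10

    prob = pulp.LpProblem("eva", pulp.LpMinimize)
    x = {s: pulp.LpVariable(f"x_{s}", lowBound=0, cat="Integer")
         for s in stations}
    prob += pulp.lpSum(x.values())                  # minimize vehicles deployed
    prob += pulp.lpSum(x.values()) <= FLEET
    for z, demand in zones.items():                 # serve each zone's demand
        prob += pulp.lpSum(x[s] for s in covers[z]) >= demand
    for s in near_hot:                              # assumed neighbour buffer
        prob += x[s] >= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({s: int(x[s].value()) for s in stations})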

    Effective Maintenance by Reducing Failure-Cause Misdiagnosis in Semiconductor Industry (SI)

    Increasing demand diversity and volume in the semiconductor industry (SI) have resulted in shorter product life cycles. This competitive environment, with high-mix low-volume production, requires sustainable production capacities, which can be achieved by reducing unscheduled equipment breakdowns. Fault detection and classification (FDC) is a well-known approach, used in the SI, to improve and stabilize production capacities. This approach models equipment as a single unit and uses sensor data to identify equipment failures behind product and process drifts. Despite its successful deployment for years, the recent increase in unscheduled equipment breakdowns calls for an improved methodology to ensure sustainable capacities. An analysis of equipment utilization, using data collected from a world-renowned semiconductor manufacturer, shows that failure durations as well as the number of repair actions per failure have significantly increased. This is evidence of misdiagnosis in the identification of failures and the prediction of their likely causes. In this paper, we propose two lines of defense against unstable and shrinking production capacities. First, equipment should be stopped only if it is suspected as a source of product and process drifts; the second line of defense focuses on more accurate identification of failures and detection of their associated causes. The objective is to help maintenance engineers make more accurate decisions about failures and repair actions upon an equipment stoppage. In the proposed methodology, these two lines of defense are modeled as a Bayesian network (BN) with unsupervised structure learning, using data collected from variables (classified as symptoms) across production, process, equipment, and maintenance databases. The proofs of concept demonstrate that contextual and statistical information beyond the FDC sensor signals, used as symptoms, provides reliable information (posterior probabilities) for finding the source of product/process quality drifts, a.k.a. failure modes (FM), as well as potential failures and causes. The reliability and learning curves show that modeling equipment at the module level rather than as a single unit offers 45% more accurate diagnosis. The approach contributes to reducing not only failure durations but also the number of repair actions behind the recent increase in unstable production capacities and unscheduled equipment breakdowns.
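
    To make the inference step concrete, here is a toy posterior computation of the kind the methodology produces; the priors, likelihoods, and naive conditional-independence assumption are ours, standing in for the learned BN structure.

    # Hypothetical sketch of the posterior-probability step: given observed
    # symptoms, rank candidate failure modes. Toy numbers throughout.
    PRIOR = {"pump_wear": 0.02, "valve_leak": 0.01, "sensor_drift": 0.05}
    # P(symptom observed | failure mode), one entry per (mode, symptom).
    LIKELIHOOD = {
        ("pump_wear", "long_repair"): 0.7, ("pump_wear", "vibration"): 0.8,
        ("valve_leak", "long_repair"): 0.4, ("valve_leak", "vibration"): 0.1,
        ("sensor_drift", "long_repair"): 0.2, ("sensor_drift", "vibration"): 0.05,
    }

    def posterior(symptoms: list[str]) -> dict[str, float]:
        joint = {}
        for mode, prior in PRIOR.items():
            p = prior
            for s in symptoms:                       # naive independence
                p *= LIKELIHOOD[(mode, s)]
            joint[mode] = p
        total = sum(joint.values())
        return {m: p / total for m, p in joint.items()}

    print(posterior(["long_repair", "vibration"]))   # pump_wear dominates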

    Bee Colony Optimization - part II: The application survey

    Bee Colony Optimization (BCO) is a meta-heuristic method based on the foraging habits of honeybees. The technique was motivated by the analogy between the natural behavior of bees searching for food and the behavior of optimization algorithms searching for an optimum in combinatorial optimization problems. BCO has been successfully applied to various hard combinatorial optimization problems, mostly in the transportation, location, and scheduling fields; some applications in continuous optimization have appeared recently. The main purpose of this paper is to familiarize the scientific community with BCO by summarizing its existing successful applications. [Project of the Ministry of Science of the Republic of Serbia, grants OI174010, OI174033, TR36002]
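
    BCO variants differ in their construction moves and recruitment rules, but most share a forward/backward-pass skeleton; the sketch below applies that skeleton to a toy 0/1 knapsack, with all parameters invented.

    # Hypothetical sketch of the common BCO pattern: bees extend partial
    # solutions (forward pass), then keep or abandon them with probability
    # tied to relative quality (backward pass, i.e. recruitment).
    import random

    VALUES = [10, 7, 12, 4, 9]
    WEIGHTS = [5, 3, 6, 2, 4]
    CAPACITY = 12
    BEES, PASSES = 8, len(VALUES)

    def value(sol):
        return sum(v for v, pick in zip(VALUES, sol) if pick)

    def weight(sol):
        return sum(w for w, pick in zip(WEIGHTS, sol) if pick)

    best = []
    for _ in range(50):                              # iterations
        bees = [[] for _ in range(BEES)]
        for step in range(PASSES):
            for b in bees:                           # forward pass: extend
                pick = random.random() < 0.5
                if pick and weight(b + [True]) > CAPACITY:
                    pick = False
                b.append(pick)
            vmax = max(value(b) for b in bees) or 1
            for i, b in enumerate(bees):             # backward pass: recruit
                loyalty = value(b) / vmax
                if random.random() > loyalty:        # abandon, follow a better bee
                    donor = random.choices(
                        bees, weights=[value(x) + 1 for x in bees])[0]
                    bees[i] = list(donor)
        it_best = max(bees, key=value)
        if value(it_best) > value(best):
            best = it_best
    print(best, value(best))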

    Computational Intelligence Algorithms for Optimisation of Wireless Sensor Networks

    Recent studies have tended towards incorporating Computational Intelligence, a broad umbrella covering machine learning and metaheuristic approaches, into wireless sensor network (WSN) applications for enhanced and intuitive performance. Metaheuristic optimisation techniques are used to solve several WSN issues such as energy minimisation, coverage, routing, and scheduling. This research designs and develops highly intelligent WSNs that can provide the core requirements of energy efficiency and reliability. To meet these requirements, two major decisions are carried out at the sink node or base station. The first decision involves the use of supervised and unsupervised machine learning algorithms to achieve an accurate decision at the sink node. This thesis presents a new hybrid approach for an event (fire) detection system using k-means clustering on aggregated fire data to form two class labels (fire and non-fire). The resulting data outputs are trained and tested with Feed-Forward Neural Network, Naive Bayes, and Decision Tree classifiers. This hybrid approach was found to significantly improve fire detection performance over the use of the classifiers alone. The second decision employs a metaheuristic approach to optimise the solution of the WSN clustering problem. Two metaheuristic-based protocols, the Dynamic Local Search Algorithm for Clustering Hierarchy (DLSACH) and the Heuristics Algorithm for Clustering Hierarchy (HACH), are proposed to achieve evenly balanced energy and minimise the net residual energy of each sensor node. This thesis shows that the two protocols outperform state-of-the-art protocols such as LEACH, TCAC, and SEECH in terms of network lifetime and maintain a favourable performance even under different energy-heterogeneity settings.
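
    A minimal sketch of the hybrid labelling idea with scikit-learn: cluster unlabelled readings into two groups, treat the cluster ids as fire/non-fire labels, and train a supervised classifier on them. The temperature/smoke features and synthetic data are assumptions for illustration.

    # Hypothetical sketch of the k-means + classifier hybrid. Cluster ids
    # from unlabelled data become the two class labels, then a supervised
    # model is trained and tested on them.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # Assumed features: [temperature, smoke level]; two synthetic regimes.
    normal = rng.normal([25.0, 0.1], [3.0, 0.05], size=(200, 2))
    fire = rng.normal([70.0, 0.8], [8.0, 0.10], size=(50, 2))
    X = np.vstack([normal, fire])

    # Unsupervised step: two clusters stand in for fire / non-fire labels.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    # Supervised step: train and evaluate a classifier on the cluster labels.
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    print("holdout accuracy on cluster labels:", clf.score(X_te, y_te))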

    Towards the Softwarization of Content Delivery Networks for Component and Service Provisioning

    Content Delivery Networks (CDNs) are common systems nowadays for delivering content (e.g. Web pages, videos) to geographically distributed end-users over the Internet. Leveraging geographically distributed replica servers, CDNs can easily help to meet the required Quality of Service (QoS) in terms of content quality and delivery time. Recently, the surge in demand for rich and premium content has encouraged CDN providers to provision value-added services (VASs) in addition to basic services. While video streaming is an example of a basic CDN service, VASs cover more advanced services such as media management. Network softwarization relies on programmability properties to facilitate the deployment and management of network functionalities. It brings several benefits, such as scalability, adaptability, and flexibility in the provisioning of network components and services; technologies such as Network Functions Virtualization (NFV) and Software Defined Networking (SDN) are its key enablers. There are several challenges related to component and service provisioning in CDNs. On the architectural front, a first challenge is extending CDN coverage through on-the-fly deployment of components in new locations, and another is upgrading CDN components in a timely manner, because traditionally they are deployed statically as physical building blocks. Yet another architectural challenge is the dynamic composition of the middle-boxes required for CDN VAS provisioning, because existing SDN frameworks lack features to support dynamic chaining of the application-level middle-boxes that are the essential building blocks of CDN VASs. On the algorithmic front, a challenge is the optimal placement of CDN VAS middle-boxes in a dynamic manner, as CDN VASs have an unknown end-point prior to placement. This thesis relies on network softwarization to address key architectural and algorithmic challenges related to component and service provisioning in CDNs. To tackle the first challenge, we propose an architecture based on NFV and microservices for on-the-fly CDN component provisioning, including deployment and upgrading. To address the second challenge, we propose an architecture for on-the-fly provisioning of VASs in CDNs using NFV and SDN technologies; the proposed architecture reduces content delivery time by introducing features for in-network caching. For the algorithmic challenge, we model the dynamic placement and chaining of middle-boxes (implemented as Virtual Network Functions (VNFs)) for CDN VASs as an Integer Linear Programming (ILP) problem, with the objective of minimizing cost while respecting the QoS. To increase the problem's tractability, we propose and validate some heuristics.
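
    The thesis solves placement and chaining exactly as an ILP, whose formulation is not given in the abstract; the sketch below is a greedy stand-in under assumed node capacities, unit costs, link latencies, and a delay budget, of the sort the proposed heuristics might resemble.

    # Hypothetical greedy sketch for placing a VNF chain. Assumptions:
    # substrate nodes have CPU capacity and a cost per CPU unit, links add
    # latency, and the chain has an end-to-end delay budget.
    LAT = {("a", "a"): 0, ("a", "b"): 5, ("b", "b"): 0,
           ("b", "c"): 4, ("a", "c"): 9, ("c", "c"): 0}

    def place_chain(chain, nodes, budget):
        """chain: [(vnf, cpu)]; nodes: {name: [capacity, unit_cost]}."""
        placement, prev, latency = {}, None, 0
        for vnf, cpu in chain:
            candidates = [
                (cost * cpu + LAT.get((prev or n, n), 99), n)
                for n, (cap, cost) in nodes.items()
                if cap >= cpu and latency + LAT.get((prev or n, n), 99) <= budget
            ]
            if not candidates:
                return None                      # fall back to the exact ILP
            _, best = min(candidates)
            nodes[best][0] -= cpu                # consume CPU on chosen node
            latency += LAT.get((prev or best, best), 0)
            placement[vnf], prev = best, best
        return placement

    nodes = {"a": [4, 1.0], "b": [8, 0.5], "c": [2, 2.0]}
    print(place_chain([("cache", 2), ("transcoder", 4)], nodes, budget=10))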