
    Mathematical Models and Algorithms for Network Flow Problems Arising in Wireless Sensor Network Applications

    We examine multiple variations on two classical network flow problems, the maximum flow and minimum-cost flow problems. These two problems are well studied within the optimization community, and many models and algorithms have been presented for their solution. Due to the unique characteristics of the problems we consider, existing approaches cannot be applied directly. The problem variations we examine commonly arise in wireless sensor network (WSN) applications; a WSN consists of a set of sensors and collection sinks that gather and analyze environmental conditions. In addition to providing a taxonomy of the relevant literature, we present mathematical programming models and algorithms for solving such problems. First, we consider a variation of the maximum flow problem having node-capacity restrictions. Rather than solving a single linear programming (LP) model, we present two alternative solution techniques: the first iteratively solves two smaller auxiliary LP models, and the second is a heuristic approach that avoids solving any LP. We also examine a variation of the maximum flow problem with semicontinuous restrictions, which require the flow on any path, if positive, to meet or exceed a minimum threshold. To avoid solving a mixed-integer programming (MIP) model, we present a branch-and-price algorithm that significantly reduces the computational time required to solve the problem. Finally, we study two dynamic network flow problems that arise in wireless sensor networks under non-simultaneous flow assumptions. We first consider a dynamic maximum flow problem that requires an arc to transmit a minimum amount of flow each time it begins transmission; we present an MIP for this problem along with a heuristic solution algorithm. Additionally, we study a dynamic minimum-cost flow problem in which an additional cost is incurred each time an arc begins transmission. In addition to an MIP, we present an exact algorithm that iteratively solves a relaxed version of the MIP until an optimal solution is found.
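
    The node-capacitated maximum flow variant mentioned above is commonly handled by the classic node-splitting reduction, which turns node capacities into arc capacities so that any standard arc-based max-flow solver applies. The sketch below illustrates only that textbook reduction, not the dissertation's LP-based or heuristic techniques; the function name and example data are made up for illustration.

        import networkx as nx

        def max_flow_with_node_capacities(arcs, node_cap, source, sink):
            """Classic node-splitting reduction: every node v becomes v_in -> v_out,
            and the splitting arc carries v's capacity, so a standard arc-capacitated
            max-flow solver also enforces the node capacities."""
            G = nx.DiGraph()
            nodes = {source, sink}
            for u, v, cap in arcs:
                G.add_edge((u, "out"), (v, "in"), capacity=cap)
                nodes.update([u, v])
            for v in nodes:
                if v in node_cap:
                    G.add_edge((v, "in"), (v, "out"), capacity=node_cap[v])
                else:
                    G.add_edge((v, "in"), (v, "out"))  # no capacity attribute = unlimited
            value, _ = nx.maximum_flow(G, (source, "in"), (sink, "out"))
            return value

        # Tiny example: relay node b may carry at most 4 units.
        arcs = [("s", "a", 5), ("s", "b", 5), ("a", "t", 5), ("b", "t", 5)]
        print(max_flow_with_node_capacities(arcs, {"b": 4}, "s", "t"))  # 9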

    Shared Mobility Optimization in Large Scale Transportation Networks: Methodology and Applications

    Optimization of on-demand transportation systems and ride-sharing services involves solving a class of complex vehicle routing problems with pickup and delivery with time windows (VRPPDTW). Previous research has made a number of important contributions to this challenging pickup and delivery problem along different formulation and solution approaches. However, a number of modeling and algorithmic challenges remain for a large-scale deployment of a vehicle routing and scheduling algorithm, especially for regional networks with road capacity and traffic delay constraints on freeway bottlenecks and signal timing on urban streets. The main thrust of this research is constructing hyper-networks that implicitly impose the complicated constraints of a vehicle routing problem (VRP) within the network construction itself. This research introduces a new hyper-network-based methodology for solving the vehicle routing problem in the case of the generic ride-sharing problem. The idea of hyper-networks is then applied to (1) solving the pickup and delivery problem with synchronized transfers, (2) computing resource hyper-prisms for sustainable transportation planning in the field of time geography, and (3) providing an integrated framework that fully captures the interactions between the supply and demand dimensions of travel in order to model the implications of advanced technologies and mobility services on traveler behavior.
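
    The hyper-network construction itself is not spelled out in the abstract, but it builds on the familiar idea of a time-expanded (space-time) network in which waiting and travelling arcs encode temporal restrictions directly in the graph. The toy sketch below only illustrates that underlying construction for a single vehicle; the inputs (travel_time, horizon) and the shortest-path query are illustrative assumptions, not the dissertation's formulation.

        import heapq

        def build_space_time_arcs(travel_time, horizon):
            """Build a discretized space-time network: waiting arcs keep a vehicle at a
            location for one time step; travelling arcs move it along a physical link."""
            locs = {i for (i, j) in travel_time} | {j for (i, j) in travel_time}
            arcs = {(i, t): [] for i in locs for t in range(horizon + 1)}
            for (i, t) in arcs:
                if t + 1 <= horizon:
                    arcs[(i, t)].append(((i, t + 1), 0))          # waiting arc
                for (a, b), tt in travel_time.items():
                    if a == i and t + tt <= horizon:
                        arcs[(i, t)].append(((b, t + tt), tt))    # travelling arc
            return arcs

        def cheapest_arrival(arcs, start, goal_loc):
            """Dijkstra over the space-time network for a single vehicle."""
            dist, heap = {start: 0}, [(0, start)]
            while heap:
                d, node = heapq.heappop(heap)
                if node[0] == goal_loc:
                    return d, node                                # (cost, (location, time))
                for nxt, c in arcs.get(node, []):
                    if d + c < dist.get(nxt, float("inf")):
                        dist[nxt] = d + c
                        heapq.heappush(heap, (d + c, nxt))
            return None

        tt = {("depot", "pickup"): 2, ("pickup", "dropoff"): 3}
        net = build_space_time_arcs(tt, horizon=10)
        print(cheapest_arrival(net, ("depot", 0), "dropoff"))     # (5, ('dropoff', 5))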

    Contextual Human Trajectory Forecasting within Indoor Environments and Its Applications

    A human trajectory is the likely path a human subject would take to reach a destination. Human trajectory forecasting algorithms try to estimate or predict this path, and they have wide applications in robotics, computer vision, and video surveillance. Understanding human behavior can provide useful information for the design of these algorithms. Human trajectory forecasting is an interesting problem because the outcome is influenced by many factors, of which we believe the destination, the geometry of the environment, and the humans in it play a significant role. In addressing this problem, we propose a model to estimate the occupancy behavior of humans based on the geometry and behavioral norms. We also develop a trajectory forecasting algorithm that understands this occupancy and leverages it for trajectory forecasting in previously unseen geometries. The algorithm can be useful in a variety of applications; in this work, we show its utility in three of them, namely person re-identification, camera placement optimization, and human tracking. Experiments were performed with real-world data and compared to state-of-the-art methods to assess the quality of the forecasting algorithm and the resulting improvement in the applications. The results obtained suggest a significant enhancement in the accuracy of trajectory forecasting and of the computer vision applications.
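
    As a rough illustration of how a destination, the environment geometry, and an occupancy estimate can be combined into a forecast, the toy sketch below predicts a path as the least-cost route over a grid whose cells are cheaper to traverse where people frequently walk. This is only a stand-in for the thesis's model; the grid representation and cost function are assumptions.

        import heapq

        def forecast_trajectory(occupancy, start, goal):
            """Toy forecaster: the predicted trajectory is the least-cost grid path,
            where traversing a cell is cheaper when people frequently occupy it."""
            rows, cols = len(occupancy), len(occupancy[0])
            def cell_cost(r, c):
                return 1.0 + (1.0 - occupancy[r][c])   # occupancy values in [0, 1]
            heap = [(0.0, start, [start])]
            seen = set()
            while heap:
                d, cell, path = heapq.heappop(heap)
                if cell == goal:
                    return path
                if cell in seen:
                    continue
                seen.add(cell)
                r, c = cell
                for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                        heapq.heappush(heap, (d + cell_cost(nr, nc), (nr, nc), path + [(nr, nc)]))
            return None

        occupancy = [[0.9, 0.1, 0.1],
                     [0.9, 0.9, 0.1],
                     [0.1, 0.9, 0.9]]   # e.g. a frequently used corridor
        print(forecast_trajectory(occupancy, (0, 0), (2, 2)))
        # [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]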

    Variational methods and their applications to computer vision

    Many computer vision applications such as image segmentation can be formulated in a "variational" way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult, as it generally involves non-convex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard to solve. Furthermore, they are ill-posed inverse problems and are therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate appropriate regularizations into the mathematical model, which requires complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted, because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures but also reconnects parts that may have been disconnected by noise. Moreover, it can easily be extended to graphs and successfully applied to different types of data such as medical imagery (e.g. vessels and heart coronaries), material samples (e.g. concrete), and satellite data (e.g. streets and rivers). In particular, we show results and performance for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
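
    As a concrete, if simplistic, instance of the variational formulation described above, the sketch below minimizes a small energy of the form E(u) = 0.5*||u - f||^2 + lam*TV(u) by gradient descent on a smoothed total-variation term. It illustrates the generic energy-minimization setup only, not the curvilinear-structure-preserving regularizer proposed in the work; the step size, smoothing parameter, and periodic boundary handling are illustrative choices.

        import numpy as np

        def tv_denoise(img, lam=0.2, step=0.1, iters=200, eps=1e-6):
            """Gradient descent on E(u) = 0.5*||u - img||^2 + lam * TV_eps(u),
            where TV_eps is a smoothed total variation, with periodic boundaries."""
            u = np.array(img, dtype=float)
            for _ in range(iters):
                gx = np.roll(u, -1, axis=1) - u            # forward differences (periodic)
                gy = np.roll(u, -1, axis=0) - u
                norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
                px, py = gx / norm, gy / norm
                # divergence of (px, py); -div is the gradient of the smoothed TV term
                div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
                u -= step * ((u - img) - lam * div)
            return u

        noisy = np.array([[0.0, 0.0, 1.0],
                          [0.0, 1.0, 1.0],
                          [0.0, 1.0, 1.0]]) + 0.1 * np.random.default_rng(0).standard_normal((3, 3))
        print(tv_denoise(noisy, lam=0.2))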

    Fog Computing

    Everything that is not a computer, in the traditional sense, is being connected to the Internet. These devices are also referred to as the Internet of Things (IoT), and they are putting pressure on the current network infrastructure. Not all devices are intensive data producers, and some of them can be used beyond their original intent by sharing their computational resources. The combination of these two factors can be used either to analyze data closer to where it originates or to provide new services by making computational resources available, primarily but not exclusively at the edge of the network. Fog computing is a new computational paradigm that offers these devices a form of cloud at a closer distance, to which IoT and other devices with connectivity capabilities can offload computation. In this dissertation, we explore the fog computing paradigm and compare it with related paradigms, namely cloud and edge computing. We then propose a novel architecture that can be used to form, or be part of, this new paradigm. The implementation was tested on two types of applications: the first had the main objective of demonstrating the correctness of the implementation, while the other had the goal of validating the characteristics of fog computing.

    Models and Solution Approaches for Efficient Design and Operation of Wireless Sensor Networks

    Recent advancements in sensory devices are presenting various opportunities for widespread applications of wireless sensor networks (WSNs). The most distinguishing characteristic of a WSN is the fact that its sensors have finite and non-renewable energy resources. Many research efforts aim at developing energy-efficient network topology and routing schemes for prolonging the network lifetime. However, in the majority of the literature, topology control and routing problems are handled separately, thus overlooking the interrelationships between them. In this dissertation, we consider an integrated topology control and routing problem in WSNs, which are a unique type of data-gathering network characterized by limited energy resources at the sensor nodes distributed over the network. We suggest an underlying hierarchical topology and routing structure that aims to achieve the most prolonged network lifetime via efficient use of limited energy resources, while addressing operational specificities of WSNs such as the communication-computation trade-off, data aggregation, and multi-hop data transfer for better energy efficiency. We develop and examine three different objectives and their associated mathematical models that define alternative policies to be employed in each period of a deployment cycle, with the purpose of maximizing the number of periods so that the network lifetime is prolonged. On the methodology side, we develop effective solution approaches based on decomposition techniques, heuristics, and parallel heuristic algorithms. Furthermore, we devise visualization tools to support our optimization efforts and demonstrate that visualization can be very helpful in solving larger and realistic problems of a dynamic nature. This dissertation research provides novel analytical models and solution methodologies for important practical problems in WSNs. The solution algorithms developed herein will also contribute to the generalized mixed-discrete optimization problem, especially for problems with similar characteristics.
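
    The decomposition-based models and heuristics cannot be reproduced from the abstract alone, but the period-based lifetime objective can be illustrated with a deliberately simple greedy policy: in every period, rotate the energy-hungry cluster-head role to the nodes with the most residual energy and count how many periods the network survives. All names and energy figures below are assumptions made purely for illustration.

        def count_lifetime_periods(energy, num_heads, head_cost, member_cost):
            """Toy greedy policy: each period, the num_heads nodes with the largest
            residual energy act as cluster heads (which costs more energy); the
            network lifetime is the number of periods before any node is depleted."""
            energy = dict(energy)
            periods = 0
            while True:
                heads = set(sorted(energy, key=energy.get, reverse=True)[:num_heads])
                drained = {v: e - (head_cost if v in heads else member_cost)
                           for v, e in energy.items()}
                if min(drained.values()) < 0:
                    return periods
                energy = drained
                periods += 1

        # Four sensors with made-up initial energy budgets.
        print(count_lifetime_periods({"s1": 10, "s2": 10, "s3": 8, "s4": 6},
                                     num_heads=1, head_cost=3, member_cost=1))  # 5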

    Planning and Management of Cloud Computing Networks

    The evolution of the Internet has a great impact on a large part of the population. People use it to communicate, query information, receive news, work, and for entertainment. Its extraordinary usefulness as a communication medium has made the number of applications and technological resources explode. However, that network expansion comes at the cost of an important power consumption. If the power consumption of telecommunication networks and data centers were counted as the power consumption of a country, it would rank 5th in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that, besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced by up to a factor of 6, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed on servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce application deployment time and improve interoperability, because a new user only needs a web browser and does not need to install software on a local computer with a specific operating system. Second, applications and information are continuously available from everywhere and from any device with Internet access. Moreover, servers and computing resources can be assigned to applications dynamically, according to the number of users and the workload; this is what is called application elasticity.
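
    One small ingredient of such cost- and energy-aware planning is deciding how many servers must be powered on for a given workload. The sketch below is only a toy first-fit-decreasing placement that uses the number of active servers as a crude proxy for power; it is not the optimization models developed in the thesis, and every name and parameter is an illustrative assumption.

        def place_vms(vm_demands, server_capacity):
            """First-fit-decreasing placement: pack VMs onto as few servers as
            possible, using the number of powered-on servers as a crude power proxy."""
            free = []            # remaining capacity of each active server
            placement = {}       # vm -> server index
            for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
                for i, cap in enumerate(free):
                    if demand <= cap:
                        free[i] -= demand
                        placement[vm] = i
                        break
                else:
                    free.append(server_capacity - demand)
                    placement[vm] = len(free) - 1
            return placement, len(free)

        demands = {"web": 4, "db": 6, "cache": 2, "batch": 5}   # illustrative CPU units
        print(place_vms(demands, server_capacity=8))            # uses 3 servers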

    Fundamentals

    Volume 1 establishes the foundations of this new field. It goes through all the steps from data collection, through data summarization and clustering, to different aspects of resource-aware learning, i.e., hardware, memory, energy, and communication awareness. Machine learning methods are inspected with respect to their resource requirements and to how scalability can be enhanced on diverse computing architectures, ranging from embedded systems to large computing clusters.

    Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis

    Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in the diagnosis and characterization of diseases or to guide interventions. In order to be integrated into a clinical workflow, accurate and robust methods of analysis must be developed which manage this increase in data. Recent improvements in inexpensive, commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time, and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI) and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization that aims at solving the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems, without the particular drawbacks of their discrete counterparts, graph cuts, which typically present metrication artefacts. Max-flow solvers generally produce robust results, but are known to be computationally expensive, especially with large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization, and smooth the resulting deformation fields via total-variation regularization to guarantee that the problem is mathematically well-posed. We compare the performance of these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases and demonstrate highly accurate performance with low run times. The best-performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue, where it facilitates fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance, thereby facilitating deployment into clinical workflows. We evaluate all proposed algorithms in terms of run times, accuracy, repeatability, and errors arising from user interactions, and we demonstrate that these methods are able to outperform established methods. The presented approaches demonstrate high performance in comparison with established methods in terms of accuracy and repeatability, while largely reducing run times due to the use of GPU hardware.
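
    For readers unfamiliar with the max-flow view of segmentation mentioned above, the sketch below shows the discrete baseline that the thesis contrasts against: a two-region graph-cut segmentation in which intensity-based unary costs become terminal arc capacities and a pairwise smoothness weight becomes neighbour arc capacities. The continuous max-flow solvers and GPGPU implementations developed in the thesis are not reproduced here; the intensity model and weights are illustrative assumptions.

        import networkx as nx

        def graphcut_segment(img, fg_mean, bg_mean, lam=1.0):
            """Two-region discrete graph-cut baseline: terminal arcs carry the
            intensity-based unary costs, neighbour arcs carry the smoothness weight
            lam, and the minimum s-t cut yields the foreground pixel set."""
            rows, cols = len(img), len(img[0])
            G = nx.DiGraph()
            for r in range(rows):
                for c in range(cols):
                    p = (r, c)
                    G.add_edge("src", p, capacity=(img[r][c] - bg_mean) ** 2)   # paid if p is labelled background
                    G.add_edge(p, "sink", capacity=(img[r][c] - fg_mean) ** 2)  # paid if p is labelled foreground
                    for q in ((r + 1, c), (r, c + 1)):                          # 4-neighbour smoothness
                        if q[0] < rows and q[1] < cols:
                            G.add_edge(p, q, capacity=lam)
                            G.add_edge(q, p, capacity=lam)
            _, (src_side, _) = nx.minimum_cut(G, "src", "sink")
            return {p for p in src_side if p != "src"}

        img = [[0.1, 0.2, 0.8],
               [0.1, 0.9, 0.9],
               [0.2, 0.8, 0.9]]
        print(sorted(graphcut_segment(img, fg_mean=0.9, bg_mean=0.1, lam=0.05)))  # pixels labelled foreground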