2,881 research outputs found

    A Web-GIS Online Vector Data Editing Method Based on Multi-scale Representation Data Structure

    Get PDF
    Analysis of the factors affecting the service quality of Web GIS shows that the key to efficient online interactive editing of vector data is controlling the size of the relevant data set. This paper puts forward an online vector data editing method based on the multi-scale representation data structure DBLG-tree. It introduces WindowingQuery for large geometric features; combined with SimplifyQuery, this effectively controls the number of vertices in the returned geometric features. At the same time, a sub-feature produced by WindowingQuery retains its link to the original feature and can be merged back and updated. The method also provides DownsizeQuery, which directly controls the volume of the query result so as to match the performance of different browsers and mobile devices. Experiments show that, supported by the multi-scale representation data structure, the execution time of DownsizeQuery is stable, as is the size of the queried and updated data set. The bounded query time and result size guarantee the response time of online interactive visualization and updating of vector features.
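    The abstract does not spell out the query interfaces; the sketch below is a hypothetical illustration (the function names and the uniform-subsampling strategy are assumptions, not the paper's DBLG-tree algorithms) of how a per-query vertex budget can bound the size of the data a client receives.

```python
# Hypothetical sketch of vertex-budget queries; not the paper's DBLG-tree API.

def simplify_query(vertices, max_vertices):
    """Return at most max_vertices vertices, keeping both endpoints and an even subsample."""
    n = len(vertices)
    if n <= max_vertices:
        return list(vertices)
    step = (n - 1) / (max_vertices - 1)
    return [vertices[round(i * step)] for i in range(max_vertices)]

def downsize_query(features, vertex_budget):
    """Cap the total number of returned vertices so the response size stays bounded."""
    per_feature = max(2, vertex_budget // max(1, len(features)))
    return {fid: simplify_query(verts, per_feature) for fid, verts in features.items()}

# Example: two polylines reduced to a fixed overall budget.
features = {"road_1": [(i, i * 0.5) for i in range(1000)],
            "road_2": [(i, -i) for i in range(400)]}
small = downsize_query(features, vertex_budget=200)
print({k: len(v) for k, v in small.items()})   # each feature capped near 100 vertices
```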

    Prediction-based techniques for the optimization of mobile networks

    Get PDF
    Mobile cellular networks are complex systems whose behavior is characterized by the superposition of several random phenomena, most of them related to human activities such as mobility, communication and network usage. However, when observed in their totality, the many individual components merge into more deterministic patterns, and trends start to become identifiable and predictable. In this thesis we analyze a recent branch of network optimization, commonly referred to as anticipatory networking, that combines prediction techniques with network optimization schemes. The main intuition behind anticipatory networking is that knowing in advance what is going on in the network helps to recognize potentially severe problems and to mitigate their impact by applying solutions while they are still in their initial stages. Conversely, network forecasts might also indicate a future improvement in the overall network condition (i.e. load reduction or better signal quality reported by users). In such a case, resources can be assigned more sparingly, requiring users to rely on buffered information while waiting for the better conditions under which it will be more convenient to grant more resources. At the beginning of this thesis we survey the current anticipatory networking panorama and the many prediction and optimization solutions proposed so far. In the main body of the work, we propose our novel solutions to the problem, along with the tools and methodologies we designed to evaluate them and to perform a real-world evaluation of our schemes. By the end of this work it will be clear that anticipatory networking is not only a very promising theoretical framework, but is also feasible and can deliver substantial benefits to current and next-generation mobile networks. In fact, both our theoretical and practical results show evidence that more than one third of the resources can be saved, and that even larger gains can be achieved for data-rate enhancement.
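    As a toy illustration of the anticipatory idea described above (not the thesis' actual optimization schemes), the sketch below defers part of a user's allocation when a naive forecast predicts better channel quality and the playback buffer can absorb the wait; all names, thresholds and units are assumptions.

```python
# Toy anticipatory-allocation sketch (illustrative only, not the thesis' scheme):
# if the forecast says channel quality will improve, allocate sparingly now,
# let the user drain its buffer, and grant more resources in the better slots.

def forecast(history, horizon=3):
    """Naive moving-average forecast of per-slot spectral efficiency."""
    avg = sum(history[-3:]) / min(3, len(history))
    return [avg] * horizon

def allocate(buffer_s, demand_bps, quality_now, predicted):
    """Return the amount of resources to grant this slot (toy model)."""
    best_future = max(predicted)
    if best_future > 1.2 * quality_now and buffer_s > 2.0:
        # Better conditions expected and the buffer can absorb the wait.
        return demand_bps * 0.5 / quality_now
    return demand_bps / quality_now

history = [1.0, 1.1, 0.9, 0.8]            # observed bits/s/Hz per resource block
print(allocate(buffer_s=5.0, demand_bps=2e6,
               quality_now=0.8, predicted=forecast(history)))
```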

    EDMON - Electronic Disease Surveillance and Monitoring Network: A Personalized Health Model-based Digital Infectious Disease Detection Mechanism using Self-Recorded Data from People with Type 1 Diabetes

    Get PDF
    Through time, we as a society have been tested by infectious disease outbreaks of different magnitudes, which often pose major public health challenges. To mitigate these challenges, research has focused on early detection mechanisms: identifying potential data sources, modes of data collection and transmission, and case and outbreak detection methods. Driven by the ubiquity of smartphones and wearables, the current endeavor targets individualizing the surveillance effort through a personalized health model, where case detection is realized by exploiting self-collected physiological data from wearables and smartphones. This dissertation aims to demonstrate the concept of a personalized health model as a case detector for outbreak detection by utilizing self-recorded data from people with type 1 diabetes. The results show that infection onset triggers substantial deviations, i.e. prolonged hyperglycemia despite higher insulin injections and reduced carbohydrate consumption. Key parameters such as blood glucose level, insulin, carbohydrate intake, and insulin-to-carbohydrate ratio are found to carry high discriminative power. A personalized health model devised from a one-class classifier and an unsupervised method using the selected parameters achieved promising detection performance. Experimental results show the superior performance of the one-class classifier, with models such as the one-class support vector machine, k-nearest neighbor, and k-means achieving the best performance. The results also reveal the effect of input parameters, data granularity, and sample size on model performance. These findings have practical significance for understanding the effect of infection episodes among people with type 1 diabetes and for the potential of a personalized health model in outbreak detection settings. The added benefit of the personalized health model concept introduced in this dissertation lies in its usefulness beyond surveillance, i.e. in devising decision support tools and learning platforms that help patients manage infection-induced crises.
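    A minimal sketch of the one-class-classifier idea on synthetic data is shown below; the feature values, sample sizes, and model hyperparameters are illustrative assumptions, not the dissertation's dataset or tuned models.

```python
# Minimal one-class-classifier sketch with synthetic self-recorded features
# (columns: mean blood glucose [mg/dL], total insulin [U], carbohydrate [g],
# insulin-to-carbohydrate ratio). All numbers are assumed for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(120, 15, 300), rng.normal(40, 5, 300),
                          rng.normal(200, 30, 300), rng.normal(0.2, 0.03, 300)])
infection = np.column_stack([rng.normal(180, 20, 20), rng.normal(55, 5, 20),
                             rng.normal(140, 25, 20), rng.normal(0.4, 0.05, 20)])

# Train only on infection-free days; the model learns the "normal" region.
scaler = StandardScaler().fit(normal)
model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(scaler.transform(normal))

# +1 = looks like regular self-management days, -1 = deviation (possible infection onset).
print(model.predict(scaler.transform(infection)))
```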

    Practical and Robust Power Management for Wireless Sensor Networks

    Get PDF
    Wireless Sensor Networks (WSNs) consist of tens or hundreds of small, inexpensive computers equipped with sensors and wireless communication capabilities. Because WSNs can be deployed without fixed infrastructure, they promise to enable sensing applications in environments where installing such infrastructure is not feasible. However, the lack of fixed infrastructure also presents a key challenge for application developers: sensor nodes must often operate for months or years at a time from fixed or limited energy sources. The focus of this dissertation is on reusable power management techniques designed to help sensor network developers achieve their systems' required lifetimes. Broadly speaking, power management techniques fall into two categories. Many power management protocols developed within the WSN community target specific hardware subsystems in isolation, such as sensor or radio hardware. The first part of this dissertation describes the Adaptive and Robust Topology control protocol (ART), a representative hardware-specific technique for conserving energy used by packet transmissions. In addition to these single-subsystem approaches, many applications can benefit greatly from holistic power management techniques that jointly consider the sensing, computation, and communication costs of potential application configurations. The second part of this dissertation extends this holistic power management approach to two families of structural health monitoring applications. By applying a partially-decentralized architecture, the cost of collecting vibration data for analysis at a centralized base station is greatly reduced. Finally, the last part of this dissertation discusses work toward a system for clinical early warning and intervention. The feasibility of this approach is demonstrated through a preliminary study of an early warning component based on historical clinical data. An ongoing clinical trial of a real-time monitoring component also provides important guidelines for future clinical deployments based on WSNs.
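    The lifetime pressure that motivates such power management can be illustrated with a back-of-the-envelope duty-cycle model; the battery capacity and current draws below are assumed example values, not figures from the dissertation.

```python
# Back-of-the-envelope lifetime model that motivates duty-cycled power management
# (the battery capacity and currents below are illustrative assumptions).

def lifetime_days(battery_mAh, active_mA, sleep_mA, duty_cycle):
    """Estimate node lifetime given the fraction of time the radio/CPU is active."""
    avg_mA = duty_cycle * active_mA + (1.0 - duty_cycle) * sleep_mA
    return battery_mAh / avg_mA / 24.0

# A 2000 mAh battery, 20 mA active draw, 0.02 mA sleep draw:
for dc in (1.0, 0.10, 0.01):
    print(f"duty cycle {dc:6.2%}: {lifetime_days(2000, 20.0, 0.02, dc):7.1f} days")
```

    Even this crude model shows why reducing the active fraction by one or two orders of magnitude, as duty-cycling protocols aim to do, is what moves lifetimes from days to many months.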

    Distributed cloud-edge analytics and machine learning for transportation emissions estimation

    Get PDF
    In recent years, IoT and Smart Cities have become a popular computing paradigm based on connected, network-enabled devices that provide different functionalities, from sensor measurements to home-automation actions. With this paradigm, stakeholders can be given near-real-time information from the field, e.g. the current pollution level of the city. Alongside these paradigms, Fog Computing enables computation near the sensors where the data is produced, i.e. on Edge nodes. This provides low latency and fault tolerance, given the possible independence of the sensor devices, and allows derived results to be produced in near real time. Pushing the computation to where the data is produced is beneficial in many situations; however, it also requires including at the Edge the data preparation processes that ensure the fitness for use of the data, since the incoming data can be erroneous. In this setting, Machine Learning is useful both for correcting data and for predicting future values. Although there have been studies on the use of data at the Edge, to our knowledge there is no evaluation of the different modeling situations and of the viability of the approach. Therefore, this thesis evaluates the possibility of building a distributed system that ensures the fitness for use of the incoming data through Machine-Learning-enabled data preparation, estimates emissions, and predicts the future status of the city in near real time. We evaluate the viability through three contributions. The first contribution focuses on forecasting in a distributed scenario, using a road traffic dataset for evaluation, and provides a robust way to build a central model. The approach is based on Federated Learning, which trains models at the Edge nodes and then merges them centrally; the Edge models can thus remain independent while still being synchronized. The results show the trade-off between accuracy and training time, and compare low-powered devices against server-class machines; these analyses show that using Machine Learning within this paradigm is viable. The second contribution focuses on the particular use case of ship emission estimation. To estimate exhaust emissions, the data must be correct, which is not always the case. This contribution explores the techniques available to correct ship registry data and proposes using simple Machine Learning techniques to impute missing or erroneous values. It analyzes the different variables and their relationships to provide practitioners with guidelines for correction and data treatment. The results show that classical Machine Learning can improve on state-of-the-art results, and, since these algorithms are simple, they can run on an Edge device if required. The third contribution focuses on generating new variables from those available in a ship trace dataset obtained from the Automatic Identification System (AIS). We use a pipeline of two methods, a neural network and a clustering algorithm, to group movements into movement patterns, or behaviors. We test the power of these behaviors to predict ship type, main engine power, and navigational status. The prediction of main engine power is compared against the standard technique used in ship emission estimation when the ship registry is missing. Our approach detects 45% of the emissions that would go undetected with the baseline method. As ship navigational status is prone to error, the behaviors found are proposed as an alternative variable based on robust data. Together, these contributions build a framework that distributes the learning processes and withstands network failures on low-powered devices.
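    A minimal sketch of the federated-averaging idea behind the first contribution is given below; the linear forecasting model, the synthetic data, and the sample-count weighting are illustrative assumptions rather than the thesis' actual models or setup.

```python
# Minimal federated-averaging sketch (illustrative, not the thesis' implementation):
# each Edge node fits a linear forecasting model locally, and only the model
# weights travel to the central node, where they are averaged by sample count.
import numpy as np

def fit_local(X, y):
    """Ordinary least squares with a bias column, trained on a node's local data."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w, len(X)

def federated_average(local_results):
    """Merge local weight vectors, weighting each node by its number of samples."""
    weights, counts = zip(*local_results)
    return np.average(np.stack(weights), axis=0, weights=np.array(counts, dtype=float))

rng = np.random.default_rng(1)
true_w = np.array([0.7, -0.2, 5.0])                      # shared ground truth
nodes = []
for n in (200, 50, 120):                                 # uneven data per Edge node
    X = rng.normal(size=(n, 2))
    y = X @ true_w[:2] + true_w[2] + rng.normal(0, 0.1, n)
    nodes.append(fit_local(X, y))

print(federated_average(nodes))                          # close to [0.7, -0.2, 5.0]
```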

    Data-Centric Epidemic Forecasting: A Survey

    Full text link
    The COVID-19 pandemic has brought forth the importance of epidemic forecasting for decision makers in multiple domains, ranging from public health to the economy as a whole. While forecasting epidemic progression is frequently conceptualized as analogous to weather forecasting, it has some key differences and remains a non-trivial task. The spread of diseases is subject to multiple confounding factors spanning human behavior, pathogen dynamics, weather and environmental conditions. Research interest has been fueled by the increased availability of rich data sources capturing previously unobservable facets and by initiatives from government public health and funding agencies. This has resulted, in particular, in a spate of work on 'data-centered' solutions which have shown potential to enhance our forecasting capabilities by leveraging non-traditional data sources as well as recent innovations in AI and machine learning. This survey delves into various data-driven methodological and practical advancements and introduces a conceptual framework to navigate them. First, we enumerate the large number of epidemiological datasets and novel data streams that are relevant to epidemic forecasting, capturing various factors like symptomatic online surveys, retail and commerce, mobility, genomics data and more. Next, we discuss methods and modeling paradigms, focusing on recent data-driven statistical and deep-learning based methods as well as on the novel class of hybrid models that combine the domain knowledge of mechanistic models with the effectiveness and flexibility of statistical approaches. We also discuss experiences and challenges that arise in real-world deployment of these forecasting systems, including decision-making informed by forecasts. Finally, we highlight some challenges and open problems found across the forecasting pipeline.
    Comment: 67 pages, 12 figures
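    As a toy illustration of the hybrid class the survey describes (not any specific system from the literature), the sketch below lets a mechanistic SIR model supply the epidemic trend and a simple statistical fit correct its residual against observed cases; all parameters and "observations" are synthetic assumptions.

```python
# Toy hybrid forecaster: mechanistic SIR trend + statistical residual correction.
import numpy as np

def sir(beta, gamma, s0, i0, steps, dt=1.0):
    """Euler-stepped SIR model returning daily incidence (fractions of population)."""
    s, i, hist = s0, i0, []
    for _ in range(steps):
        new_inf = beta * s * i * dt
        s, i = s - new_inf, i + new_inf - gamma * i * dt
        hist.append(new_inf)
    return np.array(hist)

mechanistic = sir(0.30, 0.10, 0.99, 0.01, 60) * 1e5        # model-based case counts
observed = mechanistic * 1.15 + 200                        # synthetic "reported" cases

# Statistical correction: linear fit of the residual against the mechanistic signal,
# trained on the first 40 days, then used to correct the remaining forecast horizon.
a, b = np.polyfit(mechanistic[:40], (observed - mechanistic)[:40], 1)
hybrid = mechanistic + (a * mechanistic + b)

print(np.abs(observed[40:] - mechanistic[40:]).mean(),     # error of SIR alone
      np.abs(observed[40:] - hybrid[40:]).mean())          # smaller error after correction
```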

    A surrogate-assisted measurement correction method for accurate and low-cost monitoring of particulate matter pollutants

    Get PDF
    Air pollution poses multiple health and economic challenges. Its accurate and low-cost monitoring is important for developing services dedicated to reducing the exposure of living beings to pollution. Particulate matter (PM) measurement sensors are among the key components that support the operation of such systems. In this work, a modular, mobile Internet of Things sensor for PM measurements is proposed. Due to the limited accuracy of the PM detector, the measurement data are refined using a two-stage procedure that eliminates non-physical signal spikes and then applies a non-linear correction of the responses using a multiplicative surrogate model. The correction layer is derived from sparse and non-uniform calibration data, i.e., a combination of measurements from the PM monitoring station and the sensor obtained in the same location over a specified (relatively short) interval. The device and the method have both been demonstrated using data obtained during three measurement campaigns. The proposed correction scheme improves the fidelity of PM measurements by around two orders of magnitude with respect to the responses without post-processing. The performance of the proposed surrogate-assisted technique compares favorably against benchmark approaches from the literature.
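    A simplified sketch of the two-stage refinement idea is shown below; the rolling-median spike test and the single-gain multiplicative fit are stand-ins for the paper's procedure, and all data are synthetic assumptions.

```python
# Sketch of the two-stage refinement idea: (1) remove non-physical spikes with a
# rolling-median test, (2) fit a multiplicative correction from co-located
# reference-station readings over a short calibration window. Illustrative only.
import numpy as np

def despike(x, window=5, k=4.0):
    """Replace samples that deviate strongly from the local median."""
    x = np.asarray(x, dtype=float)
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(len(x))])
    mad = np.median(np.abs(x - med)) + 1e-9
    out = x.copy()
    mask = np.abs(x - med) > k * mad
    out[mask] = med[mask]
    return out

def fit_multiplicative_correction(sensor, reference):
    """Least-squares gain so that gain * sensor approximates the reference PM readings."""
    return float(np.dot(sensor, reference) / np.dot(sensor, sensor))

rng = np.random.default_rng(2)
truth = 20 + 10 * np.sin(np.linspace(0, 6, 200))          # reference PM concentration
sensor = 0.6 * truth + rng.normal(0, 0.5, 200)            # biased, noisy low-cost sensor
sensor[[30, 90, 150]] += 80                               # non-physical spikes

clean = despike(sensor)
gain = fit_multiplicative_correction(clean[:100], truth[:100])   # short calibration window
print(gain, np.abs(gain * clean[100:] - truth[100:]).mean())     # corrected-sensor error
```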

    A Location-Aware Middleware Framework for Collaborative Visual Information Discovery and Retrieval

    Get PDF
    This work addresses the problem of scalable location-aware distributed indexing to enable leveraging collaborative effort for the construction and maintenance of world-scale visual maps and models that could support numerous activities, including navigation, visual localization, persistent surveillance, structure from motion, and hazard or disaster detection. Current distributed approaches to mapping and modeling fail to incorporate global geospatial addressing and offer limited functionality for customizing search. Our solution is a peer-to-peer middleware framework based on XOR-distance routing that employs a Hilbert space-filling-curve addressing scheme in a novel distributed geographic index. This allows for a universal addressing scheme supporting publish and search in dynamic environments while ensuring global availability of the model and scalability with respect to geographic size and number of users. The framework is evaluated using large-scale network simulations and a search application that supports visual navigation in real-world experiments.
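    The addressing idea can be sketched as follows; the helper names and grid resolution are illustrative assumptions, not the framework's actual API, which layers this scheme on top of XOR-distance (Kademlia-style) routing.

```python
# Sketch of the addressing idea (hypothetical helpers, not the framework's API):
# latitude/longitude are quantized onto a 2^k x 2^k grid, mapped to a Hilbert-curve
# index so nearby places get nearby keys, and peers/keys are compared by XOR distance.

def xy2d(n, x, y):
    """Classic Hilbert-curve conversion of grid cell (x, y) to curve index; grid side n = 2^k."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def geo_key(lat, lon, bits=16):
    """Quantize a lat/lon pair and return its Hilbert-curve key."""
    n = 1 << bits
    x = int((lon + 180.0) / 360.0 * (n - 1))
    y = int((lat + 90.0) / 180.0 * (n - 1))
    return xy2d(n, x, y)

def xor_distance(a, b):
    return a ^ b

k1 = geo_key(38.63, -90.20)    # St. Louis
k2 = geo_key(38.75, -90.37)    # nearby suburb
k3 = geo_key(48.85, 2.35)      # Paris
print(xor_distance(k1, k2) < xor_distance(k1, k3))   # nearby places -> smaller XOR distance
```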