Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit region-wide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs. In this thesis, we explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include the scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service; contamination events can result in economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline. To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior.
Our proposed approach integrates IoT-based sensing and domain-specific models and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing. By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operations of water CPHI systems. We design and implement a sensor-simulation-data integration framework, AquaSCALE, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather) and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine levels of granularity (the individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes). We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes); the challenges here are the lack of a priori knowledge and the increased number and severity of damages to infrastructures. We present a graphical-model-based approach for efficient online state estimation, in which an offline graph factorization partitions a given network into disjoint subgraphs, and belief-propagation-based inference is executed on-the-fly in a distributed manner on those subgraphs.
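The core idea of combining hydraulic simulation with supervised matching can be illustrated with a toy sketch. Everything below is an illustrative assumption, not AquaSCALE's actual implementation: the "hydraulic model" is a hard-coded table of pressure-drop signatures, and localization is nearest-signature matching over noisy observations.

```python
# Hedged sketch of simulation-plus-learning pipe-failure localization:
# match observed pressure residuals against simulated break signatures.
# The network, signatures, and noise model are invented for illustration.
import math
import random

random.seed(0)

PIPES = ["p1", "p2", "p3"]

def simulate_signature(pipe):
    """Toy stand-in for a hydraulic simulation: each pipe break produces
    a characteristic pressure-drop pattern at five sensors."""
    return {"p1": [5, 3, 1, 0, 0],
            "p2": [0, 2, 5, 2, 0],
            "p3": [0, 0, 1, 3, 5]}[pipe]

def observe(pipe, noise=0.5):
    """Noisy field measurement of pressure drops after a break."""
    return [v + random.gauss(0, noise) for v in simulate_signature(pipe)]

def localize(observation):
    """Pick the pipe whose simulated signature best explains the
    observed residuals (nearest signature in Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PIPES, key=lambda p: dist(observation, simulate_signature(p)))

print(localize(observe("p2")))
```

In a real deployment the signature table would come from many simulation runs, and a trained classifier would replace the nearest-neighbour rule.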
Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake. Finally, we address issues of water quality; today this is a human-in-the-loop process in which operators need to gather water samples for lab tests. We incorporate the necessary abstractions with event processing methods into a workflow, which iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model-based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort. The proposed techniques are integrated into a middleware architecture, which enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services that operate in disruptive and resource-constrained environments.
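The HMM-based event inference over human grab samples can be sketched as a forward-algorithm belief update. The two-node network and all probabilities below are invented for illustration; the thesis's actual models are far richer.

```python
# Minimal sketch of HMM-style inference for contaminant source
# identification from grab samples. States, transitions, and emission
# probabilities are hypothetical, not taken from the thesis.
STATES = ["source_at_A", "source_at_B"]
PRIOR = {"source_at_A": 0.5, "source_at_B": 0.5}
# assumed: a contamination source tends to persist between samples
TRANS = {s: {t: (0.9 if s == t else 0.1) for t in STATES} for s in STATES}
# assumed: probability that a grab sample taken at node A tests positive
EMIT = {"source_at_A": {"pos": 0.8, "neg": 0.2},
        "source_at_B": {"pos": 0.3, "neg": 0.7}}

def forward(observations):
    """Forward algorithm: update the belief over the source location
    after each human-collected grab sample."""
    belief = dict(PRIOR)
    for obs in observations:
        predicted = {t: sum(belief[s] * TRANS[s][t] for s in STATES)
                     for t in STATES}
        unnorm = {t: predicted[t] * EMIT[t][obs] for t in STATES}
        z = sum(unnorm.values())
        belief = {t: unnorm[t] / z for t in STATES}
    return belief

# repeated positive samples near node A concentrate belief on that source
print(forward(["pos", "pos", "pos"]))
```

In the workflow described above, a reinforcement learning policy would then choose where to send the operator for the next sample, trading belief gain against sampling cost.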
Edge Computing for Internet of Things
The Internet of Things is becoming an established technology, with devices being deployed in homes, workplaces, and public areas at an increasingly rapid rate. IoT devices are the core technology of smart homes, smart cities, and intelligent transport systems, and promise to optimise travel, reduce energy usage and improve quality of life. With the growing prevalence of the IoT, the problem of how to manage the vast volume, wide variety, and erratic generation patterns of the data produced is becoming increasingly clear and challenging. This Special Issue focuses on solving this problem through the use of edge computing. Edge computing offers a solution to managing IoT data by processing it close to the location where it is generated. Edge computing allows computation to be performed locally, thus reducing the volume of data that needs to be transmitted to remote data centres and Cloud storage. It also allows decisions to be made locally without having to wait for Cloud servers to respond.
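The data-reduction pattern described above can be sketched in a few lines: an edge node collapses windows of raw readings into compact summaries before anything leaves the device. The window size and summary fields are illustrative assumptions, not prescribed by the Special Issue.

```python
# Minimal sketch of edge-side aggregation: forward one summary record
# per window instead of every raw reading, cutting upstream traffic.
from statistics import mean

def edge_summarize(readings, window=10):
    """Collapse each window of raw readings into one summary record."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "count": len(chunk),
            "mean": mean(chunk),
            "min": min(chunk),
            "max": max(chunk),
        })
    return summaries

raw = [20.1, 20.3, 19.9, 20.0, 20.2, 35.0, 20.1, 20.0, 19.8, 20.2]
print(edge_summarize(raw))  # one summary instead of ten raw values
```

A real edge node would also make local decisions on the summaries (e.g., raise an alert on the 35.0 outlier) without waiting for a cloud round trip.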
Dynamic learning of the environment for eco-citizen behavior
The development of sustainable smart cities requires the deployment of Information and Communication Technology (ICT) to ensure better services and information available at any time and everywhere. Although IoT devices are becoming more powerful and less costly, implementing an extensive sensor network in an urban context can still be expensive. This thesis proposes a technique for estimating missing environmental information in large-scale environments. Our technique provides information for areas of the environment that are not covered by sensing devices. The contributions of our proposal are summarized in the following points:
- limiting the number of sensing devices to be deployed in an urban environment;
- the exploitation of heterogeneous data acquired from intermittent devices;
- real-time processing of information;
- self-calibration of the system.
Our proposal uses the Adaptive Multi-Agent System (AMAS) approach to solve the problem of information unavailability. In this approach, an exception is considered a Non-Cooperative Situation (NCS) that has to be solved locally and cooperatively. HybridIoT exploits both homogeneous information (information of the same type) and heterogeneous information (information of different types or units) acquired from available sensing devices to provide accurate estimates at points of the environment where a sensing device is not available. The proposed technique enables estimating accurate environmental information under the conditions of variability arising from the urban application context in which the project is situated, conditions which have not been explored by state-of-the-art solutions:
- openness: sensors can enter or leave the system at any time without any particular configuration being necessary;
- large scale: the system can be deployed in a large-scale urban context and ensure correct operation with a significant number of devices;
- heterogeneity: the system handles different types of information without any a priori configuration.
Our proposal does not require any input parameters or reconfiguration. The system can operate in open, dynamic environments such as cities, where a large number of sensing devices can appear or disappear at any time without any prior notification. We carried out different experiments comparing the obtained results against several standard techniques to assess the validity of our proposal. We also developed a pipeline of standard techniques to produce baseline results for comparison with those obtained by our multi-agent proposal.
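The estimation idea can be illustrated with a small sketch: a missing reading at an uncovered point is inferred from nearby sensors, each weighted by how well its history correlates with past ground truth at the target. This is an assumed simplification, not the AMAS/HybridIoT mechanism itself.

```python
# Illustrative sketch of estimating a missing environmental reading from
# heterogeneous neighbors via correlation-weighted averaging. All names
# and the weighting scheme are assumptions for illustration.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def estimate_missing(neighbor_histories, target_history, current_readings):
    """Correlation-weighted average of the neighbors' current readings;
    negatively correlated neighbors are ignored."""
    weights = {name: max(pearson(hist, target_history), 0.0)
               for name, hist in neighbor_histories.items()}
    total = sum(weights.values())
    if total == 0:
        # fall back to a plain average when no neighbor correlates
        return sum(current_readings.values()) / len(current_readings)
    return sum(weights[n] * current_readings[n] for n in weights) / total

# A nearby sensor tracks the target closely; a distant one does not.
histories = {"near": [20, 22, 24, 26], "far": [5, 5, 6, 5]}
target = [21, 23, 25, 27]
print(estimate_missing(histories, target, {"near": 28.0, "far": 6.0}))
```

In HybridIoT this role is played by cooperating agents that continuously self-calibrate, which is what lets sensors join or leave without reconfiguration.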
Robot–City Interaction: Mapping the Research Landscape—A Survey of the Interactions Between Robots and Modern Cities
The goal of this work is to describe how robots interact with complex city environments, and to identify the main characteristics of an emerging field that we call Robot-City Interaction (RCI). Given the central role recently gained by modern cities as use cases for the deployment of advanced technologies, and the advancements achieved in the robotics field in recent years, we assume that there is increasing interest both in integrating robots into urban ecosystems and in studying how the two can interact with and benefit from each other. Our challenge, therefore, is to verify the emergence of such an area, to assess its current state, and to identify the main characteristics, core themes and research challenges associated with it. This is achieved by reviewing a preliminary body of work contributing to the area, which we classify and analyze according to an analytical framework comprising a set of key dimensions for RCI. Such a review not only serves as a preliminary state of the art in the area, but also allows us to identify the main characteristics of RCI and its research landscape.
Incentive Mechanisms for Participatory Sensing: Survey and Research Challenges
Participatory sensing is a powerful paradigm which takes advantage of smartphones to collect and analyze data beyond the scale of what was previously possible. Given that participatory sensing systems rely completely on the users' willingness to submit up-to-date and accurate information, it is paramount to effectively incentivize users' active and reliable participation. In this paper, we survey existing literature on incentive mechanisms for participatory sensing systems. In particular, we present a taxonomy of existing incentive mechanisms for participatory sensing systems, which are subsequently discussed in depth by comparing and contrasting different approaches. Finally, we discuss an agenda of open research challenges in incentivizing users in participatory sensing.
Comment: Updated version, 4/25/201
Internet of Things data contextualisation for scalable information processing, security, and privacy
The Internet of Things (IoT) interconnects billions of sensors and other devices (i.e., things) via the internet, enabling novel services and products that are becoming increasingly important for industry, government, education and society in general. It is estimated that by 2025 the number of IoT devices will exceed 50 billion, roughly seven times the estimated human population at that time. With such a tremendous increase in the number of IoT devices, the data they generate is also increasing exponentially and needs to be analysed and secured more efficiently. This gives rise to what appears to be the most significant challenge for the IoT: novel, scalable solutions are required to analyse and secure the extraordinary amount of data generated by tens of billions of IoT devices. Currently, no solutions exist in the literature that provide scalable and secure IoT-scale data processing. In this thesis, a novel scalable approach is proposed for processing and securing IoT-scale data, which we refer to as contextualisation. The contextualisation solution aims to exclude irrelevant IoT data from processing and to address data analysis and security considerations via the use of contextual information. More specifically, contextualisation can effectively reduce the volume, velocity and variety of data that needs to be processed and secured in IoT applications. This contextualisation-based data reduction can subsequently provide IoT applications with the scalability needed for IoT-scale knowledge extraction and information security. IoT-scale applications, such as smart parking or smart healthcare systems, can benefit from the proposed method, which improves the scalability of data processing as well as the security and privacy of data.
The main contributions of this thesis are: 1) an introduction to context and contextualisation for IoT applications; 2) a contextualisation methodology for IoT-based applications, modelled around observation, orientation, decision and action loops; 3) a collection of contextualisation techniques and a corresponding software platform for IoT data processing (referred to as contextualisation-as-a-service, or ConTaaS) that enables highly scalable data analysis, security and privacy solutions; and 4) an evaluation of ConTaaS in several IoT applications, demonstrating that our contextualisation techniques permit data analysis, security and privacy solutions to remain linear even in situations where the number of IoT data points increases exponentially.
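The smart-parking example mentioned above suggests what contextualisation-style data reduction might look like in code. The context rules below (active zones, opening hours) are invented for illustration and are not ConTaaS's actual API.

```python
# Hedged sketch of contextualisation as data reduction: drop IoT
# readings whose context makes them irrelevant before any heavier
# analysis or security processing. Rules are hypothetical.
def relevant(reading, context):
    """Keep a parking-sensor reading only if the application context
    currently cares about its zone and time window."""
    return (reading["zone"] in context["active_zones"]
            and context["open_hours"][0] <= reading["hour"] < context["open_hours"][1])

def contextualise(stream, context):
    """Filter the raw stream down to the context-relevant subset."""
    return [r for r in stream if relevant(r, context)]

context = {"active_zones": {"A", "B"}, "open_hours": (8, 20)}
stream = [
    {"zone": "A", "hour": 9,  "occupied": True},
    {"zone": "C", "hour": 10, "occupied": False},  # irrelevant zone
    {"zone": "B", "hour": 23, "occupied": True},   # outside open hours
    {"zone": "B", "hour": 12, "occupied": False},
]
print(len(contextualise(stream, context)))  # volume reduced before analysis
```

Because the filter cost is linear in the stream length, downstream analysis and security work scales with the relevant subset rather than the raw data volume, which is the scalability argument the thesis makes.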
From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey
Context data is in demand more than ever with the rapid increase in the development of many context-aware Internet of Things applications. Research in context and context-awareness is being conducted to broaden its applicability in light of many practical and technical challenges. One of these challenges is improving performance when responding to a large number of context queries. Context Management Platforms that infer and deliver context to applications measure this problem using Quality of Service (QoS) parameters. Although caching is a proven way to improve QoS, the transiency of context, together with features such as the variability and heterogeneity of context queries, poses an additional real-time cost management problem. This paper presents a critical survey of the state of the art in adaptive data caching, with the objective of developing a body of knowledge on cost- and performance-efficient adaptive caching strategies. We comprehensively survey a large number of research publications and evaluate, compare, and contrast different techniques, policies, approaches, and schemes in adaptive caching. Our critical analysis is motivated by a focus on adaptively caching context as a core research problem. A formal definition of adaptive context caching is then proposed, followed by the identified features and requirements of a well-designed, objectively optimal adaptive context caching strategy.
Comment: This paper is under review with the ACM Computing Surveys journal at the time of publishing on arXiv.
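The transiency problem the survey highlights can be made concrete with a small sketch: a cache whose per-entry time-to-live adapts to how volatile the cached context is. The TTL formula and class below are assumptions for illustration, not a strategy from the survey.

```python
# Illustrative sketch of an adaptive context cache: volatile context
# gets a short lifetime, stable context a long one, trading freshness
# against the cost of re-deriving context. Details are hypothetical.
import time

class AdaptiveContextCache:
    def __init__(self, base_ttl=10.0):
        self.base_ttl = base_ttl
        self.store = {}  # key -> (value, expiry time)

    def put(self, key, value, volatility=0.0):
        # volatility near 1.0 shrinks the lifetime to 10% of base_ttl
        ttl = self.base_ttl * (1.0 - 0.9 * min(max(volatility, 0.0), 1.0))
        self.store[key] = (value, time.monotonic() + ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # miss: caller must derive the context again
        value, expiry = entry
        if time.monotonic() > expiry:
            del self.store[key]  # stale context must not be served
            return None
        return value

cache = AdaptiveContextCache(base_ttl=10.0)
cache.put("parking:zoneA", {"free_slots": 12}, volatility=0.8)  # short TTL
cache.put("city:name", "Melbourne", volatility=0.0)             # full TTL
print(cache.get("city:name"))
```

An adaptive strategy in the survey's sense would additionally learn the volatility and the caching decision itself from query patterns and QoS/cost feedback, rather than taking volatility as an input.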
A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity
Autonomy and intelligence have been built into many of today's mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or merely optional part of product development. This research aims to address the challenges raised by the new data-driven design paradigm for smart products development, in which the product itself and its smartness need to be carefully co-constructed.
A smart product can be seen as a specific composition and configuration of physical components, which form its body, and analytics models, which implement its intelligence, evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the Product Lifecycle Management (PLM) concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition.
First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses issues related to product definition, modular design, product configuration, and the lifecycle management of analytics models by adapting the theoretical frameworks and methods of traditional product design and development.
An sPLM proof-of-concept platform has been implemented to validate the concepts and methodologies developed throughout the research. The sPLM platform provides a shared data repository to manage product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
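The uniform-representation idea behind the Smart Component data model can be sketched as a single record type that wraps both kinds of component. The field names and values below are speculative illustrations, not the thesis's actual schema.

```python
# Speculative sketch of a "Smart Component" record in the spirit of the
# sPLM data model: one uniform wrapper for physical components (from
# engineers) and analytics models (from data scientists).
from dataclasses import dataclass, field

@dataclass
class SmartComponent:
    name: str
    kind: str                      # "physical" or "analytics" (assumed)
    lifecycle_stage: str           # e.g. "design", "use", "retire"
    interfaces: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)

# A physical part and an analytics model composed into one smart product.
motor = SmartComponent("drive_motor", "physical", "design",
                       interfaces=["torque_out"], metadata={"mass_kg": 1.2})
fault_model = SmartComponent("bearing_fault_detector", "analytics", "design",
                             interfaces=["vibration_in", "alarm_out"],
                             metadata={"algorithm": "random_forest"})
product = [motor, fault_model]
print([c.name for c in product])
```

Composing both kinds of component through one type is what lets a PLM-style repository version, configure, and trace analytics models with the same machinery it uses for physical parts.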
Developing a distributed electronic health-record store for India
The DIGHT project addresses the problem of building a scalable and highly available information store for the Electronic Health Records (EHRs) of the over one billion citizens of India.