A smart water metering deployment based on the fog computing paradigm
In this paper, we look into smart water metering infrastructures that enable continuous, on-demand and bidirectional data exchange between metering devices, water flow equipment, utilities and end-users. We focus on the design, development and deployment of such infrastructures as part of larger smart city infrastructures. Until now, such critical smart city infrastructures have been developed following a cloud-centric paradigm, in which all data are collected and processed centrally using cloud services to create real business value. Cloud-centric approaches must address several performance issues at all levels of the network, as massive metering datasets are transferred to distant clouds, while also respecting security and data privacy requirements. Our solution uses the fog computing paradigm to provide a system where the computational resources already available throughout the network infrastructure are utilized to greatly facilitate the analysis of fine-grained water consumption data collected by the smart meters, thus significantly reducing the overall load on network and cloud resources. Details of the system's design are presented along with a pilot deployment in a real-world environment. The performance of the system is evaluated in terms of network utilization and computational performance. Our findings indicate that the fog computing paradigm can be applied to a smart grid deployment to effectively reduce the data volume exchanged between the different layers of the architecture and to provide better overall computational, security and privacy capabilities to the system.
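The fog-layer data reduction described above can be sketched as follows: a fog node buffers fine-grained meter readings and forwards only compact summaries to the cloud. This is a minimal illustration; the class, method names and window size are assumptions, not the paper's actual design.

```python
# Minimal sketch of fog-layer aggregation of smart-meter readings.
# FogNode, ingest() and the windowing scheme are illustrative assumptions.
from statistics import mean


class FogNode:
    """Buffers fine-grained readings locally and forwards only compact
    per-window summaries, cutting the data volume sent to the cloud."""

    def __init__(self, window_size: int):
        self.window_size = window_size
        self.buffer: list[float] = []
        self.summaries: list[dict] = []  # stands in for "send to cloud"

    def ingest(self, reading: float) -> None:
        self.buffer.append(reading)
        if len(self.buffer) >= self.window_size:
            self.summaries.append(self._summarize())
            self.buffer.clear()

    def _summarize(self) -> dict:
        return {
            "count": len(self.buffer),
            "mean": mean(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
        }


# e.g. one summary per hour of minute-resolution flow readings
node = FogNode(window_size=60)
for reading in [1.2, 1.3, 1.1] * 20:  # 60 simulated readings
    node.ingest(reading)
assert len(node.summaries) == 1       # 60 raw readings -> 1 uploaded record
```

In this toy setup, each 60-reading window collapses into a single summary record, which is the kind of load reduction the abstract reports between the fog and cloud layers.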
Secure Control and Operation of Energy Cyber-Physical Systems Through Intelligent Agents
The operation of the smart grid is expected to be heavily reliant on microprocessor-based control. Thus, there is a strong need for interoperability standards to address the heterogeneous nature of the data in the smart grid. In this research, we analyzed in detail the security threats of the Generic Object Oriented Substation Events (GOOSE) and Sampled Measured Values (SMV) protocol mappings of the IEC 61850 data modeling standard, which is the most widely industry-accepted standard for power system automation and control. We found that there is a strong need for security solutions that are capable of defending the grid against cyber-attacks, minimizing the damage in case a cyber-incident occurs, and restoring services within minimal time.
To address these risks, we focused on correlating cyber security algorithms with physical characteristics of the power system by developing intelligent agents that use this knowledge as an important second line of defense in detecting malicious activity. This will complement the cyber security methods, including encryption and authentication. Firstly, we developed a physical-model-checking algorithm, which uses artificial neural networks to identify switching-related attacks on power systems based on load flow characteristics.
Secondly, the feasibility of using neural network forecasters to detect spoofed sampled values was investigated. We showed that although such forecasters have high spoofed-data-detection accuracy, they are prone to the accumulation of forecasting error. In this research, we proposed an algorithm to detect the accumulation of the forecasting error based on lightweight statistical indicators. The effectiveness of the proposed algorithms was experimentally verified on the Smart Grid testbed at FIU. The test results showed that the proposed techniques have a minimal detection latency, in the range of microseconds.
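The error-accumulation check described above can be illustrated with a one-sided CUSUM drift statistic. The abstract names only "lightweight statistical indicators", so CUSUM is an assumed stand-in, and the slack and threshold values are invented.

```python
# Sketch of detecting accumulated forecasting error with a one-sided
# CUSUM statistic. The actual indicator in the thesis may differ.

def cusum_drift(errors, slack=0.05, threshold=1.0):
    """Return the first index where accumulated positive drift in the
    forecasting error exceeds `threshold`, or -1 if it never does."""
    s = 0.0
    for i, e in enumerate(errors):
        s = max(0.0, s + e - slack)  # zero-mean noise is absorbed by slack
        if s > threshold:
            return i
    return -1


# Zero-mean forecasting noise never trips the detector...
assert cusum_drift([0.04, -0.03, 0.02, -0.05] * 50) == -1
# ...but a persistent small bias (e.g. spoofed values slowly drifting
# away from the forecast) accumulates and is flagged.
assert cusum_drift([0.15] * 20) != -1
```

The per-sample cost is one addition and one comparison, consistent with the microsecond-scale detection latencies the abstract reports for lightweight indicators.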
Also, in this research we developed a network-in-the-loop co-simulation platform that seamlessly integrates the components of the smart grid, which are governed by different regulations and owned by different entities. Power system simulation software, microcontrollers, and a real communication infrastructure were combined to provide a cohesive smart grid platform. A data-centric communication scheme was selected to provide an interoperability layer between multi-vendor devices and software packages, and to bridge different protocols together.
6G White Paper on Edge Intelligence
In this white paper we provide a vision for 6G Edge Intelligence. Moving from 5G towards future 6G networks, intelligent solutions utilizing data-driven machine learning and artificial intelligence become crucial for several real-world applications, including, but not limited to, more efficient manufacturing, novel personal smart device environments and experiences, urban computing and autonomous traffic settings. We present edge computing, along with other 6G enablers, as a key component in establishing the future 2030 intelligent Internet technologies, as shown in this series of 6G White Papers. In this white paper, we focus on the domains of edge computing infrastructure and platforms, data and edge network management, software development for the edge, and real-time and distributed training of ML/AI algorithms, along with security, privacy, pricing, and end-user aspects. We discuss the key enablers and challenges and identify the key research questions for the development of Intelligent Edge services. As a main outcome of this white paper, we envision a transition from the Internet of Things to an Intelligent Internet of Intelligent Things and provide a roadmap for the development of the 6G Intelligent Edge.
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs) - smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit regionwide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs. In this thesis, we explore one such CPHI - community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include the scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service; contamination events can result in economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline. To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior.
Our proposed approach integrates IoT-based sensing, domain-specific models and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop based sensing. By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operation of water CPHI systems. We design and implement a sensor-simulation-data integration framework, AquaSCALE, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather) and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques to locate multiple failures at fine levels of granularity (individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes). We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes) - the challenge here is the lack of a priori knowledge and the increased number and severity of damages to infrastructures. We present a graphical model based approach for efficient online state estimation, where offline graph factorization partitions a given network into disjoint subgraphs, and belief propagation based inference is executed on-the-fly in a distributed manner on those subgraphs.
Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake. Finally, we address issues of water quality - today this is a human-in-the-loop process where operators need to gather water samples for lab tests. We incorporate the necessary abstractions with event processing methods into a workflow, which iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort. The proposed techniques are integrated into a middleware architecture, which enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services which operate in disruptive and resource-constrained environments.
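The Hidden Markov Model based event inference mentioned above can be sketched with the standard HMM forward algorithm: grab-sample results update a posterior over candidate contamination sites. The states, sensor model and probabilities below are invented for illustration and are not taken from the thesis.

```python
# Sketch of HMM forward-algorithm inference over candidate contamination
# sites. All states and probabilities here are illustrative assumptions.

def forward_posterior(prior, trans, emit, observations):
    """Posterior over hidden states after a sequence of observations.
    prior: length-S list; trans: SxS transition matrix;
    emit: SxO emission probabilities; observations: symbol indices."""
    n = len(prior)
    alpha = [prior[s] * emit[s][observations[0]] for s in range(n)]
    for obs in observations[1:]:
        alpha = [
            emit[j][obs] * sum(alpha[i] * trans[i][j] for i in range(n))
            for j in range(n)
        ]
    total = sum(alpha)
    return [a / total for a in alpha]


# Two candidate source nodes; a grab sample reads 0 = clean, 1 = contaminated.
prior = [0.5, 0.5]
trans = [[1.0, 0.0],               # the true source location does not move
         [0.0, 1.0]]
emit  = [[0.9, 0.1],               # node 0 rarely yields a contaminated sample
         [0.2, 0.8]]               # node 1 usually does
posterior = forward_posterior(prior, trans, emit, [1, 1, 1])
assert posterior[1] > 0.9          # repeated positive samples implicate node 1
```

Each additional grab sample sharpens the posterior, which is the mechanism an iterative sampling workflow can exploit to decide where the next human-driven sample is most informative.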
Enhanced Living Environments
This open access book was prepared as a Final Publication of the COST Action IC1303 "Algorithms, Architectures and Platforms for Enhanced Living Environments (AAPELE)". The concept of Enhanced Living Environments (ELE) refers to the area of Ambient Assisted Living (AAL) that is most closely related to Information and Communication Technologies (ICT). Effective ELE solutions require appropriate ICT algorithms, architectures, platforms, and systems, taking into account the advance of science and technology in this area and the development of new and innovative solutions that can improve the quality of life for people in their homes and reduce the financial burden on the budgets of healthcare providers. The aim of this book is to serve as a state-of-the-art reference, discussing progress made, as well as prompting future directions on theories, practices, standards, and strategies related to the ELE area. The book contains 12 chapters and can serve as a valuable reference for undergraduate students, post-graduate students, educators, faculty members, researchers, engineers, medical doctors, healthcare organizations, insurance companies, and research strategists working in this area.
Edge Computing for Extreme Reliability and Scalability
The massive number of Internet of Things (IoT) devices and their continuous data collection will lead to a rapid increase in the scale of collected data. Processing all these data at a central cloud server is inefficient, and may even be infeasible or unnecessary. Hence, the task of processing the data is pushed to the network edges, introducing the concept of Edge Computing. Processing the information closer to the source of data (e.g., on gateways and on edge micro-servers) not only reduces the huge workload on the central cloud, but also decreases the latency for real-time applications by avoiding the unreliable and unpredictable network latency of communicating with the central cloud.
CoWSAMI: Interface-aware context gathering in ambient intelligence environments
International audience. No abstract available.
An Emergent Architecture for Scaling Decentralized Communication Systems (DCS)
With recent technological advancements now accelerating the mobile and wireless Internet solution space, a ubiquitous computing Internet is well within the research and industrial community's design reach - a decentralized system design, which is not solely driven by static physical models and sound engineering principles, but is more dynamic, perhaps sub-optimal at initial deployment, and socially influenced in its evolution. To complement today's Internet system, this thesis proposes a Decentralized Communication System (DCS) architecture with the following characteristics: flat physical topologies with numerous compute-oriented and communication-intensive nodes in the network, with many of these nodes operating in multiple functional roles; self-organizing virtual structures formed through alternative mobility scenarios and capable of serving ad hoc networking formations; and emergent operations and control with limited dependency on centralized control and management administration. Today, decentralized systems are not commercially scalable or viable for broad adoption in the same way we have come to rely on the Internet or telephony systems. The premise in this thesis is that DCS can reach the high levels of resilience, usefulness, and scale that the industry has come to experience with traditional centralized systems by exploiting the following properties: (i.) network density and topological diversity; (ii.) self-organization and emergent attributes; (iii.) cooperative and dynamic infrastructure; and (iv.) node role diversity. This thesis delivers key contributions towards advancing the current state of the art in decentralized systems. First, we present the vision and a conceptual framework for DCS. Second, the thesis demonstrates that such a framework and concept architecture is feasible by prototyping a DCS platform that exhibits the above properties or, minimally, demonstrates that these properties are feasible through prototyped network services.
Third, this work expands on an alternative approach to network clustering using hierarchical virtual clusters (HVC) to facilitate self-organizing network structures. With increasing network complexity, decentralized systems can generally lead to unreliable and irregular service quality, especially given unpredictable node mobility and traffic dynamics. The HVC framework is an architectural strategy to address the organizational disorder associated with traditional decentralized systems. The proposed HVC architecture, along with the associated promotional methodology, organizes distributed control and management services by leveraging alternative organizational models (e.g., peer-to-peer (P2P), centralized or tiered) in a hierarchical and virtual fashion. Through simulation and analytical modeling, we demonstrate HVC efficiencies in DCS structural scalability and resilience by comparing static and dynamic HVC node configurations against traditional physical configurations based on P2P, centralized or tiered structures. Next, an emergent management architecture for DCS, exploiting HVC for self-organization, introduces emergence as an operational approach to scaling DCS services for state management and policy control. In this thesis, emergence scales in hierarchical fashion using virtual clustering to create multiple tiers of local and global separation for aggregation, distribution and network control. Emergence is an architectural objective, which HVC introduces into the proposed self-management design for scaling and stability purposes. Since HVC expands the clustering model hierarchically and virtually, a clusterhead (CH) node, positioned as a proxy for a specific cluster or group of DCS nodes, can also operate in a micro-capacity as a peer member of an organized cluster in a higher tier. As the HVC promotional process continues through the hierarchy, each tier of the hierarchy exhibits emergent behavior.
With HVC as the self-organizing structural framework, a multi-tiered, emergent architecture enables the decentralized management strategy to improve scaling objectives that traditionally challenge decentralized systems. The HVC organizational concept and the emergence properties align with the view of the human brain's neocortex layering structure of sensory storage, prediction and intelligence. It is the position of this thesis that, for DCS to scale and maintain broad stability, network control and management must strive towards an emergent or natural approach. While today's models for network control and management have proven to lack scalability and responsiveness when based on pure centralized models, it is unlikely that singular organizational models can withstand the operational complexities associated with DCS. In this work, we integrate emergence and learning-based methods in a cooperative computing manner towards realizing DCS self-management. However, unlike many existing works in these areas, which break down with increased network complexity and dynamics, the proposed HVC framework is utilized to offset these issues through effective separation, aggregation and asynchronous processing of both distributed state and policy. Using modeling techniques, we demonstrate that such an architecture is feasible and can improve the operational robustness of DCS. The modeling emphasis focuses on demonstrating the operational advantages of an HVC-based organizational strategy for emergent management services (i.e., reachability, availability or performance). By integrating the two approaches, the DCS architecture forms a scalable system that addresses the challenges associated with traditional decentralized systems. The hypothesis is that the emergent management system architecture will improve the operational scaling properties of DCS-based applications and services.
Additionally, we demonstrate the structural flexibility of HVC as an underlying service infrastructure for building and deploying DCS applications and layered services. The modeling results demonstrate that an HVC-based emergent management and control system operationally outperforms traditional structural organizational models. In summary, this thesis brings together the above contributions towards delivering a scalable, decentralized system for Internet mobile computing and communications.
Proceedings of the 2004 ONR Decision-Support Workshop Series: Interoperability
In August of 1998 the Collaborative Agent Design Research Center (CADRC) of the California Polytechnic State University in San Luis Obispo (Cal Poly), approached Dr. Phillip Abraham of the Office of Naval Research (ONR) with the proposal for an annual workshop focusing on emerging concepts in decision-support systems for military applications. The proposal was considered timely by the ONR Logistics Program Office for at least two reasons. First, rapid advances in information systems technology over the past decade had produced distributed collaborative computer-assistance capabilities with profound potential for providing meaningful support to military decision makers. Indeed, some systems based on these new capabilities such as the Integrated Marine Multi-Agent Command and Control System (IMMACCS) and the Integrated Computerized Deployment System (ICODES) had already reached the field-testing and final product stages, respectively.
Second, over the past two decades the US Navy and Marine Corps had been increasingly challenged by missions demanding the rapid deployment of forces into hostile or devastated territories with minimal or non-existent indigenous support capabilities. Under these conditions Marine Corps forces had to rely mostly, if not entirely, on sea-based support and sustainment operations. Particularly today, operational strategies such as Operational Maneuver From The Sea (OMFTS) and Sea To Objective Maneuver (STOM) are very much in need of intelligent, near real-time and adaptive decision-support tools to assist military commanders and their staff under conditions of rapid change and overwhelming data loads.
In the light of these developments the Logistics Program Office of ONR considered it timely to provide an annual forum for the interchange of ideas, needs and concepts that would address the decision-support requirements and opportunities in combined Navy and Marine Corps sea-based warfare and humanitarian relief operations. The first ONR Workshop was held April 20-22, 1999 at the Embassy Suites Hotel in San Luis Obispo, California. It focused on advances in technology with particular emphasis on an emerging family of powerful computer-based tools, and concluded that the most able members of this family of tools appear to be computer-based agents that are capable of communicating within a virtual environment of the real world. From 2001 onward the venue of the Workshop moved from the West Coast to Washington, and in 2003 the sponsorship was taken over by ONR's Littoral Combat/Power Projection (FNC) Program Office (Program Manager: Mr. Barry Blumenthal). Themes and keynote speakers of past Workshops have included:
1999: "Collaborative Decision Making Tools" - Vadm Jerry Tuttle (USN Ret.); LtGen Paul Van Riper (USMC Ret.); Radm Leland Kollmorgen (USN Ret.); and Dr. Gary Klein (Klein Associates)
2000: "The Human-Computer Partnership in Decision-Support" - Dr. Ronald DeMarco (Associate Technical Director, ONR); Radm Charles Munns; Col Robert Schmidle; and Col Ray Cole (USMC Ret.)
2001: "Continuing the Revolution in Military Affairs" - Mr. Andrew Marshall (Director, Office of Net Assessment, OSD); and Radm Jay M. Cohen (Chief of Naval Research, ONR)
2002: "Transformation ..." - Vadm Jerry Tuttle (USN Ret.); and Steve Cooper (CIO, Office of Homeland Security)
2003: "Developing the New Infostructure" - Richard P. Lee (Assistant Deputy Under Secretary, OSD); and Michael O'Neil (Boeing)
2004: "Interoperability" - MajGen Bradley M. Lott (USMC), Deputy Commanding General, Marine Corps Combat Development Command; Donald Diggs, Director, C2 Policy, OASD (NII)
- …