System Abstractions for Scalable Application Development at the Edge
Recent years have witnessed explosive growth of Internet of Things (IoT) devices, which collect or generate huge amounts of data. Given diverse device capabilities and application requirements, data processing takes place across a range of settings, from on-device to a nearby edge server/cloud and remote cloud. Consequently, edge-cloud coordination has been studied extensively from the perspectives of job placement, scheduling, and joint optimization. Typical approaches focus on performance optimization for individual applications. This not only requires domain knowledge of the applications but also leads to application-specific solutions; application development and deployment over diverse scenarios thus incur repetitive manual effort. There are two overarching challenges in providing system-level support for application development at the edge. First, there is inherent heterogeneity at the device hardware level. Execution settings may range from a small cluster acting as an edge cloud to on-device inference on embedded devices, differing in hardware capability and programming environments. Further, application performance requirements vary significantly, making it even more difficult to map different applications onto already heterogeneous hardware. Second, there are trends toward incorporating both edge and cloud resources and multi-modal data. Together, these add further dimensions to the design space and significantly increase its complexity. In this thesis, we propose a novel framework to simplify application development and deployment over the edge-to-cloud continuum. Our framework provides key connections between the different dimensions of design consideration, corresponding to an application abstraction, a data abstraction, and a resource management abstraction. First, it masks hardware heterogeneity with abstract resource types through containerization and abstracts application processing pipelines into generic flow graphs.
Our framework further supports a notion of degradable computing for edge application scenarios driven by multimodal sensory input. Next, as video analytics is the killer app of edge computing, we include a generic data management service between video query systems and a video store to organize video data at the edge. We propose a video data unit abstraction based on a notion of distance between objects in the video, quantifying the semantic similarity among video data. Last, considering concurrent application execution, our framework supports multi-application offloading with device-centric control, using a userspace scheduler service that wraps over the operating system scheduler.
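The flow-graph abstraction described above can be sketched as a tiny dataflow pipeline. Everything below is illustrative: the class names, the `then` chaining call, and the abstract resource labels are invented for this sketch and are not the framework's actual API.

```python
# A minimal sketch of a generic flow-graph application abstraction:
# stages are plain callables annotated with an abstract resource type,
# so a scheduler could later map them onto heterogeneous hardware.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Stage:
    name: str
    fn: Callable[[Any], Any]   # the processing step itself
    resource: str = "cpu"      # abstract resource type, e.g. "cpu", "gpu", "edge"

@dataclass
class FlowGraph:
    stages: list = field(default_factory=list)

    def then(self, name, fn, resource="cpu"):
        self.stages.append(Stage(name, fn, resource))
        return self            # allow chaining

    def run(self, item):
        # Execute the pipeline stage by stage on a single item.
        for stage in self.stages:
            item = stage.fn(item)
        return item

# A toy pipeline: "decode" doubles the input, "detect" adds one.
pipeline = (FlowGraph()
            .then("decode", lambda x: x * 2)
            .then("detect", lambda x: x + 1, resource="gpu"))
print(pipeline.run(10))  # -> 21
```

A real framework would hand each stage's resource annotation to a placement component rather than running everything in one process; the point here is only that the pipeline itself stays generic.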
Reasoning About User Feedback Under Identity Uncertainty in Knowledge Base Construction
Intelligent, automated systems that are intertwined with everyday life---such as Google Search and virtual assistants like Amazon’s Alexa or Apple’s Siri---are often powered in part by knowledge bases (KBs), i.e., structured data repositories of entities, their attributes, and the relationships among them. Despite a wealth of research focused on automated KB construction methods, KBs are inevitably imperfect, with errors stemming from various points in the construction pipeline. Making matters more challenging, new data is created daily and must be integrated with existing KBs so that they remain up-to-date. As the primary consumers of KBs, human users have tremendous potential to aid in KB construction by contributing feedback that identifies spurious and missing entity attributes and relations. However, correctly integrating user feedback with an existing KB is complicated by the necessity to resolve identity uncertainty, i.e., uncertainty regarding to which real-world entity a piece of data refers. Identity uncertainty abounds in the collection of raw evidence from which a KB is built. Moreover, it also gives rise to identity uncertainty in user feedback when KB entities that were previously affected by user feedback are split or merged.
In this dissertation, we present a continuous reasoning framework capable of integrating user feedback with a KB under identity uncertainty. To begin, we introduce Grinch, an online entity resolution (ER) algorithm---with provable correctness guarantees---capable of merging and splitting KB entities as new data arrives. We show that Grinch is efficient and achieves state-of-the-art performance in ER as well as in clustering. Next, we propose a method for using Grinch to resolve identity uncertainty in a KB's underlying data as well as in user feedback. Our approach is based on representing user feedback as mentions, i.e., first-class KB objects that participate in all parts of KB construction. Furthermore, we introduce a structured representation for feedback comprised of packaging and payload, which facilitates recovery from KB errors that stem from both identity uncertainty and noisy data. Finally, we evaluate our framework's efficacy using data from the KB that supports OpenReview.net---a deployed conference management system that solicits feedback from users. The demands of OpenReview.net lead us to develop XGrinch-Shallow (XGS), a variant of Grinch that builds trees with arbitrary branching factors and subsequently instantiates 60% fewer internal nodes than Grinch. Empirically, we show that XGS is efficient and is able to effectively utilize user feedback to improve the correctness and completeness of the OpenReview.net KB. We conclude with seven concrete suggestions for future research on this topic.
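The idea of resolving mentions to entities online, as new data arrives, can be illustrated with a deliberately simplified sketch. This is a greedy nearest-entity assignment, not the Grinch algorithm itself (which maintains a tree structure and supports provably correct merges and splits); the similarity function and threshold are toy assumptions.

```python
# Simplified online entity resolution: each incoming mention either
# joins its most similar existing entity (single-link score above a
# threshold) or starts a new entity. Feedback items, represented as
# mentions, would flow through the exact same path as raw data.
def resolve(mentions, similarity, threshold=0.5):
    """Assign each mention to an existing entity or start a new one."""
    entities = []                  # each entity is a list of mentions
    for m in mentions:
        best, best_sim = None, threshold
        for e in entities:
            s = max(similarity(m, x) for x in e)   # single-link score
            if s > best_sim:
                best, best_sim = e, s
        if best is not None:
            best.append(m)         # merge mention into closest entity
        else:
            entities.append([m])   # no entity is close enough: split off
    return entities

# Toy 1-D mentions with a distance-based similarity.
sim = lambda a, b: 1.0 - min(abs(a - b), 1.0)
clusters = resolve([0.1, 0.15, 0.9, 0.12], sim)
print(clusters)  # -> [[0.1, 0.15, 0.12], [0.9]]
```

The greedy policy above can never revisit an early mistake; the ability to later split or re-merge entities as evidence accumulates is precisely what distinguishes an algorithm like Grinch from this sketch.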
A Distributed Audit Trail for the Internet of Things
Sharing Internet of Things (IoT) data over open-data platforms and digital data marketplaces can reduce infrastructure investments, improve sustainability by reducing the required resources, and foster innovation. However, due to the inability to audit the authenticity, integrity, and quality of IoT data, third-party data consumers cannot assess the trustworthiness of received data. Therefore, it is challenging to use IoT data obtained from third parties for quality-relevant applications. To overcome this limitation, the IoT data must be auditable. Distributed Ledger Technology (DLT) is a promising approach for building auditable systems. However, the existing solutions do not integrate authenticity, integrity, data quality, and location into an all-encompassing auditable model and only focus on specific parts of auditability.
This thesis aims to provide a distributed audit trail that makes the IoT auditable and enables sharing of IoT data between multiple organizations for quality-relevant applications. To this end, we designed and evaluated the Veritaa framework. The Veritaa framework comprises the Graph of Trust (GoT) as the distributed audit trail and a DLT to immutably store the transactions that build the GoT. The contributions of this thesis are summarized as follows. First, we designed and evaluated the GoT, a DLT-based Distributed Public Key Infrastructure (DPKI) with a signature store. Second, we designed a Distributed Calibration Certificate Infrastructure (DCCI) based on the GoT, which makes quality-relevant maintenance information of IoT devices auditable. Third, we designed an Auditable Positioning System (APS) to make positions in the IoT auditable. Finally, we designed a Location Verification System (LVS) to verify location claims and prevent physical-layer attacks against the APS. All these components are integrated into the GoT and together build the distributed audit trail.
We implemented a real-world testbed to evaluate the proposed distributed audit trail. This testbed comprises several custom-built IoT devices connectable over Long Range Wide Area Network (LoRaWAN) or Long-Term Evolution Category M1 (LTE Cat M1), and a Bluetooth Low Energy (BLE)-based Angle of Arrival (AoA) positioning system. All these low-power devices can manage their identity and secure their data on the distributed audit trail using the IoT client of the Veritaa framework. The experiments suggest that a distributed audit trail is feasible and secure, and that the low-power IoT devices are capable of performing the required cryptographic functions. Furthermore, the energy overhead introduced by making the IoT auditable is limited and reasonable for quality-relevant applications.
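The core mechanics of an append-only, verifiable audit trail can be shown in a small self-contained sketch. Note the simplifications: real systems such as Veritaa's Graph of Trust use asymmetric signatures and a DLT for immutable storage, whereas this toy stands in with a per-device HMAC and a hash chain, using only the Python standard library.

```python
# Hash-chained audit log sketch: each entry commits to the previous
# entry's hash, so any tampering with earlier data breaks verification.
import hashlib
import hmac
import json

def append(chain, device_key, payload):
    """Append a payload to the chain, signed with the device's key."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    chain.append({
        "prev": prev,
        "payload": payload,
        "sig": hmac.new(device_key, body.encode(), hashlib.sha256).hexdigest(),
        "hash": hashlib.sha256(body.encode()).hexdigest(),
    })
    return chain

def verify(chain, device_key):
    """Recompute every hash and signature along the chain."""
    prev = "0" * 64
    for e in chain:
        body = json.dumps({"prev": prev, "payload": e["payload"]}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        expected = hmac.new(device_key, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(e["sig"], expected):
            return False
        prev = e["hash"]
    return True

key = b"device-secret"
chain = append(append([], key, {"temp": 21.5}), key, {"temp": 21.7})
print(verify(chain, key))           # True
chain[0]["payload"]["temp"] = 99.0  # tampering breaks the chain
print(verify(chain, key))           # False
```

Replacing the HMAC with a per-device public-key signature is what lets third parties, who never hold the device secret, audit authenticity as well as integrity.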
A Proposal for a High Availability Architecture for VoIP Telephone Systems Based on Open Source Software
Organizations' inherent need to improve and expand their technological platforms entails large expenses aimed at enhancing performance. They must therefore consider mechanisms for optimizing and improving their operational infrastructure. This raises the need to guarantee the correct operation, without degradation, of the services the platform provides during periods of significant workload. This scenario applies directly to the field of VoIP technologies, where users place heavy workloads on critical points of the infrastructure while interacting with their peers. In this research work, we propose a high availability solution aimed at maintaining continuity of operation for communication environments based on the SIP protocol under high load. We validate our proposal through numerous experiments, compare our solution with other classical VoIP scenarios, and show the advantages of a high availability and fault tolerance architecture for organizations.
Autonomous migration of virtual machines for maximizing resource utilization
Virtualization of computing resources enables multiple virtual machines to run on a physical machine. When many virtual machines are deployed on a cluster of PCs, some physical machines will inevitably experience overload while others are under-utilized over time due to varying computational demands. This computational imbalance across the cluster undermines the very purpose of maximizing resource utilization through virtualization. To solve this imbalance problem, virtual machine migration has been introduced, where a virtual machine on a heavily loaded physical machine is selected and moved to a lightly loaded physical machine. The selection of the source virtual machine and the destination physical machine is based on a single fixed threshold value. Key to such threshold-based VM migration is to determine when to move which VM to what physical machine, since wrong or inadequate decisions can cause unnecessary migrations that would adversely affect the overall performance. The fixed threshold may not necessarily work for different computing infrastructures. Finding the optimal threshold is critical.
In this research, a virtual machine migration framework is presented that autonomously finds and adjusts variable thresholds at runtime for different computing requirements to improve and maximize the utilization of computing resources. Central to this approach is the history of past migrations and their effects, measured as the standard deviation of utilization before and after each migration. To broaden this research, a proactive learning methodology is introduced that not only accumulates the past history of computing patterns and resulting migration decisions but, more importantly, searches all possibilities for the most suitable decisions.
The proposed framework is set up on clusters of 8 and 16 PCs, each of which hosts multiple User-Mode Linux (UML)-based virtual machines. An extensive set of benchmark programs is deployed to closely resemble a real-world computing environment.
Experimental results indicate that the proposed framework indeed autonomously finds thresholds close to the optimal ones for different computing scenarios, balances the load across the cluster through autonomous VM migration, and improves the overall performance of the dynamically changing computing environment
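The threshold-based migration decision the abstract describes can be sketched as follows. The load model, the choice of moving the smallest VM, and the function name are illustrative assumptions, not the thesis's actual policy; the point is only where the (possibly learned) threshold enters the decision.

```python
# Threshold-based VM migration sketch: if the most loaded host exceeds
# the threshold, pick one of its VMs and send it to the least loaded
# host. A learning component would adjust `threshold` at runtime.
def pick_migration(hosts, threshold):
    """hosts: {host: {vm: load}}; returns (vm, src, dst) or None."""
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    src = max(load, key=load.get)       # most loaded physical machine
    dst = min(load, key=load.get)       # least loaded physical machine
    if load[src] <= threshold or src == dst:
        return None                     # no host is overloaded
    vm = min(hosts[src], key=hosts[src].get)  # cheapest VM to move
    return vm, src, dst

hosts = {"pm1": {"vm1": 0.6, "vm2": 0.5}, "pm2": {"vm3": 0.2}}
print(pick_migration(hosts, threshold=0.8))  # -> ('vm2', 'pm1', 'pm2')
print(pick_migration(hosts, threshold=2.0))  # -> None (threshold too high)
```

The second call shows the failure mode the thesis targets: a fixed threshold that is wrong for the workload suppresses migrations (or, if too low, triggers unnecessary ones), which is why the threshold itself is learned.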
VLSI design of configurable low-power coarse-grained array architecture
Biomedical signal acquisition from in- or on-body sensors often requires local (on-node) low-level pre-processing before the data are sent to a remote node for aggregation and further processing. Local processing is required for many different operations, including signal cleanup (noise removal), sensor calibration, event detection, and data compression. In this environment, processing is subject to aggressive energy-consumption restrictions while often operating under real-time requirements. These conflicting requirements impose the use of dedicated circuits addressing a very specific task, or of domain-specific customization, to obtain significant gains in power efficiency. However, economic and time-to-market constraints often make the development or use of application-specific platforms very risky.

One way to address these challenges is to develop a sensor node with a general-purpose architecture combining a low-power, low-performance general microprocessor or micro-controller with a coarse-grained reconfigurable array (CGRA) acting as an accelerator. A CGRA consists of a fixed number of processing units (e.g., ALUs) whose function and interconnections are determined by configuration data.

The objective of this work is to create an RTL-level description of a low-power CGRA of ALUs and produce a low-power VLSI (standard-cell) implementation that supports power-saving features. The CGRA implementation should use as few resources as possible and fully exploit the intended operation environment. The design will be evaluated with a set of simple signal processing tasks.
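The defining property of a CGRA, that the operation and routing of each ALU are fixed by configuration words rather than by a fetched instruction stream, can be modeled in a few lines. The configuration format below is invented for illustration and does not correspond to the actual RTL design.

```python
# Toy behavioral model of a CGRA row: each ALU's opcode and two input
# sources are given by a configuration tuple. Sources index a shared
# register file holding the external inputs followed by the outputs of
# earlier ALUs, which models the configurable interconnect.
import operator

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def run_cgra(config, inputs):
    """config: list of (op, src_a, src_b) per ALU; returns ALU outputs."""
    regs = list(inputs)
    for op, a, b in config:
        regs.append(OPS[op](regs[a], regs[b]))
    return regs[len(inputs):]

# (x + y) * (x - y) mapped onto three ALUs: the third ALU reads the
# outputs of the first two through the interconnect.
config = [("add", 0, 1), ("sub", 0, 1), ("mul", 2, 3)]
print(run_cgra(config, [5, 3]))  # -> [8, 2, 16]
```

In hardware the three ALUs would compute concurrently as a pipeline over a sample stream, and unused ALUs could be power-gated, which is where the power-saving features mentioned above come in.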
Advanced Process Monitoring for Industry 4.0
This book reports recent advances in Process Monitoring (PM) to cope with the many challenges raised by the new production systems, sensors, and “extreme data” conditions that emerged with Industry 4.0. Concepts such as digital twins and deep learning are brought to the PM arena, pushing forward the capabilities of existing methodologies to handle more complex scenarios. The evolution of classical paradigms such as Latent Variable modeling, Six Sigma, and FMEA is also covered. Applications span a wide range of domains such as microelectronics, semiconductors, chemicals, materials, and agriculture, as well as the monitoring of rotating equipment, combustion systems, and membrane separation processes.