C-RAM: Breaking Mobile Device Memory Barriers Using the Cloud
Mobile applications are constrained by the available memory of mobile devices. We present C-RAM, a system that uses cloud-based memory to extend the memory of mobile devices. It splits application state and its associated computation between a mobile device and a cloud node to allow applications to consume more memory, while minimising the performance impact. C-RAM thus enables developers to realise new applications or port legacy desktop applications with a large memory footprint to mobile platforms without explicitly designing them to account for memory limitations. To handle network failures with partitioned application state, C-RAM uses a new snapshot-based fault tolerance mechanism in which changes to remote memory objects are periodically backed up to the device. After failure, or when network usage exceeds a given limit, the device rolls back execution to continue from the last snapshot. C-RAM supports local execution with an application state that exceeds the available device memory through a user-level virtual memory: objects are loaded on demand from snapshots in flash memory. Our C-RAM prototype supports Objective-C applications on the unmodified iOS platform. With C-RAM, applications can consume 10× more memory than the device capacity, with a negligible impact on application performance. In some cases, C-RAM even achieves a significant speed-up in execution time (up to 9.7×).
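The snapshot-based fault tolerance mechanism described above can be illustrated with a minimal sketch (ours, in Python with invented names; C-RAM itself targets Objective-C object graphs on iOS): changes to remote state are periodically backed up to the device, and after a network failure execution rolls back to the last snapshot.

```python
import copy

class SnapshotStore:
    """Illustrative stand-in for snapshot-based fault tolerance:
    remote-object state is periodically backed up so that execution
    can roll back to the last consistent snapshot after a failure."""

    def __init__(self):
        self.state = {}        # live (possibly remote) application state
        self.snapshot = {}     # last backed-up copy kept on the device

    def update(self, key, value):
        self.state[key] = value

    def take_snapshot(self):
        # A deep copy stands in for serialising changed remote objects
        # back to the device (in C-RAM, ultimately to flash memory).
        self.snapshot = copy.deepcopy(self.state)

    def rollback(self):
        # On failure, or when network usage exceeds a limit, resume
        # from the last snapshot instead of the lost remote state.
        self.state = copy.deepcopy(self.snapshot)

store = SnapshotStore()
store.update("doc", "v1")
store.take_snapshot()
store.update("doc", "v2")   # change made after the snapshot
store.rollback()            # network failure: un-backed-up change is lost
print(store.state["doc"])   # → v1
```

In C-RAM the snapshots additionally live in flash memory, so a user-level virtual memory can page objects in on demand when post-failure execution exceeds device memory.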
Edge Reduce: Eliminating Mobile Network Traffic Using Application-Specific Edge Proxies
Mobile carriers are struggling to cope with the surge in smartphone traffic, which end users experience as poor connectivity in densely populated urban environments. Data transfers between mobile client applications and their Internet backend services contribute significantly to the contention in radio access networks (RANs). Client applications, however, typically transfer unnecessary data because (i) backend service APIs do not support a fine-grained specification of the data actually required by clients and (ii) clients aggressively prefetch data that is never used.
We describe EDGEREDUCE, an automated approach for reducing the data transmitted from backend services to a mobile device. Based on source-level program analysis, EDGEREDUCE generates application-specific proxies for mobile client applications that execute part of the application logic at the network edge to filter data returned by backend API calls and only send used data to the client. EDGEREDUCE also permits the tuning of aggressive prefetching strategies: proxies replace large prefetched objects, such as images, with futures whose access by the client triggers the retrieval of the object on demand. We show that EDGEREDUCE reduces the RAN traffic for real-world iOS client applications by up to 8×, with only a modest increase in response time.
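The future-based prefetch tuning can be sketched as a lazy proxy object that defers the transfer of a large object until its first use (an illustrative Python sketch with invented names; the real proxies are generated automatically from source-level program analysis):

```python
class ImageFuture:
    """Lightweight placeholder sent to the client instead of a large
    prefetched object; the bytes are fetched on first access."""

    def __init__(self, url, fetch):
        self.url = url
        self._fetch = fetch      # callable that retrieves the object
        self._value = None

    def get(self):
        if self._value is None:  # first access triggers retrieval
            self._value = self._fetch(self.url)
        return self._value

transfers = []

def fetch_from_proxy(url):
    transfers.append(url)        # record each on-demand transfer
    return b"<image bytes>"

fut = ImageFuture("https://example.com/img.png", fetch_from_proxy)
# Nothing transferred yet: the client only holds the future.
assert transfers == []
fut.get()
fut.get()
# The object is fetched exactly once, and only because it was used.
assert transfers == ["https://example.com/img.png"]
```

An image that is prefetched but never accessed thus never crosses the radio access network, which is the source of the traffic savings.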
Towards a Back-End Framework for Supporting Affective Avatar-Based Interaction Systems
Avatar-based systems provide an intuitive way of interacting with users in the context of Ambient Assisted Living (AAL). These systems are typically supported by a diverse set of services for, e.g., social daily activities, leisure, education and safety. This paper studies the importance of specific services for two organisations, namely MRPS in Geneva, Switzerland, and ORBIS in Sittard, Netherlands. Based on this study, we present the design of a back-end framework that supports avatar interaction by means of a comprehensive set of services for safe and independent living.
Towards Enabling Hyper-Responsive Mobile Apps Through Network Edge Assistance
Poor Internet performance currently undermines the efficiency of hyper-responsive mobile apps such as augmented reality clients and online games, which require low-latency access to real-time backend services. While edge-assisted execution, i.e. moving entire services to the edge of an access network, helps eliminate part of the communication overhead involved, this does not scale to the number of users that share an edge infrastructure. This is due to a mismatch between the scarce availability of resources in access networks and the aggregate demand for computational power from client applications.
Instead, this paper proposes a hybrid edge-assisted deployment model in which only part of a service executes on LTE edge servers. We provide insights about the conditions that must hold for such a model to be effective by investigating, in simulation, different deployment and application scenarios. In particular, we show that, even with LTE edge servers of modest capabilities, performance can improve significantly as long as at most 50% of client requests are processed at the edge. Moreover, we argue that edge servers should be installed at the core of a mobile network rather than at the mobile base stations: the difference in performance is negligible, whereas the latter choice entails high deployment costs. Finally, we verify that, for the proposed model, the impact of user mobility on TCP performance is low.
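The intuition behind the 50% threshold can be reproduced with a toy queueing model (our own illustration with made-up numbers, not the paper's simulator): routing too many requests to a modest edge server saturates it, at which point the distant but well-provisioned cloud becomes the better choice.

```python
def mean_latency(p, lam, mu, rtt_edge=0.005, rtt_cloud=0.060, s_cloud=0.010):
    """Toy M/M/1 model: a fraction p of an offered load of lam req/s
    goes to an edge server with service rate mu; the rest goes to a
    well-provisioned but distant cloud. All timing constants are
    invented for illustration."""
    edge_load = p * lam
    if edge_load >= mu:
        return float("inf")                  # edge server saturated
    t_edge = rtt_edge + 1.0 / (mu - edge_load)
    t_cloud = rtt_cloud + s_cloud
    return p * t_edge + (1 - p) * t_cloud

lam, mu = 100.0, 80.0                        # demand exceeds edge capacity
for p in (0.25, 0.5, 0.75, 0.9):
    print(p, mean_latency(p, lam, mu))
# Latency improves as p grows towards ~0.5, then queueing at the
# modest edge server dominates and eventually diverges.
```

Under these assumptions, pushing 90% of requests to the edge yields infinite delay (the queue never drains), while a 50/50 split beats the all-cloud baseline.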
ETC: energy-driven tree construction in wireless sensor networks
Continuous queries in Wireless Sensor Networks (WSNs) are founded on the premise of Query Routing Tree structures (denoted as T), which provide sensors with a path to the querying node. Predominant data acquisition systems for WSNs construct such structures in an ad hoc manner, and therefore there is no guarantee that a given query workload will be distributed equally among all sensors. This leads to data collisions, which represent a major source of energy waste. In this paper we present the Energy-driven Tree Construction (ETC) algorithm, which balances the workload among nodes and minimizes data collisions, thus reducing energy consumption during data acquisition in WSNs. We show through real micro-benchmarks on the CC2420 radio chip and trace-driven experimentation with real datasets from Intel Research and UC Berkeley that ETC can provide significant energy reductions under a variety of conditions, prolonging the longevity of a wireless sensor network.
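A centralised, greatly simplified sketch of the workload-balancing idea (our illustration; the actual ETC algorithm runs in-network on the sensors themselves): instead of attaching to the first parent heard, each sensor joins the candidate parent at the previous hop level with the fewest children so far, so no single parent relays a disproportionate share of the query workload.

```python
from collections import deque

def build_balanced_tree(sink, neighbours):
    """Greedy, centralised sketch of energy-balanced routing-tree
    construction. `neighbours` maps node -> set of nodes in radio
    range; returns a parent map rooted at the sink."""
    # Phase 1: BFS from the sink to establish hop levels.
    level, order, queue = {sink: 0}, [sink], deque([sink])
    while queue:
        u = queue.popleft()
        for v in sorted(neighbours[u]):
            if v not in level:
                level[v] = level[u] + 1
                order.append(v)
                queue.append(v)

    # Phase 2: each node picks the least-loaded parent one hop closer
    # to the sink (ties broken by node id, for determinism).
    parent, children = {sink: None}, {u: 0 for u in level}
    for v in order[1:]:
        candidates = [u for u in neighbours[v] if level[u] == level[v] - 1]
        best = min(candidates, key=lambda u: (children[u], u))
        parent[v] = best
        children[best] += 1
    return parent

# Topology: nodes 1 and 2 reach the sink 0; nodes 3-6 reach both 1
# and 2, so a balanced tree gives each of 1 and 2 two children.
nbrs = {0: {1, 2}, 1: {0, 3, 4, 5, 6}, 2: {0, 3, 4, 5, 6},
        3: {1, 2}, 4: {1, 2}, 5: {1, 2}, 6: {1, 2}}
tree = build_balanced_tree(0, nbrs)
```

A naive first-heard-parent rule could hang all four leaves off node 1, making it a collision and energy hotspot; the balanced assignment splits them evenly.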
MULTI-WEAR: A Multi-Wearable Platform for Enhancing Mobile Experiences
The uptake of wearable technology suggests that the time is ripe to explore new opportunities for improving mobile experiences. Apps, however, are not keeping up with the pace of technological advancement because wearables are treated as standalone devices, although their individual capabilities better classify them as peripherals with complementary roles. We foresee that the next generation of apps will orchestrate multiple wearable devices to enhance mobile user experiences. However, there is currently limited support for combining heterogeneous devices. This paper introduces MULTI-WEAR, a platform to scaffold the development of apps that span multiple wearables, and demonstrates experimentally how MULTI-WEAR can help bring changes to mobile apps that go beyond conventional practices.
FogFS: A Fog File System For Hyper-Responsive Mobile Applications
Hyper-responsive mobile applications, such as augmented reality and online games, require ultra-low latency access to back-end services and data at runtime. While fog computing tries to meet such latency requirements by placing corresponding back-end services and data closer to clients, e.g. within an access network, assuming a fixed back-end server throughout execution is problematic due to user mobility. A more flexible approach is thus required that adapts to changes in network conditions when users roam by relocating back-end services and data to closer available infrastructure. Support for real-time migration of software services exists; however, migrating the associated disk state remains a bottleneck. This paper presents FOGFS, a fog file system that employs intelligent snapshotting, migration and synchronization mechanisms to speed up the migration of an application's disk state between different edge locations at runtime. The experimental evaluation of our prototype implementation reveals that the attainable speed-up is as much as 3.3× compared to conventional migration approaches.
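A generic stand-in for the synchronization step (rsync-style block hashing in Python; FOGFS's actual snapshotting and migration mechanisms are more elaborate): only disk blocks whose content hash changed since the last snapshot need to be transferred to the new edge location.

```python
import hashlib

BLOCK = 4  # tiny block size for illustration; real systems use KBs

def block_hashes(data):
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [hashlib.sha256(b).hexdigest() for b in blocks], blocks

def delta(old, new):
    """Return only the (index, block) pairs that differ between the
    old snapshot and the new disk image, so unchanged state is never
    re-transferred between edge locations."""
    old_h, _ = block_hashes(old)
    new_h, new_blocks = block_hashes(new)
    return [(i, new_blocks[i]) for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]

def apply_delta(old, changes):
    # Simplification: assumes the new image is at least as long as
    # the old one (no truncation handling).
    blocks = [old[i:i + BLOCK] for i in range(0, len(old), BLOCK)]
    for i, b in changes:
        if i < len(blocks):
            blocks[i] = b
        else:
            blocks.append(b)
    return b"".join(blocks)

old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
d = delta(old, new)                 # only the middle block differs
assert apply_delta(old, d) == new
```

With mostly-unchanged disk images, the transferred delta is a small fraction of the full state, which is the kind of saving that makes runtime migration between edge locations practical.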
Optimized query routing trees for wireless sensor networks
In order to process continuous queries over Wireless Sensor Networks (WSNs), sensors are typically organized in a Query Routing Tree (denoted as T) that provides each sensor with a path over which query results can be transmitted to the querying node. We found that current methods deployed in predominant data acquisition systems construct T in a sub-optimal manner, which leads to significant waste of energy. In particular, since T is constructed in an ad hoc manner, there is no guarantee that a given query workload will be distributed equally among all sensors. This leads to data collisions, which represent a major source of energy waste. Additionally, current methods only provide a topology-based method, rather than a query-based method, to define the interval during which a sensing device should enable its transceiver in order to collect the query results from its children. We found that this imposes an order-of-magnitude increase in energy consumption.
In this paper we present MicroPulse+, a novel framework for minimizing the consumption of energy during data acquisition in WSNs. MicroPulse+ continuously optimizes the operation of T by eliminating data transmission and data reception inefficiencies using a collection of in-network algorithms. In particular, MicroPulse+ introduces: (i) the Workload-Aware Routing Tree (WART) algorithm, which is established on profiling recent data acquisition activity and on identifying the bottlenecks using an in-network execution of the critical path method; and (ii) the Energy-driven Tree Construction (ETC) algorithm, which balances the workload among nodes and minimizes data collisions. We show through micro-benchmarks on the CC2420 radio chip and trace-driven experimentation with real datasets from Intel Research and UC Berkeley that MicroPulse+ provides significant energy reductions under a variety of conditions, thus prolonging the longevity of a wireless sensor network.
Mobile code offloading for multiple resources
Mobile devices are becoming pervasive, yet a persistent gap in hardware capabilities still separates them from desktop machines. To bridge this gap, recent research has turned to cloud-assisted execution as a way of leveraging remote resources to enhance application performance. Code-offloading systems automatically partition applications across resource-constrained devices and more powerful remote nodes to improve execution. Existing approaches, however, only focus on compute resources, ignoring memory and network limitations in mobile environments. In doing so, they prevent mobile applications from taking advantage of the larger memory and richer networking capabilities of cloud-based nodes. At the same time, they face the challenge that a large runtime overhead may offset the benefits of offloaded execution and support only applications written in managed programming languages with substantial runtime support. In this thesis, we propose three new static code-offloading approaches that exploit all three remote resources—compute, memory and network:
(1) Compute-focused offloading enables applications written in unmanaged programming languages with only rudimentary runtime support to benefit from remote compute resources. Using offline dynamic profiling to analyse runtime behaviour, it derives a partitioning that reduces response times by offloading compute-intensive functionality to the remote node.
(2) Memory-focused offloading partitions application state across nodes to alleviate memory constraints and to reduce offloading overheads by permanently collocating data and computation. To handle network failures, it uses a snapshot-based fault tolerance mechanism to back up state changes locally and a user-level virtual memory scheme to support execution with large state sizes after failure.
(3) Network-focused offloading partitions mobile client applications across mobile devices and nodes at edge locations of a mobile network to minimise network traffic in radio access networks. It (i) discards unused data returned by coarse-grained API calls to Internet backend services and (ii) tunes binary object prefetching strategies to transmit only the objects that are used on the device.