Tripod of Requirements in Horizontal Heterogeneous Mobile Cloud Computing
The recent trend in mobile computing is toward executing resource-intensive
applications on mobile devices despite underlying resource restrictions (e.g.,
limited processing power and energy), which necessitates new enabling
technologies. The prosperity of cloud computing on stationary computers has
bred Mobile Cloud Computing (MCC), a technology that aims to augment the
computing and storage capabilities of mobile devices while conserving energy.
However, MCC is more heterogeneous and unreliable (due to wireless
connectivity) compared to cloud computing. Problems such as variations in OS,
data fragmentation, and security and privacy concerns discourage and
decelerate the implementation and pervasiveness of MCC. In this paper, we
describe MCC as a horizontally heterogeneous ecosystem and identify thirteen
critical metrics and approaches that influence mobile-cloud solutions and the
success of MCC. We divide them into three major classes, namely ubiquity,
trust, and energy efficiency, and devise a tripod of requirements in MCC. Our
proposed tripod shows that the success of MCC is achievable by reducing
mobility challenges (e.g., seamless connectivity, fragmentation), increasing
trust, and enhancing energy efficiency.
Mirroring Mobile Phone in the Clouds
This paper presents a framework for Mirroring Mobile Phone in the Clouds (MMPC) to speed up data- and compute-intensive applications on a mobile phone by taking full advantage of the super computing power of the clouds. An application on the mobile phone is dynamically partitioned in such a way that the heavy-weight part always runs on a mirrored server in the clouds while the light-weight part remains on the mobile phone. A performance improvement (an energy consumption reduction of 70% and a speedup of 15x) is achieved at the cost of the communication overhead between the mobile phone and the clouds (to transfer the application code and intermediate results) of the desired application. Our original contributions include a dynamic profiler and a dynamic partitioning algorithm, in contrast to traditional approaches that either statically partition a mobile application or modify a mobile application to support the required partitioning.
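The partitioning decision described above boils down to weighing remote execution plus communication overhead against local execution. The sketch below is a minimal illustration of that trade-off; the function name and all parameters (local time and energy, cloud speedup, transfer size, bandwidth, radio power) are hypothetical profiler outputs, not values or an API from the paper.

```python
def should_offload(local_time_s, local_energy_j,
                   cloud_speedup, bytes_to_transfer,
                   bandwidth_bps, radio_power_w):
    """Offload a partition only if the remote run plus transfer
    beats running locally in both time and energy.
    All parameters are hypothetical profiler measurements."""
    transfer_time = bytes_to_transfer * 8 / bandwidth_bps
    remote_time = local_time_s / cloud_speedup + transfer_time
    # Energy spent on the phone during offloading: only the radio.
    remote_energy = radio_power_w * transfer_time
    return remote_time < local_time_s and remote_energy < local_energy_j
```

With a large speedup and a small payload the trade favours offloading; a small task with a large payload stays local, which is exactly why a dynamic profiler matters.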
Mobile Computing in Physics Analysis - An Indicator for eScience
This paper presents the design and implementation of a Grid-enabled physics
analysis environment for handheld and other resource-limited computing devices
as one example of the use of mobile devices in eScience. Handheld devices offer
great potential because they provide ubiquitous access to data and
round-the-clock connectivity over wireless links. Our solution aims to provide
users of handheld devices the capability to launch heavy computational tasks on
computational and data Grids, monitor the jobs status during execution, and
retrieve results after job completion. Users carry their jobs on their handheld
devices in the form of executables (and associated libraries). Users can
transparently view the status of their jobs and get back their outputs without
having to know where they are being executed. In this way, our system is able
to act as a high-throughput computing environment where devices ranging from
powerful desktop machines to small handhelds can employ the power of the Grid.
The results shown in this paper are readily applicable to the wider eScience
community.
Comment: 8 pages, 7 figures. Presented at the 3rd Int Conf on Mobile Computing
& Ubiquitous Networking (ICMU06), London, October 200
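The handheld-side workflow described above (submit an executable to the Grid, monitor status transparently, retrieve results on completion) can be sketched as a small polling client. The gateway interface here (`submit`, `status`, `output`) is a hypothetical stand-in, not the paper's actual Grid service API.

```python
import time

class GridJobClient:
    """Minimal sketch of the handheld-side workflow: submit an
    executable to a Grid gateway, poll job status, fetch output.
    The gateway interface is a hypothetical stand-in."""

    def __init__(self, gateway):
        # gateway: any object exposing submit(), status(), output()
        self.gateway = gateway

    def run(self, executable, poll_interval_s=5):
        job_id = self.gateway.submit(executable)
        # The user never learns where the job runs; they only poll status.
        while self.gateway.status(job_id) not in ("DONE", "FAILED"):
            time.sleep(poll_interval_s)
        return self.gateway.output(job_id)
```

The device only carries the executable and polls over its wireless link, so even a resource-limited handheld can drive heavy computation on the Grid.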
Profitable Task Allocation in Mobile Cloud Computing
We propose a game theoretic framework for task allocation in mobile cloud
computing that corresponds to offloading of compute tasks to a group of nearby
mobile devices. Specifically, in our framework, a distributor node holds a
multidimensional auction for allocating the tasks of a job among nearby mobile
nodes based on their computational capabilities and also the cost of
computation at these nodes, with the goal of reducing the overall job
completion time. Our proposed auction also has the desired incentive
compatibility property that ensures that mobile devices truthfully reveal their
capabilities and costs and that those devices benefit from the task allocation.
To deal with node mobility, we perform multiple auctions over adaptive time
intervals. We develop a heuristic approach to dynamically find the best time
intervals between auctions to minimize unnecessary auctions and the
accompanying overheads. We evaluate our framework and methods using both real
world and synthetic mobility traces. Our evaluation results show that our
game-theoretic framework improves job completion time by a factor of 2-5
compared to executing the job locally, while minimizing the number of auctions
and the accompanying overheads. Our approach is also profitable for the nearby
nodes that execute the distributor's tasks, with these nodes receiving
compensation higher than their actual costs.
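The incentive-compatibility property described above is the hallmark of Vickrey-style auctions: the winner is paid the second-best bid, so truthful reporting is a dominant strategy and the winner's compensation exceeds its actual cost. The sketch below is a single-task, single-dimension simplification of that idea; the paper's auction is multidimensional, and this reduction is mine.

```python
def vickrey_allocate(bids):
    """bids: {node_id: reported_cost}. Award the task to the
    lowest bidder and pay the second-lowest bid (Vickrey rule),
    so truthfully reporting cost is a dominant strategy.
    Single-dimension sketch; the paper's auction is multidimensional."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner, winner_cost = ranked[0]
    # Payment equals the second-lowest bid, hence payment >= winner_cost.
    payment = ranked[1][1] if len(ranked) > 1 else winner_cost
    return winner, payment
```

Because the payment never falls below the winner's own reported cost, participation is profitable for the executing node, matching the profitability claim in the abstract.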
ENORM: A Framework For Edge NOde Resource Management
Current computing techniques using the cloud as a centralised server will
become untenable as billions of devices get connected to the Internet. This
raises the need for fog computing, which leverages computing at the edge of the
network on nodes, such as routers, base stations and switches, along with the
cloud. However, to realise fog computing the challenge of managing edge nodes
will need to be addressed. This paper is motivated to address the resource
management challenge. We develop the first framework to manage edge nodes,
namely the Edge NOde Resource Management (ENORM) framework. Mechanisms for
provisioning and auto-scaling edge node resources are proposed. The
feasibility of the framework is demonstrated on a Pokémon Go-like online game
use case. The benefits of using ENORM are a reduction in application latency
of between 20% and 80%, and a reduction of up to 95% in data transfer and
communication frequency between the edge node and the cloud. These results
highlight the potential of fog computing for improving quality of service and
experience.
Comment: 14 pages; accepted to IEEE Transactions on Services Computing on 12
September 201
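Auto-scaling of edge node resources, as proposed above, is commonly realised as a threshold policy: add capacity under sustained load, reclaim it when idle. The sketch below illustrates that pattern; the thresholds and core limits are illustrative assumptions, not values from the ENORM paper.

```python
def autoscale(allocated_cores, cpu_utilisation,
              scale_up_at=0.8, scale_down_at=0.3,
              min_cores=1, max_cores=8):
    """Threshold-based auto-scaling sketch for an edge-node
    application. Thresholds are illustrative assumptions,
    not values from the ENORM paper."""
    if cpu_utilisation > scale_up_at and allocated_cores < max_cores:
        return allocated_cores + 1      # under pressure: grow
    if cpu_utilisation < scale_down_at and allocated_cores > min_cores:
        return allocated_cores - 1      # idle: reclaim resources
    return allocated_cores              # within the comfort band
```

Keeping a dead band between the two thresholds avoids oscillating allocations, which matters on resource-constrained edge nodes shared by multiple applications.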
Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things
The number of connected sensors and devices is expected to increase to billions in the near
future. However, centralised cloud-computing data centres present various challenges to meet the
requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput
and bandwidth constraints. Edge computing is becoming the standard computing paradigm for
latency-sensitive real-time IoT workloads, since it addresses the aforementioned limitations related
to centralised cloud-computing models. Such a paradigm relies on bringing computation close to
the source of data, which presents serious operational challenges for large-scale cloud-computing
providers. In this work, we present an architecture composed of low-cost
single-board-computer clusters placed near data sources, together with
centralised cloud-computing data centres. The proposed cost-efficient model
may be employed as an alternative to fog computing to meet real-time IoT
workload requirements while preserving scalability. We include an extensive
empirical analysis to assess the suitability of single-board-computer clusters
as cost-effective edge-computing micro data centres. Additionally, we compare
the proposed architecture with traditional cloudlet and cloud architectures,
and evaluate them through extensive simulation. We finally show that
acquisition costs can be drastically reduced while keeping performance levels
in data-intensive IoT use cases.
Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R;
Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European Union's
Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209
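The latency argument for placing micro data centres near data sources can be made concrete with a simple end-to-end model: network round trip plus payload transfer plus processing. The sketch below uses purely illustrative figures (an SBC cluster one hop away with modest compute, versus a faster but distant cloud); none of the numbers are measurements from the paper.

```python
def end_to_end_latency_ms(processing_ms, rtt_ms, payload_kb, bandwidth_mbps):
    """Toy latency model for one IoT request: round trip +
    payload transfer + processing. All figures are illustrative."""
    transfer_ms = payload_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return rtt_ms + transfer_ms + processing_ms

# Hypothetical comparison: slower SBC cluster nearby vs. faster remote cloud.
edge = end_to_end_latency_ms(processing_ms=50, rtt_ms=5,
                             payload_kb=100, bandwidth_mbps=100)
cloud = end_to_end_latency_ms(processing_ms=10, rtt_ms=80,
                              payload_kb=100, bandwidth_mbps=50)
```

Even with five times the processing time, the nearby cluster wins once network distance dominates, which is the regime latency-sensitive IoT workloads live in.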