TechNews digests: Jan - Nov 2008
TechNews is a technology news and analysis service aimed at anyone in the education sector keen to stay informed about technology developments, trends and issues. TechNews focuses on emerging technologies and other technology news. The TechNews service published digests from September 2004 to May 2010, combining analysis pieces and news items every two to three months.
Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things
The number of connected sensors and devices is expected to increase to billions in the near
future. However, centralised cloud-computing data centres face various challenges in meeting the
requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput
and bandwidth constraints. Edge computing is becoming the standard computing paradigm for
latency-sensitive real-time IoT workloads, since it addresses the aforementioned limitations related
to centralised cloud-computing models. Such a paradigm relies on bringing computation close to
the source of data, which presents serious operational challenges for large-scale cloud-computing
providers. In this work, we present an architecture that combines low-cost Single-Board-Computer
clusters placed near data sources with centralised cloud-computing data centres. The proposed
cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT
workload requirements while preserving scalability. We include an extensive empirical analysis to
assess the suitability of single-board-computer clusters as cost-effective edge-computing micro data
centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud
architectures, and evaluate them through extensive simulation. Finally, we show that acquisition costs
can be drastically reduced while maintaining performance levels in data-intensive IoT use cases.

Funding: Ministerio de Economía y Competitividad TIN2017-82113-C2-1-R; Ministerio de Economía y Competitividad RTI2018-098062-A-I00; European Union's Horizon 2020 No. 754489; Science Foundation Ireland grant 13/RC/209.
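As a rough illustration of the acquisition-cost argument, the sketch below compares the purchase price of a Single-Board-Computer micro data centre against one conventional rack server. It is a back-of-envelope model only; the prices, node counts and switch cost are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope acquisition-cost comparison for an SBC micro data centre
# versus a single rack server. All numbers are illustrative placeholders.
SBC_NODE_COST = 80.0       # Raspberry-Pi-class board, USD (assumed)
SERVER_COST = 8000.0       # typical rack server, USD (assumed)

def cluster_cost(nodes: int, per_node: float, switch: float = 200.0) -> float:
    """Total acquisition cost: boards plus one network switch."""
    return nodes * per_node + switch

for n in (8, 16, 32):
    sbc = cluster_cost(n, SBC_NODE_COST)
    print(f"{n:>2} SBC nodes: ${sbc:,.0f} vs 1 server: ${SERVER_COST:,.0f} "
          f"({SERVER_COST / sbc:.1f}x cheaper to acquire)")
```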
Distributed deep learning inference in fog networks
Today's smart devices are equipped with powerful integrated chips and built-in heterogeneous sensors that can be leveraged to execute heavy computation and produce large amounts of sensor data. For instance, modern smart cameras integrate artificial intelligence to detect objects in the captured scene and to adjust parameters, such as contrast and color, based on environmental conditions. The accuracy of object recognition and classification achieved by intelligent applications has improved due to recent advancements in artificial intelligence (AI) and machine learning (ML), particularly deep neural networks (DNNs).
Despite the capability to carry out some AI/ML computation, smart devices have limited battery power and computing resources. Therefore, DNN computation is generally offloaded to powerful computing nodes such as cloud servers. However, it is challenging to satisfy latency, reliability, and bandwidth constraints in cloud-based AI. Thus, in recent years, AI services and tasks have been pushed closer to the end-users by taking advantage of the fog computing paradigm to meet these requirements. Generally, the trained DNN models are offloaded to the fog devices for DNN inference. This is accomplished by partitioning the DNN and distributing the computation in fog networks.
This thesis addresses offloading DNN inference by dividing and distributing a pre-trained network onto heterogeneous embedded devices. Specifically, it implements the adaptive partitioning and offloading algorithm based on matching theory proposed in the article titled "Distributed Inference Acceleration with Adaptive DNN Partitioning and Offloading". The implementation was evaluated in a fog testbed including NVIDIA Jetson Nano devices. The obtained results show that the adaptive solution outperforms other schemes (Random and Greedy) with respect to computation time and communication latency.
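To make the matching-theory idea concrete, here is a minimal sketch of a deferred-acceptance (Gale-Shapley style) assignment of DNN partitions to fog devices, where each partition prefers the device with the lowest estimated completion time (compute plus transfer). This is an illustrative simplification, not the exact algorithm from the cited article; the partition workloads, device speeds and bandwidths are made-up placeholders.

```python
# Matching-based offloading sketch: DNN partitions "propose" to fog devices in
# order of estimated completion time; a device keeps whichever proposal it can
# serve fastest. All numbers below are invented for illustration.

PARTITIONS = {              # partition -> (compute GFLOPs, output size MB)
    "p0": (1.2, 4.0),
    "p1": (3.5, 1.5),
    "p2": (0.8, 0.2),
}
DEVICES = {                 # device -> (speed GFLOPS, link bandwidth MB/s)
    "jetson-a": (470.0, 12.0),
    "jetson-b": (470.0, 6.0),
    "pi-4":     (13.5, 10.0),
}

def est_time(part: str, dev: str) -> float:
    """Estimated completion time: compute time plus output transfer time."""
    gflops, out_mb = PARTITIONS[part]
    speed, bw = DEVICES[dev]
    return gflops / speed + out_mb / bw

def match(parts, devs):
    """One-to-one deferred acceptance; a displaced partition re-enters the
    free pool and retries its preference list."""
    prefs = {p: sorted(devs, key=lambda d: est_time(p, d)) for p in parts}
    held, free = {}, list(parts)
    while free:
        p = free.pop()
        for d in prefs[p]:
            cur = held.get(d)
            if cur is None or est_time(p, d) < est_time(cur, d):
                held[d] = p            # device accepts (or trades up)
                if cur is not None:
                    free.append(cur)   # displaced partition retries
                break
    return {p: d for d, p in held.items()}

print(match(list(PARTITIONS), list(DEVICES)))
```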
MEC vs MCC: performance analysis of real-time applications
Numerous applications, such as Augmented Reality (AR), Virtual Reality (VR) and real-time online gaming, are resource-intensive and are consequently pushing the computational requirements and energy demands of mobile devices beyond their capabilities. Although the mobile cloud architecture offers practical and functional platforms, these emerging applications present several challenges regarding latency, energy consumption, context awareness, and privacy. Mobile Edge Computing (MEC) is a recent technology that addresses the performance hurdles faced by Mobile Cloud Computing (MCC) by bringing computing and storage closer to the network edge. This work introduces the MEC architecture and the main types of solutions for its implementation. It presents the reference architecture of the cloudlet technology and compares it with the architecture model still under standardization by ETSI. One purpose of MEC is to offload intensive tasks from applications in order to improve the computation, responsiveness and battery life of mobile devices. The objective of this work is to study, compare and evaluate the performance of the MEC and MCC architectures for provisioning offloaded tasks from compute-intensive applications. Test scenarios were set up with such applications on both MEC and MCC implementations. The test results show that MEC presents better performance than MCC with respect to latency and user quality of experience. Moreover, the results make it possible to quantify the effective benefit of the MEC approach.
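A quick way to reproduce the latency side of such a comparison is to probe the TCP round-trip time to an edge endpoint and to a cloud endpoint. The sketch below is a rough probe under the assumption that one reachable host of each kind is available; the hostnames are placeholders, not endpoints from this work.

```python
# Rough latency probe: median TCP-handshake round-trip time to an "edge" host
# versus a "cloud" host, as a first-order proxy for interactivity.
import socket
import statistics
import time

ENDPOINTS = {
    "edge":  ("cloudlet.local", 80),   # hypothetical LAN cloudlet
    "cloud": ("example.com", 80),      # distant data centre (placeholder)
}

def rtt_ms(host: str, port: int, samples: int = 10) -> float:
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass                       # measure the TCP handshake only
        times.append((time.perf_counter() - t0) * 1000)
    return statistics.median(times)

for name, (host, port) in ENDPOINTS.items():
    try:
        print(f"{name}: {rtt_ms(host, port):.1f} ms median TCP RTT")
    except OSError as exc:
        print(f"{name}: unreachable ({exc})")
```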
ARM Wrestling with Big Data: A Study of Commodity ARM64 Server for Big Data Workloads
ARM processors have dominated the mobile device market in the last decade due
to their favorable computing to energy ratio. In this age of Cloud data centers
and Big Data analytics, the focus is increasingly on power efficient
processing, rather than just high throughput computing. ARM's first commodity
server-grade processor is the recent AMD Opteron A1100-series processor, based on the
64-bit ARM Cortex-A57 architecture. In this paper, we study the performance and
energy efficiency of a server based on this ARM64 CPU, relative to a comparable
server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads.
Specifically, we study these for Intel's HiBench suite of web, query and
machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed
setup, for a range of data sizes over files, web pages and tuples. Our
results show that the ARM64 server's runtime performance is comparable to the
x64 server for integer-based workloads like Sort and Hive queries, and only
lags behind for floating-point intensive benchmarks like PageRank, when they do
not exploit data parallelism adequately. We also see that the ARM64 server
consumes less energy, and has an Energy Delay Product (EDP) that
is lower than the x64 server's. These results hold promise for ARM64
data centers hosting Big Data workloads to reduce their operational costs,
while opening up opportunities for further analysis.

Comment: Accepted for publication in the Proceedings of the 24th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), 2017.
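For reference, the Energy Delay Product combines energy and runtime into one efficiency figure, EDP = energy x runtime (equivalently power x runtime^2, in joule-seconds), so a processor cannot look efficient merely by running slowly. The sketch below computes it for two hypothetical runs; the wattages and runtimes are invented placeholders, not the paper's measurements.

```python
# Energy Delay Product: EDP = energy * runtime = power * runtime^2.
# Lower is better; it penalises saving power by stretching out the run.
def edp(power_watts: float, runtime_s: float) -> float:
    return power_watts * runtime_s ** 2   # joule-seconds

arm64 = edp(power_watts=35.0, runtime_s=420.0)   # hypothetical ARM64 run
x64   = edp(power_watts=90.0, runtime_s=300.0)   # hypothetical x64 run
print(f"ARM64 EDP: {arm64:,.0f} J*s, x64 EDP: {x64:,.0f} J*s -> "
      f"ARM64 {'lower' if arm64 < x64 else 'higher'} by "
      f"{abs(arm64 - x64) / x64:.0%}")
```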