ARM Wrestling with Big Data: A Study of Commodity ARM64 Server for Big Data Workloads
ARM processors have dominated the mobile device market in the last decade due
to their favorable computing to energy ratio. In this age of Cloud data centers
and Big Data analytics, the focus is increasingly on power efficient
processing, rather than just high throughput computing. ARM's first commodity
server-grade processor is the recent AMD A1100-series processor, based on a
64-bit ARM Cortex A57 architecture. In this paper, we study the performance and
energy efficiency of a server based on this ARM64 CPU, relative to a comparable
server running an AMD Opteron 3300-series x64 CPU, for Big Data workloads.
Specifically, we study these for Intel's HiBench suite of web, query and
machine learning benchmarks on Apache Hadoop v2.7 in a pseudo-distributed
setup, across a range of data sizes (files, web pages and tuples). Our
results show that the ARM64 server's runtime performance is comparable to the
x64 server for integer-based workloads like Sort and Hive queries, and only
lags behind for floating-point intensive benchmarks like PageRank, when they do
not exploit data parallelism adequately. We also see that the ARM64 server
consumes less energy, and has an Energy Delay Product (EDP) that
is lower than the x64 server's. These results hold promise for ARM64
data centers hosting Big Data workloads to reduce their operational costs,
while opening up opportunities for further analysis.

Comment: Accepted for publication in the Proceedings of the 24th IEEE International Conference on High Performance Computing, Data, and Analytics (HiPC), 201
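The Energy Delay Product used above is simply energy multiplied by runtime, so a lower EDP rewards servers that are both fast and frugal. A minimal sketch of the comparison, using entirely hypothetical numbers rather than the paper's measurements:

```python
# Energy Delay Product (EDP) comparison; lower is better.
# All figures below are illustrative placeholders, not measured values.
def edp(energy_joules, runtime_seconds):
    """EDP = energy consumed * time to completion."""
    return energy_joules * runtime_seconds

arm64_edp = edp(energy_joules=2000.0, runtime_seconds=300.0)
x64_edp = edp(energy_joules=6000.0, runtime_seconds=280.0)

print(f"ARM64 EDP: {arm64_edp:.0f} J*s")
print(f"x64   EDP: {x64_edp:.0f} J*s")
print(f"ARM64 EDP is {(1 - arm64_edp / x64_edp) * 100:.0f}% lower")
```

Note how a slightly slower server can still win on EDP when its energy draw is substantially lower, which is the trade-off the paper examines.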
Verification of primitive Sub-Ghz RF replay attack techniques based on visual signal analysis
As low-cost options for radio traffic capture, analysis and transmission become available, security researchers have developed open-source tools that make it easier to assess the security of devices relying on radio communications, without requiring extensive knowledge of the associated concepts. Recent research in this area suggests that primitive visual analysis techniques can be applied to decode selected radio signals successfully. This study builds upon previous research on sub-GHz radio communications; it outlines the associated methodology and verifies some of the reported techniques for carrying out radio frequency replay attacks using low-cost materials and freely available software.
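The "visual analysis" the abstract refers to typically amounts to inspecting the amplitude envelope of a capture and reading off on/off pulses. A minimal sketch of that idea, thresholding a synthetic on-off-keyed (OOK) amplitude trace (the function name, threshold and sample counts are illustrative assumptions, not from the study):

```python
# Hypothetical sketch: recover bits from an OOK amplitude envelope by
# averaging each bit-length window and comparing against a threshold,
# mimicking what an analyst does by eye on a waveform plot.
def decode_ook(amplitudes, threshold, samples_per_bit):
    bits = []
    for i in range(0, len(amplitudes), samples_per_bit):
        window = amplitudes[i:i + samples_per_bit]
        avg = sum(window) / len(window)
        bits.append(1 if avg > threshold else 0)
    return bits

# Synthetic capture: 4 samples per bit; carrier on ~1.0, carrier off ~0.1
capture = [1.0, 0.9, 1.0, 0.95,   # bit 1
           0.1, 0.05, 0.1, 0.1,   # bit 0
           1.0, 1.0, 0.9, 1.0,    # bit 1
           1.0, 0.95, 1.0, 0.9]   # bit 1
print(decode_ook(capture, threshold=0.5, samples_per_bit=4))  # [1, 0, 1, 1]
```

A replay attack in this setting does not even require decoding: retransmitting the captured samples verbatim suffices when the target lacks rolling codes.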
Doctor of Philosophy dissertation

In-memory big data applications are growing in popularity, including in-memory versions of the MapReduce framework. The move away from disk-based datasets shifts the performance bottleneck from slow disk accesses to memory bandwidth. MapReduce is a data-parallel application, and is therefore amenable to being executed on as many parallel processors as possible, with each processor requiring high amounts of memory bandwidth. We propose using Near Data Computing (NDC) as a means to develop systems that are optimized for in-memory MapReduce workloads, offering high compute parallelism and even higher memory bandwidth. This dissertation explores three different implementations and styles of NDC to improve MapReduce execution. First, we use 3D-stacked memory+logic devices to process the Map phase on compute elements in close proximity to database splits. Second, we attempt to replicate the performance characteristics of the 3D-stacked NDC using only commodity memory and inexpensive processors to improve performance of both Map and Reduce phases. Finally, we incorporate fixed-function hardware accelerators to improve sorting performance within the Map phase. This dissertation shows that it is possible to improve in-memory MapReduce performance by potentially two orders of magnitude by designing system and memory architectures that are specifically tailored to that end.
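The Map/shuffle/Reduce structure the dissertation targets can be sketched in a few lines of in-memory Python; this is a generic word-count illustration of the programming model, not the dissertation's NDC hardware, and the thread pool merely stands in for the per-split compute elements it proposes:

```python
# Minimal in-memory MapReduce word count. The Map phase runs one task per
# data split in parallel, the shuffle groups intermediate pairs by key,
# and the Reduce phase aggregates each group.
from collections import defaultdict
from multiprocessing.dummy import Pool  # thread pool as a stand-in for parallel compute elements

def map_phase(split):
    return [(word, 1) for word in split.split()]

def reduce_phase(item):
    key, counts = item
    return (key, sum(counts))

def mapreduce(splits, workers=4):
    with Pool(workers) as pool:
        mapped = pool.map(map_phase, splits)      # Map: one task per split
        groups = defaultdict(list)                # Shuffle: group by key
        for pairs in mapped:
            for key, value in pairs:
                groups[key].append(value)
        return dict(pool.map(reduce_phase, groups.items()))  # Reduce

print(mapreduce(["a b a", "b c"]))  # {'a': 2, 'b': 2, 'c': 1}
```

Because each Map task touches only its own split, the phase scales with the number of processing elements, which is exactly why placing those elements next to the data (as NDC does) relieves the memory-bandwidth bottleneck.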
Cloud-based or On-device: An Empirical Study of Mobile Deep Inference
Modern mobile applications are benefiting significantly from the advancement
in deep learning, e.g., implementing real-time image recognition and
conversational systems. Given a trained deep learning model, applications
usually need to perform a series of matrix operations based on the input data,
in order to infer possible output values. Because of computational complexity
and size constraints, these trained models are often hosted in the cloud. To
utilize these cloud-based models, mobile apps will have to send input data over
the network. While cloud-based deep learning can provide reasonable response
time for mobile apps, it restricts the use case scenarios, e.g. mobile apps
need to have network access. With mobile specific deep learning optimizations,
it is now possible to employ on-device inference. However, because mobile
hardware, such as GPU and memory size, can be very limited when compared to its
desktop counterpart, it is important to understand the feasibility of this new
on-device deep learning inference architecture. In this paper, we empirically
evaluate the inference performance of three Convolutional Neural Networks
(CNNs) using a benchmark Android application we developed. Our measurement and
analysis suggest that on-device inference can cost up to two orders of
magnitude greater response time and energy when compared to cloud-based
inference, and that loading model and computing probability are two performance
bottlenecks for on-device deep inferences.

Comment: Accepted at the IEEE International Conference on Cloud Engineering (IC2E), 201
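The cloud-versus-device trade-off described above can be framed as a back-of-envelope latency model: cloud inference pays for the network round trip and input transfer, while on-device inference pays for model loading plus local computation. A sketch with entirely hypothetical numbers (none taken from the paper's measurements):

```python
# Back-of-envelope latency model for cloud vs. on-device inference.
# All parameter values are illustrative placeholders.
def cloud_latency_ms(input_bytes, bandwidth_mbps, network_rtt_ms, server_infer_ms):
    transfer_ms = input_bytes * 8 / (bandwidth_mbps * 1000)  # bytes -> bits, Mbps -> bits/ms
    return network_rtt_ms + transfer_ms + server_infer_ms

def device_latency_ms(model_load_ms, device_infer_ms):
    # The paper identifies model loading and probability computation
    # as the two dominant on-device costs.
    return model_load_ms + device_infer_ms

cloud = cloud_latency_ms(input_bytes=150_000, bandwidth_mbps=10,
                         network_rtt_ms=50, server_infer_ms=20)
device = device_latency_ms(model_load_ms=2500, device_infer_ms=600)
print(f"cloud: {cloud:.0f} ms, on-device: {device:.0f} ms")
```

Such a model also shows when the balance flips: with no network access the cloud path is unavailable entirely, and a model cached in memory removes the loading term from the on-device path.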
Costs and benefits of superfast broadband in the UK
This paper was commissioned from LSE Enterprise by Convergys Smart Revenue Solutions to stimulate an open and constructive debate among the main stakeholders about the balance between the costs, the revenues, and the societal benefits of ‘superfast’ broadband. The intent has been to analyse the available facts and to propose wider perspectives on economic and social interactions. The paper has two parts: one concentrates on superfast broadband deployment and the associated economic and social implications (for the UK and its service providers), and the other considers alternative social science approaches to these implications. Both parts consider the potential contribution of smart solutions to superfast broadband provision and use. Whereas Part I takes the “national perspective” and the “service provider perspective”, which deal with the implications of superfast broadband for the UK and for service providers, Part II views matters in other ways, particularly by looking at how to realise values beyond the market economy, such as those inherent in neighbourliness, trust and democracy.