Array languages and the N-body problem
This paper describes the contributions to the SICSA multicore challenge on many-body
planetary simulation made by a compiler group at the University of Glasgow. Our group is part of
the Computer Vision and Graphics research group, and we have for some years been developing array
compilers because we consider them a good tool both for expressing graphics algorithms and for
exploiting the parallelism that computer vision applications require.
We describe experiments using two languages on two different platforms, and we compare
their performance with reference C implementations running on the same platforms. Finally,
we draw conclusions both about the viability of the array-language approach compared to
other approaches used in the challenge, and about the strengths and weaknesses of the two very
different processor architectures we used.
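The kernel at the heart of such a simulation can be stated briefly. As a purely illustrative sketch (plain Python, not the authors' array-compiler output; the function names, softening constant, and integrator choice are assumptions), the O(N²) pairwise gravitational update that array languages express as whole-array operations looks like this:

```python
# Minimal O(N^2) gravitational N-body step -- an illustrative sketch only,
# not the SICSA challenge submissions described in the paper.
G = 6.674e-11   # gravitational constant
EPS = 1e-9      # softening term: avoids division by zero for close bodies

def accelerations(pos, mass):
    """Pairwise gravitational acceleration on each body (3-D positions)."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + EPS
            f = G * mass[j] / (r2 ** 1.5)
            for k in range(3):
                acc[i][k] += f * dx[k]
    return acc

def euler_step(pos, vel, mass, dt):
    """One symplectic-Euler step: update velocities, then positions."""
    acc = accelerations(pos, mass)
    for i in range(len(pos)):
        for k in range(3):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt
    return pos, vel
```

In an array language the two inner loops collapse into whole-array arithmetic over the pairwise displacement matrix, which is what exposes the parallelism the paper exploits.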
Comparing Languages for Engineering Server Software: Erlang, Go, and Scala with Akka
Servers are a key element of current IT infrastructures, and must often deal with large numbers of concurrent requests. The programming language used to construct the server plays an important role in engineering efficient server software: it must support massive concurrency on multicore machines with low communication and synchronisation overheads. This paper investigates 12 highly concurrent programming languages suitable for engineering servers, and analyses three representative languages in detail: Erlang, Go, and Scala with Akka. We have designed three server benchmarks that analyse key performance characteristics of the languages. The benchmark results suggest that where minimising message latency is crucial, Go and Erlang are best; that Scala with Akka is capable of supporting the largest number of dormant processes; that for servers that frequently spawn processes, Erlang and Go minimise creation time; and that for constantly communicating processes, Go provides the best throughput.
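Two of the measurements described above, process-creation time and message throughput, can be mimicked in miniature. The following is a toy Python analogue (threads and a queue standing in for lightweight processes and mailboxes; it is not the paper's benchmark suite, and all names are invented):

```python
# Toy analogues of two server micro-benchmarks: spawn time and message
# throughput. Illustrative only -- not the benchmarks from the paper.
import threading
import queue
import time

def spawn_benchmark(n):
    """Seconds to spawn and join n trivial threads (process-creation cost)."""
    start = time.perf_counter()
    threads = [threading.Thread(target=lambda: None) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

def throughput_benchmark(n_msgs):
    """Messages per second through one producer/consumer queue (mailbox)."""
    q = queue.Queue()

    def consumer():
        for _ in range(n_msgs):
            q.get()

    c = threading.Thread(target=consumer)
    start = time.perf_counter()
    c.start()
    for i in range(n_msgs):
        q.put(i)
    c.join()
    return n_msgs / (time.perf_counter() - start)
```

The languages compared in the paper make exactly these operations cheap: Erlang and Go spawn processes/goroutines in microseconds, which is the property the spawn benchmark isolates.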
Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges
Cloud computing is offering utility-oriented IT services to users worldwide.
Based on a pay-as-you-go model, it enables hosting of pervasive applications
from consumer, scientific, and business domains. However, data centers hosting
Cloud applications consume huge amounts of energy, contributing to high
operational costs and a substantial carbon footprint. Therefore, we need
Green Cloud computing solutions that can not only save energy for the
environment but also reduce operational costs. This paper presents vision,
challenges, and architectural elements for energy-efficient management of Cloud
computing environments. We focus on the development of dynamic resource
provisioning and allocation algorithms that consider the synergy between
various data center infrastructures (i.e., the hardware, power units, cooling
and software), and holistically work to boost data center energy efficiency and
performance. In particular, this paper proposes (a) architectural principles
for energy-efficient management of Clouds; (b) energy-efficient resource
allocation policies and scheduling algorithms that consider quality-of-service
expectations and device power-usage characteristics; and (c) a novel software
technology for energy-efficient management of Clouds. We have validated our
approach by conducting a set of rigorous performance evaluation studies using the
CloudSim toolkit. The results demonstrate that the Cloud computing model has
immense potential, offering significant performance gains in response time and
cost savings under dynamic workload scenarios.
Comment: 12 pages, 5 figures. Proceedings of the 2010 International Conference
on Parallel and Distributed Processing Techniques and Applications (PDPTA
2010), Las Vegas, USA, July 12-15, 2010.
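One family of energy-efficient allocation policies of the kind the abstract describes works by consolidation: packing VMs onto as few hosts as possible so that idle hosts can be powered down. As a hedged sketch (first-fit decreasing bin packing; the function and field names are invented and this is not one of the paper's CloudSim policies):

```python
def consolidate(vm_loads, host_capacity):
    """Pack VMs onto as few hosts as possible (first-fit decreasing) so that
    unused hosts can be powered down. Illustrative sketch only -- not the
    resource-allocation policies evaluated in the paper.

    vm_loads: dict mapping VM name -> normalised CPU demand.
    host_capacity: normalised capacity of each (homogeneous) host.
    Returns (placement, number of active hosts)."""
    hosts = []       # remaining free capacity of each active host
    placement = {}   # VM name -> host index
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for h, free in enumerate(hosts):
            if load <= free:          # fits on an already-active host
                hosts[h] -= load
                placement[vm] = h
                break
        else:                          # must power on a new host
            hosts.append(host_capacity - load)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)
```

Fewer active hosts translates directly into lower idle power draw, which is the synergy between allocation policy and energy efficiency the paper targets.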
Next Generation Cloud Computing: New Trends and Research Directions
The landscape of cloud computing has significantly changed over the last
decade. Not only have more providers and service offerings crowded the space,
but also cloud infrastructure that was traditionally limited to single provider
data centers is now evolving. In this paper, we firstly discuss the changing
cloud infrastructure and consider the use of infrastructure from multiple
providers and the benefit of decentralising computing away from data centers.
These trends have resulted in the need for a variety of new computing
architectures that will be offered by future cloud infrastructure. These
architectures are anticipated to impact areas, such as connecting people and
devices, data-intensive computing, the service space and self-learning systems.
Finally, we lay out a roadmap of challenges that will need to be addressed for
realising the potential of next generation cloud systems.
Comment: Accepted to Future Generation Computer Systems, 07 September 201
Constructing a gazebo: supporting teamwork in a tightly coupled, distributed task in virtual reality
Many tasks require teamwork. Team members may work concurrently, but there must be some occasions of coming together. Collaborative virtual environments (CVEs) allow distributed teams to come together across distance to share a task. Studies of CVE systems have tended to focus on the sense of presence or copresence with other people. They have avoided studying close interaction between users, such as the shared manipulation of objects, because CVEs suffer from inherent network delays and often have cumbersome user interfaces. Little is known about the effectiveness of collaboration in tasks requiring various forms of object sharing and, in particular, the concurrent manipulation of objects.
This paper investigates the effectiveness of supporting teamwork among a geographically distributed group in a task that requires the shared manipulation of objects. To complete the task, users must share objects through concurrent manipulation of both the same and distinct attributes. The effectiveness of teamwork is measured in terms of the time taken to achieve each step, as well as the impressions of users. The effect of interface is examined by comparing various combinations of walk-in cubic immersive projection technology (IPT) displays and desktop devices.
An Approach to Ad hoc Cloud Computing
We consider how underused computing resources within an enterprise may be
harnessed to improve utilization and create an elastic computing
infrastructure. Most current cloud provision involves a data center model, in
which clusters of machines are dedicated to running cloud infrastructure
software. We propose an additional model, the ad hoc cloud, in which
infrastructure software is distributed over resources harvested from machines
already in existence within an enterprise. In contrast to the data center cloud
model, resource levels are not established a priori, nor are resources
dedicated exclusively to the cloud while in use. A participating machine is not
dedicated to the cloud, but has some other primary purpose such as running
interactive processes for a particular user. We outline the major
implementation challenges and one approach to tackling them.
Budget Constrained Execution of Multiple Bag-of-Tasks Applications on the Cloud
Optimising the execution of Bag-of-Tasks (BoT) applications on the cloud is a
hard problem due to the trade-offs between performance and monetary cost. The
problem can be further complicated when multiple BoT applications need to be
executed. In this paper, we propose and implement a heuristic algorithm that
schedules tasks of multiple applications onto different cloud virtual machines
in order to maximise performance while satisfying a given budget constraint.
Current approaches are limited in task scheduling since they place a limit on
the number of cloud resources that can be employed by the applications.
However, in the proposed algorithm there are no such limits, and in comparison
with other approaches the algorithm on average achieves a 10% performance
improvement. The experimental results also highlight that the algorithm yields
consistent performance even with low budget constraints, which cannot be
achieved by competing approaches.
Comment: 8th IEEE International Conference on Cloud Computing (CLOUD 2015)
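The core trade-off the abstract describes, buying cloud capacity under a budget to finish tasks sooner, can be illustrated with a much simpler greedy than the paper's heuristic. In this hedged sketch (the ranking rule, field names, and one-hour billing assumption are all invented for illustration; this is not the proposed algorithm):

```python
def plan_vms(n_tasks, vm_types, budget):
    """Budget-constrained VM provisioning sketch (illustrative only, not the
    paper's heuristic): repeatedly buy the VM type with the best
    throughput-per-cost ratio until the budget is exhausted, then estimate
    the completion time for n_tasks spread over the fleet.

    vm_types: list of dicts with "cost" (per hour) and "rate" (tasks/hour)."""
    fleet = []
    remaining = budget
    # Best throughput per unit cost first.
    ranked = sorted(vm_types, key=lambda v: v["rate"] / v["cost"], reverse=True)
    for v in ranked:
        while remaining >= v["cost"]:
            fleet.append(v)
            remaining -= v["cost"]
    total_rate = sum(v["rate"] for v in fleet)
    makespan = n_tasks / total_rate if total_rate else float("inf")
    return fleet, makespan
```

A real scheduler must additionally interleave tasks from multiple BoT applications across the fleet and react to per-application deadlines, which is where the paper's heuristic earns its reported 10% improvement.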
Visualisation of Parallel Data Streams with Temporal Mosaics
Despite its popularity and widespread use, timeline visualisation suffers from shortcomings which limit its use for displaying multiple data streams when the number of streams increases to more than a handful. This paper presents the TemporalMosaic technique for visualisation of parallel time-based streams, which addresses some of these shortcomings. Temporal mosaics provide a compact way of representing parallel streams of events by allocating a fixed drawing area to time intervals and partitioning that area according to the number of concurrent events. A user study is presented which compares this technique to a standard timeline representation in which events are depicted as horizontal bars and multiple streams are drawn in parallel along a vertical axis. Results of this user study show that users of the temporal mosaic visualisation perform significantly better at detecting concurrency, interval overlaps, and inactivity than users of standard timelines.
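The partitioning step that a temporal mosaic draws can be computed with a simple boundary sweep. As an illustrative sketch (the function name and event representation are assumptions, not the paper's implementation):

```python
def mosaic_columns(events):
    """Split the timeline at every event boundary and report, for each
    resulting interval, the events active within it -- the partition a
    temporal mosaic renders by dividing each interval's fixed drawing
    area among its concurrent events. Sketch only, not the paper's code.

    events: dict mapping an event name to a (start, end) pair."""
    bounds = sorted({t for s, e in events.values() for t in (s, e)})
    columns = []
    for left, right in zip(bounds, bounds[1:]):
        active = [n for n, (s, e) in events.items() if s <= left and right <= e]
        columns.append(((left, right), active))
    return columns
```

Because each interval gets a fixed-width column subdivided among its active events, concurrency shows up as a split column and inactivity as an empty one, which is what the user study found easier to read than overlapping timeline bars.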