585 research outputs found
Supporting simulation in industry through the application of grid computing
An increased need for collaborative research, together with continuing advances in communication technology and computer hardware, has facilitated the development of distributed systems that can provide users with access to geographically dispersed computing resources administered across multiple computer domains. The term grid computing, or grids, is popularly used to refer to such distributed systems. Simulation is characterized by the need to run multiple sets of computationally intensive experiments. Large-scale scientific simulations have traditionally been the primary beneficiary of grid computing; the application of this technology to simulation in industry has, however, been negligible. This research investigates how grid technology can be effectively exploited by simulation users in industry. It introduces our desktop grid, WinGrid, and presents a case study conducted at a leading European investment bank. Results indicate that grid computing does indeed hold promise for simulation in industry.
Grid-enabling FIRST: Speeding up simulation applications using WinGrid
The vision of grid computing is to make computational power, storage capacity, data and applications available to users as readily as electricity and other utilities. Grid infrastructures and applications have traditionally been geared towards dedicated, centralized, high performance clusters running on UNIX-flavour operating systems (commonly referred to as cluster-based grid computing). This can be contrasted with desktop-based grid computing, which refers to the aggregation of non-dedicated, de-centralized, commodity PCs connected through a network and running (mostly) the Microsoft Windows™ operating system. Large scale adoption of such Windows™-based grid infrastructure may be facilitated via grid-enabling existing Windows applications. This paper presents the WinGrid™ approach to grid-enabling existing Windows™-based commercial-off-the-shelf (COTS) simulation packages (CSPs). Through the use of a case study developed in conjunction with the Ford Motor Company, the paper demonstrates how experimentation with the CSP Witness™ and FIRST can achieve a linear speedup when WinGrid™ is used to harness idle PC computing resources. This, combined with the lessons learned from the case study, has encouraged us to develop Web service extensions to WinGrid™. It is hoped that this will facilitate wider acceptance of WinGrid™ among enterprises having stringent security policies in place.
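The task-farming pattern that such desktop grids implement (a master handing independent experiment runs to idle worker machines) can be sketched with Python's standard library. This is only an illustration of the pattern, not the WinGrid API; the worker function and its parameters are invented placeholders for a real simulation run.

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_experiment(params):
    """Placeholder for one simulation run; a real worker would drive the CSP."""
    warmup, replications = params
    # Trivial stand-in computation so the sketch is runnable.
    return sum(i * warmup for i in range(replications))

def farm(experiments, workers=4):
    """Master: hand independent experiments to idle workers, collect results."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(run_experiment, e): e for e in experiments}
        return {futures[f]: f.result() for f in as_completed(futures)}

if __name__ == "__main__":
    # Eight independent parameter sets farmed out in parallel.
    results = farm([(w, 1000) for w in range(8)])
```

Because the experiments are independent, this pattern scales simply with the number of available workers, which is why linear speedup is attainable.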
Investigating grid computing technologies for use with commercial simulation packages
As simulation experimentation in industry becomes more computationally demanding, grid computing can be seen as a promising technology with the potential to bind together the computational resources needed to quickly execute such simulations. To investigate how this might be possible, this paper reviews the grid technologies that can be used together with commercial-off-the-shelf simulation packages (CSPs) used in industry. The paper identifies two specific forms of grid computing (Public Resource Computing and Enterprise-wide Desktop Grid Computing) and the middleware associated with them (BOINC and Condor) as being suitable for grid-enabling existing CSPs. It further proposes three different CSP-grid integration approaches and identifies one of them as the most appropriate. It is hoped that this research will encourage simulation practitioners to consider grid computing as a technologically viable means of executing CSP-based experiments faster.
Content rendering and interaction technologies for digital heritage systems
Existing digital heritage systems accommodate a huge amount of digital repository information; however, their content rendering and interaction components generally lack the more interesting functionality that allows better interaction with heritage contents. Many digital heritage libraries are simply collections of 2D images with associated metadata and textual content, i.e. little more than museum catalogues presented online. However, over the last few years, largely as a result of EU framework projects, some 3D representations of digital heritage objects are beginning to appear in a digital library context. In the cultural heritage domain, where researchers and museum visitors like to observe cultural objects as closely as possible and to feel their existence and use in the past, giving the user only 2D images along with textual descriptions significantly limits interaction and hence understanding of their heritage.
The availability of powerful content rendering technologies, such as 3D authoring tools to create 3D objects and heritage scenes, grid tools for rendering complex 3D scenes, gaming engines for interactive 3D display, and recent advances in motion capture technologies for embodied immersion, allows the development of unique solutions for enhancing user experience and interaction with digital heritage resources and objects, giving a higher level of understanding and greater benefit to the community.
This thesis describes DISPLAYS (Digital Library Services for Playing with Shared Heritage Resources), a novel conceptual framework in which five unique services are proposed for digital content: creation, archival, exposition, presentation and interaction services. These services or tools are designed to allow the heritage community to create, interpret, use and explore digital heritage resources organised as an online exhibition (or virtual museum). This thesis presents innovative solutions for two of these services or tools: a content creation service, for which a cost-effective render grid is proposed; and an interaction service, in which a heritage scenario is presented online using a real-time motion capture and digital puppeteer solution, allowing the user to explore their digital heritage through embodied immersive interaction.
Dispatch: distributed peer-to-peer simulations
Recently there has been an increasing demand for efficient mechanisms for carrying out computations that exhibit coarse-grained parallelism. Examples of this class of problems include simulations involving Monte Carlo methods, computations where numerous similar but independent tasks are performed to solve a large problem, and any solution which relies on ensemble averages, where a simulation is run under a variety of initial conditions that are then combined to form the result. With the ever increasing complexity of such applications, large amounts of computational power are required over a long period of time, and economic constraints make it impractical to deploy specialized hardware to satisfy this ever increasing demand for computing power.
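The workload pattern described above, independent Monte Carlo replications combined into an ensemble average, can be sketched in a few lines of Python. The π-estimation task and replication counts here are illustrative stand-ins, not details taken from Dispatch itself.

```python
import random
from multiprocessing import Pool

def run_replication(seed, n=100_000):
    """One independent Monte Carlo replication: estimate pi by dart-throwing."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

def ensemble_average(seeds):
    """Run replications in parallel and combine them into an ensemble average."""
    with Pool() as pool:
        estimates = pool.map(run_replication, seeds)
    return sum(estimates) / len(estimates)

if __name__ == "__main__":
    # Each seed is an independent initial condition; results are combined.
    print(ensemble_average(range(8)))
```

Since each replication touches no shared state, the same structure maps naturally onto machines scattered across a peer-to-peer network rather than cores of one host.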
We address this issue with Dispatch, a peer-to-peer framework for sharing computational power. In contrast to grid computing and other institution-based CPU-sharing systems, Dispatch targets an open environment, one that is accessible to all users and does not require any sort of membership or accounts, i.e. any machine connected to the Internet can be part of the framework. Dispatch allows dynamic and decentralized organization of these computational resources. It empowers users to utilize heterogeneous computational resources spread across geographic and administrative boundaries to run their tasks in parallel.
As a first step, we address a number of challenging issues involved in designing such distributed systems: forming a decentralized and scalable network of computational resources; finding a sufficient number of idle CPUs in the network for participants; allocating simulation tasks optimally so as to reduce computation time; allowing new participants to join the system and run their tasks irrespective of their geographical location; letting users interact with their running tasks (pausing, resuming, stopping) in real time; and implementing security features to prevent malicious users from compromising the network and remote machines.
As a second step, we evaluate the performance of Dispatch on a large-scale network consisting of 10–130 machines. For one particular simulation, we were able to achieve up to 1,500 million iterations per second, as compared to 10 million iterations per second on one machine. We also test Dispatch over a wide-area network where it is deployed on machines that are geographically apart and belong to different domains.
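The throughput figures reported above imply a simple speedup calculation. The sketch below uses only numbers from the abstract; the 130-machine count is the upper end of the reported range, so the per-machine efficiency is merely indicative.

```python
def speedup(parallel_rate, serial_rate):
    """Speedup = aggregate parallel throughput / single-machine throughput."""
    return parallel_rate / serial_rate

def efficiency(s, machines):
    """Parallel efficiency = speedup per machine (1.0 is ideal)."""
    return s / machines

s = speedup(1_500e6, 10e6)   # 150.0
e = efficiency(s, 130)       # ~1.15 at the upper end of the machine range
```

An efficiency above 1.0 here does not mean superlinear scaling; it simply reflects that the grid machines need not match the speed of the single baseline machine.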
Monte Carlo validation of a mu-SPECT imaging system on the lightweight grid CiGri
To appear in Future Generation Computer Systems. Monte Carlo Simulations (MCS) are nowadays widely used in the field of nuclear medicine for system and algorithm design. They are valuable for accurately reproducing experimental data, but at the expense of long computing times. An efficient solution for shorter elapsed times has recently been proposed: grid computing. The aim of this work is to validate a small-animal gamma camera MCS and to confirm the usefulness of grid computing for such a study. Good matches between measured and simulated data were achieved, and a crunching factor of up to 70 was attained on a lightweight campus grid.
Deploying an Ad-Hoc Computing Cluster Overlaid on Top of Public Desktops
A computer laboratory is often a homogeneous environment in which the computers have the same hardware and software settings. Conducting system tests in this laboratory environment is quite challenging, as the laboratory is supposed to be shared with regular classes. This manuscript details the use of desktop virtualization to dynamically deploy a virtual cluster for testing and ad-hoc purposes. The virtual cluster can support an environment completely different from the physical environment and provide the application isolation essential for separating the testing environment from regular class activities. The Windows 7 OS was running on the host desktops, and VMware Workstation was employed as the desktop virtualization manager. The deployed virtual cluster comprised virtual desktops installed with the Ubuntu Desktop Linux OS. Lightweight applications using the VMware VIX library and shell scripts were developed and employed to manage job submission to the virtual cluster. Evaluations of the virtual cluster's deployment show that desktop virtualization can be leveraged to quickly and dynamically deploy a testing environment while exploiting underutilized compute resources.
WSN simulators evaluation: an approach focusing on energy awareness
The numerous Wireless Sensor Network (WSN) simulators available nowadays differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are today lost, faced with a panoply of disparate and diverse simulators. Hence, there is an obvious need to establish guidelines that support users in the task of selecting a simulator to suit their preferences and needs. In previous works, we proposed a generic and novel approach to evaluate network simulators, considering a methodological process and a set of qualitative and quantitative criteria. In particular, for WSN simulators, the criteria include aspects relevant to this kind of network, such as energy consumption modelling and scalability capacity. The aims of this work are: (i) to describe in depth the criteria related to WSN aspects; (ii) to extend and update the state of the art of WSN simulators elaborated in our previous works in order to identify those most used and cited in scientific articles; and (iii) to demonstrate the suitability of our novel methodological approach by evaluating and comparing the three most cited simulators, especially in terms of energy modelling and scalability capacities. Results show that our proposed approach provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.
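A weighted multi-criteria scoring of the kind such an evaluation methodology applies can be illustrated with a small sketch. The criteria, weights, simulator names, and scores below are invented for illustration and are not results from the study.

```python
def weighted_score(scores, weights):
    """Combine per-criterion scores (0-1) into one weighted figure of merit."""
    assert set(scores) == set(weights)
    total = sum(weights.values())
    return sum(scores[c] * weights[c] for c in scores) / total

# Hypothetical criteria, weights, and scores -- NOT data from the paper.
weights = {"energy_modelling": 3, "scalability": 3, "documentation": 1}
candidates = {
    "SimA": {"energy_modelling": 0.9, "scalability": 0.5, "documentation": 0.8},
    "SimB": {"energy_modelling": 0.6, "scalability": 0.9, "documentation": 0.6},
}

# Rank simulators by weighted score, best first.
ranked = sorted(candidates,
                key=lambda s: weighted_score(candidates[s], weights),
                reverse=True)
```

Heavier weights on energy modelling and scalability mirror the emphasis the abstract places on those criteria; a user with different priorities would simply adjust the weight table.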
Methodology to Evaluate WSN Simulators: Focusing on Energy Consumption Awareness
ISBN: 978-1-925953-09-1. Nowadays there exists a large number of available network simulators, which differ in their design, goals, and characteristics. Users who have to decide which simulator is the most appropriate for their particular requirements are today lost, faced with a panoply of disparate and diverse simulators. Hence, there is an obvious need to establish guidelines that support users in the tasks of selecting and customizing a simulator to suit their preferences and needs. In previous works, we proposed a generic and novel methodological approach to evaluate network simulators, considering a set of qualitative and quantitative criteria. However, it lacked criteria related to Wireless Sensor Networks (WSN). Thus, the aim of this work is threefold: (i) to extend the previously proposed methodology to cover the evaluation of WSN simulators with respect to criteria such as energy consumption modelling and scalability; (ii) to elaborate a study of the state of the art of WSN simulators, with the intention of identifying those most used and cited in scientific articles; and (iii) to demonstrate the suitability of our novel methodology by evaluating and comparing three of the most cited simulators. Our novel methodology provides researchers with an evaluation tool that can be used to describe and compare WSN simulators in order to select the most appropriate one for a given scenario.