Can intelligent optimisation techniques improve computing job scheduling in a Grid environment? review, problem and proposal
In the existing Grid scheduling literature, the reported methods and strategies mostly concern high-level schedulers such as global schedulers, external schedulers, data schedulers, and cluster schedulers. Although a number of these have previously considered job scheduling, so far only relatively simple queue-based policies such as First In First Out (FIFO) have been considered for local job scheduling within Grid contexts. Our initial research shows that it is worth investigating the potential impact on Grid performance when intelligent optimisation techniques are applied to local scheduling policies. The research problem is defined, and a basic research methodology with a detailed roadmap is presented. This paper forms a proposal with the intention of exchanging ideas and seeking potential collaborators.
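The FIFO policy mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `Job` class and runtime estimates are hypothetical, and a real local scheduler would also handle resource constraints and preemption.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    runtime: int  # estimated runtime in seconds (hypothetical)

def fifo_schedule(jobs):
    """Dispatch jobs strictly in arrival order on a single resource;
    return (job name, start time) pairs."""
    queue = deque(jobs)
    schedule, clock = [], 0
    while queue:
        job = queue.popleft()          # always the oldest waiting job
        schedule.append((job.name, clock))
        clock += job.runtime           # resource is busy until the job ends
    return schedule

jobs = [Job("a", 30), Job("b", 5), Job("c", 10)]
print(fifo_schedule(jobs))  # [('a', 0), ('b', 30), ('c', 35)]
```

The example makes the weakness of queue-based policies visible: the short job "b" waits behind the long job "a", which is exactly the kind of inefficiency an intelligent optimisation technique could address.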
The Strategy of the Commons: Modelling the Annual Cost of Successful ICT Services for European Research
The provision of ICT services for research increasingly uses Cloud services to complement the traditional federation of computing centres. Due to the complex funding structure and differences in the basic business model, comparing the cost-effectiveness of these options requires a new approach to cost assessment. This paper presents a cost assessment method that addresses the limitations of the standard methods, along with some initial results of the study. This acts as an illustration of the kind of cost assessment issues that high-utilisation-rate ICT services should consider when choosing between different infrastructure options. The research is co-funded by the European Commission Seventh Framework Programme through the e-FISCAL project (contract number RI-283449).
Mining Knowledge in Astrophysical Massive Data Sets
Modern scientific data mainly consist of huge datasets gathered by a very large number of techniques and stored in very diversified and often incompatible data repositories. More generally, in the e-science environment, integrating services across distributed, heterogeneous, dynamic "virtual organizations" formed by different resources within a single enterprise is considered a critical and urgent requirement. In the last decade, Astronomy has become an immensely data-rich field due to the evolution of detectors (plates to digital to mosaics), telescopes and space instruments. The Virtual Observatory approach consists of the federation, under common standards, of all astronomical archives available worldwide, together with data analysis, data mining and data exploration applications. The main drive behind this effort is that, once the infrastructure is completed, it will enable a new type of multi-wavelength, multi-epoch science that can as yet only barely be imagined. Data Mining, or Knowledge Discovery in Databases, while being the main methodology for extracting the scientific information contained in such MDS (Massive Data Sets), poses crucial problems since it has to orchestrate transparent access to different computing environments, scalability of algorithms, reusability of resources, etc. In the present paper we summarize the present status of MDS in the Virtual Observatory and what is currently done and planned to bring in advanced Data Mining methodologies, in the case of the DAME (DAta Mining & Exploration) project.
Comment: Pages 845-849, 1st International Conference on Frontiers in Diagnostics Technologies
Panel on future challenges in modeling methodology
This panel paper presents the views of six researchers and practitioners of simulation modeling. Collectively we attempt to address a range of key future challenges to modeling methodology. It is hoped that the views expressed in this paper, and the presentations made by the panelists at the 2004 Winter Simulation Conference, will raise awareness and stimulate further discussion on the future of modeling methodology in areas such as modeling problems in business applications, human factors and geographically dispersed networks; rapid model development and maintenance; legacy modeling approaches; markup languages; virtual interactive process design and simulation; standards; and Grid computing.
Smart labs and social practice: social tools for pervasive laboratory workspaces: a position paper
The emergence of pervasive and ubiquitous computing stimulates a view of future work environments where sharing of information, data and knowledge is easy and commonplace, particularly in highly interactive settings. Much of the work in this area focuses on tool development to support activities such as data collection, data recording and sharing, and so on. We are interested in this kind of technical development, which is both challenging and essential for science communities. But we are also interested in a broader interpretation of knowledge sharing and the human/social side of tools we develop to support this. We are keen to know more about how groups of different kinds of scientists can make their work understandable and shareable with each other in a multidisciplinary setting. This is a complex task because boundaries and barriers can emerge between disciplines engendered by differences in discourses and practices, which may not easily translate into other discipline areas. In the worst case, there may be some hostility between disciplines, or at least doubt and scepticism. Nevertheless, sharing approaches to research, research expertise, data and methods across disciplines can be a very fruitful exercise, and encouragement to engage in this activity is particularly pertinent in the digital era. Issues of privacy and security are also key aspects – knowing when and how to release data or information to other groups is crucial to providing a safe environment for people to work, and there are several sensitivities to be explored here.
In this paper we describe an evolving situation that captures many of these issues, and which we aim to track longitudinally.
Managing Uncertainty: A Case for Probabilistic Grid Scheduling
Grid technology is evolving into a global, service-oriented architecture: a universal platform for delivering future high-demand computational services. Strong adoption of the Grid and the utility computing concept is leading to an increasing number of Grid installations running a wide range of applications of differing size and complexity. In this paper we address the problem of delivering deadline/economy-based scheduling in a heterogeneous application environment using statistical properties of historical job executions and their associated meta-data. This approach is motivated by a study of six months of computational load generated by Grid applications in a multi-purpose Grid cluster serving a community of twenty e-Science projects. The observed job statistics, resource utilisation and user behaviour are discussed in the context of the management approaches and models most suitable for supporting a probabilistic and autonomous scheduling architecture.
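One simple way to turn historical job executions into a probabilistic deadline estimate, as a sketch only (the function name and sample runtimes below are illustrative, not taken from the paper), is to use the empirical distribution of past runtimes for similar jobs:

```python
def deadline_probability(history, deadline):
    """Empirical probability that a job finishes within `deadline`,
    estimated from the recorded runtimes of similar past jobs."""
    if not history:
        return 0.0  # no history: make no promise
    # fraction of past executions that met the deadline
    return sum(1 for t in history if t <= deadline) / len(history)

# Hypothetical runtimes (minutes) of previous executions of a similar job.
history = [42, 55, 38, 61, 47, 50]
print(deadline_probability(history, 50))  # 4 of 6 runs finished in time
```

A scheduler built on this idea could accept a deadline-based job only when the estimated probability exceeds an agreed service-level threshold, which is the kind of decision a probabilistic, autonomous scheduling architecture would automate.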