
    A User's Guide: Do's and don'ts in data sharing


    Agents in Bioinformatics

    The scope of the Technical Forum Group (TFG) on Agents in Bioinformatics (BIOAGENTS) was to foster collaboration between the agent and bioinformatics communities, with the aim of proposing a different (agent-based) approach to the development of computational frameworks, both for data analysis in bioinformatics and for system modelling in computational biology. During the day, the participants examined the future of research on agents in bioinformatics, primarily through 12 invited talks selected to cover the most relevant topics. From the discussions, it became clear that there are many perspectives on the field, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages for use by information agents, to the use of Grid agents, each of which requires further exploration. The interactions between participants encouraged the development of applications that create agent-based simulation models of biological systems, starting from a hypothesis and inferring new knowledge (or relations) by mining and analysing the huge amount of public biological data. In this report we summarise and reflect on the presentations and discussions.

    Towards trusted volunteer grid environments

    Extensive experience shows and confirms that grid environments are among the most promising ways to solve several kinds of problems, relating either to cooperative work, especially where the collaborators involved are geographically dispersed, or to very resource-hungry applications that require substantial computing power and/or storage. Such environments fall into two categories: first, dedicated grids, where the federated computers are devoted solely to a specific task until its completion; second, volunteer grids, where the federated computers are not completely devoted to a specific task but can instead be used randomly and intermittently for any other purpose at the same time, or can be connected or disconnected at will by their owners without prior notification. Each category certainly has several advantages and disadvantages; nevertheless, we think that volunteer grids are very promising and more convenient, especially for building a general-purpose, distributed, scalable environment. The big challenge for such environments, however, is security and trust. Indeed, because every federated computer in such an environment can be used randomly and simultaneously by several users, or can be disconnected suddenly, several security problems automatically arise. In this paper, we propose a novel solution based on identity federation, agent technology, and the dynamic enforcement of access control policies that leads to the design and implementation of trusted volunteer grid environments. Comment: 9 pages, IJCNC Journal 201

    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building ever more complex applications to manage and process large data sets and to execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies the various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research. Comment: 29 pages, 15 figures
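    The core mechanism these systems share is composing tasks into a dependency graph (DAG) and executing each task once its prerequisites have completed. The in-memory sketch below is purely illustrative (the task names and the single-process "executor" are assumptions; real Grid workflow systems dispatch tasks to distributed resources):

    ```python
    from graphlib import TopologicalSorter

    def run_workflow(tasks, deps):
        """Execute a DAG of tasks in dependency order.

        tasks: name -> callable taking a dict of its dependencies' results
        deps:  name -> set of prerequisite task names
        """
        order = TopologicalSorter(deps).static_order()  # prerequisites first
        results = {}
        for name in order:
            inputs = {d: results[d] for d in deps.get(name, ())}
            results[name] = tasks[name](inputs)
        return results

    # Hypothetical three-stage analysis pipeline: fetch -> sort -> report.
    tasks = {
        "fetch":  lambda _: [3, 1, 2],
        "sort":   lambda r: sorted(r["fetch"]),
        "report": lambda r: f"min={r['sort'][0]} max={r['sort'][-1]}",
    }
    deps = {"sort": {"fetch"}, "report": {"sort"}}
    ```

    A taxonomy dimension such as scheduling architecture then amounts to where `run_workflow`'s loop lives: in one central engine, or decentralized across resource brokers.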

    Innovative in silico approaches to address avian flu using grid technology

    Recent years have seen the emergence of diseases that have spread very quickly all around the world, either through human travel, like SARS, or animal migration, like avian flu. Among the biggest challenges raised by emerging infectious diseases, one relates to the constant mutation of the viruses, which turns them into continuously moving targets for drug and vaccine discovery. Another relates to the early detection and surveillance of the diseases, as new cases can appear just about anywhere owing to the globalization of exchanges and the circulation of people and animals around the earth, as recently demonstrated by the avian flu epidemics. For three years now, a collaboration of teams in Europe and Asia has been exploring innovative in silico approaches to better tackle avian flu, taking advantage of the very large computing resources available on international grid infrastructures. Grids were used to study the impact of mutations on the effectiveness of existing drugs against H5N1 and to find potentially new leads active on mutated strains. Grids also allow the integration of distributed data in a completely secure way. The paper presents how we are currently exploring the integration of existing data sources towards a global surveillance network for molecular epidemiology. Comment: 7 pages, submitted to Infectious Disorders - Drug Targets

    Knowledge management overview of feature selection problem in high-dimensional financial data: Cooperative co-evolution and Map Reduce perspectives

    The term big data characterizes the massive amounts of data generated by advanced technologies in different domains, using the 4Vs (volume, velocity, variety, and veracity) to indicate the amount of data that can only be processed via computationally intensive analysis, the speed of their creation, the different types of data, and their accuracy. High-dimensional financial data, such as time-series and space-time data, contain a large number of features (variables) while having a small number of samples, and are used to measure various real-time business situations for financial organizations. Such datasets are normally noisy, complex correlations may exist between their features, and many domains, including finance, lack the analytical tools to mine the data for knowledge discovery because of the high dimensionality. Feature selection is an optimization problem: find a minimal subset of relevant features that maximizes classification accuracy and reduces computation. Traditional statistics-based feature selection approaches are not adequate to deal with the curse of dimensionality associated with big data. Cooperative co-evolution, a meta-heuristic algorithm following a divide-and-conquer approach, decomposes high-dimensional problems into smaller sub-problems. Further, MapReduce, a programming model, offers a ready-to-use distributed, scalable, and fault-tolerant infrastructure for parallelizing the developed algorithm. This article presents a knowledge management overview of evolutionary feature selection approaches, state-of-the-art cooperative co-evolution and MapReduce-based feature selection techniques, and future research directions.
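    The divide-and-conquer idea can be sketched as follows: partition the features into groups, evolve a binary selection mask per group, and evaluate each partial mask by combining it with the best masks ("collaborators") from the other groups. Everything below is a toy sketch, not the authors' algorithm: the fitness function, group scheme, and constants are assumptions, and the MapReduce parallelization of the evaluations is omitted.

    ```python
    import random

    random.seed(0)

    RELEVANT = set(range(10))  # hypothetical "truly relevant" features

    def fitness(mask):
        """Toy fitness: reward selecting relevant features, penalize subset size."""
        selected = {i for i, bit in enumerate(mask) if bit}
        return len(selected & RELEVANT) - 0.2 * len(selected)

    def coevolve(n_features, n_groups=3, pop_size=8, generations=30):
        group_size = n_features // n_groups
        # Decompose the feature space into contiguous groups (sub-problems),
        # each with its own subpopulation of partial masks.
        pops = [[[random.randint(0, 1) for _ in range(group_size)]
                 for _ in range(pop_size)] for _ in range(n_groups)]
        best = [pop[0][:] for pop in pops]  # best collaborator per group

        def assemble(gi, partial):
            # Combine one group's partial mask with the other groups' best masks.
            return sum((partial if gj == gi else best[gj]
                        for gj in range(n_groups)), [])

        for _ in range(generations):
            for gi, pop in enumerate(pops):
                for ind in pop:
                    cand = ind[:]
                    cand[random.randrange(group_size)] ^= 1  # flip one bit
                    if fitness(assemble(gi, cand)) >= fitness(assemble(gi, ind)):
                        ind[:] = cand  # keep the improvement
                best[gi] = max(pop, key=lambda ind: fitness(assemble(gi, ind)))[:]

        full_mask = sum(best, [])
        return full_mask, fitness(full_mask)
    ```

    In a MapReduce setting, the inner fitness evaluations (the independent `assemble`/`fitness` calls) would be the map phase, with the reduce phase selecting each group's best collaborator.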

    Towards a cyberinfrastructure for enhanced scientific

    A new generation of information and communication infrastructures, including advanced Internet computing and Grid technologies, promises to enable more direct and shared access to more widely distributed computing resources than was previously possible. Scientific and technological collaboration, consequently, is increasingly seen as critically dependent upon effective access to, and sharing of, digital research data, and of the information tools that facilitate structuring data for efficient storage, search, retrieval, display and higher-level analysis. A recent (February 2003) report to the U.S. NSF Directorate of Computer and Information System Engineering urged that funding be provided for a major enhancement of computer and network technologies, thereby creating a cyberinfrastructure whose facilities would support and transform the conduct of scientific and engineering research. The articulation of this programmatic vision reflects a widely shared expectation that solving the technical engineering problems associated with the advanced hardware and software systems of the cyberinfrastructure will yield revolutionary payoffs by empowering individual researchers and increasing the scale, scope and flexibility of collective research enterprises. The argument of this paper, however, is that engineering breakthroughs alone will not be enough to achieve such an outcome; success in realizing the cyberinfrastructure’s potential, if it is achieved, will more likely be the result of a nexus of interrelated social, legal and technical transformations. The socio-institutional elements of a new infrastructure supporting collaboration (that is to say, its supposedly “softer” parts) are every bit as complicated as the hardware and computer software and, indeed, may prove much harder to devise and implement.
The roots of this latter class of challenges facing “e-Science” will be seen to lie in the micro- and meso-level incentive structures created by the existing legal and administrative regimes. Although a number of these same conditions and circumstances appear to be equally significant obstacles to the commercial provision of Grid services in interorganizational contexts, the domain of publicly supported scientific collaboration is held to be the more hospitable environment in which to experiment with a variety of new approaches to solving these problems. The paper concludes by proposing several “solution modalities,” including some that could also be made applicable to fields of information-intensive collaboration in business and finance that must regularly transcend organizational boundaries.
