
    Applying agent technology to facilitate knowledge sharing among bioinformatics communities of practice

    Agent technology comprises software programs or tools that can be developed and used to facilitate and support a community of practice (CoP), especially bioinformatics communities involved in managing bioinformatics activities such as collecting, organizing, and sharing knowledge in the form of best practices and lessons learnt, in order to perform their work with better solutions and greater competitiveness. This paper describes the theoretical concepts and approach of an agent technology framework that could be developed to implement a bioinformatics knowledge management system (BKMS) to facilitate knowledge sharing among bioinformatics communities, and demonstrates it at the system level: how agent technology can be utilized in the BKMS as a system model serving the communities, developed using groupware such as Lotus Notes. This BKMS framework is an added value for any organization that needs to implement the BKMS as a system, helping CoPs work together toward their aims and mission statements. Emphasis is also given to the BKMS activities in which agent technology can help CoPs, especially in working collaboratively, including the critical success factors (CSFs) needed to ensure that BKMS initiatives deliver a competitive advantage for the CoP as well as their organization

    Technical perspectives on knowledge management system in bioinformatics environment

    The knowledge growth in the bioinformatics environment needs a system that can organize and manage biological knowledge. A Knowledge Management System is appropriate for acquiring, storing, applying, and disseminating that biological knowledge. In order to develop such a system, the technical perspectives of Knowledge Management should be taken into account. In this paper, we discuss the technical perspectives of a Knowledge Management System and the suitable methods and technologies to be implemented in the system

    A notification system model for bioinformatics community of practice

    Bioinformatics can be considered a new field of study, and it promises a vast exploration area (Carzaniga, Rosenblum, & Wolf, 2001). In order to expedite the maturity of this area, a proper and supportive portal where all researchers can gather and cooperate in conducting their research needs to be established. One of the features of a portal that can assist bioinformatics researchers in performing their work is its ability to notify. A notification system is a combination of software and hardware that provides a method of distributing message(s) to a set of recipients. The notification messages can assist the recipients in many ways, from time saving and cost saving to life saving. A notification system can be built with numerous functions depending on the needs, and one of the most beneficial functions in a research setting is notification of the next most relevant knowledge to be read. This type of notification can lessen the time researchers spend finding the correct material, thus enhancing their research efficiency. Another type of notification that can assist researchers is the event reminder. Busy researchers may forget their packed schedules while putting total focus on their research; the reminder prompts them when it is time, using Windows pop-ups, email, and SMS as the means of delivering the messages. While the knowledge management system (KMS) provides a sturdy basis for the bioinformatics portal as a whole, agent technology supports the operation of the notification system. Agent technology offers great capabilities in ensuring that recipients are notified accordingly, through its autonomous, learning, and cooperative characteristics. The objective of this project is to build a notification system for the bioinformatics community of practice (CoP). Researchers in this community could use this system to make their research process more efficient
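    The core mechanism the abstract describes, fanning a message out to a set of recipients through several delivery channels (pop-up, email, SMS), can be sketched as follows. This is an illustrative design, not the paper's actual implementation; the class and channel names are assumptions.

    ```python
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Recipient:
        name: str
        channels: list  # names of delivery channels this recipient subscribes to

    class NotificationSystem:
        """Hypothetical dispatcher: routes one message to many recipients
        through pluggable channels, as the abstract describes."""

        def __init__(self):
            self.channels: dict[str, Callable[[str, str], None]] = {}

        def register_channel(self, name: str, send: Callable[[str, str], None]):
            # A channel is just a function taking (recipient_name, message).
            self.channels[name] = send

        def notify(self, recipients: list, message: str) -> int:
            delivered = 0
            for r in recipients:
                for ch in r.channels:
                    if ch in self.channels:
                        self.channels[ch](r.name, message)
                        delivered += 1
            return delivered

    # Usage: collect messages in a list instead of really sending them.
    sent = []
    ns = NotificationSystem()
    ns.register_channel("email", lambda who, msg: sent.append(("email", who, msg)))
    ns.register_channel("sms", lambda who, msg: sent.append(("sms", who, msg)))
    count = ns.notify([Recipient("alice", ["email", "sms"]),
                       Recipient("bob", ["email"])],
                      "Seminar reminder: 3pm today")
    ```

    An agent-based version would replace the registered callables with autonomous agents that learn each recipient's preferred channel, but the fan-out structure stays the same.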

    Bioinformatics for precision medicine in oncology: principles and application to the SHIVA clinical trial

    Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing, which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analyses of genomic data, and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates data integration and tracks the processing of individual samples in real time. Moreover, computational pipelines were developed to reliably identify genomic alterations and mutations from the molecular profiles of each patient. After a rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the set-up and conduct of this trial are discussed as an illustration of PM application
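    Requirement (i), integration and traceability of data, is commonly met by recording a content checksum and the parameters for every pipeline step, so a result can later be tied back to the exact inputs and settings that produced it. The sketch below illustrates that general idea; the step and file names are invented for illustration and do not describe Institut Curie's actual system.

    ```python
    import datetime
    import hashlib

    def checksum(data: bytes) -> str:
        """SHA-256 digest of a file's content, used as its identity."""
        return hashlib.sha256(data).hexdigest()

    def trace_step(step_name: str, inputs: dict, params: dict) -> dict:
        """Build a provenance record for one pipeline step: which step ran,
        with which parameters, on exactly which input content, and when."""
        return {
            "step": step_name,
            "params": params,
            "inputs": {name: checksum(blob) for name, blob in inputs.items()},
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }

    # Usage with a toy input blob standing in for an aligned-reads file.
    trace = trace_step("variant_calling",
                       {"sample.bam": b"...aligned reads..."},
                       {"min_depth": 10})
    ```

    Because the record stores a digest rather than a path, re-running the step on modified data is detectable even if the file name is unchanged.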

    TOLKIN – Tree of Life Knowledge and Information Network: Filling a Gap for Collaborative Research in Biological Systematics

    The development of biological informatics infrastructure capable of supporting growing data management and analysis environments is an increasing need within the systematic biology community. Although significant progress has been made in recent years on developing new algorithms and tools for analyzing and visualizing large phylogenetic data and trees, implementation of these resources is often carried out by bioinformatics experts, using one-off scripts. Therefore, a gap exists in providing data management support for a large set of non-technical users. The TOLKIN project (Tree of Life Knowledge and Information Network) addresses this need by supporting capabilities to manage, integrate, and provide public access to molecular, morphological, and biocollections data and research outcomes through a collaborative web application. This data management framework allows aggregation and import of sequences and the underlying documentation about their source, including vouchers, tissues, and DNA extractions. It combines features of LIMS and workflow environments by supporting management at the level of individual observations, sequences, and specimens, as well as assembly and versioning of data sets used in phylogenetic inference. As a web application, the system provides multi-user support that obviates current practices of sharing data sets as files or spreadsheets via email
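    The "assembly and versioning of data sets" idea can be sketched minimally: each commit freezes an immutable snapshot, so an analysis can cite the exact version of the sequences it used. This is a generic illustration, not TOLKIN's actual schema; all names are assumptions.

    ```python
    class VersionedDataset:
        """Illustrative versioned data set: every commit stores a frozen
        copy, and analyses reference data by (name, version number)."""

        def __init__(self, name: str):
            self.name = name
            self.versions = []  # immutable history of snapshots

        def commit(self, records: dict) -> int:
            self.versions.append(dict(records))  # copy, so later edits don't leak in
            return len(self.versions)  # 1-based version number

        def get(self, version: int) -> dict:
            return self.versions[version - 1]

    # Usage: two successive versions of a toy sequence alignment.
    ds = VersionedDataset("rbcL_alignment")
    v1 = ds.commit({"seq1": "ATGC"})
    v2 = ds.commit({"seq1": "ATGC", "seq2": "ATGG"})
    ```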

    Agents in Bioinformatics

    The scope of the Technical Forum Group (TFG) on Agents in Bioinformatics (BIOAGENTS) was to inspire collaboration between the agent and bioinformatics communities with the aim of creating an opportunity to propose a different (agent-based) approach to the development of computational frameworks both for data analysis in bioinformatics and for system modelling in computational biology. During the day, the participants examined the future of research on agents in bioinformatics primarily through 12 invited talks selected to cover the most relevant topics. From the discussions, it became clear that there are many perspectives on the field, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages for use by information agents, and to the use of Grid agents, each of which requires further exploration. The interactions between participants encouraged the development of applications that describe a way of creating agent-based simulation models of biological systems, starting from a hypothesis and inferring new knowledge (or relations) by mining and analysing the huge amount of public biological data. In this report we summarise and reflect on the presentations and discussions

    Cloud Bioinformatics in a private cloud deployment


    myTea: Connecting the Web to Digital Science on the Desktop

    Bioinformaticians regularly access the hundreds of databases and tools that are available to them on the Web. None of these tools communicate with each other, causing the scientist to copy results manually from a Web site into a spreadsheet or word processor. myGrid's Taverna has made it possible to create templates (workflows) that automatically run searches using these databases and tools, cutting what previously took days of work down to hours, and enabling the automated capture of experimental details. What is still missing in the capture process, however, is the detail of work done on that material once it moves from the Web to the desktop: if a scientist runs a process on some data, there is nothing to record why that action was taken; it is likewise not easy to publish a record of this process back to the community on the Web. In this paper, we present a novel interaction framework, built on Semantic Web technologies and grounded in usability design practice, in particular the Making Tea method. Through this work, we introduce a new model of practice designed specifically to (1) support the scientists' interactions with data from the Web to the desktop, (2) provide automatic annotation of process to capture what has previously been lost and (3) associate provenance services automatically with that data in order to enable meaningful interrogation of the process and controlled sharing of the results
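    The "automatic annotation of process" goal, recording what was run, on what, and why, can be sketched as a decorator that logs each desktop processing step together with a free-text rationale. This is a minimal illustration in the spirit of the abstract, not myTea's actual framework; the function names are hypothetical.

    ```python
    import datetime
    import functools

    provenance_log = []

    def with_provenance(func):
        """Wrap a processing function so each call records the process name,
        a summary of its input, and the scientist's stated rationale."""
        @functools.wraps(func)
        def wrapper(data, *, rationale: str):
            result = func(data)
            provenance_log.append({
                "process": func.__name__,
                "input_summary": repr(data)[:40],
                "rationale": rationale,
                "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            })
            return result
        return wrapper

    @with_provenance
    def trim_sequence(seq: str) -> str:
        return seq.strip("N")  # toy step: drop ambiguous flanking bases

    out = trim_sequence("NNATGCNN", rationale="remove ambiguous flanking bases")
    ```

    Forcing `rationale` to be a keyword-only argument means a step simply cannot run without its "why" being captured, which is exactly the record the abstract says is currently lost.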