
    The Semantic Automated Discovery and Integration (SADI) Web service Design-Pattern, API and Reference Implementation

    Background. 
The complexity and inter-related nature of biological data poses a difficult challenge for data and tool integration. There has been a proliferation of interoperability standards and projects over the past decade, none of which has been widely adopted by the bioinformatics community. Recent attempts have focused on the use of semantics to assist integration, and Semantic Web technologies are being welcomed by this community.

Description. 
SADI – Semantic Automated Discovery and Integration – is a lightweight set of fully standards-compliant Semantic Web service design patterns that simplify the publication of services of the type commonly found in bioinformatics and other scientific domains. Using Semantic Web technologies at every level of the Web services “stack”, SADI services consume and produce instances of OWL classes following a small number of very straightforward best practices. In addition, we provide codebases that support these best practices, together with plug-in tools for popular developer and client software, which dramatically simplify the deployment of services by providers and the discovery and utilization of those services by their consumers.
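As a rough illustration of the consume/produce pattern described above, the sketch below models a SADI-style service in Python, with plain dictionaries standing in for RDF graphs. The "Gene" input class, its properties, and the lookup table are hypothetical inventions for this sketch, not part of any real SADI vocabulary or service.

```python
# Illustrative sketch only: a SADI-style service receives an instance
# of its declared input OWL class and returns the *same* instance
# (same subject URI) decorated with the properties of its output class.
# Plain dicts stand in for RDF graphs; all names here are hypothetical.

# Toy backing table standing in for the service's real data source.
LOOKUP = {"BRCA1": "Homo sapiens"}

def gene_info_service(instance):
    """Hypothetical service: input class 'Gene' (has a 'gene_id'
    property), output class adds an 'organism' property to the
    same instance rather than minting a new subject."""
    annotated = dict(instance)  # keep the original subject URI
    annotated["organism"] = LOOKUP.get(instance["gene_id"], "unknown")
    return annotated

gene = {"uri": "http://example.org/gene/BRCA1", "gene_id": "BRCA1"}
print(gene_info_service(gene)["organism"])  # → Homo sapiens
```

The point of the pattern is that the output is the input instance enriched in place, which is what lets a client chain services by matching output classes to input classes.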

Conclusions.
SADI Services are fully compliant with, and utilize only, foundational Web standards; are simple to create and maintain for service providers; and can be discovered and utilized in a very intuitive way by biologist end-users. In addition, the SADI design patterns significantly improve the ability of software to automatically discover appropriate services based on user needs, and to automatically chain these into complex analytical workflows. We show that, when resources are exposed through SADI, data compliant with a given ontological model can be automatically gathered, or generated, from these distributed, non-coordinating resources, a behavior we have not observed in any other Semantic Web system. Finally, we show that, using SADI, data dynamically generated from Web services can be explored in a manner very similar to data housed in static triple-stores, thus facilitating the intersection of Web services and Semantic Web technologies.

    Agents in Bioinformatics

    The scope of the Technical Forum Group (TFG) on Agents in Bioinformatics (BIOAGENTS) was to inspire collaboration between the agent and bioinformatics communities, with the aim of creating an opportunity to propose a different (agent-based) approach to the development of computational frameworks, both for data analysis in bioinformatics and for system modelling in computational biology. During the day, the participants examined the future of research on agents in bioinformatics, primarily through 12 invited talks selected to cover the most relevant topics. From the discussions, it became clear that there are many perspectives on the field, ranging from bio-conceptual languages for agent-based simulation, to the definition of bio-ontology-based declarative languages for use by information agents, to the use of Grid agents, each of which requires further exploration. The interactions between participants encouraged the development of applications that describe a way of creating agent-based simulation models of biological systems, starting from a hypothesis and inferring new knowledge (or relations) by mining and analysing the huge amount of public biological data. In this report we summarise and reflect on the presentations and discussions.

    Career choice among commerce-stream students at a technical secondary school: a case study

    This research is a survey to determine the career choices of Form Four students in the commerce stream. Three aspects of career choice were examined: information about careers, types of career, and the factors that most influence students in choosing a career. The study was conducted at Sekolah Menengah Teknik Kajang, Selangor Darul Ehsan. Thirty-six Form Four students were chosen as respondents using purposive (non-random) sampling. All information was gathered using a questionnaire, and the data collected were analyzed as frequencies, percentages, and means, with results presented in tables and graphs. The findings show that information about careers improves students' career choices, and that the mass media is the main factor influencing students in choosing their career.

    Investigating biocomplexity through the agent-based paradigm.

    Capturing the dynamism that pervades biological systems requires a computational approach that can accommodate both the continuous features of the system environment and the flexible, heterogeneous nature of component interactions. This presents a serious challenge for the more traditional mathematical approaches, which assume component homogeneity in order to relate system observables using mathematical equations. While the homogeneity condition does not lead to loss of accuracy when simulating various continua, it fails to offer detailed solutions when applied to systems with dynamically interacting heterogeneous components. As the functionality and architecture of most biological systems is a product of multi-faceted individual interactions at the sub-system level, continuum models rarely offer much beyond qualitative similarity. Agent-based modelling is a class of algorithmic computational approaches that rely on interactions between Turing-complete finite-state machines (or agents) to simulate, from the bottom up, the macroscopic properties of a system. In recognizing the heterogeneity condition, they offer suitable ontologies for the system components being modelled, thereby succeeding where their continuum counterparts tend to struggle. Furthermore, being inherently hierarchical, they are quite amenable to coupling with other computational paradigms, and the integration of an agent-based framework with continuum models is arguably the most elegant and precise way of representing biological systems. Although still in its infancy, agent-based modelling has been used to model biological complexity across a broad range of scales, from cells to societies. In this article, we explore the reasons that make agent-based modelling the most precise approach to modelling biological systems that tend to be non-linear and complex.
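A minimal sketch of the bottom-up idea, assuming a toy two-state majority-rule model of our own devising (not any model from the article): each agent updates its state purely from local interactions with its neighbours, and the macroscopic observable, the fraction of agents in state 1, emerges from the simulation rather than being written down as an equation.

```python
import random

# Toy agent-based model: each agent on a ring is a two-state machine
# that adopts the majority state among itself and its two neighbours.
# The macroscopic observable (fraction of agents in state 1) emerges
# bottom-up; no closed-form equation relates it to the parameters.

def step(states):
    """One synchronous update of all agents under local majority rule."""
    n = len(states)
    new = []
    for i, s in enumerate(states):
        left, right = states[(i - 1) % n], states[(i + 1) % n]
        votes = left + s + right          # number of 1s in the neighbourhood
        new.append(1 if votes >= 2 else 0)
    return new

random.seed(0)
agents = [random.randint(0, 1) for _ in range(50)]
for _ in range(25):
    agents = step(agents)
print(sum(agents) / len(agents))  # emergent macroscopic observable
```

Heterogeneity enters naturally in this paradigm: giving each agent its own update rule or internal state is a one-line change per agent, whereas a continuum model would need a new equation for every component type.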

    Regional science policy and the growth of knowledge megacentres in bioscience clusters

    Changes in epistemology in the biosciences are generating important spatial effects. The most notable of these is the emergence of a few Bioscience Megacentres of basic and applied bioscience (molecular, post-genomic, proteomics, etc.), medical and clinical research, biotechnology research, training in these and related fields, academic entrepreneurship, and commercial exploitation by clusters of drug-discovery start-up and spin-off companies, along with specialist venture capital and other innovation-system support services. Large pharmaceutical firms that used to lead such knowledge generation and exploitation processes are becoming increasingly dependent upon innovative drug solutions produced in such clusters, and Megacentres are now the predominant source of such commercial knowledge. Big pharma is seldom at the heart of Megacentres, such as those the paper argues are found in about four locations each in the USA and Europe, but remains important for some risk capital (milestone payments) and for the marketing and distribution of drugs discovered. The reasons for this shift (which is also spatial to some extent) are as follows. First, bioscientific research requires the formation of collaboratory relationships among hitherto cognitively dissonant disciplines: molecular biology, combinatorial chemistry, high-throughput screening, genomics, proteomics and bioinformatics, to name a few. Second, the canonical chance-discovery model of bioscientific research is being replaced by rational drug design based on those technologies, because of the need to massively reduce search costs and delivery timeframes. Third, the US and, to some extent, European 'Crusade against Cancer' and other pathologies has seen major increases in basic research budgets (e.g. to $27.3 billion in 2003 for the US National Institutes of Health) and foundation expenditure (e.g. $1 billion in 2003 by the UK's Wellcome Trust; approximately $1 billion by the top ten US medical foundations, and a comparable sum from corporate foundations). Each of these tendencies weakens the knowledge-generation role of 'big pharma' and strengthens that of Megacentres. But the process also creates major new regional disparities, which some regional governances have recognised, causing them to develop responsibilities for regional science policy and funding to offset spatial biases intrinsic in traditional national (and, in the EU, supranational) research funding regimes. Responses follow a variety of models, ranging from market-following to both regionalised (decentralising by the centre) and regionalist (ground-up), but in each case the role of Megacentres is justified in health terms. But their role in assisting fulfilment of regional economic growth visions is also clearly perceived and pronounced in policy terms.

    Data mining and fusion


    Mining Representative Unsubstituted Graph Patterns Using Prior Similarity Matrix

    One of the most powerful techniques for studying protein structures is to look for recurrent fragments (also called substructures or spatial motifs) and then use them as patterns to characterize the proteins under study. An emergent trend consists in parsing proteins' three-dimensional (3D) structures into graphs of amino acids. The search for recurrent spatial motifs is thus formulated as a process of frequent subgraph discovery, where each subgraph represents a spatial motif. Several efficient approaches for frequent subgraph discovery have been proposed in the literature; however, the set of discovered frequent subgraphs is too large to be efficiently analyzed and explored in any further process. In this paper, we propose a novel pattern selection approach that shrinks the large set of discovered frequent subgraphs by selecting the representative ones. Existing pattern selection approaches do not exploit domain knowledge; in our approach, we incorporate the evolutionary information of amino acids defined in substitution matrices in order to select the representative subgraphs. We show the effectiveness of our approach on a number of real datasets: the results of our experiments show that our approach considerably decreases the number of motifs while enhancing their interestingness.
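The selection step can be sketched as follows; the greedy strategy, the similarity values, and the threshold are illustrative assumptions rather than the authors' actual algorithm, with the prior similarity matrix standing in for substitution-matrix-derived scores between subgraphs.

```python
# Illustrative sketch: given a prior similarity matrix over the
# discovered frequent subgraphs (here standing in for scores derived
# from amino-acid substitution matrices), keep a pattern only if it
# is not too similar to an already selected representative.
# The greedy rule and threshold are assumptions for this sketch.

def select_representatives(n, sim, threshold=0.8):
    """Greedy selection over patterns 0..n-1: pattern i becomes a
    representative unless it exceeds `threshold` similarity to an
    existing representative."""
    reps = []
    for i in range(n):
        if all(sim[i][j] <= threshold for j in reps):
            reps.append(i)
    return reps

# Four hypothetical motifs; motifs 0 and 1 are near-duplicates
# (e.g. differing only by a substitutable amino acid), so only one
# of the pair is kept.
sim = [
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.4],
    [0.1, 0.2, 0.4, 1.0],
]
print(select_representatives(4, sim))  # → [0, 2, 3]
```

Lowering the threshold shrinks the representative set further, trading coverage of the motif space for a more compact result.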

    Towards technological rules for designing innovation networks: a dynamic capabilities view.

    Inter-organizational innovation networks provide opportunities to exploit complementary resources that reside beyond the boundary of the firm. The shifting locus of innovation and value creation away from the “sole firm as innovator” poses important questions about the nature of these resources and the capabilities needed to leverage them for competitive advantage. The purpose of this paper is to describe research into producing design-oriented knowledge for configuring inter-organizational networks as a means of accessing such resources for innovation.