    Wikis in Tuple Spaces

    We consider storing the pages of a wiki in a tuple space and the effects this might have on the wiki experience. In particular, wiki pages are stored in tuples with a few identifying values such as title, author, revision date, and content, and pages are retrieved by sending templates to the tuple space, such as one that gives the title but nothing else, leaving the tuple space to resolve the template to a single tuple. We use a tuple space wiki to avoid deadlocks, infinite loops, and wasted effort when page edit contention arises, and we examine how a tuple space wiki changes the wiki experience.
    Comment: To appear at WMSCI 200
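
    As a minimal sketch of the mechanism (assuming a Linda-style tuple space with out/rd operations; the class and the field layout are illustrative, not the paper's implementation), a page is written as a tuple and read back with a template in which None acts as a wildcard:

        # A wiki page as a tuple: (title, author, revision_date, content).
        class TupleSpace:
            def __init__(self):
                self.tuples = []

            def out(self, tup):
                """Write a tuple into the space."""
                self.tuples.append(tup)

            def rd(self, template):
                """Read (without removing) one tuple matching the template;
                None fields in the template act as wildcards."""
                for tup in self.tuples:
                    if all(t is None or t == f for t, f in zip(template, tup)):
                        return tup
                return None

        space = TupleSpace()
        space.out(("HomePage", "alice", "2009-01-15", "Welcome to the wiki."))
        # Retrieve by title only; the space resolves the template to one tuple.
        print(space.rd(("HomePage", None, None, None)))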

    Towards Knowledge in the Cloud

    Knowledge in the form of semantic data is becoming more and more ubiquitous, and the need arises for scalable, dynamic systems that support collaborative work with such distributed, heterogeneous knowledge. We extend the "data in the cloud" approach that is emerging today to "knowledge in the cloud", with support for handling semantic information, organizing and finding it efficiently, and providing reasoning and quality support. Both the life sciences and emergency response fields are identified as strong potential beneficiaries of having "knowledge in the cloud".

    Challenges in Bridging Social Semantics and Formal Semantics on the Web

    This paper describes several results of Wimmics, a research lab whose name stands for: web-instrumented man-machine interactions, communities, and semantics. The approaches introduced here rely on graph-oriented knowledge representation, reasoning, and operationalization to model and support actors, actions, and interactions in web-based epistemic communities. The research results are applied to support and foster interactions in online communities and to manage their resources.

    Harvesting models from web 2.0 databases

    Data rather than functionality are the source of competitive advantage for Web2.0 applications such as wikis, blogs, and social networking websites. This valuable information might need to be capitalized on by third-party applications or be subject to migration or data analysis. Model-Driven Engineering (MDE) can be used for these purposes. However, MDE first requires obtaining models from the wiki/blog/website database (a.k.a. model harvesting). This can be achieved through SQL scripts embedded in a program. However, this approach leads to laborious code that exposes the iterations and table joins that serve to build the model. By contrast, a Domain-Specific Language (DSL) can hide these "how" concerns, leaving the designer to focus on the "what", i.e. the mapping of database schemas to model classes. This paper introduces Schemol, a DSL tailored for extracting models out of databases that considers Web2.0 specifics. Web2.0 applications are often built on top of general frameworks (a.k.a. engines) that set the database schema (e.g., MediaWiki, Blojsom). Hence, table names offer little help in automating the extraction process. In addition, Web2.0 data tend to be annotated. User-provided data (e.g., wiki articles, blog entries) might contain semantic markup which provides helpful hints for model extraction. Unfortunately, these data end up being stored as opaque strings. Therefore, there exists a considerable conceptual gap between the source database and the target metamodel. Schemol offers extractive functions and view-like mechanisms to confront these issues. Examples using Blojsom as the blog engine are available for download.
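
    Schemol's concrete syntax is defined in the paper; the following Python sketch only illustrates the underlying harvesting idea (the blog_entry table, its columns, and the hashtag-style markup are hypothetical), mapping database rows to model objects while mining annotations out of opaque text columns:

        import re
        import sqlite3

        class Entry:
            """Target model class: a blog entry plus tags mined from its body."""
            def __init__(self, title, tags):
                self.title = title
                self.tags = tags

        def harvest(db_path):
            """Map rows of a (hypothetical) blog_entry table to model objects,
            recovering semantic markup hidden inside opaque text columns."""
            conn = sqlite3.connect(db_path)
            rows = conn.execute("SELECT title, body FROM blog_entry")
            models = [Entry(title, re.findall(r"#(\w+)", body))
                      for title, body in rows]
            conn.close()
            return models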

    An Introduction to Programming for Bioscientists: A Python-based Primer

    Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in the biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a 'variable', the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.
    Comment: 65 pages total, including 45 pages text, 3 figures, 4 tables, numerous exercises, and 19 pages of Supporting Information; currently in press at PLOS Computational Biology
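
    As a small illustration of the culminating exercise mentioned above (illustrative code, not the primer's own), the Hamming distance counts mismatched positions between two equal-length DNA sequences:

        def hamming_distance(seq1: str, seq2: str) -> int:
            """Number of positions at which two equal-length sequences differ."""
            if len(seq1) != len(seq2):
                raise ValueError("sequences must be the same length")
            return sum(a != b for a, b in zip(seq1, seq2))

        print(hamming_distance("GATTACA", "GACTATA"))  # -> 2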

    Closing Information Gaps with Need-driven Knowledge Sharing

    Systems for asynchronous knowledge sharing, such as intranets, wikis, or file servers, often suffer from a lack of user contributions. A main reason is that information providers are decoupled from information seekers and are therefore largely unaware of their information needs. Central questions of knowledge management are thus which knowledge is particularly valuable and by what means knowledge holders can be motivated to share it. This thesis develops the approach of need-driven knowledge sharing (NKS), which consists of three elements. First, indicators of information need are collected, in particular search queries, and aggregated into a continuous forecast of the organizational information need (OIN). By matching this forecast against the information available in personal and shared information spaces, organizational information gaps (OIG) are derived, which point to missing information. These gaps are made transparent by means of so-called mediation services and mediation spaces, which help create awareness of organizational information needs and steer knowledge sharing. The concrete realization of NKS is illustrated by three different applications, all of which build on established knowledge management systems. Inverse search is a tool that suggests to knowledge holders documents from their personal information space to share in order to close organizational information gaps. Woogle extends conventional wiki systems with control instruments for detecting and prioritizing missing information, so that the evolution of wiki content can be shaped in a demand-oriented way. In a similar way, Semantic Need, an extension for Semantic MediaWiki, steers the capture of structured semantic data based on information need expressed as structured queries. The implementation and evaluation of the three tools show that need-driven knowledge sharing is technically feasible and can be an important complement to knowledge management. Moreover, the concept of mediation services and mediation spaces provides a framework for analyzing and designing tools according to the NKS principles. Finally, the approach presented here also offers impulses for the further development of Internet services and infrastructures such as Wikipedia or the Semantic Web.
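
    A rough sketch of the OIN/OIG pipeline described above (function names, tokenization, and thresholds are illustrative assumptions, not the thesis's implementation): search queries are aggregated into a demand forecast, and frequently sought terms that no existing document covers are flagged as information gaps:

        from collections import Counter

        def information_gaps(query_log, document_index, min_demand=2):
            """query_log: iterable of query strings; document_index: set of
            terms covered by existing documents. Returns (term, demand) pairs
            for frequently sought terms that no document covers."""
            demand = Counter()                      # forecast of the OIN
            for query in query_log:
                demand.update(query.lower().split())
            return [(term, count) for term, count in demand.most_common()
                    if count >= min_demand and term not in document_index]

        log = ["vacation policy", "vpn setup", "vpn setup guide",
               "vpn troubleshooting"]
        index = {"vacation", "policy"}
        print(information_gaps(log, index))  # -> [('vpn', 3), ('setup', 2)]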

    Seamlessly Editing the Web

    The typical process of editing content on the web is strongly moded: authors are forced to switch between editing, previewing, and publishing modes before, during, and after the editing process. This thesis explores a new paradigm for editing content on the web called seamless editing. Unlike existing techniques, seamless editing is modeless, enabling authors to edit content directly on web pages without switching between modes; the absence of modes reduces the cognitive complexity of the editing process. A software framework called Seaweed was developed to provide seamlessly editable web pages in any common web browser, and it is shown that the framework can be integrated into any content management system. For the purposes of experimentation, the content management system WordPress was selected, and a plugin using the Seaweed framework was developed for it to provide a seamlessly editable environment. Two experiments were conducted. The first study observed users with little or no WordPress experience following a set of prescribed tasks, both with and without the plugin. The second study was conducted over a longer period in a real-world context, where existing WordPress users were observed naturally using the plugin within their own blogs. Analysis of logged interactions and of pre- and post-questionnaires showed that, in both studies, participants found the Seaweed software intuitive and the new way of editing content easy to adapt to. Additionally, participants found the concept of seamless editing useful and could see it being useful in many contexts other than blogs.

    A transformational creativity tool to support chocolate designers

    A new formulation of the central ideas of Boden's well-established theory of combinational, exploratory, and transformational creativity is presented. This new formulation, based on the idea of a conceptual space, redefines some terms and includes several types of concept properties (appropriateness and relevance) whose relationship facilitates the computational implementation of the transformational creativity mechanism. The formulation is applied to a real case of chocolate design in which a novel and flavorful combination of chocolate and fruit is generated; the experimentation was conducted jointly with a Spanish chocolate chef. Experimental results demonstrate the relationship between appropriateness and relevance in different frameworks and show that the presented formulation is not only useful for understanding how the creative mechanisms of design work but also facilitates their implementation in real cases to support creativity processes.
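
    A toy sketch of the mechanism (the properties, scores, and pairings are entirely hypothetical; the paper's formalization differs): a conceptual space is modeled as a set of admissible pairings, a transformation widens the space itself, and candidates are ranked by the two concept properties:

        # Hypothetical concept properties for candidate chocolate pairings.
        appropriateness = {"nuts": 0.8, "caramel": 0.7, "passion fruit": 0.9}
        relevance = {"nuts": 0.9, "caramel": 0.8, "passion fruit": 0.6}

        def transform(space, new_pairing):
            """Transformational step: widen the conceptual space itself."""
            return space | {new_pairing}

        def rank(space):
            """Rank candidates by combining the two concept properties."""
            return sorted(space, reverse=True,
                          key=lambda p: appropriateness[p] * relevance[p])

        space = {"nuts", "caramel"}                 # exploration stays in here
        space = transform(space, "passion fruit")   # transformation admits more
        print(rank(space))  # -> ['nuts', 'caramel', 'passion fruit']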

    Neutrosophic causal modeling for analyzing the diffusion of the institutional culture: the case UNIANDES

    The dissemination of culture is an activity assumed by teaching institutions. Knowing the cultural elements that characterize each nation makes it possible to preserve the cultural heritage. However, quantifying this result is a complex task. This research proposes a solution to this problem through the design of a multicriteria method for the evaluation of cultural diffusion. The method uses neutrosophic numbers to model uncertainty. The proposal was applied at UNIANDES, Ibarra, where a high rate of cultural diffusion was found.
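
    A minimal sketch of such an evaluation with single-valued neutrosophic numbers (the weighted-average operator and score function below are standard formulations from the neutrosophic literature and may differ from the paper's exact method; the ratings and weights are made up). Each criterion rating is a triple (T, I, F) of truth, indeterminacy, and falsity degrees in [0, 1]:

        from math import prod

        def svnwa(ratings, weights):
            """Single-valued neutrosophic weighted average of (T, I, F) ratings."""
            T = 1 - prod((1 - t) ** w for (t, _, _), w in zip(ratings, weights))
            I = prod(i ** w for (_, i, _), w in zip(ratings, weights))
            F = prod(f ** w for (_, _, f), w in zip(ratings, weights))
            return (T, I, F)

        def score(svnn):
            """A common score function: higher means stronger diffusion."""
            t, i, f = svnn
            return (2 + t - i - f) / 3

        # Made-up expert ratings of three cultural-diffusion criteria.
        ratings = [(0.8, 0.2, 0.1), (0.7, 0.3, 0.2), (0.9, 0.1, 0.1)]
        weights = [0.5, 0.3, 0.2]
        print(round(score(svnwa(ratings, weights)), 3))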

    Learning-Assisted Automated Reasoning with Flyspeck

    The considerable mathematical knowledge encoded by the Flyspeck project is combined with external automated theorem provers (ATPs) and machine-learning premise selection methods trained on the proofs, producing an AI system capable of answering a wide range of mathematical queries automatically. The performance of this architecture is evaluated in a bootstrapping scenario emulating the development of Flyspeck from axioms to the last theorem, each time using only the previous theorems and proofs. It is shown that 39% of the 14185 theorems could be proved in a push-button mode (without any high-level advice and user interaction) in 30 seconds of real time on a fourteen-CPU workstation. The necessary work involves: (i) an implementation of sound translations of the HOL Light logic to ATP formalisms: untyped first-order, polymorphic typed first-order, and typed higher-order, (ii) export of the dependency information from HOL Light and ATP proofs for the machine learners, and (iii) choice of suitable representations and methods for learning from previous proofs, and their integration as advisors with HOL Light. This work is described and discussed here, and an initial analysis of the body of proofs that were found fully automatically is provided.
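
    An illustrative sketch of machine-learned premise selection in the spirit of this approach (a k-nearest-neighbour variant; the symbol features, toy data, and weighting are simplified assumptions, not the paper's exact setup). A goal theorem is represented by the set of symbols in its statement, and premises used by the proofs of similar earlier theorems are recommended first:

        from collections import Counter

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 0.0

        def select_premises(goal_symbols, proved, k=2, limit=3):
            """proved: list of (symbol_set, premises_used) for earlier theorems.
            Recommend premises used by the proofs of the k most similar ones."""
            neighbours = sorted(proved, reverse=True,
                                key=lambda p: jaccard(goal_symbols, p[0]))[:k]
            votes = Counter()
            for symbols, premises in neighbours:
                for premise in premises:
                    votes[premise] += jaccard(goal_symbols, symbols)
            return [premise for premise, _ in votes.most_common(limit)]

        proved = [({"real", "add", "le"}, ["REAL_ADD_SYM", "REAL_LE_TRANS"]),
                  ({"real", "mul", "le"}, ["REAL_MUL_SYM", "REAL_LE_TRANS"]),
                  ({"list", "append"}, ["APPEND_ASSOC"])]
        print(select_premises({"real", "le", "add"}, proved))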