
    A NeISS collaboration to develop and use e-infrastructure for large-scale social simulation

    The National e-Infrastructure for Social Simulation (NeISS) project is focused on developing e-Infrastructure to support social simulation research. Part of NeISS aims to provide an interface for running contemporary dynamic demographic social simulation models as developed in the GENESIS project. These GENESIS models operate at the individual person level and are stochastic. This paper focuses on support for a simple demographic change model that takes daily time steps and is typically run for a number of years. A portal-based Graphical User Interface (GUI) has been developed as a set of standard portlets. One portlet is for specifying model parameters and setting a simulation running. Another is for comparing the results of different simulation runs. Other portlets are for monitoring submitted jobs and for interfacing with an archive of results. A layer of programs enacted by the portlets stages data in and submits jobs to a Grid computer, which then runs a specific GENESIS model program executable. Once a job is submitted, some details are communicated back to a job monitoring portlet. Once the job is completed, results are stored and made available for download and further processing. Collectively we call the system the Genesis Simulator. Progress in the development of the Genesis Simulator was presented at the UK e-Science All Hands Meeting in September 2011 by way of a video-based demonstration of the GUI and an oral presentation of a working paper. Since then, an automated framework has been developed to run simulations for a number of years in yearly time steps. The demographic models have also been improved in a number of ways. This paper summarises the work to date, presents some of the latest results and considers the next steps we are planning in this work.
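
    The following is a minimal Python sketch of the kind of individual-level stochastic loop the abstract describes: a population advanced in daily steps, chained into yearly runs. All names, rates and the population structure are illustrative assumptions; the actual GENESIS model and its parameters are not given in the abstract.

    import random

    # Illustrative daily probabilities; the real GENESIS parameters are not
    # given in the abstract, so these values are placeholders.
    DAILY_DEATH_PROB = 1e-5
    DAILY_BIRTH_PROB = 5e-5

    def simulate_year(population, rng):
        """Advance an individual-level population through 365 daily steps."""
        for _ in range(365):
            survivors = [p for p in population if rng.random() > DAILY_DEATH_PROB]
            births = [{"age_days": 0} for _ in survivors
                      if rng.random() < DAILY_BIRTH_PROB]
            for p in survivors:
                p["age_days"] += 1
            population = survivors + births
        return population

    # A fixed seed makes repeated stochastic runs comparable.
    rng = random.Random(42)
    pop = [{"age_days": rng.randrange(80 * 365)} for _ in range(10_000)]
    for year in range(5):  # the automated framework chains yearly runs
        pop = simulate_year(pop, rng)
        print(f"year {year + 1}: population {len(pop)}")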

    Where are your Manners? Sharing Best Community Practices in the Web 2.0

    The Web 2.0 fosters the creation of communities by offering users a wide array of social software tools. While the success of these tools is based on their ability to support different interaction patterns among users by imposing as few limitations as possible, the communities they support are not free of rules (just think of the posting rules in a community forum or the editing rules in a thematic wiki). In this paper we propose a framework for sharing best community practices in the form of a (potentially rule-based) annotation layer that can be integrated with existing Web 2.0 community tools (with a specific focus on wikis). This solution is characterized by minimal intrusiveness and plays nicely within the open spirit of the Web 2.0 by providing users with behavioral hints rather than by enforcing strict adherence to a set of rules. Comment: ACM Symposium on Applied Computing, Honolulu, United States of America (2009).
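
    As a concrete illustration of such a non-enforcing annotation layer, here is a minimal Python sketch in which community practices are rules that yield advisory hints rather than rejections. The rule names, predicates and edit-event fields are hypothetical; the paper's actual rule language is not specified in the abstract.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class PracticeRule:
        """A community practice expressed as a non-blocking hint."""
        name: str
        applies: Callable[[dict], bool]  # predicate over an edit event
        hint: str

    # Hypothetical example rules for a wiki community.
    RULES: List[PracticeRule] = [
        PracticeRule(
            name="summary-missing",
            applies=lambda edit: not edit.get("summary"),
            hint="Consider adding an edit summary so others can follow your change.",
        ),
        PracticeRule(
            name="large-unsectioned-edit",
            applies=lambda edit: len(edit.get("text", "")) > 2000
                                 and "==" not in edit.get("text", ""),
            hint="Long additions are easier to review when split into sections.",
        ),
    ]

    def hints_for(edit: dict) -> List[str]:
        """Return advisory hints; the edit itself is never rejected."""
        return [r.hint for r in RULES if r.applies(edit)]

    print(hints_for({"text": "x" * 3000, "summary": ""}))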

    A framework for design engineering education in a global context

    This paper presents a framework for teaching design engineering in a global context, using innovative technologies to enable distributed teams to work together effectively across international and cultural boundaries. The DIDET Framework represents the findings of a five-year project conducted by the University of Strathclyde, Stanford University and Olin College, which enhanced student learning opportunities by enabling students to take part in global, team-based design engineering projects, directly experiencing different cultural contexts and accessing a variety of digital information sources via a range of innovative technology. The use of innovative technology enabled the formalization of design knowledge within international student teams, as did the methods that were developed for students to store, share and reuse information. Coaching methods were used by teaching staff to support distributed teams, and evaluation work on relevant classes was carried out regularly to allow ongoing improvement of learning and teaching and to show improvements in student learning. Major findings of the five-year project include the requirement to overcome technological, pedagogical and cultural issues for successful eLearning implementations. The DIDET Framework encapsulates all the conclusions relating to design engineering in a global context. Each of the principles for effective distributed design learning is shown along with relevant findings and suggested metrics. The findings detailed in the paper were reached through a series of interventions in design engineering education at the collaborating institutions. Evaluation was carried out on an ongoing basis and fed back into project development, both on the pedagogical and the technological approaches.

    The Research Object Suite of Ontologies: Sharing and Exchanging Research Data and Methods on the Open Web

    Research in life sciences is increasingly being conducted in a digital and online environment. In particular, life scientists have been pioneers in embracing new computational tools to conduct their investigations. To support the sharing of digital objects produced during such research investigations, we have witnessed in the last few years the emergence of specialized repositories, e.g., DataVerse and FigShare. Such repositories provide users with the means to share and publish datasets that were used or generated in research investigations. While these repositories have proven their usefulness, interpreting and reusing the evidence for most research results is a challenging task. Additional contextual descriptions are needed to understand how those results were generated and/or the circumstances under which they were concluded. Because of this, scientists are calling for models that go beyond the publication of datasets to systematically capture the life cycle of scientific investigations and provide a single entry point to access the information about the hypothesis investigated, the datasets used, the experiments carried out, the results of the experiments, the people involved in the research, etc. In this paper we present the Research Object (RO) suite of ontologies, which provide a structured container to encapsulate research data and methods along with essential metadata descriptions. Research Objects are portable units that enable the sharing, preservation, interpretation and reuse of research investigation results. The ontologies we present have been designed in the light of requirements that we gathered from life scientists. They have been built upon existing popular vocabularies to facilitate interoperability. Furthermore, we have developed tools to support the creation and sharing of Research Objects, thereby promoting and facilitating their adoption. Comment: 20 pages.
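
    As a rough illustration of what such a structured container looks like at the RDF level, the sketch below uses Python's rdflib to aggregate a dataset into a Research Object with a few metadata triples. The RO namespace URI follows the wf4ever project's vocabulary but should be treated as an assumption, and all resource URIs are placeholders.

    from rdflib import Graph, Namespace, Literal, URIRef
    from rdflib.namespace import RDF, DCTERMS

    # ORE is the standard OAI-ORE vocabulary; the RO namespace below is
    # assumed from the wf4ever project and should be checked against the
    # published ontology.
    ORE = Namespace("http://www.openarchives.org/ore/terms/")
    RO = Namespace("http://purl.org/wf4ever/ro#")

    g = Graph()
    ro = URIRef("http://example.org/ro/my-investigation/")
    dataset = URIRef("http://example.org/ro/my-investigation/data/input.csv")

    # The Research Object aggregates its constituent resources.
    g.add((ro, RDF.type, RO.ResearchObject))
    g.add((ro, DCTERMS.description,
           Literal("Datasets, methods and metadata for one investigation")))
    g.add((ro, ORE.aggregates, dataset))
    g.add((dataset, RDF.type, RO.Resource))
    g.add((dataset, DCTERMS.creator, Literal("A. Scientist")))

    print(g.serialize(format="turtle"))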

    Utilising Provenance to Enhance Social Computation


    Wikis in scholarly publishing

    Scientific research is a process concerned with the creation, collective accumulation, contextualization, updating and maintenance of knowledge. Wikis provide an environment in which knowledge can be collectively accumulated, contextualized, updated and maintained in a coherent and transparent fashion. Here, we examine the potential of wikis as platforms for scholarly publishing. In the hope of stimulating further discussion, the article itself was drafted on Species-ID (http://species-id.net/w/index.php?title=Wikis_in_scholarly_publishing&oldid=3815), a wiki that hosts a prototype for wiki-based scholarly publishing, where it can be updated, expanded or otherwise improved.

    Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture

    Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an often understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated sociotechnical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on to help manage and govern the "anyone can edit" encyclopedia at a massive scale. These bots, scripts, tools, plugins, and dashboards make Wikipedia more efficient for those who know how to work with them, but like all organizational culture, newcomers must learn them if they want to fully participate. I illustrate how cultural and organizational expertise is enacted around algorithmic agents by discussing two autoethnographic vignettes, which relate my personal experience as a veteran in Wikipedia. I present thick descriptions of how governance and gatekeeping practices are articulated through and in alignment with these automated infrastructures. Over the past 15 years, Wikipedian veterans and administrators have made specific decisions to support administrative and editorial workflows with automation in particular ways and not others. I use these cases of Wikipedia's bot-supported bureaucracy to discuss several issues in the fields of critical algorithms studies, critical data studies, and fairness, accountability, and transparency in machine learning -- most principally arguing that scholarship and practice must go beyond trying to "open up the black box" of such systems and also examine sociocultural processes like newcomer socialization. Comment: 14 pages, typo fixed in v

    A Linked Data Approach to Sharing Workflows and Workflow Results

    A bioinformatics analysis pipeline is often highly elaborate, due to the inherent complexity of biological systems and the variety and size of datasets. A digital equivalent of the ‘Materials and Methods’ section in wet laboratory publications would be highly beneficial to bioinformatics, for evaluating evidence and examining data across related experiments, while introducing the potential to find associated resources and integrate them as data and services. We present initial steps towards preserving bioinformatics ‘materials and methods’ by exploiting the workflow paradigm for capturing the design of a data analysis pipeline, and RDF to link the workflow, its component services, run-time provenance, and a personalized biological interpretation of the results. An example shows the reproduction of the unique graph of an analysis procedure, its results, provenance, and personal interpretation of a text mining experiment. It links data from Taverna, myExperiment.org, BioCatalogue.org, and ConceptWiki.org. The approach is relatively ‘light-weight’ and unobtrusive to bioinformatics users.
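
    A minimal sketch of this linking idea, using Python's rdflib with the W3C PROV vocabulary (a standard close in spirit to the provenance linking the abstract describes, not necessarily the paper's own model): a workflow run is an activity that used input data and generated results, and a personal interpretation is derived from those results. All URIs are hypothetical placeholders, not identifiers from Taverna, myExperiment, BioCatalogue or ConceptWiki.

    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, RDFS, PROV

    EX = Namespace("http://example.org/")  # placeholder identifiers only
    g = Graph()

    # One execution of the analysis pipeline, with its inputs and outputs.
    run = EX["runs/42"]
    g.add((run, RDF.type, PROV.Activity))
    g.add((run, PROV.used, EX["data/abstracts.txt"]))
    g.add((EX["results/mining-output.csv"], PROV.wasGeneratedBy, run))
    g.add((run, PROV.wasAssociatedWith, EX["agents/researcher"]))

    # A personal interpretation of the results, linked to its evidence.
    note = EX["annotations/1"]
    g.add((note, RDF.type, PROV.Entity))
    g.add((note, RDFS.comment,
           Literal("Terms X and Y co-occur more often than expected.")))
    g.add((note, PROV.wasDerivedFrom, EX["results/mining-output.csv"]))

    print(g.serialize(format="turtle"))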

    Manual SEAMLESS-IF


    Bots in Wikipedia: Unfolding their duties

    The success of crowdsourcing systems such as Wikipedia relies on people participating in these systems. In this research we reveal the extent to which human and machine intelligence are combined to carry out semi-automatic workflows of complex tasks. In Wikipedia, bots are used to realize such combinations of human and machine intelligence. We provide an extensive overview of the various edit types bots carry out, based on the analysis of 1,639 approved task requests. We classify existing tasks by an action-object-pair structure and reveal differences in their probability of occurrence depending on the work context investigated. In the context of community services, bots mainly create reports, whereas in the area of guidelines and policies bots are mostly responsible for adding templates to pages. Moreover, the analysis of existing bot tasks revealed insights into why Wikipedia's editor community uses bots and how it organizes machine tasks to provide a sustainable service. We conclude by discussing how these insights can lay the foundation for further research.
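
    To make the action-object-pair classification concrete, here is a small Python sketch that maps free-text bot task requests to coarse (action, object) pairs and counts their frequency. The request texts, action list and object list are invented for illustration; the paper's own coding scheme over the 1,639 real task requests is not given in the abstract.

    import re
    from collections import Counter

    # Hypothetical mini-corpus of approved bot task requests.
    REQUESTS = [
        "Bot will add maintenance templates to untagged pages",
        "Create daily reports of broken redirects",
        "Remove deprecated templates from article pages",
        "Add WikiProject templates to talk pages",
        "Create reports listing uncategorised articles",
    ]

    ACTIONS = ("add", "remove", "create", "update")
    OBJECTS = ("template", "report", "page", "category", "redirect")

    def action_object_pair(text):
        """Map a free-text task request to a coarse (action, object) pair."""
        words = re.findall(r"[a-z]+", text.lower())
        action = next((w for w in words if w in ACTIONS), "other")
        obj = next((w for w in words if w.rstrip("s") in OBJECTS), "other")
        return action, obj

    # Tally how often each pair occurs across the corpus.
    counts = Counter(action_object_pair(r) for r in REQUESTS)
    for (action, obj), n in counts.most_common():
        print(f"{action} {obj}: {n}")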