    Mapping e-Science’s Path in the Collaboration Space: Ontological Approach to Monitoring Infrastructure Development

    In undertakings such as the U.S. Cyberinfrastructure Initiative or the UK e-Science program, which span many years and comprise a great many projects funded by multiple agencies, it can be very difficult to keep tabs on what everyone is doing. But it is not impossible. In this paper, we propose the construction of ontologies as a means of monitoring a research program's portfolio of projects. In particular, we introduce the "virtual laboratory ontology" (VLO) and show how its application to e-Science yields a mapping of the distribution of projects along several dimensions of the "collaboration space." We then sketch out a method for inducing a project mapping from project descriptions and present the resulting map for the UK e-Science program. The paper shows the proposed mapping approach to be informative as well as feasible, and we expect that its further development can prove substantively useful for future work in cyber-infrastructure building.
    Keywords: e-Science, virtual laboratory ontology, collaboration space, project mapping, cyber-infrastructure building
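
    As a purely illustrative aside (not the authors' actual method), the step of inducing a project mapping from free-text project descriptions could, in its crudest form, be a lexicon lookup that assigns each project to the ontology dimensions whose cue words appear in its description. The dimension names and cue words below are invented for the example; the VLO itself is far richer than this sketch.

```python
# Crude sketch of inducing a project mapping from text descriptions.
# The dimensions and cue words are hypothetical, invented for this
# example; they are not taken from the virtual laboratory ontology.
LEXICON = {
    "data sharing":  {"database", "repository", "curation", "archive"},
    "computation":   {"grid", "simulation", "compute", "cluster"},
    "collaboration": {"workflow", "portal", "virtual organisation"},
}

def map_project(description):
    """Return the ontology dimensions whose cue words occur in the text."""
    text = description.lower()
    return {dim for dim, cues in LEXICON.items()
            if any(cue in text for cue in cues)}

# A single project description can land on several dimensions at once:
print(map_project("A grid portal for curation of astronomy data archives"))
# {'computation', 'collaboration', 'data sharing'}
```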

    Collaborative Research in e-Science and Open Access to Information

    This contribution examines various aspects of "openness" in research, and seeks to gauge the degree to which contemporary "e-science" practices are congruent with "open science." Norms and practices of openness are vital for the work of modern scientific communities, but concerns about the growth of stronger technical and institutional restraints on access to research tools, data, and information have recently attracted notice, in part because of their implications for the effective utilization of advanced digital infrastructures and information technologies in research collaborations. Our discussion clarifies the conceptual differences between e-science and open science, and reports findings from a preliminary look at practices in U.K. e-science projects. Both parts serve to emphasize that it is unwarranted to presume that the development of e-science necessarily promotes global open science collaboration. Since there is an evident need for further empirical research to establish where, when, and to what extent "openness" and "e-ness" in scientific and engineering research may be expected to advance hand-in-hand, we outline a framework within which such a program of studies might be undertaken.
    Keywords: e-Science, Open Science, Engineering Research

    Will e-Science Be Open Science?

    This contribution examines various aspects of "openness" in research, and seeks to gauge the degree to which contemporary "e-science" practices are congruent with "open science." Norms and practices of openness are vital for the work of modern scientific communities, but concerns about the growth of stronger technical and institutional restraints on access to research tools, data, and information have recently attracted notice, in part because of their implications for the effective utilization of advanced digital infrastructures and information technologies in research collaborations. Our discussion clarifies the conceptual differences between e-science and open science, and reports findings from a preliminary look at practices in U.K. e-science projects. Both parts serve to emphasize that it is unwarranted to presume that the development of e-science necessarily promotes global open science collaboration. Since there is an evident need for further empirical research to establish where, when, and to what extent "openness" and "e-ness" in scientific and engineering research may be expected to advance hand-in-hand, we outline a framework within which such a program of studies might be undertaken.
    Keywords: e-Science, Open Science, Engineering Research

    Voting for bugs in Firefox: a voice for Mom and Dad?

    In this paper, we present preliminary evidence suggesting that the voting mechanism implemented by the open-source Firefox community is a means of giving a supplementary voice to mainstream users. This evidence is drawn from a sample of bug reports, and from information on voters, both found within the bug-tracking system (Bugzilla) for Firefox. Although voting is known to be a relatively common feature within the governance structure of many open-source communities, our paper suggests that it also plays a role as a bridge between the mainstream users in the periphery of the community and the developers at the core: voters who do not participate in other activities within the community, i.e. the more peripheral ones, tend to vote for the more user-oriented Firefox module; moreover, bugs reported and first patched by members of the periphery, and bugs solved in "I" mode, tend to receive more votes; meanwhile, more votes are associated with an increased involvement of core members of the community in the provision of patches, quite possibly as a consequence of the increased effort and attention that highly voted bugs attract from the core.
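
    To make the kind of evidence concrete: the analysis above hinges on tallying votes per bug and relating them to who reported or patched the bug. The sketch below is a hypothetical reconstruction, not the authors' pipeline; it compares median vote counts for bugs first patched by core versus peripheral contributors, assuming a CSV export with the invented columns bug_id, votes, and first_patcher_role.

```python
import csv
from statistics import median

def load_bugs(path="firefox_bugs.csv"):
    """Load a hypothetical CSV export of Bugzilla bug reports.
    The file name and columns (votes, first_patcher_role) are
    assumptions for this sketch, not the real Bugzilla schema."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def median_votes_by_role(bugs):
    """Median vote count for bugs first patched by core vs. periphery."""
    by_role = {}
    for bug in bugs:
        role = bug["first_patcher_role"]      # "core" or "periphery"
        by_role.setdefault(role, []).append(int(bug["votes"]))
    return {role: median(votes) for role, votes in by_role.items()}

if __name__ == "__main__":
    print(median_votes_by_role(load_bugs()))
```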

    Environment design for emerging artificial societies

    The NewTies project is developing a system in which societies of agents are expected to develop autonomously as a result of individual, population, and social learning. These societies are expected to be able to solve the environmental challenges they are set by acting collectively. The challenges are intended to be analogous to those faced by early, simple, small-scale human societies. Some issues in the construction of a virtual environment for the system are described, and it is argued that multi-agent social simulation has so far tended to neglect the importance of environment design.
    Keywords: agent-based modelling, stone age economics, economic anthropology

    Coordination, Division of Labor, and Open Content Communities: Template Messages in Wiki-Based Collections

    In this paper we investigate how, in commons-based peer production, a large community of contributors coordinates its efforts towards the production of high-quality open content. We carry out our empirical analysis at the level of articles and focus on the dynamics surrounding their production. That is, we focus on the continuous process of revision and update arising from the spontaneous and largely uncoordinated sequence of contributions by a multiplicity of individuals. We argue that this loosely regulated process, according to which any user can make changes to any entry, while allowing highly creative contributions, has to come to terms with potential issues concerning the quality and consistency of the output. In this respect, we focus on an emergent, bottom-up organizational practice arising within the Wikipedia community, namely the use of template messages, which seems to act as an effective and parsimonious coordination device for emphasizing quality concerns (in terms of accuracy, consistency, completeness, fragmentation, and so on) or for highlighting the existence of other particular issues to be addressed. We focus on the template "NPOV", which signals breaches of the fundamental policy of neutrality of Wikipedia articles, and we show how and to what extent placing this template on a page affects the production process and changes the nature and division of labor among participants. We find that the intensity of editing increases immediately after the "NPOV" template appears. Moreover, the articles that are treated most successfully, in the sense that "NPOV" disappears again relatively soon, are those which receive the attention of a limited group of editors. In this dimension at least, the distribution of tasks in Wikipedia looks quite similar to what is known about the distribution in the FLOSS development process.
    Keywords: commons-based peer production; wikipedia; wiki; survival analysis; quality; bug fixing; template messages; coordination
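
    The unit of analysis here is the episode between the moment the "NPOV" template is placed on an article and the moment it is removed, together with the editing activity around those events. A minimal sketch of that detection step, assuming revisions arrive as chronologically sorted (timestamp, wikitext) pairs (an assumed input format, e.g. parsed from a Wikipedia dump), might look like this; the helper names are illustrative, not the authors' code.

```python
import re
from datetime import timedelta

NPOV = re.compile(r"\{\{\s*NPOV", re.IGNORECASE)

def npov_episode(revisions):
    """Return (placed_at, removed_at) for the first NPOV episode.

    `revisions` is a chronologically sorted list of (timestamp, wikitext)
    pairs. removed_at is None if the tag was never taken off; the result
    is None if the template was never placed at all.
    """
    placed_at = None
    for ts, text in revisions:
        tagged = bool(NPOV.search(text))
        if tagged and placed_at is None:
            placed_at = ts                     # template first appears
        elif not tagged and placed_at is not None:
            return placed_at, ts               # template removed
    return (placed_at, None) if placed_at else None

def edit_intensity(revisions, event, window=timedelta(days=7)):
    """Count edits in the window just before and just after `event`."""
    before = sum(1 for ts, _ in revisions if event - window <= ts < event)
    after  = sum(1 for ts, _ in revisions if event <= ts < event + window)
    return before, after
```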

    COORDINATION BY REASSIGNMENT IN THE FIREFOX COMMUNITY

    According to the so-called mirroring hypothesis, the structure of an organization tends to replicate the technical dependencies among the different components of the product (or service) that the organization is developing. An explanation for this phenomenon is that socio-technical alignment, which can be measured by the congruence of technical dependencies and human relations (Cataldo et al., 2008), leads to more efficient coordination. In this context, we suggest that a key organizational capability, especially in fast-changing environments, is to reorganize quickly in response to new opportunities, or simply in order to solve problems more efficiently. To back up our suggestion, we study the dynamics of congruence between task dependencies and expert attention within the Firefox project, as reported in the Bugzilla bug-tracking system. We identify in this database several networks of interrelated problems, known as bug report networks (Sandusky et al., 2004). We show that the ability to reassign bugs to other developers within each bug report network does indeed correlate positively with the average level of congruence achieved on that bug report network. Furthermore, when bug report networks are grouped according to common experts, we find preliminary evidence that the relationship between congruence and assignments could differ from one group to another.
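
    In the sense of Cataldo et al. (2008), congruence is the share of coordination requirements (pairs of people working on technically interdependent tasks) that are matched by actual coordination activity. A minimal sketch, encoding both networks as sets of unordered developer pairs (the encoding is our assumption, not the paper's):

```python
def congruence(required_pairs, actual_pairs):
    """Socio-technical congruence in the sense of Cataldo et al. (2008):
    the fraction of coordination requirements satisfied by actual
    coordination.  Both arguments are sets of frozenset({dev_a, dev_b})
    pairs -- an assumed encoding of the two networks.
    """
    if not required_pairs:
        return 1.0          # no requirements: trivially congruent
    return len(required_pairs & actual_pairs) / len(required_pairs)

# Toy example: three required pairs, two of them matched by actual
# coordination within the bug report network.
required = {frozenset(p) for p in [("ann", "bob"), ("bob", "eve"), ("ann", "eve")]}
actual   = {frozenset(p) for p in [("ann", "bob"), ("bob", "eve")]}
print(congruence(required, actual))   # 0.666...
```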

    Wikibugs: the practice of template messages in open content collections.

    In this paper we investigate an organizational practice meant to increase the quality of commons-based peer production: the use of template messages in wiki collections to highlight editorial bugs and call for intervention. In the context of SimpleWiki, an online encyclopedia of the Wikipedia family, we focus on {complex}, a template used to flag articles that disregard the overall goals of simplicity and readability. We characterize how this template is placed on and removed from articles, and we use survival analysis to study the emergence and successful treatment of these bugs in the collection.
    Keywords: commons-based peer production; wikipedia; wiki; survival analysis; quality; bug fixing; template messages; coordination
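
    The survival-analysis step, estimating how long a {complex} tag stays on an article before the "bug" is cured, can be pictured with a bare Kaplan-Meier estimator. The sketch below assumes each tagged article contributes a duration in days and a flag saying whether the template was eventually removed (the event) or the article was still tagged when observation ended (right-censored); it is an illustration, not the authors' code.

```python
def kaplan_meier(durations, removed):
    """Kaplan-Meier survival curve.

    durations: days each article carried the template
    removed:   True if the template was eventually removed (the event),
               False if the article was still tagged at the end of the
               observation window (right-censored).
    Returns a list of (time, survival_probability) points.
    """
    events = sorted(zip(durations, removed))
    n_at_risk = len(events)
    survival, curve = 1.0, []
    i = 0
    while i < len(events):
        t = events[i][0]
        deaths = leaving = 0
        while i < len(events) and events[i][0] == t:
            leaving += 1                 # both events and censorings leave
            if events[i][1]:
                deaths += 1              # template actually removed at t
            i += 1
        if deaths:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= leaving
    return curve

# Toy data: five tagged articles; four were cleaned up, one censored.
print(kaplan_meier([3, 5, 5, 8, 12], [True, True, False, True, True]))
# [(3, 0.8), (5, 0.6), (8, 0.3), (12, 0.0)]
```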

    Life Science Research and Drug Discovery at the Turn of the 21st Century: The Experience of SwissBioGrid

    Background: It is often said that the life sciences are transforming into an information science. As laboratory experiments start to yield ever-increasing amounts of data, and the capacity to deal with those data catches up, an increasing share of scientific activity is seen to be taking place outside the laboratories, sifting through the data and modelling "in silico" the processes observed "in vitro." The transformation of the life sciences, and similar developments in other disciplines, have inspired a variety of initiatives around the world to create technical infrastructure to support the new scientific practices that are emerging. The e-Science programme in the United Kingdom and the NSF Office for Cyberinfrastructure are examples of these. In Switzerland there have been no such national initiatives, yet this has not prevented scientists from exploring the development of similar types of computing infrastructure. In 2004, a group of researchers in Switzerland established a project, SwissBioGrid, to explore whether Grid computing technologies could be successfully deployed within the life sciences. This paper presents their experiences as a case study of how the life sciences are currently operating as an information science, and presents the lessons learned about how existing institutional and technical arrangements facilitate or impede this operation.
    Results: SwissBioGrid gave rise to two pilot projects: one for proteomics data analysis and the other for high-throughput molecular docking ("virtual screening") to find new drugs for neglected diseases (specifically, dengue fever). The proteomics project was an example of a data-management problem, applying many different analysis algorithms to terabyte-sized datasets from mass spectrometry and involving comparisons with many different reference databases; the virtual screening project was more purely a computational problem, modelling the interactions of millions of small molecules with a limited number of protein targets on the coat of the dengue virus. Both present interesting lessons about how scientific practices change when they tackle the problems of large-scale data analysis and data management by creating a novel technical infrastructure.
    Conclusions: In the experience of SwissBioGrid, data-intensive discovery has a lot to gain from close collaboration with industry and from harnessing distributed computing power. Yet the diversity of life science research implies only a limited role for generic infrastructure, and the transience of support means that researchers need to integrate their efforts with others if they want to sustain the benefits of their success, which are otherwise lost.