2,556 research outputs found

    Benefitting from the Grey Literature in Software Engineering Research

    Researchers generally place the most trust in peer-reviewed, published information, such as journal and conference papers. By contrast, software engineering (SE) practitioners typically do not have the time, access or expertise to review and benefit from such publications. As a result, practitioners are more likely to turn to other sources of information that they trust, e.g., trade magazines, online blog posts, survey results or technical reports, collectively referred to as Grey Literature (GL). Furthermore, practitioners also share their ideas and experiences as GL, which can serve as a valuable data source for research. While GL itself is not a new topic in SE, using, benefitting from, and synthesizing knowledge from GL is a contemporary topic in empirical SE research, and researchers are increasingly benefitting from the knowledge available within GL. The goal of this chapter is to provide an overview of GL in SE, together with insights on how SE researchers can effectively use and benefit from the knowledge and evidence available in the vast amount of GL.

    Supporting exploratory browsing with visualization of social interaction history

    This thesis is concerned with the design, development, and evaluation of information visualization tools for supporting exploratory browsing. Information retrieval (IR) systems currently do not support browsing well. Responding to user queries, IR systems typically compute relevance scores of documents and then present the document surrogates to users in order of relevance. Other systems such as email clients and discussion forums simply arrange messages in reverse chronological order. Using these systems, people cannot gain an overview of a collection easily, nor do they receive adequate support for finding potentially useful items in the collection. This thesis explores the feasibility of using social interaction history to improve exploratory browsing. Social interaction history refers to traces of interaction among users in an information space, such as discussions that happen in the blogosphere or online newspapers through the commenting facility. The basic hypothesis of this work is that social interaction history can serve as a good indicator of the potential value of information items. Therefore, visualization of social interaction history would offer navigational cues for finding potentially valuable information items in a collection. To test this basic hypothesis, I conducted three studies. First, I ran statistical analysis of a social media data set. The results showed that there were positive relationships between traces of social interaction and the degree of interestingness of web articles. Second, I conducted a feasibility study to collect initial feedback about the potential of social interaction history to support information exploration. Comments from the participants were in line with the research hypothesis. Finally, I conducted a summative evaluation to measure how well visualization of social interaction history can improve exploratory browsing. 
The results showed that visualization of social interaction history was able to help users find interesting articles, to reduce wasted effort, and to increase user satisfaction with the visualization tool.
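The first study above tests for a positive relationship between traces of social interaction and the interestingness of web articles. A minimal, stdlib-only sketch of that kind of analysis is a Spearman rank correlation between comment counts and interestingness ratings; the data below are invented for illustration, not taken from the thesis's actual data set.

```python
# Sketch of the first study's statistical step: Spearman's rank correlation
# between social interaction traces (comment counts) and rated article
# interestingness. Illustrative data only; no tied values, so simple ranking
# without tie handling is sufficient here.

def ranks(values):
    """Return 1-based ranks of values (no tie handling needed here)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the rank-difference formula."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

comment_counts = [0, 2, 5, 9, 14, 3, 21, 7]                   # interaction traces
interest_scores = [1.0, 2.4, 3.0, 3.8, 4.5, 2.1, 4.9, 3.2]    # rated interestingness

print(round(spearman(comment_counts, interest_scores), 2))    # -> 0.98
```

A strongly positive rho on real data would support the hypothesis that interaction traces are a good indicator of an item's potential value.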

    DARIAH and the Benelux


    Ant Spider Bee: Chronicling Digital Transformations in Environmental Humanities


    Blogs as Infrastructure for Scholarly Communication.

    This project systematically analyzes digital humanities blogs as an infrastructure for scholarly communication. This exploratory research maps the discourses of a scholarly community to understand the infrastructural dynamics of blogs and the Open Web. The text contents of 106,804 individual blog posts from a corpus of 396 blogs were analyzed using a mix of computational and qualitative methods. Analysis uses an experimental methodology (trace ethnography) combined with unsupervised machine learning (topic modeling), to perform an interpretive analysis at scale. Methodological findings show topic modeling can be integrated with qualitative and interpretive analysis. Special attention must be paid to data fitness, or the shape and re-shaping practices involved with preparing data for machine learning algorithms. Quantitative analysis of computationally generated topics indicates that while the community writes about diverse subject matter, individual scholars focus their attention on only a couple of topics. Four categories of informal scholarly communication emerged from the qualitative analysis: quasi-academic, para-academic, meta-academic, and extra-academic. The quasi- and para-academic categories represent discourse with scholarly value within the digital humanities community, but do not necessarily have an obvious path into formal publication and preservation. A conceptual model, the (in)visible college, is introduced for situating scholarly communication on blogs and the Open Web. An (in)visible college is a kind of scholarly communication that is informal, yet visible at scale. This combination of factors opens up a new space for the study of scholarly communities and communication. While (in)visible colleges are programmatically observable, care must be taken with any effort to count and measure knowledge work in these spaces. 
This is the first systematic, data-driven analysis of the digital humanities and lays the groundwork for subsequent social studies of digital humanities. (PhD dissertation in Information, University of Michigan, Horace H. Rackham School of Graduate Studies; http://deepblue.lib.umich.edu/bitstream/2027.42/111592/1/mcburton_1.pd)

    The Business Value of Social Network Technologies: A Framework for Identifying Opportunities for Business Value and an Emerging Research Program

    Although social network technologies have been the focus of many articles in the popular and business press, businesses remain unclear about their value. We use theory and data gathered from IT leaders to develop an initial model assessing the value of social network technologies in the business environment. Insights are given into when different features should be used to enhance existing business processes and to provide business value.

    Air traffic flow management regulations: big data analytics

    Air traffic in Europe is constantly increasing. As a result, Air Traffic Management (ATM) is becoming more complex and all stakeholders are affected. Among these, air traffic controllers suffer the biggest impact in terms of workload. Every day, a set of regulations is issued in the regions controlled by these operators, which provokes delays on the ground and rerouting in mid-air. All of these variations directly affect the entire ATM network and translate into large expenses for passengers and airlines. The aim of this project is to predict these daily contingencies by using big data analysis models, so that the associated costs are reduced. Most of the information needed to run the analysis has been very complicated to extract, process and correlate because the data sources are not open to researchers. Therefore, the number of instances available for the prediction is very low (only 18 months of data). Nevertheless, while working with this limitation, a Naive Bayes classifier has been chosen as the analytical algorithm. In terms of results, the work done does not reveal a high predictive capability, due to the amount of data acquired and the simplicity of the temporal variables. This suggests that, in future research, it could be convenient to take in broader historical data (more years). Moreover, more complex predictive models could be implemented if variables derived from the weather or the number of flights are used.
    Ingeniería Aeroespacial (Plan 2010)
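The analytical choice above, a Naive Bayes classifier over simple temporal variables, can be sketched with a tiny stdlib implementation. Everything here is invented for illustration (feature names, labels, and data); the project itself trained on about 18 months of restricted ATM data.

```python
# Stdlib sketch of a categorical Naive Bayes classifier: predict whether a
# day will see a regulation from simple temporal features (day of week,
# season), using Laplace-smoothed per-feature likelihoods.
from collections import Counter, defaultdict

# Each sample: (day_of_week, season); label: "regulated" or "normal".
train = [
    (("sat", "summer"), "regulated"),
    (("sun", "summer"), "regulated"),
    (("fri", "summer"), "regulated"),
    (("tue", "winter"), "normal"),
    (("wed", "winter"), "normal"),
    (("mon", "autumn"), "normal"),
]

labels = Counter(lbl for _, lbl in train)
counts = defaultdict(lambda: defaultdict(Counter))  # counts[label][feature][value]
for feats, lbl in train:
    for i, v in enumerate(feats):
        counts[lbl][i][v] += 1

def predict(feats, alpha=1.0):
    """Pick the label maximizing P(label) * prod_i P(feature_i | label)."""
    best, best_score = None, -1.0
    for lbl, n in labels.items():
        score = n / len(train)
        for i, v in enumerate(feats):
            score *= (counts[lbl][i][v] + alpha) / (n + alpha * len(counts[lbl][i]))
        if score > best_score:
            best, best_score = lbl, score
    return best

print(predict(("sat", "summer")))   # -> regulated
print(predict(("tue", "winter")))   # -> normal
```

With so few instances and such coarse features, the classifier mostly learns seasonal base rates, which mirrors the abstract's conclusion that the temporal variables alone give limited predictive capability.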

    Encoding the haunting of an object catalogue: on the potential of digital technologies to perpetuate or subvert the silence and bias of the early-modern archive

    The subjectivities that shape data collection and management have received extensive criticism, especially with regard to the digitisation projects and digital archives of GLAM institutions. The role of digital methods for recovering data absences is increasingly receiving attention too. Conceptualising the absence of non-hegemonic individuals from the catalogues of Sir Hans Sloane as an instance of textual haunting, this paper will ask: to what extent do data-driven approaches further entrench archival absences and silences? Can digital approaches be used to highlight or recover absent data? This paper will give a decisive overview of relevant literature and projects so as to examine how digital tools are being realigned to recover, or more modestly acknowledge, the vast, undocumented network of individuals who have been omitted from canonical histories. Drawing on the example of Sloane, this paper will reiterate the importance of a more rigorous ethics of digital practice, and propose recommendations for the management and representation of historical data, so cultural heritage institutions and digital humanists may better inform users of the absences and subjectivities that shape digital datasets and archives. This article is built on a comprehensive survey of digital humanities’ current algorithmic approaches to absence and bias. It also presents reflections on how we, the authors, grappled with unforeseen questions of absence and bias during a Leverhulme-funded collaboration between the British Museum and UCL, entitled ‘Enlightenment Architectures: Sir Hans Sloane’s Catalogues of his collections’.