
    Literature Reviews in HCI: A Review of Reviews

    This paper analyses Human-Computer Interaction (HCI) literature reviews to provide a clear conceptual basis for authors, reviewers, and readers. HCI is multidisciplinary, and various types of literature reviews exist, from systematic reviews to critical reviews in the style of essays. Yet there is insufficient consensus on what to expect of literature reviews in HCI. A shared understanding of literature reviews and clear terminology are therefore needed to plan, evaluate, and use literature reviews, and to further improve review methodology. We analysed 189 literature reviews published at all SIGCHI conferences and in ACM Transactions on Computer-Human Interaction (TOCHI) up until August 2022. We report on the main dimensions of variation: (i) contribution types and topics; and (ii) structure and methodologies applied. We identify gaps and trends to inform future meta work in HCI and provide a starting point for moving towards a more comprehensive terminology system for literature reviews in HCI.

    The relevance of prediction markets for corporate forecasting

    Prediction markets (PMs) are virtual stock markets on which shares are traded, exploiting the wisdom-of-crowds principle to access collective intelligence. It is claimed that the aggregation of information by groups leads to joint group decisions that are often better than the solutions of individual participants. A PM share represents a future event or a market condition (e.g. the expected sales figures of a product for a specific month) and provides forecasts via its price, which is interpreted as the probability of the event occurring. PMs can be used in competition with other forecasting tools; when applied for forecasting purposes within a company they are called corporate prediction markets (CPMs). Despite great praise in the (academic) literature for PMs as an efficient instrument for bringing together scattered information and opinions, corporate usage and applications are limited. This research examined this discrepancy by focusing on the barriers to adoption within enterprises. Literature and practice diverged and neglected the important aspect of corporate culture. Screening existing research and interviewing business executives and corporate planners revealed company hierarchy as an inhibitor to the acceptance of CPM outcomes. Findings from 55 interviews and a thematic analysis of the literature showed that CPMs are useful but rarely used. Their lack of use arises from senior executives' perception that the organisational hierarchy is being taxed, and from a fear of losing power, as CPMs (can) include lower rungs of the corporate ladder in decision-making processes. If these challenges can be overcome, the potential of CPMs can be released. It emerged, buttressed by ten additional interviews, that CPMs would be worthwhile for company forecasting, particularly in supporting innovation management, which would allow idea markets (as an embodiment of CPMs) to excel.
A contribution of this research lies in its additions to the PM literature: explaining the lack of adoption of CPMs despite their apparent benefits, and making a case for incorporating CPMs as a forecasting instrument to facilitate innovation management. Furthermore, a framework for understanding decision-making in the adoption of strategic tools is provided. This framework permits tools to be accepted on a more rational basis and curbs the emotional and political influences which can act against the adoption of good and effective tools.
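The price-as-probability mechanism described above is easiest to see in an automated market maker. The sketch below uses Hanson's logarithmic market scoring rule (LMSR), a standard mechanism for running prediction markets; the abstract does not specify which mechanism the studied CPMs use, so the mechanism, liquidity parameter `b`, and trade sizes here are illustrative assumptions.

```python
import math

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices under the LMSR; each price is the market's
    implied probability of the corresponding outcome."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def lmsr_cost(quantities, b=100.0):
    """LMSR cost function; a trader pays cost(after) - cost(before)."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

# A trader buying 20 shares of outcome 0 pushes its implied probability up.
before = lmsr_prices([0.0, 0.0])   # an even two-outcome market: 50/50
after = lmsr_prices([20.0, 0.0])
charge = lmsr_cost([20.0, 0.0]) - lmsr_cost([0.0, 0.0])
```

The liquidity parameter `b` controls how strongly a single trade moves the price, which is one lever a company could use when tuning an internal idea market.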

    A systems thinking approach for modelling supply chain risk propagation

    Supply Chain Risk Management (SCRM) is rapidly becoming one of the most sought-after research areas due to the influence of recent supply chain disruptions on the global economy. The thesis begins with a systematic literature review of developments within the broad domain of SCRM over the past decade. Thematic and descriptive analysis, supported by modern knowledge management techniques, brings forward seven distinctive gaps for future research in SCRM. Overlapping research findings from an industry perspective, coupled with the SCRM research gaps from the systematic literature review, helped to define the research problem for this study. The thesis focuses on a holistic and systematic approach to modelling risks within supply chain and logistics networks. The systems thinking approach followed conceptualises the phenomenon of risk propagation utilising several recent case studies, workshop findings and focus studies. Risk propagation is multidimensional and propagates beyond goods, finance and information resources; it cascades into technology, human resource and socio-ecological dimensions. Three risk propagation zones are identified that provide the fundamentals for modelling risk behaviour in terms of cost and delay. The development of a structured framework for SCRM, a holistic supply chain risk model and a quantitative research design for risk assessment are the major contributions of this research. The developed risk assessment platform is able to capture the fracture points and cascading impact within a supply chain and logistics network. A reputed aerospace and defence organisation in the UK was used to test the experimental modelling set-up for its viability and to bridge the gap between theory and practice. The combined statistical and simulation modelling approach provides a new perspective on assessing the complex behavioural performance of risks during multiple interactions within the network.
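The cascading impact across network tiers can be illustrated with a toy propagation model. The network structure, attenuation factor, and cut-off threshold below are hypothetical stand-ins, not the combined statistical and simulation set-up the thesis develops.

```python
from collections import deque

def propagate_risk(network, source, attenuation=0.6, threshold=0.05):
    """Toy cascade: a disruption at `source` spreads to downstream
    dependents, its impact attenuating at each tier until it drops
    below `threshold`. `network` maps a node to its dependents."""
    impact = {source: 1.0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in network.get(node, []):
            new_impact = impact[node] * attenuation
            if new_impact >= threshold and new_impact > impact.get(nxt, 0.0):
                impact[nxt] = new_impact
                queue.append(nxt)
    return impact

# A linear four-tier chain: a disruption at the supplier reaches retail
# with impact 0.6 ** 3 = 0.216.
chain = {"supplier": ["factory"], "factory": ["warehouse"],
         "warehouse": ["retail"]}
impact = propagate_risk(chain, "supplier")
```

A real model would attach cost and delay distributions to each edge rather than a single attenuation constant, but even this sketch shows how fracture points emerge where impact falls off.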

    Twitter Mining for Syndromic Surveillance

    Enormous amounts of personalised data are generated daily from social media platforms. Twitter, in particular, generates vast textual streams in real time, accompanied by personal information. This big social media data offers a potential avenue for inferring public and social patterns. This PhD thesis investigates the use of Twitter data to deliver signals for syndromic surveillance, in order to assess its ability to augment existing syndromic surveillance efforts and to give a better understanding of symptomatic people who do not seek healthcare advice directly. We focus on a specific syndrome: asthma/difficulty breathing. We seek to develop means of extracting reliable signals from the Twitter stream for syndromic surveillance purposes. We begin by outlining our data collection and preprocessing methods. We observe that even with keyword-based data collection, many of the collected tweets are not relevant because they represent chatter, or talk of awareness, rather than an individual suffering from a particular condition. In light of this, we set out to identify relevant tweets in order to collect a strong and reliable signal. We first develop novel features based on the emoji content of Tweets and apply semi-supervised learning techniques to filter Tweets. Next, we investigate the effectiveness of deep learning at this task. We propose a novel classification algorithm based on neural language models and compare it to existing successful and popular deep learning algorithms. Following this, we propose an attentive bi-directional Recurrent Neural Network architecture for filtering Tweets which also offers additional syndromic surveillance utility by identifying keywords among syndromic Tweets. In doing so, we are not only able to detect alarms, but also gain some clues as to what an alarm involves.
Lastly, we look towards optimising the Twitter syndromic surveillance pipeline by selecting the best possible keywords to supply to the Twitter API. We develop algorithms to intelligently and automatically select keywords such that the quality (in terms of relevance) and quantity of the Tweets collected are maximised.
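The collection-then-filtering step described above can be sketched in miniature. The keyword list, emoji set, and first-person heuristic below are invented for illustration; they stand in for the semi-supervised and deep learning classifiers the thesis actually develops.

```python
import re

# Illustrative keyword and emoji sets -- stand-ins, not the thesis's lists.
SYNDROME_KEYWORDS = {"asthma", "inhaler", "wheezing", "breathless"}
SUFFERING_EMOJI = {"😷", "😩", "😫", "🤒"}

def collect(tweets):
    """Keyword-based collection, mimicking keywords supplied to the API."""
    return [t for t in tweets
            if SYNDROME_KEYWORDS & set(re.findall(r"\w+", t.lower()))]

def is_relevant(tweet):
    """Crude relevance filter: keep a tweet only when a syndromic keyword
    co-occurs with a suffering emoji or first-person language, so that
    awareness chatter and news links are dropped."""
    words = set(re.findall(r"\w+", tweet.lower()))
    first_person = bool(re.search(r"\b(i|my|me)\b", tweet.lower()))
    has_emoji = any(e in tweet for e in SUFFERING_EMOJI)
    return bool(SYNDROME_KEYWORDS & words) and (has_emoji or first_person)
```

This mirrors the problem the thesis identifies: keyword matching alone admits awareness talk ("Asthma awareness week starts Monday"), while combining keywords with personal-experience cues recovers a cleaner signal.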

    Proceedings of the GIS Research UK 18th Annual Conference GISRUK 2010

    This volume holds the papers from the 18th annual GIS Research UK conference (GISRUK). This year the conference was hosted at University College London (UCL), from Wednesday 14 to Friday 16 April 2010. The conference covered core geographic information science research as well as application domains such as crime and health, and technological developments in LBS and the geoweb. UCL's research mission as a global university is based around a series of Grand Challenges that affect us all, and these were accommodated in GISRUK 2010. The overarching theme this year was "Global Challenges", with specific focus on the following themes:
    * Crime and Place
    * Environmental Change
    * Intelligent Transport
    * Public Health and Epidemiology
    * Simulation and Modelling
    * London as a global city
    * The geoweb and neo-geography
    * Open GIS and Volunteered Geographic Information
    * Human-Computer Interaction and GIS
    Traditionally, GISRUK has provided a platform for early career researchers as well as those with a significant track record of achievement in the area. As such, the conference provides a welcome blend of innovative thinking and mature reflection. GISRUK is the premier academic GIS conference in the UK, and we are keen to maintain its outstanding record of achievement in developing GIS in the UK and beyond.

    Financial disclosure for diversified operations: a critique of the orthodox model, and some tentative proposals

    The aggregative information disclosed by all entities will represent a lower quality of reporting in the case of diversified organisations, because it will be more difficult to place the information in context. However, the boundaries of industrial activity are poorly defined, and this will cause substantial difficulties for any attempt to restore the quality of information disclosed by diversified organisations. This study is based on the view that financial reports should be veritable (i.e. be shown to have real-world referents). The orthodox model for reporting the results of diversified operations requires that allocations be made according to the criterion of benefit. In the almost inevitable presence of interaction, such allocations cannot have real-world referents, and thus reports drawn up using the orthodox model cannot be veritable. Empirical evidence suggests that the attempt to require the publication of such reports in the UK has yielded uneven and inconsistent information. The treatment of interaction in the literature dealing with the orthodox model is confused. If interaction effects can be identified and measured, they can be disclosed separately or summarised by means of the range of ambiguity. Reports incorporating this information are likely to be highly complex and difficult to interpret. A variety of proposals for dealing with the problems of allocation in financial reports is examined in the context of diversified operations, but the proposals are found to be unsatisfactory. Some tentative suggestions concerning the search for a veridical reporting scheme are made. Finally, the boundary condition management model is developed. Since diversification causes a loss of quality of information available to outside parties, it is argued that such parties should be given some control over the process, together with the necessary information. Internal boundary condition management should provide some scope for improving performance, and disclosures relating to this improvement should be made.

    Comparative Analysis of Student Learning: Technical, Methodological and Result Assessing of PISA-OECD and INVALSI-Italian Systems

    PISA is the most extensive international survey promoted by the OECD in the field of education; every three years it measures the skills of fifteen-year-old students from more than 80 participating countries. The INVALSI tests are written tests taken every year by all Italian students at key points in the school cycle, to evaluate levels of some fundamental skills in Italian, Mathematics and English. Our comparison covers the period up to 2018, the last year of the PISA-OECD survey, even though the most recent edition of INVALSI was carried out in 2022. Our analysis focuses on the common part of the reference populations, namely the 15-year-old students in the second year of upper secondary school, where both sources give a similar picture of the students.

    Can we forecast conflict? A framework for forecasting global human societal behavior using latent narrative indicators

    The ability to successfully forecast impending societal unrest, from riots and protests to assassinations and coups, would fundamentally transform the ability of nations to proactively address instability around the world, intervening before unrest accelerates to conflict or prepositioning assets to enhance preventive activity. It would also enhance the ability of social scientists to quantitatively study the underpinnings of how and why grievances transition from agitated individuals to population-scale physical unrest. Recognizing this potential, the US government has funded research on “conflict early warning” and conflict forecasting for more than 40 years and current unclassified approaches incorporate nearly every imaginable type of data from telephone call records to traffic signals, tribal and cultural linkages to satellite imagery. Yet, current approaches have yielded poor outcomes: one recent study showed that the top models of civil war onset miss 90% of the cases they supposedly explain. At the same time, emerging work in the economics disciplines is finding that new approaches, especially those based on latent linguistic indicators, can offer significant predictive power of future physical behavior. The information environment around us records not just factual information, but also a rich array of cultural and contextual influences that offer a window into national consciousness. A growing body of literature has shown that measuring the linguistic dimensions of this real–time consciousness can accurately forecast many broad social behaviors, ranging from box office sales to the stock market itself. 
In fact, the United States intelligence community believes so strongly in the ability of surface-level indicators to forecast future physical unrest more successfully than current approaches that it now has an entire program devoted to such "Open Source Indicators." Yet few studies have explored the application of these methods to forecasting non-economic human societal behavior, and those that have focus primarily on large-scale events such as militarized disputes, epidemics, and regime change. One of the reasons for this is the lack of high-resolution cross-national longitudinal data on societal conflict equivalent to the daily indicators available in economics research. This dissertation therefore presents a novel framework for evaluating these new classes of latent-based forecasting measures on high-resolution, geographically enriched quantitative databases of human behavior. To demonstrate this framework, an archive of 4.7 million news articles totaling 1.3 billion words, consisting of the entirety of international news coverage from Agence France Presse, the Associated Press, and Xinhua over the last 30 years, is used to construct a database of more than 29 million global events in over 300 categories using the TABARI coding system and the CAMEO event taxonomy, resulting in the largest event database created in the academic literature. The framework is then applied to examine the hypothesis of latent forecasting as a classification problem, demonstrating the ability of a simple example-based classifier not only to return potentially actionable forecasts from latent discourse indicators, but also to quantitatively model the topical traces of the metanarratives that underlie them. The results of this dissertation demonstrate that this new framework provides a powerful new evaluative environment for exploring the emerging class of latent indicators and modeling approaches, and that even rudimentary classification-based models may have significant forecasting potential.
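An example-based classifier of the kind described above can be sketched as nearest-neighbour matching over event-count vectors. The three event categories and the labeled windows below are toy assumptions standing in for the CAMEO-coded event database; the dissertation's actual classifier and features are not specified here.

```python
import math

def cosine(a, b):
    """Cosine similarity between two event-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def forecast(history, examples, k=3):
    """Example-based classification: find the k labeled historical windows
    most similar to the recent event-count vector, then majority-vote."""
    ranked = sorted(examples, key=lambda ex: cosine(history, ex[0]),
                    reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Toy labeled windows over three hypothetical event categories
# (e.g. counts of protest, arrest, and cooperation events).
examples = [([10, 1, 0], "unrest"), ([9, 2, 1], "unrest"),
            ([0, 1, 10], "calm"), ([1, 0, 9], "calm")]
```

Even this rudimentary scheme illustrates the framework's core move: forecasting is reduced to asking which previously labeled periods the present most resembles.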