
    Crowdsourced Rumour Identification During Emergencies

    When a significant event occurs, many social media users leverage platforms such as Twitter to track that event. Moreover, emergency response agencies are increasingly looking to social media as a source of real-time information about such events. However, false information and rumours are often spread during such events, which can influence public opinion and limit the usefulness of social media for emergency management. In this paper, we present an initial study into rumour identification during emergencies using crowdsourcing. In particular, through an analysis of three tweet datasets relating to emergency events from 2014, we propose a taxonomy of tweets relating to rumours. We then perform a crowdsourced labelling experiment to determine whether crowd assessors can identify rumour-related tweets and where such labelling can fail. Our results show that, overall, agreement over the tweet labels produced was high (0.7634 Fleiss' kappa), indicating that crowd-based rumour labelling is possible. However, not all tweets are equally difficult to assess. Indeed, we show that tweets containing disputed or controversial information tend to be among the most difficult to identify.
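The Fleiss' kappa figure quoted above measures chance-corrected agreement among multiple assessors. A minimal sketch of the standard computation follows; the toy rating matrices are illustrative, not the paper's data.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for a matrix of shape (n_items, n_categories),
    where ratings[i, j] = number of assessors assigning item i to
    category j. Assumes every item received the same number of ratings."""
    ratings = np.asarray(ratings, dtype=float)
    n_items, _ = ratings.shape
    n_raters = ratings[0].sum()                 # ratings per item
    # Per-item agreement: fraction of assessor pairs that agree
    p_i = (np.sum(ratings ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                          # mean observed agreement
    p_j = ratings.sum(axis=0) / (n_items * n_raters)  # category proportions
    p_e = np.sum(p_j ** 2)                      # agreement expected by chance
    return (p_bar - p_e) / (1.0 - p_e)

# Perfect agreement among 3 assessors on 2 items
print(fleiss_kappa([[3, 0], [0, 3]]))  # → 1.0
```

A kappa of 0.7634, as reported above, sits well above chance-level agreement, which is why the authors read it as evidence that crowd labelling is feasible.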

    On Varix, with Special Reference to Varicose Veins in the Lower Extremities and Varicocele


    Research and teaching staff in developing countries rate the value of libraries higher than in the West

    Nell McCreadie elaborates on the findings from a recent study exploring the value of academic libraries in developing countries. Case studies indicated a clear need for better promotion of resources, awareness raising and skills development. However, this is not just a case of internal promotion, but also one of developing external relationships with the scholarly community to promote advocacy for the library.

    Biogenetic-Type Syntheses in the Rosane Series

    The diterpenoid mould metabolites, rosenonolactone (29) and deoxyrosenonolactone (46), have been synthesised from isocupressic acid (128) by a route patterned on their known biogenetic pathway. The key step in the proposed synthetic sequence, conversion of a bicyclic labdane to a tricyclic rosane, was first perfected on model labdadienols which lack the C-19 carboxyl group necessary for subsequent lactone formation. On brief acid treatment the labdadienols (70), (91), (92) and (93) were cyclised to pimaradienes (87) and (88). The mechanistic and stereochemical details of this cyclisation were investigated. Upon prolonged acid treatment the initially formed pimaradienes were converted to the desired rosadienes (111) and (112). Using the experience gained in the model series, isocupressic acid was similarly transformed into the acid (134), which had the correct C-4, C-8, C-9 and C-13 configurations required for a synthesis of the metabolites. The remaining problems of lactonisation and, in the case of rosenonolactone, introduction of a C-7 carbonyl group were jointly overcome by a method which employed the epoxide (149) as a key intermediate. Information on the hydrocarbon products resulting from acid rearrangement of the pimaradienes (87) and (88) was obtained from a combined gas chromatographic-mass spectral analysis using deuterated derivatives. No tetracyclic structures were detected, and the principal product was shown to have the 'mixed' rosa-abietadiene skeleton (217).

    The Design and implementation of a virtual type specimen

    Typesetting and typography have become misunderstood, if not obsolete, terms due to the proliferation of computers. Since every computer user has access to a myriad of typefaces and the ability to modify type through software, the true qualities of many typefaces have become unrecognizable. Therefore, it is incumbent upon people to seek out the information necessary to be competent in the area of design and typography. A well-designed type specimen is the first place to start for a user to reference typographic terminology and rules. Technical characteristics of typefaces such as tracking, font metrics, leading and unique recognizable characters could prove to be an invaluable tool for many people, not just designers. In addition to being a subtle tutorial, a type specimen should also be educational. Computer technology has opened the door for many people and has increased their exposure to typefaces and typography. On the other hand, it has also led many to believe that by typing on the computer, they are setting type correctly. There are many rules and guidelines that should be followed to ensure the correct usage of type, both aesthetically and typographically. Everyone, from designers, printers and typesetters to the general public, needs to learn and practice these rules and this terminology to uphold the standards of typography that have evolved over the centuries. Type specimen books have been used for centuries, and due to the influx of technology, that same information is now easily accessible on a computer. Many designers were initially reluctant to use a specimen on the computer; it did not seem as 'pure' as a type specimen book. The computer would add glitz and glimmer while sacrificing the quality and quantity of information. But as more and more specimens became available on the computer, it became obvious that huge amounts of information could be included and continually updated, far more than could ever fit in a book.
A well-designed, user-friendly specimen could be an invaluable tool to a designer. Many specimens, while aesthetically pleasing, were not useful as a reference tool or educational aid. While everyone has their own reasons for wanting specific attributes in a specimen, these needs all begin to intermix and overlap, thus forming a complex entity. Designers, printers and typesetters may all have different needs, but they have one thing in common: everyone wants a usable and helpful type specimen book. An interactive type specimen is a necessary and valuable tool for any professional in the graphic industry. The ease of use, the ability to keep it updated, and the cohesive format make an interactive specimen useful to all levels of type users. Since printed material is outdated as soon as it is completed, an interactive piece has the flexibility to be continually updated, keeping users up to date on the trends, news and information regarding type and typography.

    On the Reproducibility and Generalisation of the Linear Transformation of Word Embeddings

    Linear transformation is a way to learn a linear relationship between two word embeddings, such that words in the two different embedding spaces can be semantically related. In this paper, we examine the reproducibility and generalisation of the linear transformation of word embeddings. Linear transformation is particularly useful when translating word embedding models in different languages, since it can capture the semantic relationships between two models. We first reproduce two linear transformation approaches, a recent one using orthogonal transformation and the original one using simple matrix transformation. Previous findings on a machine translation task are re-examined, validating that linear transformation is indeed an effective way to transform word embedding models in different languages. In particular, we show that the orthogonal transformation can better relate the different embedding models. Following the verification of previous findings, we then study the generalisation of linear transformation in a multi-language Twitter election classification task. We observe that the orthogonal transformation outperforms the matrix transformation. In particular, it significantly outperforms the random classifier by at least 10% under the F1 metric across English and Spanish datasets. In addition, we also provide best practices when using linear transformation for multi-language Twitter election classification.
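The two variants compared in this abstract can be sketched as follows: an unconstrained least-squares map versus an orthogonal (Procrustes-style) map between aligned embedding matrices. This is a minimal illustration under the assumption that the rows of X and Y are vectors for the same words in the source and target spaces; the synthetic data stands in for real word embeddings.

```python
import numpy as np

def matrix_transform(X, Y):
    """Unconstrained linear map W minimising ||X @ W - Y||_F (least squares)."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def orthogonal_transform(X, Y):
    """Orthogonal map W (W @ W.T = I) minimising ||X @ W - Y||_F,
    obtained from the SVD of X^T Y (orthogonal Procrustes solution)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Synthetic check: Y is an exact rotation of X, so the orthogonal
# solution should recover that rotation.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))            # 100 "words", 50-dim source space
R = np.linalg.qr(rng.normal(size=(50, 50)))[0]  # a true orthogonal map
Y = X @ R                                 # target space
W = orthogonal_transform(X, Y)
print(np.allclose(X @ W, Y))              # the map recovers the rotation
```

The orthogonality constraint preserves vector norms and pairwise angles in the mapped space, which is one intuition for why the orthogonal variant relates embedding models more faithfully than an unconstrained matrix.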

    A Study of Realtime Summarization Metrics

    Unexpected news events, such as natural disasters or other human tragedies, create a large volume of dynamic text data from official news media as well as less formal social media. Automatic real-time text summarization has become an important tool for quickly transforming this overabundance of text into clear, useful information for end-users, including affected individuals, crisis responders, and interested third parties. Despite the importance of real-time summarization systems, their evaluation is not well understood, as classic methods for text summarization are inappropriate for real-time and streaming conditions. The TREC 2013-2015 Temporal Summarization (TREC-TS) track was one of the first evaluation campaigns to tackle the challenges of real-time summarization evaluation, introducing new metrics, a ground-truth generation methodology and datasets. In this paper, we present a study of the TREC-TS track evaluation methodology, with the aim of documenting its design, analyzing its effectiveness, as well as identifying improvements and best practices for the evaluation of temporal summarization systems.

    Explicit diversification of event aspects for temporal summarization

    During major events, such as emergencies and disasters, a large volume of information is reported on newswire and social media platforms. Temporal summarization (TS) approaches are used to automatically produce concise overviews of such events by extracting text snippets from related articles over time. Current TS approaches rely on a combination of event relevance and textual novelty for snippet selection. However, for events that span multiple days, textual novelty is often a poor criterion for selecting snippets, since many snippets are textually unique but semantically redundant or non-informative. In this article, we propose a framework for the diversification of snippets using explicit event aspects, building on recent work in search result diversification. In particular, we first propose two techniques to identify explicit aspects that a user might want to see covered in a summary for different types of events. We then extend a state-of-the-art explicit diversification framework to maximize the coverage of these aspects when selecting summary snippets for unseen events. Through experimentation over the TREC TS 2013, 2014, and 2015 datasets, we show that explicit diversification for temporal summarization significantly outperforms classical novelty-based diversification, as the use of explicit event aspects reduces the amount of redundant and off-topic snippets returned, while also increasing summary timeliness.
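The core idea of aspect-based snippet selection can be sketched as a greedy loop that trades off relevance against coverage of not-yet-covered event aspects, in the general spirit of explicit search-result diversification. The scoring function, snippet ids, relevance scores, and aspect sets below are illustrative assumptions, not the paper's actual framework or data.

```python
def greedy_diversify(snippets, relevance, aspects, k, lam=0.5):
    """Greedily pick k snippets, balancing relevance against coverage
    of event aspects not yet covered by earlier selections.
    snippets: list of ids; relevance: id -> score in [0, 1];
    aspects: id -> set of aspect labels the snippet covers;
    lam: weight on aspect novelty versus raw relevance."""
    selected, covered = [], set()
    candidates = set(snippets)
    while candidates and len(selected) < k:
        def gain(s):
            # Fraction of this snippet's aspects that are still uncovered
            novelty = len(aspects[s] - covered) / max(len(aspects[s]), 1)
            return (1 - lam) * relevance[s] + lam * novelty
        best = max(candidates, key=gain)
        selected.append(best)
        covered |= aspects[best]
        candidates.remove(best)
    return selected

rel = {"a": 0.9, "b": 0.8, "c": 0.5}
asp = {"a": {"casualties"}, "b": {"casualties"}, "c": {"rescue"}}
# "c" beats the more relevant "b" once "casualties" is already covered
print(greedy_diversify(["a", "b", "c"], rel, asp, k=2))  # → ['a', 'c']
```

This illustrates why aspect coverage can outperform pure textual novelty: snippet "b" is textually distinct from "a" yet semantically redundant with it, and the aspect model penalises it accordingly.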