632 research outputs found

    Human factors and the WWW: making sense of URLs

    We present a study of how WWW users ‘make sense’ of URLs. Experiments were used to investigate users’ capacity to employ the URL as a surrogate for the resource to which it refers. The results show that users can infer useful information from URLs, but that such improvisation has shortcomings as a navigation aid.

    Political Homophily in Independence Movements: Analysing and Classifying Social Media Users by National Identity

    Social media and data mining are increasingly being used to analyse political and societal issues. Here we undertake the classification of social media users as supporting or opposing ongoing independence movements in their territories. Independence movements occur in territories whose citizens have conflicting national identities; users with opposing national identities will then support or oppose the sense of being part of an independent nation that differs from the officially recognised country. We describe a methodology that relies on users' self-reported location to build large-scale datasets for three territories – Catalonia, the Basque Country and Scotland. An analysis of these datasets shows that homophily plays an important role in determining whom people connect with, as users predominantly choose to follow and interact with others of the same national identity. We show that a classifier relying on users' follow networks can achieve accurate, language-independent classification, with performance ranging from 85% to 97% across the three territories. (Accepted for publication in IEEE Intelligent Systems.)
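    The homophily finding suggests a simple follow-network baseline, sketched below. This is a hypothetical illustration, not the paper's classifier: a user is labelled with the majority identity among the labelled "seed" accounts they follow; all account names and labels here are invented.

    ```python
    from collections import Counter

    def classify_by_follows(follows, seed_labels):
        """Label a user by the majority identity among the labelled seed
        accounts they follow; return None if they follow none of them."""
        votes = Counter(seed_labels[a] for a in follows if a in seed_labels)
        if not votes:
            return None
        return votes.most_common(1)[0][0]

    # Hypothetical seed accounts with known national-identity labels.
    seeds = {"acct_indy1": "pro", "acct_indy2": "pro", "acct_union1": "anti"}
    print(classify_by_follows({"acct_indy1", "acct_indy2", "other"}, seeds))  # pro
    ```

    A real system would learn weights over the follow vector rather than take a raw majority vote, but the homophily intuition is the same.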

    Pick-n-mix approaches to technology supply: XML as a standard “glue” linking universalised locals

    We report on our experiences in a participatory design project to develop ICTs in a hospital ward working with deliberate self-harm patients. This project involves the creation and constant re-creation of sociotechnical ensembles in which XML-related technologies may come to play vital roles. The importance of these technologies arises from the project's underlying aim of creating systems that are shaped in locally meaningful ways but reach beyond their immediate context to gain wider importance. We argue that XML is well placed to play the role of "glue" that binds multiple such systems together. We analyse the implications of localised systems development for technology supply and argue that the inscriptions evident in XML-related standards are, and will remain, very important for the uptake of XML technologies.
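    The "glue" role can be illustrated with a minimal sketch (the element names below are invented for illustration, not the project's actual schema): a locally authored record carries ward-specific content inside shared XML elements, which any other system can read with standard tooling.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical ward record: locally meaningful content wrapped in
    # shared, standard XML elements that other systems can process.
    local_record = """<record>
      <patient id="p01"/>
      <note type="self-harm-assessment">locally authored text</note>
    </record>"""

    root = ET.fromstring(local_record)
    note_type = root.find("note").get("type")
    print(note_type)  # self-harm-assessment
    ```

    The local system is free to shape the content of `note` however it likes; the shared vocabulary of element and attribute names is what lets the record travel beyond its immediate context.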

    Making the Most of Tweet-Inherent Features for Social Spam Detection on Twitter

    Social spam produces a great amount of noise on social media services such as Twitter, which reduces the signal-to-noise ratio that both end users and data mining applications observe. Existing techniques for social spam detection have focused primarily on the identification of spam accounts by using extensive historical and network-based data. In this paper we focus on the detection of spam tweets, which minimises the amount of data that needs to be gathered by relying only on tweet-inherent features. This enables the application of the spam detection system to a large set of tweets in a timely fashion, making it potentially applicable in a real-time or near real-time setting. Using two large hand-labelled datasets of tweets containing spam, we study the suitability of five classification algorithms and four different feature sets for the social spam detection task. Our results show that, by using the limited set of features readily available in a tweet, we can achieve encouraging results that are competitive when compared against existing spammer detection systems that make use of additional, costly user features. Our study is the first to attempt to generalise conclusions on the optimal classifiers and sets of features for social spam detection across different datasets.
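    The tweet-inherent idea can be sketched as a feature extractor that looks only at the tweet itself, with no user history or network data. The features below are illustrative examples of this kind of signal, not a reproduction of the paper's four feature sets.

    ```python
    import re

    def tweet_features(text):
        """Features computable from the tweet text alone (no user history)."""
        words = text.split()
        return {
            "n_chars": len(text),
            "n_words": len(words),
            "n_urls": len(re.findall(r"https?://\S+", text)),
            "n_hashtags": text.count("#"),
            "n_mentions": text.count("@"),
            "digit_ratio": sum(c.isdigit() for c in text) / max(len(text), 1),
        }

    f = tweet_features("Win FREE $$$ now http://spam.example #win #free")
    print(f["n_urls"], f["n_hashtags"])  # 1 2
    ```

    Because every feature is available the moment the tweet arrives, a classifier trained on such vectors can score tweets in a streaming setting without further API calls.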

    If you build it, will they come? How researchers perceive and use web 2.0

    Over the past 15 years, the web has transformed the way we seek and use information. In the last 5 years in particular a set of innovative techniques – collectively termed ‘web 2.0’ – have enabled people to become producers as well as consumers of information. It has been suggested that these relatively easy-to-use tools, and the behaviours which underpin their use, have enormous potential for scholarly researchers, enabling them to communicate their research and its findings more rapidly, broadly and effectively than ever before. This report is based on a study commissioned by the Research Information Network to investigate whether such aspirations are being realised. It seeks to improve our currently limited understanding of whether, and if so how, researchers are making use of various web 2.0 tools in the course of their work, the factors that encourage or inhibit adoption, and researchers’ attitudes towards web 2.0 and other forms of communication.

    Context: How researchers communicate their work and their findings varies across subjects and disciplines, and across institutional settings. Such differences have a strong influence on how researchers approach the adoption – or not – of new information and communications technologies. It is also important to stress that ‘web 2.0’ encompasses a wide range of interactions between technologies and social practices which allow web users to generate, repurpose and share content with each other. We focus in this study on a range of generic tools – wikis, blogs and some social networking systems – as well as those designed specifically by and for people within the scholarly community.

    Method: Our study was designed not only to capture current attitudes and patterns of adoption but also to identify researchers’ needs and aspirations, and the problems that they encounter. We began with an online survey, which collected information about researchers’ information-gathering and dissemination habits and their attitudes towards web 2.0. This was followed by in-depth, semi-structured interviews with a stratified sample of survey respondents to explore in more depth their experience of web 2.0, including perceived barriers as well as drivers to adoption. Finally, we undertook five case studies of web 2.0 services to investigate their development and adoption across different communities and business models.

    Key findings: Our study indicates that a majority of researchers are making at least occasional use of one or more web 2.0 tools or services for purposes related to their research: for communicating their work; for developing and sustaining networks and collaborations; or for finding out about what others are doing. But frequent or intensive use is rare, and some researchers regard blogs, wikis and other novel forms of communication as a waste of time or even dangerous. In deciding whether to make web 2.0 tools and services part of their everyday practice, the key questions for researchers are the benefits they may secure from doing so, and how these tools fit with their use of established services. Researchers who use web 2.0 tools and services do not see them as comparable to or substitutes for other channels and means of communication, but as having their own distinctive role for specific purposes and at particular stages of research. And frequent use of one kind of tool does not imply frequent use of others as well.

    Standardisation and innovation

    The paper discusses the relations that exist between standards on the one hand, and innovation and implementation on the other. We argue that these activities must not be considered separately, especially since standards-based components will play an increasingly important role in implementation processes.

    ATHENE: Assistive technologies for healthy living in elders: needs assessment by ethnography

    Numerous assistive technologies to support independent living – including personal alarms, mobile phones, self-monitoring devices, mobility aids, software apps and home adaptations – have been developed over the years, but their uptake by older people, especially those from minority ethnic groups, is poor. This paper outlines the ways in which the ATHENE project seeks to redress this situation by producing a richer understanding of the complex and diverse living experiences and care needs of older people, and by exploring how industry, the NHS, social services and the third sector can work with older people themselves to ‘co-produce’ useful and useable assisted living technology (ALT) designs that meet their needs. We provide an overview of the project methodology and discuss some of the issues it raises for the design and development process.

    Healthcare technologies and professional vision

    This paper presents details from an observational evaluation of a computer-assisted detection tool in mammography. The use of the tool, and its strengths and weaknesses, are documented, and its impact on readers' 'professional vision' (Goodwin 1994) is considered. The paper suggests issues for the design, use and, importantly, evaluation of new technologies in everyday medical work. It points to general issues concerning trust – users’ perceptions of the dependability of the evidence generated by such tools – and suggests that evaluations require an emphasis on the complex issue of what technologies afford their users in everyday work.

    Hidden work and the challenges of scalability and sustainability in ambulatory assisted living

    Assisted living technologies may help people live independently while also, potentially, reducing health and care costs. But they are notoriously difficult to implement at scale, and many devices are abandoned following initial adoption. We report findings from a study of global positioning system (GPS) tracking devices intended to support the independent living of people with cognitive impairment. Our aims were threefold: to understand (through ethnography) such individuals’ lived experience of GPS tracking; to facilitate (through action research) the customization and adaptation of technologies and care services to provide effective, ongoing support; and to explore the possibilities for a co-production methodology that would enable people with cognitive impairment and their families to work with professionals and technical designers to shape these devices and services to meet their particular needs in a sustainable way. We found that the articulation work needed to keep the GPS technology in “working order” was extensive and ongoing. This articulation work does not merely supplement formal procedures; much of it is needed to get round them. It is also often invisible, so its importance goes largely unrecognized. If GPS technologies are to be implemented at scale and sustainably, methods must be found to capitalize on the skills and tacit knowledge held within the care network (professional and lay) to resolve problems, improve device design, devise new service solutions, and foster organizational learning.

    Towards Real-Time, Country-Level Location Classification of Worldwide Tweets

    In contrast to much previous work that has focused on location classification of tweets restricted to a specific country, here we undertake the task in a broader context by classifying global tweets at the country level, which is so far unexplored in a real-time scenario. We analyse the extent to which a tweet's country of origin can be determined by making use of eight tweet-inherent features for classification. Furthermore, we use two datasets, collected a year apart from each other, to analyse the extent to which a model trained on historical tweets can still be leveraged for the classification of new tweets. With classification experiments on all 217 countries in our datasets, as well as on the top 25 countries, we offer some insights into the best use of tweet-inherent features for an accurate country-level classification of tweets. We find that the use of a single feature, such as tweet content alone – the most widely used feature in previous work – leaves much to be desired. Choosing an appropriate combination of both tweet content and metadata can lead to substantial improvements of between 20% and 50%. We observe that tweet content, the user's self-reported location and the user's real name, all of which are inherent in a tweet and available in a real-time scenario, are particularly useful for determining the country of origin. We also experiment on the applicability of a model trained on historical tweets to classify new tweets, finding that choosing a combination of features whose utility does not fade over time can lead to comparable performance, avoiding the need to retrain. However, the difficulty of achieving accurate classification increases slightly for countries with multiple commonalities, especially for English- and Spanish-speaking countries. Accepted for publication in IEEE Transactions on Knowledge and Data Engineering (IEEE TKDE).
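    The benefit of combining tweet content with metadata can be sketched as a simple vote across the three fields the abstract highlights. This is a toy illustration with an invented gazetteer, not the paper's classifier: each field contributes keyword matches, and the country with the most votes wins.

    ```python
    from collections import Counter

    def predict_country(tweet, gazetteer):
        """Vote across tweet content, self-reported location and user name
        (all available in real time), using a toy keyword gazetteer."""
        votes = Counter()
        for field in ("text", "user_location", "user_name"):
            for token in tweet.get(field, "").lower().split():
                token = token.strip(".,!?")
                if token in gazetteer:
                    votes[gazetteer[token]] += 1
        return votes.most_common(1)[0][0] if votes else None

    # Hypothetical gazetteer and tweet for illustration.
    gaz = {"glasgow": "GB", "edinburgh": "GB", "madrid": "ES"}
    tweet = {"text": "Sunny day in Glasgow", "user_location": "Edinburgh, UK"}
    print(predict_country(tweet, gaz))  # GB
    ```

    Here the content field alone gives one vote, but adding the self-reported location gives a second agreeing vote, mirroring the abstract's point that metadata fields reinforce content-based signals.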