
    StakeNet: using social networks to analyse the stakeholders of large-scale software projects

    Many software projects fail because they overlook stakeholders or involve the wrong representatives of significant groups. Unfortunately, existing methods in stakeholder analysis are likely to omit stakeholders and to treat all stakeholders as equally influential. To identify and prioritise stakeholders, we have developed StakeNet, which consists of three main steps: identify stakeholders and ask them to recommend other stakeholders and stakeholder roles; build a social network whose nodes are stakeholders and whose links are recommendations; and prioritise stakeholders using a variety of social network measures. To evaluate StakeNet, we conducted one of the first empirical studies of requirements stakeholders on a software project for a 30,000-user system. Using the data collected from surveying and interviewing 68 stakeholders, we show that StakeNet identifies stakeholders and their roles with high recall, and prioritises them accurately. StakeNet uncovers a critical stakeholder role overlooked in the project, whose omission significantly impacted project success.
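    The abstract gives only the three high-level steps, but the core idea (a recommendation network whose nodes are stakeholders, ranked with standard network measures) can be sketched roughly as below. The stakeholder names, the recommendation edges and the particular centrality measures are illustrative assumptions, not details taken from the paper.

```python
# Rough sketch of the StakeNet idea: build a directed graph from
# stakeholder recommendations and rank stakeholders with network measures.
# Names, edges and the chosen measures are illustrative, not from the paper.
import networkx as nx

# Each tuple is (recommender, recommended stakeholder) -- hypothetical data.
recommendations = [
    ("alice", "bob"), ("alice", "carol"),
    ("bob", "carol"), ("carol", "dave"),
    ("dave", "alice"), ("erin", "carol"),
]

G = nx.DiGraph()
G.add_edges_from(recommendations)

# Prioritise stakeholders with a few standard social network measures.
scores = {
    "in-degree": nx.in_degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "pagerank": nx.pagerank(G),
}

for measure, values in scores.items():
    ranked = sorted(values, key=values.get, reverse=True)
    print(measure, ranked)
```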

    StakeSource: harnessing the power of crowdsourcing and social networks in stakeholder analysis

    Projects often fail because they overlook stakeholders. Unfortunately, existing stakeholder analysis tools only capture stakeholders' information, relying on experts to manually identify them. StakeSource is a web-based tool that automates stakeholder analysis: it "crowdsources" the stakeholders themselves for recommendations about other stakeholders and aggregates their answers using social network analysis.

    Ocular hypertension in myopia: analysis of contrast sensitivity

    Purpose: we evaluated the evolution of contrast sensitivity reduction in patients affected by ocular hypertension and glaucoma, with low to moderate myopia. We also evaluated the relationship between contrast sensitivity and the mean deviation of the visual field.

    Material and methods: 158 patients (316 eyes), aged between 38 and 57 years, were enrolled and divided into 4 groups: emmetropes, myopes, myopes with ocular hypertension (IOP ≥ 21 ± 2 mmHg) and myopes with glaucoma. All patients underwent anamnestic and complete eye evaluation, tonometric curves with Goldmann's applanation tonometer, cup/disc ratio evaluation, gonioscopy with Goldmann's three-mirror lens, automated perimetry (Humphrey 30-2 full-threshold test) and contrast sensitivity evaluation with Pelli-Robson charts. A contrast sensitivity below 1.8 Logarithm of the Minimum Angle of Resolution (LogMAR) was considered abnormal.

    Results: contrast sensitivity was reduced in the group of myopes with ocular hypertension (1.788 LogMAR) and in the group of myopes with glaucoma (1.743 LogMAR), while it was preserved in the group of myopes (2.069 LogMAR) and in the group of emmetropes (1.990 LogMAR). We also found a strong correlation between contrast sensitivity reduction and the mean deviation of the visual field in myopes with glaucoma (correlation coefficient = 0.86) and in myopes with ocular hypertension (correlation coefficient = 0.78).

    Conclusions: contrast sensitivity assessment with the Pelli-Robson test should be performed in all patients with moderate myopia, ocular hypertension and an optic disc suspicious for glaucoma, as it may be useful in the early diagnosis of the disease.

    Introduction: contrast can be defined as the ability of the eye to discriminate differences in luminance between the stimulus and the background. Contrast sensitivity is the inverse of the minimal contrast necessary to make an object visible: the lower the threshold contrast, the greater the sensitivity, and vice versa. Contrast sensitivity is a fundamental aspect of vision, together with visual acuity: the latter defines the smallest spatial detail that the subject manages to discriminate under optimal conditions, but it only provides information about the size of the stimulus that the eye is capable of perceiving; the evaluation of contrast sensitivity instead provides information not obtainable from the measurement of visual acuity alone, as it establishes the minimum difference in luminance that must exist between the stimulus and its background for the retina to be adequately stimulated to perceive the stimulus. The clinical methods of examining contrast sensitivity (gratings, luminance gradients, variable-contrast optotype charts and low-contrast optotype charts) relate the two parameters on which the ability to perceive an object distinctly depends, namely the difference in luminance between two adjacent areas and the spatial frequency, which is linked to the size of the object. The measurement of contrast sensitivity becomes valuable in the diagnosis and follow-up of some important eye conditions such as glaucoma. Studies show that contrast sensitivity can be related to data obtained with visual perimetry, especially to perimetric damage of the central area and of the optic nerve head.
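    As a concrete illustration of the definition given in the introduction (contrast sensitivity as the inverse of the minimal contrast that makes a target visible), the short sketch below converts a hypothetical threshold luminance difference into a contrast sensitivity and its logarithm; the luminance values are made up for illustration and are not taken from the study.

```python
# Minimal illustration of the definition above: contrast sensitivity is the
# inverse of the smallest contrast at which a target is still visible.
# The luminance values below are invented for illustration.
import math

def weber_contrast(target_luminance, background_luminance):
    """Weber contrast: |L_target - L_background| / L_background."""
    return abs(target_luminance - background_luminance) / background_luminance

# Suppose the faintest letter a patient can read differs from the
# background by 2 cd/m^2 on a 100 cd/m^2 chart.
threshold_contrast = weber_contrast(98.0, 100.0)   # 0.02
sensitivity = 1.0 / threshold_contrast             # 50.0
log_sensitivity = math.log10(sensitivity)          # about 1.7 log units

print(f"threshold contrast: {threshold_contrast:.3f}")
print(f"contrast sensitivity: {sensitivity:.1f} (log10 = {log_sensitivity:.2f})")
```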

    Social interactions or business transactions? What customer reviews disclose about Airbnb marketplace

    Airbnb is one of the most successful examples of sharing economy marketplaces. With rapid and global market penetration, understanding its attractiveness and evolving growth opportunities is key to planning business decisions. There is an ongoing debate, for example, about whether Airbnb is a hospitality service that fosters social exchanges between hosts and guests, as the sharing economy manifesto originally stated, or whether it is (or is evolving into) a purely business transaction platform, the way hotels have traditionally operated. To answer these questions, we propose a novel market analysis approach that exploits customers' reviews. Key to the approach is a method that combines thematic analysis and machine learning to inductively develop a custom dictionary for guests' reviews. Based on this dictionary, we then use quantitative linguistic analysis on a corpus of 3.2 million reviews collected in 6 different cities, and illustrate how to answer a variety of market research questions at fine levels of temporal, thematic, user and spatial granularity, such as (i) how the business vs. social dichotomy is evolving over the years, (ii) which words within such top-level categories are evolving, (iii) whether such trends vary across different user segments and (iv) across different neighbourhoods.
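    The abstract does not spell out how the custom dictionary is applied; a minimal sketch of dictionary-based counting of review words is shown below. The categories, words and example reviews are invented for illustration and are not the dictionary developed in the paper.

```python
# Minimal sketch of dictionary-based linguistic analysis of reviews:
# count how many words in each review fall into "social" vs "business"
# categories. The dictionary and reviews are invented examples, not the
# custom dictionary inductively developed in the paper.
import re
from collections import Counter

dictionary = {
    "social": {"friendly", "welcomed", "chatted", "host", "family"},
    "business": {"check-in", "location", "clean", "price", "wifi"},
}

reviews = [
    "The host welcomed us and we chatted over coffee, very friendly.",
    "Great location, easy check-in, clean room and fast wifi.",
]

def categorise(review):
    """Count dictionary words per category in a single review."""
    tokens = re.findall(r"[a-z\-]+", review.lower())
    counts = Counter()
    for category, words in dictionary.items():
        counts[category] = sum(token in words for token in tokens)
    return counts

for review in reviews:
    print(categorise(review), "<-", review)
```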

    Analyzing and predicting the spatial penetration of Airbnb in U.S. cities

    In the hospitality industry, the room and apartment sharing platform Airbnb has been accused of unfair competition. Detractors have pointed out the chronic lack of proper legislation. Unfortunately, there is little quantitative evidence about Airbnb's spatial penetration upon which to base such legislation. In this study, we analyze Airbnb's spatial distribution in eight U.S. urban areas, in relation to geographic, socio-demographic, and economic information. We find that, despite being very different in terms of population composition, size, and wealth, all eight cities exhibit the same pattern: areas of high Airbnb presence are those occupied by the "talented and creative" classes, and those that are close to city centers. This result is so consistent that the accuracy of predicting Airbnb's spatial penetration is as high as 0.725.
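    The 0.725 figure refers to how accurately Airbnb presence can be predicted from area characteristics; the sketch below shows the general shape of such a prediction task on synthetic data. The feature names and the logistic regression model are assumptions for illustration, not the paper's actual variables or method.

```python
# Rough sketch of predicting area-level Airbnb presence from
# socio-demographic features. Features, data and model are synthetic
# illustrations; the paper's actual variables and method may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_areas = 500

# Hypothetical area-level features: share of "creative class" residents
# and distance to the city centre (km).
creative_share = rng.uniform(0, 1, n_areas)
distance_to_centre = rng.uniform(0, 20, n_areas)
X = np.column_stack([creative_share, distance_to_centre])

# Synthetic target: high Airbnb presence in creative, central areas.
signal = 3 * creative_share - 0.3 * distance_to_centre + rng.normal(0, 0.5, n_areas)
y = (signal > np.median(signal)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```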

    Online Popularity and Topical Interests through the Lens of Instagram

    Online socio-technical systems can be studied as a proxy of the real world to investigate human behavior and social interactions at scale. Here we focus on Instagram, a media-sharing online platform whose popularity has risen to hundreds of millions of users. Instagram exhibits a mixture of features including social structure, social tagging and media sharing. The network of social interactions among users models various dynamics, including follower/followee relations and users' communication by means of posts and comments. Users can upload and tag media such as photos and pictures, and they can "like" and comment on each piece of content on the platform. In this work we investigate three major aspects of our Instagram dataset: (i) the structural characteristics of its network of heterogeneous interactions, to unveil the emergence of self-organization and topically-induced community structure; (ii) the dynamics of content production and consumption, to understand how global trends and popular users emerge; (iii) the behavior of users labeling media with tags, to determine how they devote their attention and to explore the variety of their topical interests. Our analysis provides clues to understand human behavior dynamics on socio-technical systems, specifically users and content popularity, the mechanisms of users' interactions in online environments and how collective trends emerge from individuals' topical interests.
    Comment: 11 pages, 11 figures, Proceedings of ACM Hypertext 201
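    As an illustration of point (i), the sketch below builds a small user interaction graph and extracts its community structure with a standard modularity-based method; the edge list and the choice of algorithm are illustrative assumptions, not the paper's dataset or method.

```python
# Sketch of a community-structure analysis on a user interaction graph
# (e.g. built from comments or likes). The edges are invented examples.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

interactions = [
    ("u1", "u2"), ("u2", "u3"), ("u1", "u3"),   # one tight group
    ("u4", "u5"), ("u5", "u6"), ("u4", "u6"),   # another tight group
    ("u3", "u4"),                               # weak bridge between them
]

G = nx.Graph()
G.add_edges_from(interactions)

communities = greedy_modularity_communities(G)
for i, community in enumerate(communities):
    print(f"community {i}: {sorted(community)}")
```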

    BREXIT: Psychometric Profiling the Political Salubrious Through Machine Learning: Predicting personality traits of Boris Johnson through Twitter political text

    Whilst the CIA has been using psychometric profiling for decades, Cambridge Analytica showed that people's psychological characteristics can be accurately predicted from their digital footprints, such as their Facebook or Twitter accounts. To exploit this form of psychological assessment from digital footprints, we propose machine learning methods for assessing political personality from Twitter. We extracted the tweet content of Prime Minister Boris Johnson's Twitter account and built three predictive personality models based on his Twitter political content. We use a Multi-Layer Perceptron neural network, a multinomial Naive Bayes model and a Support Vector Machine model to predict the OCEAN model, which consists of the Big Five personality factors, from a sample of 3355 political tweets. The approach vectorizes political tweets using word vector representations as embeddings from spaCy, which are then used to feed a supervised classifier. We demonstrate the effectiveness of the approach by measuring the quality of the predictions for each trait per model. Our findings show that all three models predict the personality trait "Openness", with the Support Vector Machine model achieving the highest accuracy. "Extraversion" achieved the second-highest accuracy, predicted by the Multi-Layer Perceptron neural network and the Support Vector Machine model.
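    A minimal sketch of the pipeline described above is given below: tweets are turned into spaCy document vectors and fed to supervised classifiers. The tweets and trait labels are invented, the sketch assumes a spaCy model with word vectors (e.g. en_core_web_md) is installed, and only the SVM and MLP variants are shown.

```python
# Minimal sketch of the described pipeline: represent tweets with spaCy
# document vectors and train classifiers to predict a personality trait.
# Tweets and labels are invented; assumes a spaCy model with word vectors
# (e.g. "en_core_web_md") has been downloaded.
import numpy as np
import spacy
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

nlp = spacy.load("en_core_web_md")

tweets = [
    "We are getting Brexit done and unleashing Britain's potential.",
    "Huge thanks to everyone working in our NHS this winter.",
    "Fantastic to meet so many brilliant apprentices today.",
    "We will level up every region of the United Kingdom.",
]
# Hypothetical binary labels for one trait (e.g. high/low "Openness").
labels = np.array([1, 0, 1, 0])

# doc.vector averages the token word vectors into a single embedding.
X = np.vstack([nlp(tweet).vector for tweet in tweets])

for model in (SVC(kernel="linear"), MLPClassifier(max_iter=500)):
    model.fit(X, labels)
    print(type(model).__name__, "training accuracy:", model.score(X, labels))
```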

    The Digital Life of Walkable Streets

    Walkability has many health, environmental, and economic benefits. That is why web and mobile services have been offering ways of computing walkability scores for individual street segments. Those scores are generally computed from survey data and manual counting (even of trees). However, that is costly, owing to the high time, effort, and financial costs involved. To partly automate the computation of those scores, we explore the possibility of using social media data from Flickr and Foursquare to automatically identify safe and walkable streets. We find that unsafe streets tend to be photographed during the day, while walkable streets are tagged with walkability-related keywords. These results open up practical opportunities (for, e.g., room booking services, urban route recommenders, and real-estate sites) and have theoretical implications for researchers who might resort to using social media data to tackle previously unanswered questions in the area of walkability.
    Comment: 10 pages, 7 figures, Proceedings of the International World Wide Web Conference (WWW 2015)
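    The sketch below illustrates the kind of street-level features such an approach can derive from social media: the share of a street's photos taken during the day and the number of walkability-related tags. The photo records, tag list and daytime window are invented assumptions, not the study's data or features.

```python
# Sketch of deriving street-level features from geo-tagged photos:
# daytime photo share and walkability-related tag counts.
# Records, tags and the daytime window are invented examples.
from collections import defaultdict

WALKABILITY_TAGS = {"walk", "pedestrian", "stroll", "promenade"}

# (street, hour the photo was taken, set of user tags) -- hypothetical data.
photos = [
    ("high_street", 14, {"shopping", "walk"}),
    ("high_street", 11, {"stroll", "cafe"}),
    ("back_alley", 23, {"night"}),
    ("back_alley", 13, {"graffiti"}),
]

stats = defaultdict(lambda: {"photos": 0, "daytime": 0, "walk_tags": 0})
for street, hour, tags in photos:
    s = stats[street]
    s["photos"] += 1
    s["daytime"] += 7 <= hour < 19          # crude daytime window
    s["walk_tags"] += len(tags & WALKABILITY_TAGS)

for street, s in stats.items():
    print(street, "daytime share:", s["daytime"] / s["photos"],
          "walkability tags:", s["walk_tags"])
```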

    City form and well-being: what makes London neighborhoods good places to live?

    What is the relationship between urban form and citizens' well-being? In this paper, we propose a quantitative approach to help answer this question, inspired by theories developed within the fields of architecture and population health. The method extracts a rich set of metrics of urban form and well-being from openly accessible datasets. Using linear regression analysis, we identify a model that can explain 30% of the variance of well-being when applied to Greater London, UK. Outcomes of this research can inform the discussion on how to design cities that foster the well-being of their residents.
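    The sketch below shows the general shape of such an analysis: regress a well-being score on metrics of urban form and report the variance explained (R²). The metrics and data are synthetic illustrations, not the paper's datasets or model.

```python
# Sketch of regressing a well-being score on urban-form metrics and
# reporting the variance explained (R^2). Features and data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 400

# Hypothetical urban-form metrics per neighbourhood.
green_space = rng.uniform(0, 1, n)        # share of green space
street_density = rng.uniform(5, 30, n)    # intersections per km^2
X = np.column_stack([green_space, street_density])

# Synthetic well-being score, only partly explained by urban form.
wellbeing = 2 * green_space + 0.05 * street_density + rng.normal(0, 1, n)

model = LinearRegression().fit(X, wellbeing)
print("variance explained (R^2):", round(model.score(X, wellbeing), 2))
```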