
    360 Quantified Self

    Wearable devices with a wide range of sensors have contributed to the rise of the Quantified Self movement, where individuals log everything from the number of steps they have taken, to their heart rate, to their sleeping patterns. Sensors do not, however, typically sense the social and ambient environment of the users, such as general lifestyle attributes or information about their social network. This means that the users themselves, and the medical practitioners privy to the wearable sensor data, have only a narrow view of the individual, limited mainly to certain aspects of their physical condition. In this paper we describe a number of use cases for how social media can be used to complement check-up data and sensor data to gain a more holistic view of individuals' health, a perspective we call the 360 Quantified Self. Health-related information can be obtained from sources as diverse as food photo sharing, location check-ins, or profile pictures. Additionally, information from a person's ego network can shed light on the social dimension of wellbeing, which is widely acknowledged to be of utmost importance even though such signals are currently rarely used for medical diagnosis. We articulate a long-term vision describing the desirable list of technical advances and variety of data needed to achieve an integrated system encompassing Electronic Health Records (EHR), data from wearable devices, and information derived from social media data.
    Comment: QCRI Technical Report

    Design of Remote Datalogger Connection and Live Data Tweeting System

    Low-Impact Development (LID) is an attempt to sustainably respond to the potential hazards posed by urban expansion. Green roofs are an example of LID design meant to reduce the amount of runoff from storm events that are becoming more intense and less predictable, while also providing insulation to buildings. LID has not yet been widely adopted, as it is often a more expensive alternative to conventional infrastructure (Bowman et al., 2009). However, its benefits are apparent. The University of Arkansas Honors College awarded a grant to research the large green roof atop Hillside Auditorium. One part of this grant is aimed at educating the public on the benefits of LID infrastructure and encouraging its development. To accomplish this task, a Raspberry Pi was programmed to operate in tandem with a Campbell Scientific CR1000 datalogger to collect, organize, and tweet data to the public under the moniker "Rufus the Roof." It is believed that personifying the roof allows data to be conveyed in an entertaining manner that promotes education and public engagement in LID design. The Raspberry Pi was initially intended to collect data and publish tweets automatically on a live basis. However, automation was not realized due to time constraints and challenges in establishing a connection to the datalogger. Instead, a system was developed that allowed the remote transfer of environmental data files from a datalogger on the green roof. Along with the remote file transfer protocol, several Python scripts were written that enabled tweets to be published by the Raspberry Pi. The design was successful: manual remote file transfer and tweeting were achieved. Full automation remains to be achieved, but the Python scripts are built with the capability to operate automatically. The conditions are in place for future development of the project in order to achieve full autonomy.
A fully automated system could open the doors for more widespread public engagement in the value and benefits of Low-Impact Development initiatives.
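The collect-organize-tweet pipeline described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the project's actual scripts: the CSV column names (`timestamp`, `temperature`, `runoff`) and the tweet wording are invented for the example, and the final posting step (omitted here) would use a Twitter client library such as tweepy, run on a schedule for full automation.

```python
import csv
import io

def parse_latest_record(csv_text):
    """Return the most recent record from a datalogger CSV export.

    Assumes columns: timestamp, temperature (C), runoff (mm) --
    hypothetical field names standing in for the CR1000's table.
    """
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return rows[-1]

def compose_tweet(record):
    """Format a record as a short status update in Rufus's voice."""
    return ("Rufus the Roof here! At {timestamp} it was {temperature} C "
            "up top and I soaked up {runoff} mm of runoff.").format(**record)

# Example data file as it might arrive via remote file transfer
sample = """timestamp,temperature,runoff
2015-04-01 12:00,18.5,0.0
2015-04-01 13:00,19.2,1.4
"""
tweet = compose_tweet(parse_latest_record(sample))
# Posting `tweet` would then be a single client-library call,
# scheduled (e.g. via cron) to achieve the full automation goal.
```

Keeping parsing and message composition as pure functions, with the network call isolated at the end, is what lets the same scripts run manually today and automatically later.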

    Crisis Analytics: Big Data Driven Crisis Response

    Disasters have long been a scourge for humanity. With the advances in technology (in terms of computing, communications, and the ability to process and analyze big data), our ability to respond to disasters is at an inflection point. There is great optimism that big data tools can be leveraged to process the large amounts of crisis-related data (in the form of user-generated data in addition to traditional humanitarian data) to provide insight into the fast-changing situation and help drive an effective disaster response. This article introduces the history and the future of big crisis data analytics, along with a discussion of its promise, challenges, and pitfalls.

    CommuniSense: Crowdsourcing Road Hazards in Nairobi

    Nairobi is one of the fastest-growing metropolitan cities and a major business and technology powerhouse in Africa. However, Nairobi currently lacks monitoring technologies to obtain reliable data on traffic and road infrastructure conditions. In this paper, we investigate the use of mobile crowdsourcing as a means to gather and document Nairobi's road quality information. We first present the key findings of a city-wide road quality survey about the perception of existing road quality conditions in Nairobi. Based on the survey's findings, we then developed a mobile crowdsourcing application, called CommuniSense, to collect road quality data. The application serves as a tool for users to locate, describe, and photograph road hazards. We tested our application through a two-week field study amongst 30 participants to document various forms of road hazards from different areas in Nairobi. To verify the authenticity of user-contributed reports from our field study, we used online crowdsourcing via Amazon's Mechanical Turk (MTurk) to check whether submitted reports indeed depict road hazards. We found 92% of user-submitted reports to match the MTurkers' judgements. While our prototype was designed and tested in a specific city, our methodology is applicable to other developing cities.
    Comment: In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2015)
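The 92% figure is a simple agreement rate between user reports and MTurk verdicts. A minimal sketch of that computation, with hypothetical labels (the actual report counts and labels are not given in the abstract):

```python
def agreement_rate(report_labels, turk_labels):
    """Fraction of user-submitted reports whose label matches the
    MTurk judgement for the same report (True = depicts a road hazard)."""
    assert len(report_labels) == len(turk_labels)
    matches = sum(r == t for r, t in zip(report_labels, turk_labels))
    return matches / len(report_labels)

# Hypothetical verification outcome for 25 reports: 23 of the
# paired labels agree, giving the kind of 92% match reported.
reports = [True] * 23 + [False] * 2
turk    = [True] * 21 + [False] * 4
rate = agreement_rate(reports, turk)
```

In practice each report would be judged by several MTurk workers and the majority vote taken before computing agreement.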

    Demographic Inference and Representative Population Estimates from Multilingual Social Media Data

    Social media provide access to behavioural data at an unprecedented scale and granularity. However, using these data to understand phenomena in a broader population is difficult due to their non-representativeness and the bias of statistical inference tools towards dominant languages and groups. While demographic attribute inference could be used to mitigate such bias, current techniques are almost entirely monolingual and fail to work in a global environment. We address these challenges by combining multilingual demographic inference with post-stratification to create a more representative population sample. To learn demographic attributes, we create a new multimodal deep neural architecture for joint classification of age, gender, and organization-status of social media users that operates in 32 languages. This method substantially outperforms the current state of the art while also reducing algorithmic bias. To correct for sampling biases, we propose fully interpretable multilevel regression methods that estimate inclusion probabilities from inferred joint population counts and ground-truth population counts. In a large experiment over multilingual heterogeneous European regions, we show that our demographic inference and bias correction together allow for more accurate estimates of populations and make a significant step towards representative social sensing in downstream applications with multilingual social media.
    Comment: 12 pages, 10 figures, Proceedings of the 2019 World Wide Web Conference (WWW '19)
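The core of post-stratification is weighting each demographic cell by the inverse of its inclusion probability, estimated from inferred sample counts and ground-truth census counts. A toy sketch of that step (the cell keys and counts are invented; the paper's actual method uses multilevel regression rather than this raw ratio):

```python
def poststratification_weights(sample_counts, population_counts):
    """Compute inverse-inclusion-probability weights per demographic cell.

    sample_counts: inferred counts of social media users per cell
    population_counts: ground-truth census counts for the same cells
    """
    weights = {}
    for cell, n_pop in population_counts.items():
        n_sample = sample_counts.get(cell, 0)
        if n_sample > 0:
            # inclusion probability ~ n_sample / n_pop; weight is its inverse
            weights[cell] = n_pop / n_sample
    return weights

# Hypothetical (age, gender) cells: young users are heavily
# over-represented on the platform relative to the census.
sample = {("18-29", "F"): 400, ("18-29", "M"): 500, ("50+", "F"): 50}
census = {("18-29", "F"): 10000, ("18-29", "M"): 10000, ("50+", "F"): 15000}
w = poststratification_weights(sample, census)
# Under-represented cells (here 50+) receive much larger weights,
# so population estimates are corrected towards the census.
```

Smoothing the cell-level ratios with a multilevel regression, as the paper proposes, keeps weights stable for sparse cells where the raw ratio would be noisy.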

    Sensing Subjective Well-being from Social Media

    Subjective Well-being (SWB), which refers to how people experience the quality of their lives, is of great use to public policy-makers as well as to economic and sociological research. Traditionally, the measurement of SWB relies on time-consuming and costly self-report questionnaires. Nowadays, people are motivated to share their experiences and feelings on social media, so we propose to sense SWB from the vast user-generated data on social media. By utilizing 1785 users' social media data with SWB labels, we train machine learning models that are able to "sense" individual SWB from users' social media. Our model, which attains state-of-the-art prediction accuracy, can then be used to identify the SWB of large populations of social media users in a timely manner at very low cost.
    Comment: 12 pages, 1 figure, 2 tables, 10th International Conference, AMT 2014, Warsaw, Poland, August 11-14, 2014. Proceedings
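The supervised setup the abstract describes — features extracted from labelled users' posts, a model fit to their self-reported scores, then applied to unlabelled users — can be illustrated in miniature. Everything here is a stand-in: the positive-word lexicon, the single feature, and the plain least-squares fit are far simpler than the paper's models.

```python
def text_features(posts):
    """Toy feature: fraction of a user's posts containing a positive word.
    The lexicon is a hypothetical stand-in for richer text features."""
    positive = {"happy", "great", "love", "good"}
    hits = sum(any(w in positive for w in p.lower().split()) for p in posts)
    return hits / len(posts)

def fit_least_squares(xs, ys):
    """Fit y = a*x + b by ordinary least squares (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical labelled users: (posts, self-reported SWB score)
users = [
    (["love this sunny day", "great dinner"], 8.0),
    (["traffic again", "so tired"], 3.0),
    (["good morning", "meh"], 6.0),
]
xs = [text_features(posts) for posts, _ in users]
ys = [swb for _, swb in users]
a, b = fit_least_squares(xs, ys)

def predict(posts):
    """Estimate SWB for a new, unlabelled user's posts."""
    return a * text_features(posts) + b
```

Once fitted, `predict` can be run over arbitrarily many users, which is what makes the approach cheap relative to questionnaires.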

    Program your city: Designing an urban integrated open data API

    Cities accumulate and distribute vast sets of digital information. Many decision-making and planning processes in councils, local governments and organisations are based on both real-time and historical data. Until recently, only a small, carefully selected subset of this information has been released to the public – usually for specific purposes (e.g. train timetables, the release of planning applications through websites, to name just a few). This situation is however changing rapidly. Regulatory frameworks, such as the Freedom of Information legislation in the US, the UK, the European Union and many other countries, guarantee public access to data held by the state. One of the results of this legislation and changing attitudes towards open data has been the widespread release of public information as part of recent Government 2.0 initiatives. This includes the creation of public data catalogues such as data.gov (US), data.gov.uk (UK), data.gov.au (Australia) at federal government levels, and datasf.org (San Francisco) and data.london.gov.uk (London) at municipal levels. The release of this data has opened up the possibility of a wide range of future applications and services which are now the subject of intensified research efforts. Previous research endeavours have explored the creation of specialised tools to aid decision-making by urban citizens, councils and other stakeholders (Calabrese, Kloeckl & Ratti, 2008; Paulos, Honicky & Hooker, 2009). While these initiatives represent an important step towards open data, they too often result in mere collections of data repositories. Proprietary database formats and the lack of an open application programming interface (API) limit the full potential achievable by allowing these data sets to be cross-queried. Our research, presented in this paper, looks beyond the pure release of data.
It is concerned with three essential questions: First, how can data from different sources be integrated into a consistent framework and made accessible? Second, how can ordinary citizens be supported in easily composing data from different sources in order to address their specific problems? Third, what are interfaces that make it easy for citizens to interact with data in an urban environment, and how can data be accessed and collected?
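The cross-querying that the abstract says siloed repositories prevent can be made concrete with a small sketch. The dataset names, fields, and suburbs below are invented; the point is only that once sources share a common key through an integrated API, joining them becomes a one-liner for the citizen-developer.

```python
def cross_query(datasets, predicate):
    """Join records from multiple open-data sources that share a common
    key (here 'suburb') and keep the joined rows matching a predicate.
    Illustrates the cross-querying a unified open data API would enable."""
    by_key = {}
    for name, records in datasets.items():
        for rec in records:
            by_key.setdefault(rec["suburb"], {})[name] = rec
    return {k: v for k, v in by_key.items()
            if len(v) == len(datasets) and predicate(v)}

# Two hypothetical municipal datasets keyed by suburb
transport = [{"suburb": "Kelvin Grove", "trains_per_hour": 6},
             {"suburb": "St Lucia", "trains_per_hour": 0}]
planning = [{"suburb": "Kelvin Grove", "open_applications": 12},
            {"suburb": "St Lucia", "open_applications": 3}]

# "Which suburbs with rail service have pending planning applications?"
result = cross_query(
    {"transport": transport, "planning": planning},
    lambda v: v["transport"]["trains_per_hour"] > 0)
```

With proprietary formats and no shared API, each of these sources would instead require its own parser before any such question could be asked.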