
    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as the accelerometer, gyroscope, microphone, and camera. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors in the sensing process, since they become fundamental in retrieving reliable and up-to-date information about the event being monitored. Because humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).

    Intelligent Techniques to Accelerate Everyday Text Communication

    People with some form of speech or motor impairment usually use a high-tech augmentative and alternative communication (AAC) device to communicate with other people in writing or in face-to-face conversations. Their text entry rate on these devices is slow due to their motor abilities. Making good letter or word predictions can help accelerate the communication of such users. In this dissertation, we investigated several approaches to accelerate input for AAC users. First, considering an AAC user participating in a face-to-face conversation, we investigated whether performing speech recognition on the speaking side can improve next-word predictions. We compared the accuracy of three plausible microphone deployment options and the accuracy of two commercial speech recognition engines. We found that despite recognition word error rates of 7-16%, our ensemble of n-gram and recurrent neural network language models made predictions nearly as good as when they used the reference transcripts. In a user study with 160 participants, we also found that increasing the number of prediction slots in a keyboard interface does not necessarily improve performance. Second, typing every character in a text message may require more time or effort from an AAC user than strictly necessary. Skipping spaces or other characters may speed input and reduce an AAC user's physical input effort. We designed a recognizer optimized for expanding noisy abbreviated input in which users often omitted spaces and mid-word vowels. We showed that using neural language models to select conversational-style training text and to rescore the recognizer's n-best sentences improved accuracy. We found accurate abbreviated input was possible even if a third of the characters were omitted. In a study where users had to dwell for a second on each key, we found sentence-level abbreviated input was competitive with a conventional keyboard with word predictions.
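The abbreviation scheme described above can be illustrated in the forward direction: omit spaces and mid-word vowels, keeping each word's first letter. The sketch below is a hypothetical illustration of that encoding only; the dissertation's actual recognizer solves the much harder inverse problem with a noisy-channel search and neural language model rescoring.

```python
# Hypothetical sketch of the abbreviation style described above:
# omit spaces and mid-word vowels, keeping each word's first letter.
VOWELS = set("aeiou")

def abbreviate(sentence):
    """Return the abbreviated form of a sentence."""
    out = []
    for word in sentence.lower().split():
        # keep the first character; drop vowels elsewhere in the word
        out.append(word[0] + "".join(c for c in word[1:] if c not in VOWELS))
    return "".join(out)  # spaces are omitted as well

# e.g. abbreviate("see you tomorrow") -> "sytmrrw"
```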
Finally, AAC keyboards rely on language modeling to auto-correct noisy typing and to offer word predictions. While today's language models can be trained on huge amounts of text, pre-trained models may fail to capture the unique writing style and vocabulary of individual users. We demonstrated improved performance compared to a unigram cache by adapting to a user's text via language models based on prediction by partial match (PPM) and recurrent neural networks. Our best model ensemble increased keystroke savings by 9.6%.
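A minimal sketch of the cache-style adaptation idea, assuming a base model exposed as a word-to-probability dict; the function name, the interpolation weight, and the simple count-based cache are illustrative stand-ins for the dissertation's PPM and recurrent neural network models.

```python
from collections import Counter

def blended_prediction(base_probs, user_history, lam=0.3, top_k=3):
    """Rank word predictions from a base model interpolated with a user cache.

    base_probs   -- dict mapping word -> probability from a pre-trained model
    user_history -- list of words the user has already typed
    lam          -- weight given to the user-specific cache model
    """
    cache = Counter(user_history)
    total = sum(cache.values()) or 1  # avoid division by zero for new users
    scores = {
        word: (1 - lam) * p + lam * cache[word] / total
        for word, p in base_probs.items()
    }
    # highest-scoring words first
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Words the user types often are boosted above the base model's ranking, which is the general effect behind adaptation-driven keystroke savings.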

    Hydrolink 4/2022. Citizen science

    Topic: Citizen Science.

    Challenges and opportunities to develop a smart city: A case study of Gold Coast, Australia

    With the rapid growth of information and communication technologies, there is a growing interest in developing smart cities with a focus on the knowledge economy and the use of sensors and mobile technologies to plan and manage cities. Proponents argue that these emerging technologies have potential applications in efficiently managing the environment and infrastructure, promoting economic development and actively engaging the public, thus contributing to building safe, healthy, sustainable and resilient cities. However, are there other important elements, in addition to technologies, that can contribute to the creation of smart cities? What are some of the challenges and opportunities in developing a smart city? This paper aims to answer these questions by developing a conceptual framework for smart cities. The framework is then applied to the city of Gold Coast to identify challenges and opportunities for developing the city into a ‘smart city’. Gold Coast is a popular tourist city with a population of about 600,000 in South East Queensland, Australia, at the southern end of the 240 km long coastal conurbation centred on Brisbane. Recently, IBM nominated Gold Coast as one of three cities in Australia for its Smarter Cities Challenge Grant. The grant will provide the Gold Coast City Council with the opportunity to collaborate with a group of experts from IBM to develop strategies for enhancing its ICT arrangements for disaster response capabilities. Gold Coast, meanwhile, has the potential to diversify its economy from one centred on tourism to a knowledge economy focused on its educational institutions, investments in cultural precincts and high-quality lifestyle amenities. These provide a unique opportunity for building Gold Coast into an important smart city in the region. As part of the research methodology, the paper reviews relevant policies of the council.
Finally, lessons are drawn from the case study for other cities which seek to establish themselves as smart cities.

    Motivational Principles and Personalisation Needs for Geo-Crowdsourced Intangible Cultural Heritage Mobile Applications

    Whether for altruistic reasons, personal gain, or third parties’ interests, users are influenced by different kinds of motivations when using mobile geo-crowdsourcing applications (geoCAs). These reasons, extrinsic and/or intrinsic, must be factored in when evaluating the use intention of these applications and how effective they are. A functional geoCA, particularly one designed for Volunteered Geographic Information (VGI), is one that persuades and engages its users by accounting for their diversity of needs over time. This paper explores a number of proven and novel motivational factors for the preservation and collection of Intangible Cultural Heritage (ICH) through geoCAs. By providing an overview of personalisation research and digital behaviour interventions for geo-crowdsourced ICH, the paper examines the most relevant usability and trigger factors for different crowd users, supported by a range of technology-based principles. In addition, we present the case of StoryBee, a mobile geoCA designed for “crafting stories” by collecting and sharing user-generated content based on users’ locations and favourite places. We conclude with an open-ended discussion of the ongoing challenges and opportunities arising from the deployment of geoCAs for ICH.

    Modeling users interacting with smart devices


    Spatial and Temporal Sentiment Analysis of Twitter data

    The public worldwide uses Twitter to express opinions. This study focuses on the spatio-temporal variation of georeferenced Tweets’ sentiment polarity, with a view to understanding how opinions evolve on Twitter over space and time and across communities of users. More specifically, the question this study tested is whether sentiment polarity on Twitter exhibits specific time-location patterns. The aim of the study is to investigate the spatial and temporal distribution of georeferenced Twitter sentiment polarity within a 1 km buffer around the Curtin Bentley campus boundary in Perth, Western Australia. Tweets posted on campus were assigned to six spatial zones and four time periods. A sentiment analysis was then conducted for each zone using the sentiment analyser tool in the Starlight Visual Information System software. The Feature Manipulation Engine was employed to convert non-spatial files into spatial and temporal feature classes. The spatial and temporal distribution of Twitter sentiment polarity patterns was mapped using Geographic Information Systems (GIS). Some interesting results were identified. For example, the highest percentage of positive Tweets occurred in the social science area, while the science and engineering and dormitory areas had the highest percentages of negative postings. The number of negative Tweets increased in the library and science and engineering areas as the end of the semester approached, reaching a peak around the exam period, while the percentage of negative Tweets dropped at the end of the semester in the entertainment and sport and dormitory areas. This study provides some insights into understanding students’ and staff’s sentiment variation on Twitter, which could be useful for university teaching and learning management.
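The zone-level percentages described above amount to a simple group-and-normalise over labelled tweets. The sketch below assumes tweets already carry a zone assignment and a sentiment label; the zone names and labels in the usage test are illustrative, not the study's data.

```python
from collections import defaultdict

def sentiment_by_zone(tweets):
    """tweets: iterable of (zone, label) pairs, label in {'pos', 'neg', 'neu'}.
    Returns {zone: {label: percentage of that zone's tweets}}."""
    counts = defaultdict(lambda: defaultdict(int))
    for zone, label in tweets:
        counts[zone][label] += 1
    return {
        zone: {label: 100.0 * n / sum(labels.values())
               for label, n in labels.items()}
        for zone, labels in counts.items()
    }
```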
