1,589 research outputs found

    Quality of Information in Mobile Crowdsensing: Survey and Research Challenges

    Smartphones have become the most pervasive devices in people's lives, and are clearly transforming the way we live and perceive technology. Today's smartphones benefit from almost ubiquitous Internet connectivity and come equipped with a plethora of inexpensive yet powerful embedded sensors, such as accelerometers, gyroscopes, microphones, and cameras. This unique combination has enabled revolutionary applications based on the mobile crowdsensing paradigm, such as real-time road traffic monitoring, air and noise pollution monitoring, crime control, and wildlife monitoring, to name a few. Unlike prior sensing paradigms, humans are now the primary actors of the sensing process, since they are essential to retrieving reliable and up-to-date information about the event being monitored. As humans may behave unreliably or maliciously, assessing and guaranteeing Quality of Information (QoI) becomes more important than ever. In this paper, we provide a new framework for defining and enforcing QoI in mobile crowdsensing, and analyze in depth the current state of the art on the topic. We also outline novel research challenges, along with possible directions for future work. Comment: To appear in ACM Transactions on Sensor Networks (TOSN).
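    The survey treats QoI conceptually rather than as code, but a minimal sketch helps make the idea concrete. The snippet below fuses crowdsensed readings while discounting stale and low-trust contributions; the Reading fields, the half-life decay, and the aggregate_qoi function are illustrative assumptions, not the framework defined in the paper.

```python
# Minimal sketch of reputation-weighted QoI aggregation for crowdsensed
# readings. Reading, aggregate_qoi, and the half-life decay are
# illustrative assumptions, not the framework defined in the paper.
from dataclasses import dataclass

@dataclass
class Reading:
    value: float        # sensed quantity, e.g. noise level in dB
    reliability: float  # contributor trust score in [0, 1]
    age_s: float        # seconds since the reading was taken

def aggregate_qoi(readings, half_life_s=300.0):
    """Fuse readings, discounting stale and low-trust contributions."""
    num = den = 0.0
    for r in readings:
        freshness = 0.5 ** (r.age_s / half_life_s)  # exponential staleness decay
        weight = r.reliability * freshness
        num += weight * r.value
        den += weight
    estimate = num / den if den else None
    return estimate, den  # den doubles as a crude confidence score

est, conf = aggregate_qoi([Reading(62.0, 0.9, 30.0), Reading(80.0, 0.2, 600.0)])
```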

    Crisis Analytics: Big Data Driven Crisis Response

    Disasters have long been a scourge for humanity. With advances in technology (in computing, communications, and the ability to process and analyze big data), our ability to respond to disasters is at an inflection point. There is great optimism that big data tools can be leveraged to process the large amounts of crisis-related data (user-generated data in addition to traditional humanitarian data) to provide insight into the fast-changing situation and help drive an effective disaster response. This article introduces the history and the future of big crisis data analytics, along with a discussion of its promise, challenges, and pitfalls.
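    As a concrete, if simplified, illustration of the kind of processing the article alludes to, the sketch below screens user-generated messages for crisis relevance before any heavier analysis. The keyword list, the message format, and the is_crisis_related function are invented for illustration, not taken from the article.

```python
# Hedged sketch: a cheap first-pass keyword screen over user-generated
# messages, of the sort a big-crisis-data pipeline might run before
# heavier NLP/ML stages. Keywords and messages are invented examples.
CRISIS_TERMS = {"flood", "earthquake", "trapped", "evacuate", "aftershock"}

def is_crisis_related(message: str) -> bool:
    """Return True if the message contains any crisis keyword."""
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    return bool(tokens & CRISIS_TERMS)

stream = ["Bridge closed, evacuate north side now", "Lunch was great today"]
urgent = [m for m in stream if is_crisis_related(m)]  # keeps only the first
```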

    guifi.net, a crowdsourced network infrastructure held in common

    The expression “crowdsourced computer networks” refers to network infrastructure built by citizens and organisations that pool their resources and coordinate their efforts to make these networks happen. “Community networks” are a subset of crowdsourced networks that are structured to be open, free, and neutral. In these communities the infrastructure is established by the participants and is managed as a common resource. Many crowdsourcing initiatives have flourished in community networks. This paper discusses guifi.net, a successful community network used daily by thousands of participants, focusing on its principles; the crowdsourcing processes and tools developed within the community and the role they play in the guifi.net ecosystem; the current status of its implementation; its measurable local impact; and the lessons learned in more than a decade.

    Cellular LTE and Solar Energy Harvesting for Long-Term, Reliable Urban Sensor Networks: Challenges and Opportunities

    In a world driven by data, cities are increasingly interested in deploying networks of smart city devices for urban and environmental monitoring. To be successful, these networks must be reliable, scalable, real-time, low-cost, and easy to install and maintain -- criteria that are all significantly affected by the design choices around connectivity and power. LTE networks and solar energy can seemingly both satisfy these criteria and are often used in real-world sensor network deployments. However, there have not been extensive real-world studies examining how well such networks perform and the challenges they encounter in urban settings over long periods. In this work, we analyze the performance of a stationary 118-node LTE-connected, solar-powered sensor network over one year in Chicago. Results show the promise of LTE networks and solar panels for city-wide IoT deployments, but also reveal areas for improvement. Notably, we find 11 sites with inadequate received signal strength (RSS) to support sensing nodes and over 33,000 hours of data loss due to solar energy availability issues between October and March. Furthermore, we discover that the neighborhoods most affected by connectivity and charging issues are socioeconomically disadvantaged areas with majority Black and Latine residents. This work presents observations on the urban sensor network from a networking and power perspective to help drive reliable, scalable future smart city deployments. The work also analyzes the impact of land use, adaptive energy harvesting management strategies, and shortcomings of open data, supporting the need for increased real-world deployments that ensure the design of equitable smart city networks.
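    To make the adaptive energy harvesting management the abstract mentions more concrete, here is a minimal sketch of one plausible policy: scaling a node's sensing duty cycle to its battery state and a harvest forecast. The duty_cycle function, its inputs, and the thresholds are assumptions for illustration, not values or methods from the Chicago deployment.

```python
# Minimal sketch of an adaptive duty-cycling policy: throttle sensing when
# the battery is low or the harvest forecast is poor. Thresholds and inputs
# are assumptions, not values from the Chicago deployment.
def duty_cycle(battery_frac: float, forecast_wh: float, load_wh: float) -> float:
    """Fraction of each hour the node should spend sensing/transmitting."""
    if battery_frac < 0.15:
        return 0.05                    # survival mode: heartbeat traffic only
    budget = forecast_wh / load_wh     # hours of full operation the harvest buys
    return max(0.05, min(1.0, budget * battery_frac))

print(duty_cycle(0.4, 2.0, 10.0))   # winter day, weak harvest -> ~0.08
print(duty_cycle(0.9, 30.0, 10.0))  # summer day, strong harvest -> 1.0
```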

    Validating generic metrics of fairness in game-based resource allocation scenarios with crowdsourced annotations

    Being able to effectively measure the notion of fairness is of vital importance, as it can provide insight into the formation and evolution of complex patterns and phenomena such as social preferences, collaboration, group structures, and social conflicts. This paper presents a comparative study for quantitatively modelling the notion of fairness in one-to-many resource allocation scenarios, i.e. scenarios in which one provider agent has to allocate resources to multiple receiver agents. For this purpose, we investigate the efficacy of six metrics and cross-validate them on crowdsourced human ranks of fairness annotated through a computer game implementation of the one-to-many resource allocation scenario. Four of the fairness metrics examined are well-established metrics of data dispersion, namely standard deviation, normalised entropy, the Gini coefficient, and the fairness index. The fifth metric, proposed by the authors, is an ad-hoc, context-based measure built on key aspects of distribution strategies. The sixth metric, finally, is machine learned via ranking support vector machines (SVMs) on the crowdsourced human perceptions of fairness. Results suggest that all ad-hoc designed metrics correlate well with the human notion of fairness, and that the context-based metrics we propose have a predictability advantage over the other ad-hoc metrics. On the other hand, the normalised entropy and fairness index metrics appear to be the most expressive and generic for measuring fairness in the scenario adopted in this study and beyond. The SVM model can automatically model fairness more accurately than any ad-hoc metric examined (with an accuracy of 81.86%), but it is limited in its expressivity and generalisability.
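    The four dispersion-based metrics are standard enough to sketch directly. The snippet below computes them for a single allocation vector; note the assumption that the paper's "fairness index" is Jain's index, and that the authors' context-based and SVM-learned metrics are not reproduced here.

```python
# Sketch of the four dispersion-based fairness metrics on one allocation
# vector. "Fairness index" is assumed to be Jain's index; the paper's
# context-based and SVM-learned metrics are not reproduced here.
import math

def fairness_metrics(alloc):
    n, total = len(alloc), sum(alloc)
    mean = total / n
    std = math.sqrt(sum((x - mean) ** 2 for x in alloc) / n)
    shares = [x / total for x in alloc]
    entropy = -sum(p * math.log(p) for p in shares if p > 0) / math.log(n)
    gini = sum(abs(a - b) for a in alloc for b in alloc) / (2 * n * total)
    jain = total ** 2 / (n * sum(x ** 2 for x in alloc))
    return {"std": std, "norm_entropy": entropy, "gini": gini, "jain": jain}

print(fairness_metrics([5, 5, 5, 5]))   # perfectly fair: entropy = jain = 1
print(fairness_metrics([17, 1, 1, 1]))  # skewed: gini rises, entropy and jain drop
```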