
    Task bundling in worker‐centric mobile crowdsensing

    Most existing research on task allocation in mobile crowdsensing focuses on requester-centric mobile crowdsensing (RCMCS), where the requester assigns tasks to workers so as to maximize the requester's benefit. A worker in RCMCS may suffer a loss of benefit because the tasks assigned to him/her may not maximize his/her benefit. By contrast, worker-centric mobile crowdsensing (WCMCS), where workers autonomously select the tasks they accomplish to maximize their own benefits, has not received enough attention. The workers in WCMCS can maximize their benefits, but the requester suffers a loss of benefit, since the number of expected completed tasks is not maximized. Maximizing the number of expected completed tasks in WCMCS is hard because some tasks may be selected by no workers, while others are selected by many. In this paper, we apply task bundling to address this issue, and we formulate a novel task bundling problem in WCMCS with the objective of maximizing the number of expected completed tasks. To solve this problem, we design an algorithm named LocTrajBundling, which bundles tasks based on the locations of tasks and the trajectories of workers. Experimental results show that, compared with other algorithms, our algorithm achieves better performance in maximizing the number of expected completed tasks.
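    The abstract does not specify LocTrajBundling itself; as a hedged illustration of the general idea of location- and trajectory-based bundling, a minimal greedy sketch might look like the following (all function names, parameters, and the bundling policy are my own assumptions, not the paper's algorithm):

    ```python
    import math

    def dist(a, b):
        """Euclidean distance between two (x, y) points."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def bundle_tasks(tasks, trajectory, bundle_size=3, radius=2.0):
        """Greedily bundle tasks that lie near a worker's trajectory.

        tasks: dict task_id -> (x, y) location.
        trajectory: ordered list of (x, y) waypoints for one worker.
        Walk the trajectory; at each waypoint, pick still-unassigned
        tasks within `radius` (nearest first) and group them into
        bundles of up to `bundle_size` tasks.
        """
        remaining = dict(tasks)
        bundles, current = [], []
        for wp in trajectory:
            near = sorted((t for t in remaining if dist(remaining[t], wp) <= radius),
                          key=lambda t: dist(remaining[t], wp))
            for t in near:
                current.append(t)
                del remaining[t]
                if len(current) == bundle_size:
                    bundles.append(current)
                    current = []
        if current:
            bundles.append(current)
        return bundles
    ```

    For example, a worker moving from (0, 0) to (5, 5) would collect the tasks near those two waypoints into successive bundles, leaving far-away tasks unassigned for other workers.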

    TASKer: Behavioral insights via campus-based experimental mobile crowd-sourcing

    National Research Foundation (NRF) Singapore under International Research Centres in Singapore Funding Initiative

    Collaboration trumps homophily in urban mobile crowd-sourcing

    National Research Foundation (NRF) Singapore under IDM Futures Funding Initiative

    Designing a Crowd-Based Relocation System—The Case of Car-Sharing

    Car-sharing services promise environmentally sustainable and cost-efficient alternatives to private car ownership, contributing to more sustainable mobility. However, the challenge of balancing vehicle supply and demand needs to be addressed for further improvement of the service. Currently, employees must relocate vehicles from low-demand to high-demand areas, which generates extra personnel costs, driven kilometers, and emissions. This study takes a Design Science Research (DSR) approach to develop a new way of balancing the supply and demand of vehicles in car-sharing, namely crowd-based relocation. We base our approach on crowdsourcing, a concept by which customers are requested to perform vehicle relocations. This paper reports on our comprehensive DSR project on designing and instantiating a crowd-based relocation information system (CRIS). We assessed the resulting artifact in a car-sharing simulation and conducted a field test in a real-world car-sharing service system. The evaluation reveals that CRIS has the potential to improve vehicle availability, increase environmental sustainability, and reduce operational costs. Further, the prescriptive knowledge derived in our DSR project can be used as a starting point to improve individual parts of the CRIS and to extend its application beyond car-sharing into other sharing services, such as power bank- or e-scooter-sharing.
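    The core balancing step behind crowd-based relocation (moving vehicles from surplus zones to deficit zones) can be sketched very simply. The following is a hypothetical illustration only, assuming per-zone current and target vehicle counts; it is not the paper's CRIS design:

    ```python
    def relocation_offers(supply, target):
        """Pair surplus zones with deficit zones into unit relocation offers.

        supply: zone -> current vehicle count.
        target: zone -> desired vehicle count.
        Returns (from_zone, to_zone) pairs, one per vehicle to move,
        always matching the currently largest surplus with the
        currently largest deficit.
        """
        surplus = {z: supply[z] - target[z] for z in supply if supply[z] > target[z]}
        deficit = {z: target[z] - supply[z] for z in supply if supply[z] < target[z]}
        offers = []
        while surplus and deficit:
            s = max(surplus, key=surplus.get)   # zone with most spare vehicles
            d = max(deficit, key=deficit.get)   # zone most short of vehicles
            offers.append((s, d))
            surplus[s] -= 1
            deficit[d] -= 1
            if surplus[s] == 0:
                del surplus[s]
            if deficit[d] == 0:
                del deficit[d]
        return offers
    ```

    Each returned pair could then be turned into a crowdsourced offer (e.g., a discounted trip) shown to customers near the surplus zone.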

    Scalable urban mobile crowdsourcing: Handling uncertainty in worker movement

    National Research Foundation (NRF) Singapore under International Research Centres in Singapore Funding Initiative

    Recommending personalized schedules in urban environments


    21st Century Cottage Industry - A cross-case synthesis of freelancer intermediary platforms

    The purpose of this study was to identify possible archetypes of freelancer intermediary platforms. Though there is growing interest in platforms, their classification often stops once a platform has been classified as a transaction, innovation, integrated, or some other platform. However, this approach does not account for the variation within these categories. Given the young population's interest in freelancing, the estimated size of the platform economy as a whole ($4300 Bn.), and the number of freelancer intermediaries (250-300), attempting to identify the subtypes of freelancer intermediary platforms was deemed a worthy endeavor. Finding these subtypes of intermediary platforms, or archetypes of freelancer intermediaries, has both academic and practical implications. For academics, these archetypes will contribute to the growing body of platform literature by giving it new units of analysis and by creating a reasonable categorization. For people interested in utilizing a freelancer intermediary platform, either as a seller or a buyer, this thesis offers solid knowledge of the intermediary platforms' functions and features, as well as what to expect when joining one. The research design is built on the principles of an embedded and flexible multiple-case study and cross-case synthesis. When describing a contemporary phenomenon, a multiple-case study produces more robust results as the weight of any one case decreases. Cross-case synthesis was one of the few viable options given the study's lack of dependent and independent variables. These variables were unavailable because no prior information on what the archetypes could be was available. For this reason, this study adapted the analytical methods of grounded theory. The study identified four archetypes of freelancer intermediary platforms: the locals, two for the price of one, the middle child, and the global juggernauts. Locals focus on physical services that depend on freelancers' locations. Two for the price of one are small platforms that charge only one side, be it seller or buyer. The middle child is very similar to the global juggernauts in all aspects but size, and is a necessary phase in a platform's maturation. Global juggernauts are the biggest platforms and the industry leaders, with significant network and trust management systems in place. The archetypes form a solid foundation on which future research on freelancer intermediaries can be built.

    Building a semantic search engine with games and crowdsourcing

    Semantic search engines aim at improving conventional search with semantic information, or meta-data, on the data searched for and/or on the searchers. So far, approaches to semantic search exploit characteristics of the searchers, like age, education, or spoken language, for selecting and/or ranking search results. Such data make it possible to build a semantic search engine as an extension of a conventional search engine. The crawlers of well-established search engines like Google, Yahoo! or Bing can index documents but, so far, their capabilities to recognize the intentions of searchers are still rather limited. Indeed, taking into account characteristics of the searchers considerably extends both the quantity of data to analyse and the dimensionality of the search problem. Well-established search engines therefore still focus on general search, that is, "search for all", not on specialized search, that is, "search for a few". This thesis reports on techniques that have been adapted or conceived, deployed, and tested for building a semantic search engine for the very specific context of artworks. In contrast to, for example, the interpretation of X-ray images, the interpretation of artworks is far from being fully automatable. Therefore, artwork interpretation has been based on Human Computation, that is, a software-based gathering of contributions by many humans. The approach reported on in this thesis first relies on so-called Games With A Purpose, or GWAPs, for this gathering: casual games provide an incentive for a potentially unlimited community of humans to contribute their appreciations of artworks. Designing suitable incentives is less trivial than it might seem at first. An ecosystem of games is needed to collect the intended meta-data on artworks. One game generates data that can serve as input to another game. This results in semantically rich meta-data that can be used for building a successful semantic search engine.
    Thus, a first part of this thesis reports on a "game ecosystem" specifically designed around one known game and including several novel games belonging to the following game classes: (1) Description Games for collecting obvious and trivial meta-data, basically the well-known ESP (for extrasensory perception) game of Luis von Ahn, (2) the Dissemination Game Eligo generating translations, (3) the Diversification Game Karido aiming at sharpening differences between the objects, that is, the artworks, interpreted, and (4) the Integration Games Combino, Sentiment and TagATag that generate structured meta-data. Secondly, the approach to building a semantic search engine reported on in this thesis relies on Higher-Order Singular Value Decomposition (SVD). More precisely, the data and meta-data on artworks gathered with the aforementioned GWAPs are collected in a tensor, that is, a mathematical structure generalising matrices to more than two dimensions (columns and rows). The dimensions considered are the artwork descriptions, the players, and the artworks themselves. A Higher-Order SVD of this tensor is first used for noise reduction, following the method known from Latent Semantic Analysis (LSA). This thesis also reports on deploying a Higher-Order LSA. The parallel Higher-Order SVD algorithm applied for the Higher-Order LSA and its implementation have been validated on an application related to, but independent from, the semantic search engine for artworks striven for: image compression. This thesis reports on the surprisingly good image compression which can be achieved with Higher-Order SVD. While conventional compression methods apply a matrix SVD to each color separately, the approach reported on in this thesis relies on one single (higher-order) SVD of the whole tensor. This results in both better quality of the compressed image and a significant reduction of the memory space needed. Higher-Order SVD is extremely time-consuming, which calls for parallel computation.
    Thus, a step towards automating the construction of a semantic search engine for artworks was parallelising the higher-order SVD method used and running the resulting parallel algorithm on a super-computer. This thesis reports on using Hestenes' method and R-SVD for parallelising the higher-order SVD. This method is an unconventional choice, which is explained and motivated. As for the super-computer needed, this thesis reports on turning the web browsers of the players or searchers into a distributed parallel computer. This is done by a novel dedicated system and a novel implementation of the MapReduce framework for data parallelism. Harnessing the web browsers of the players or searchers saves computational power on the server side. It also scales extremely well with the number of players or searchers because both playing with and searching for artworks require human reflection, and therefore result in idle local processors that can be brought together into a distributed super-computer.
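    To make the tensor machinery above concrete, a truncated Higher-Order SVD can be sketched as follows: compute an SVD of each mode unfolding of the tensor, keep the leading singular vectors per mode as factor matrices, and project the tensor onto them to obtain a small core. This is a generic textbook sketch, not the thesis's parallel Hestenes/R-SVD implementation; all function names are my own.

    ```python
    import numpy as np

    def unfold(T, mode):
        """Mode-n unfolding: the given mode becomes the rows,
        all remaining axes are flattened into the columns."""
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def fold(M, mode, shape):
        """Inverse of unfold for a tensor of the given shape."""
        full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
        return np.moveaxis(M.reshape(full), 0, mode)

    def hosvd(T, ranks):
        """Truncated HOSVD: per-mode factor matrices and a core tensor."""
        factors = []
        for mode, r in enumerate(ranks):
            # left singular vectors of the mode unfolding, truncated to rank r
            U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
            factors.append(U[:, :r])
        core = T.astype(float)
        for mode, U in enumerate(factors):
            shape = core.shape[:mode] + (U.shape[1],) + core.shape[mode + 1:]
            core = fold(U.T @ unfold(core, mode), mode, shape)
        return core, factors

    def reconstruct(core, factors):
        """Expand the core back through the factor matrices."""
        T = core
        for mode, U in enumerate(factors):
            shape = T.shape[:mode] + (U.shape[0],) + T.shape[mode + 1:]
            T = fold(U @ unfold(T, mode), mode, shape)
        return T
    ```

    With full ranks the reconstruction is exact; choosing smaller ranks per mode yields the lossy compression discussed in the abstract, with one decomposition of the whole tensor instead of a matrix SVD per color channel.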

    A Comprehensive Survey of Enabling and Emerging Technologies for Social Distancing—Part II: Emerging Technologies and Open Issues

    This two-part paper aims to provide a comprehensive survey on how emerging technologies, e.g., wireless networking and artificial intelligence (AI), can enable, encourage, and even enforce social distancing practice. In Part I, an extensive background of social distancing is provided, and enabling wireless technologies are thoroughly surveyed. In this Part II, emerging technologies such as machine learning, computer vision, thermal sensing, and ultrasound are introduced. These technologies open many new solutions and directions to deal with problems in social distancing, e.g., symptom prediction, detection and monitoring of quarantined people, and contact tracing. Finally, we discuss open issues and challenges (e.g., privacy preservation, scheduling, and incentive mechanisms) in implementing social distancing in practice. As an example, instead of reacting with ad hoc responses to COVID-19-like pandemics in the future, smart infrastructures (e.g., next-generation wireless systems like 6G, smart homes/buildings, smart cities, intelligent transportation systems) should incorporate a pandemic mode in their standard architectures/designs.