193 research outputs found
Modeling and Simulation Study of Designer's Bidirectional Behavior of Task Selection in Open Source Design Process
Open source design (OSD) is an emerging mode of product design. In the OSD process, selecting the right tasks directly influences the efficiency and quality of task completion, and hence the whole evolution of OSD. In this paper, the designer's bidirectional task-selection behavior, integrating passive selection based on website recommendation with autonomous selection, is modeled. First, the model of passive selection behavior driven by website recommendation is built by applying a collaborative filtering algorithm to a three-dimensional matrix covering design agents, tasks, and skills; second, the model of autonomous selection behavior is described in consideration of factors such as skill and incentive; third, the model of bidirectional selection behavior is described by integrating the two foregoing selection algorithms. Finally, a contrastive simulation analysis of bidirectional selection, passive selection based on website recommendation, and autonomous selection is conducted with ANOVA; the results show that task selection behavior has a significant effect on the OSD evolution process and that, under the experimental settings, bidirectional selection is more effective at shortening the evolution cycle. In addition, the simulation study validates the bidirectional selection model by describing the OSD task-selection process from a microperspective.
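The passive-selection side of the model rests on collaborative filtering over an agent/task/skill matrix. As a rough illustration of that idea only (not the paper's algorithm), the sketch below collapses the skill dimension into per-agent task-affinity scores and recommends an agent's unseen tasks by similarity-weighted voting; all agent names, task names, and scores are invented:

```python
import math

# Hypothetical per-agent task-affinity scores, obtained by collapsing the
# skill dimension of an agent/task/skill matrix (0.0 = task not yet touched).
affinity = {
    "agent_a": {"t1": 0.9, "t2": 0.1, "t3": 0.0},
    "agent_b": {"t1": 0.8, "t2": 0.2, "t3": 0.7},
    "agent_c": {"t1": 0.1, "t2": 0.9, "t3": 0.8},
}

def cosine(u, v):
    """Cosine similarity between two agents' task-affinity vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(agent, k=1):
    """Rank tasks the agent has not yet touched by similarity-weighted affinity."""
    scores = {}
    for other, tasks in affinity.items():
        if other == agent:
            continue
        sim = cosine(affinity[agent], tasks)
        for task, score in tasks.items():
            if affinity[agent][task] == 0.0:   # recommend only unseen tasks
                scores[task] = scores.get(task, 0.0) + sim * score
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Here `recommend("agent_a")` surfaces `t3`, the one task agent_a has not touched, weighted by how similar the agents who did it are to agent_a.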
Intelligent analysis of Quality of Experience (QoE) in web and multimedia content delivery networks
Today user experience is becoming a reliable indicator for service providers and telecommunication operators of overall end-to-end system functioning. Moreover, to compete for a prominent market share, network operators and service providers must retain and grow their customers' subscriptions. To fulfil these requirements they need efficient Quality of Experience (QoE) monitoring and estimation. However, QoE is a subjective metric, and its evaluation is expensive and time-consuming since it requires human participation. There is therefore a need for a tool that can measure QoE objectively, with reasonable accuracy, in real time. As a first contribution, we analyzed the impact of network conditions on Video on Demand (VoD) services. We also proposed an objective QoE estimation tool that uses a fuzzy expert system to estimate QoE from network-layer QoS parameters. As a second contribution, we analyzed the impact of MAC-layer QoS parameters on VoD services over IEEE 802.11n wireless networks. We also proposed an objective QoE estimation tool that uses a random neural network to estimate QoE from the MAC-layer perspective. As our third contribution, we analyzed the effect of different adaptation scenarios on the QoE of adaptive-bitrate streaming. We also developed a web-based subjective test platform that can be easily integrated into a crowdsourcing platform for performing subjective tests. As our fourth contribution, we analyzed the impact of different web QoS parameters on web service QoE. We also proposed a novel machine learning algorithm, a fuzzy rough hybrid expert system, for estimating web service QoE objectively.
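The first contribution estimates QoE from network-layer QoS with a fuzzy expert system. A minimal sketch of that general technique follows; the membership functions, the two rules, and the MOS consequents are invented for illustration and are not the thesis's actual rule base:

```python
# Illustrative fuzzy QoE estimation from network-layer QoS parameters.
# Membership functions, rules, and MOS consequents are all assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_mos(delay_ms, loss_pct):
    """Map delay and packet loss to a 1..5 Mean Opinion Score via two rules."""
    low_delay = tri(delay_ms, -1, 0, 150)
    high_delay = tri(delay_ms, 100, 300, 10_000)
    low_loss = tri(loss_pct, -1, 0, 2)
    high_loss = tri(loss_pct, 1, 5, 100)
    good = min(low_delay, low_loss)    # rule 1: low delay AND low loss -> MOS 4.5
    poor = max(high_delay, high_loss)  # rule 2: high delay OR high loss -> MOS 1.5
    if good + poor == 0:
        return 3.0                     # no rule fires: fall back to neutral
    # weighted-average defuzzification over the two rule consequents
    return (good * 4.5 + poor * 1.5) / (good + poor)
```

With 20 ms delay and 0.1 % loss only the "good" rule fires and the estimate sits at the high consequent; with 500 ms delay and 10 % loss only the "poor" rule fires.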
A Semi-supervised Sensing Rate Learning based CMAB Scheme to Combat COVID-19 by Trustful Data Collection in the Crowd
Mobile CrowdSensing (MCS), by employing large numbers of workers to sense and collect data in a participatory manner, has been recognized as a promising paradigm for building many large-scale applications in a cost-effective way, such as combating COVID-19. Recruiting trustworthy, high-quality workers is an important research issue for MCS. Previous studies assume that worker quality is known in advance, or that the platform learns it as soon as it receives the collected data. In reality, to reduce their costs and thus maximize revenue, many strategic workers do not perform their sensing tasks honestly and report fake data to the platform, so it is very hard for the platform to evaluate the authenticity of the received data. In this paper, an incentive mechanism named Semi-supervision based Combinatorial Multi-Armed Bandit reverse Auction (SCMABA) is proposed to solve the recruitment problem of multiple unknown and strategic workers in MCS. First, we model worker recruitment as a multi-armed bandit reverse auction problem and design a UCB-based algorithm that separates exploration from exploitation, treating the Sensing Rates (SRs) of recruited workers as the gain of the bandit. Next, a Semi-supervised Sensing Rate Learning (SSRL) approach is proposed to obtain the workers' SRs quickly and accurately; it consists of two phases, supervision and self-supervision. Last, SCMABA is designed by organically combining the SR-acquisition mechanism with the multi-armed bandit reverse auction, where supervised SR learning is used in exploration and self-supervised learning in exploitation. We prove that SCMABA achieves truthfulness and individual rationality, and we demonstrate its outstanding performance through in-depth simulations on real-world data traces.
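The UCB step ranks workers by their empirical sensing rate plus an exploration bonus that shrinks the more often a worker has been recruited. A generic sketch of that selection rule (not SCMABA itself; the worker statistics are invented):

```python
import math

def ucb_select(stats, t, k):
    """Choose k workers by UCB score: empirical sensing rate plus an
    exploration bonus that shrinks as a worker is recruited more often.
    stats maps worker -> (empirical_SR, times_recruited); t is the round."""
    def score(w):
        sr, n = stats[w]
        if n == 0:
            return float("inf")     # never-tried workers are explored first
        return sr + math.sqrt(2 * math.log(t) / n)
    return sorted(stats, key=score, reverse=True)[:k]

# Invented statistics: w3 has never been recruited, so it is explored first;
# w2's small sample earns it a larger bonus than the well-known w1.
stats = {"w1": (0.9, 10), "w2": (0.5, 2), "w3": (0.0, 0)}
```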
Mobile data and computation offloading in mobile cloud computing
Global mobile traffic is increasing dramatically due to the popularity of smart mobile devices and data-hungry mobile applications. Mobile data offloading is considered a promising solution to alleviate congestion in cellular networks, while mobile computation offloading can move computation-intensive tasks and large data storage from mobile devices to the cloud. In this thesis, we first study the mobile data offloading problem under the architecture of mobile cloud computing. To minimize the overall cost of data delivery, we formulate the data offloading process as a finite-horizon Markov decision process and propose two data offloading algorithms that achieve minimal communication cost. Then, we consider a mobile data offloading market in which a mobile network operator can sell bandwidth to mobile users. We formulate this problem as a multi-item auction in order to maximize the operator's profit, and propose one robust optimization algorithm and two iterative algorithms to solve it. Finally, we investigate the computation offloading problem in mobile edge computing. We focus on workload balancing to minimize the transmission and computation latency of computation offloading, formulate the problem as a population game in order to analyze aggregate offloading decisions, and propose two workload balancing algorithms based on evolutionary dynamics and revision protocols. Simulation results show the efficiency and robustness of the proposed methods.
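The finite-horizon MDP formulation can be made concrete with a toy model: in each slot the device either transmits its pending data now (cheap over WiFi, expensive over cellular) or pays a small waiting cost hoping WiFi appears before the deadline. All costs, probabilities, and the horizon below are invented; this is backward induction on that toy model, not the thesis's algorithms:

```python
# Toy finite-horizon MDP for delayed data offloading; numbers are assumptions.
C_CELL, C_WIFI, C_WAIT = 10.0, 1.0, 0.2   # transmission / per-slot waiting costs
P_WIFI = 0.4                              # chance WiFi is available next slot
HORIZON = 5                               # delivery deadline, in slots

def expected_cost(t, wifi_now):
    """Minimal expected cost from slot t given current WiFi availability,
    computed by backward induction over the finite horizon."""
    if t == HORIZON - 1:                  # deadline reached: must transmit now
        return C_WIFI if wifi_now else C_CELL
    send = C_WIFI if wifi_now else C_CELL
    wait = C_WAIT + (P_WIFI * expected_cost(t + 1, True)
                     + (1 - P_WIFI) * expected_cost(t + 1, False))
    return min(send, wait)
```

With these numbers, sending immediately is optimal whenever WiFi is present, and waiting is optimal otherwise until the deadline forces a cellular transmission.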
Understanding mobile network quality and infrastructure with user-side measurements
Measurement collection is a primary step towards analyzing and optimizing the performance of a telecommunication service. In a Mobile Broadband (MBB) network, the measurement process has not only to track the network's Quality of Service (QoS) features but also to assess a user's perspective on service performance. The latter requirement leads to "user-side measurements", which assist in discovering the performance issues that make a user of a service unsatisfied and finally switch to another network.
User-side measurements also serve as a first-hand survey of the problem domain. In this thesis, we exhibit the potential of measurements collected at the network edge by considering two well-known approaches, namely crowdsourced and distributed testbed-based measurements. The primary focus is on exploiting crowdsourced measurements while dealing with the challenges associated with them: differences in sampling density across a region, skewed and non-uniform measurement layouts, inaccuracy in sampling locations, differences in RSS readings due to device diversity, and other non-ideal sampling characteristics. In the presence of these heterogeneous characteristics, we propose how to accurately detect mobile coverage holes, how to devise a sample selection process that yields a reliable radio map at reduced sample cost, and how to identify cellular infrastructure in places where that information is not public. Finally, the thesis unveils the potential of a distributed measurement testbed for retrieving performance features from the user's context, the service content, and the network, and for understanding the impact of these features on MBB service at the application layer. Taking web browsing as a case study, it further presents an objective web-browsing Quality of Experience (QoE) model.
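Radio maps built from scattered crowdsourced samples are commonly interpolated with inverse-distance weighting (IDW), after which coverage holes fall out as thresholding. The sketch below shows that generic technique only (the thesis's actual methods are more elaborate); the sample coordinates, readings, and threshold are invented:

```python
# Sketch: IDW radio map from crowdsourced RSS samples, plus hole detection.
# Coordinates, dBm readings, and the -85 dBm threshold are assumptions.
samples = [((0.0, 0.0), -70.0), ((1.0, 0.0), -80.0), ((0.0, 1.0), -90.0)]

def rss_at(x, y, power=2):
    """Estimate RSS (dBm) at (x, y) as an inverse-distance-weighted mean."""
    num = den = 0.0
    for (sx, sy), rss in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return rss                    # query point coincides with a sample
        w = 1.0 / d2 ** (power / 2)
        num += w * rss
        den += w
    return num / den

def is_coverage_hole(x, y, threshold=-85.0):
    """A location is a coverage hole if its interpolated RSS is below threshold."""
    return rss_at(x, y) < threshold
```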
Graph-based Heuristic Solution for Placing Distributed Video Processing Applications on Moving Vehicle Clusters
Vehicular fog computing (VFC) is envisioned as an extension of cloud and mobile edge computing that utilizes the rich sensing and processing resources available in vehicles. We focus on slow-moving cars that spend significant time in urban traffic congestion as a potential pool of onboard sensors, video cameras, and processing capacity. To leverage these dynamic network and processing resources, we use a stochastic mobility model to select nodes with similar mobility patterns, and we design two distributed applications that are scaled in real time and placed as multiple instances on the selected vehicular fog nodes. We handle the unstable vehicular environment by (a) using real vehicle-density data to build a realistic mobility model that helps in selecting nodes for service deployment; (b) using community-detection algorithms to select a robust vehicular cluster based on the predicted mobility behavior of vehicles, with the stability of the chosen cluster validated using a graph centrality measure; and (c) developing graph-based placement heuristics to find the optimal placement of service graphs, formulated as a multi-objective constrained optimization problem aimed at efficient resource utilization. The heuristic addresses the important problem of processing data generated from distributed devices by balancing a trade-off: increasing the number of service instances provides enough processing redundancy to keep the service resilient to node or link failure, while reducing their number minimizes resource usage. We compare our heuristic to a mixed integer program (MIP) solution and a first-fit heuristic. Our approach performs better than these comparable schemes in terms of resource utilization and/or achieves lower service latency than an edge-computing-based service placement scheme.
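The first-fit baseline the heuristic is compared against is easy to state: walk the service instances in order and place each on the first fog node with enough spare capacity. A sketch of that baseline only (instance names, node names, and capacities are invented):

```python
# First-fit placement baseline; demands and capacities are in abstract units.
def first_fit(demands, capacities):
    """Return {instance: node}, or None if some instance cannot be placed."""
    free = dict(capacities)                # do not mutate the caller's dict
    placement = {}
    for inst, need in demands.items():
        for node in free:
            if free[node] >= need:
                placement[inst] = node
                free[node] -= need
                break
        else:
            return None                    # no node had enough capacity left
    return placement

demands = {"detect": 3, "encode": 2, "aggregate": 2}
capacities = {"car1": 4, "car2": 4}
```

First-fit ignores mobility and redundancy, which is precisely where a graph-based heuristic can do better.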
Designing for quality in real-world mobile crowdsourcing systems
PhD Thesis
Crowdsourcing has emerged as a popular means to collect and analyse data at scale for problems that require human intelligence to resolve. Its prompt response and low cost have made it attractive to businesses and academic institutions, and various online crowdsourcing platforms, such as Amazon MTurk, Figure Eight and Prolific, have successfully emerged to facilitate the entire crowdsourcing process. However, the quality of results has been a major concern in the crowdsourcing literature. Previous work has identified key factors that contribute to quality issues and need to be addressed in order to produce high-quality results. Crowd task design, in particular, is a major factor that impacts the efficiency and effectiveness of crowd workers as well as the entire crowdsourcing process. This research investigates crowdsourcing task designs to collect and analyse two distinct types of data, and examines the value of creating high-quality crowdwork activities on new crowdsource-enabled systems for end users. The main contributions of this research are 1) a set of guidelines for designing crowdsourcing tasks that support the quality collection, analysis and translation of speech and eye-tracking data in real-world scenarios; and 2) crowdsourcing applications that capture real-world data and coordinate the entire crowdsourcing process to analyse and feed quality results back. Furthermore, this research proposes a new quality control method based on worker trust and self-verification. To achieve this, the research follows a case-study approach focusing on two real-world data collection and analysis case studies. The first case study, Speeching, explores real-world speech data collection, analysis, and feedback for people with speech disorders, particularly Parkinson's. The second case study, CrowdEyes, examines the development and use of a hybrid system combining crowdsourcing with low-cost DIY mobile eye trackers for real-world visual data collection, analysis, and feedback. Both case studies established the capability of crowdsourcing to obtain high-quality responses comparable to those of an expert. The Speeching app, and the provision of feedback in particular, was well received by participants, opening up new opportunities in digital health and wellbeing. Moreover, the proposed crowd-powered eye tracker is fully functional under real-world settings, and the results showed how this approach outperforms all current state-of-the-art algorithms under all conditions, which opens up the technology for a wide variety of real-world eye-tracking applications.
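One simple realization of quality control based on worker trust is trust-weighted majority voting over redundant answers. The sketch below illustrates that generic idea, not the thesis's actual method; worker names and trust values are invented:

```python
from collections import defaultdict

# Trust-weighted majority voting over redundant crowd answers.
# Trust scores (0..1) would normally come from past verified work.
trust = {"w1": 0.9, "w2": 0.6, "w3": 0.4}

def aggregate(answers):
    """answers maps worker -> label; return the label with the most trust weight."""
    votes = defaultdict(float)
    for worker, label in answers.items():
        votes[label] += trust.get(worker, 0.5)   # unknown workers: neutral 0.5
    return max(votes, key=votes.get)
```

Two low-trust workers agreeing can outweigh one high-trust worker, but a single highly trusted worker beats a single untrusted one.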
QoE-Aware Resource Allocation For Crowdsourced Live Streaming: A Machine Learning Approach
In the last decade, empowered by the technological advancement of mobile devices and the revolution in wireless mobile network access, the world has witnessed an explosion in crowdsourced live streaming. Ensuring a stable, high-quality playback experience is essential to maximize the viewers' Quality of Experience (QoE) and the content providers' profits. This can be achieved by advocating a geo-distributed cloud infrastructure that allocates multimedia resources as close as possible to viewers, in order to minimize access delay and video stalls.
Additionally, because of unstable network conditions and the heterogeneity of end users' capabilities, the original video must be transcoded into multiple bitrates. Video transcoding is a computationally expensive process in which, generally, a single cloud instance must be reserved to produce one video bitrate representation. On-demand renting of resources or inadequate resource reservation may delay video playback or serve viewers at a lower quality; on the other hand, if resource provisioning is much higher than required, the extra resources are wasted.
In this thesis, we introduce a prediction-driven resource allocation framework to maximize the viewers' QoE and minimize the resource allocation cost. First, exploiting the viewers' locations available in our unique dataset, we implement a machine learning model to predict the number of viewers near each geo-distributed cloud site. Second, based on the predicted results, which proved close to the actual values, we formulate an optimization problem to proactively allocate resources in the viewers' proximity. We also present a trade-off between video access delay and the cost of resource allocation.
Considering the complexity and infeasibility of our offline optimization in responding to the volume of viewing requests in real time, we further extend our work by introducing a resource forecasting and reservation framework for geo-distributed cloud sites. First, we formulate an offline optimization problem to allocate transcoding resources in the viewers' proximity while creating a trade-off between network cost and viewers' QoE. Second, based on the optimizer's resource allocation decisions on historical live videos, we create time-series datasets containing historical records of the optimal resources needed at each geo-distributed cloud site. Finally, we adopt machine learning to build distributed time-series forecasting models that proactively forecast the exact transcoding resources needed ahead of time at each site. The results showed that the predicted number of transcoding resources needed in each cloud site is close to the optimal number.
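The forecasting step consumes a per-site time series of the optimizer's historical decisions and predicts the next interval's resource need. As a deliberately simple stand-in for the thesis's learned models, a moving-average forecaster over an invented series shows the shape of that pipeline:

```python
import math

# Stand-in forecaster: moving average over the optimizer's historical
# per-interval transcoder counts; the series below is invented.
history = [4, 5, 6, 5, 7, 8]     # optimal transcoder count per past interval

def forecast(series, window=3):
    """Predict the next value as the mean of the last `window` observations,
    rounded up so rounding never under-provisions the site."""
    recent = series[-window:]
    return math.ceil(sum(recent) / len(recent))
```

Rounding up trades a little resource waste for fewer playback stalls, mirroring the QoE-versus-cost trade-off the thesis optimizes explicitly.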
Proceedings of the 11th International Conference on Wirtschaftsinformatik (WI2013) - Volume 1
The two volumes represent the proceedings of the 11th International Conference on Wirtschaftsinformatik, WI2013 (Business Information Systems). They include 118 papers from ten research tracks, a general track and the Student Consortium. All submissions went through a double-blind procedure with three reviews per paper, for an overall acceptance rate of 25 percent. WI2013 was organized at the University of Leipzig between February 27th and March 1st, 2013, under the main themes Innovation, Integration and Individualization.
Track 1: Individualization and Consumerization
Track 2: Integrated Systems in Manufacturing Industries
Track 3: Integrated Systems in Service Industries
Track 4: Innovations and Business Models
Track 5: Information and Knowledge Management