Hybrid crowd-powered approach for compatibility testing of mobile devices and applications
Testing mobile applications (apps) to ensure they work seamlessly on all devices can be difficult and expensive, especially for small development teams or companies with limited resources. Methods are needed to outsource testing across the many device models and OS versions. In this paper, we propose a crowdsourced testing approach that leverages the power of the crowd to perform mobile device compatibility testing in a novel way. The approach supports testing the code, features, or hardware characteristics of mobile devices, an area that has received little attention. Such testing enables developers to verify that the features and hardware characteristics of any device model, or the features of a specific OS version, work correctly and do not cause problems in their apps. It also helps developers resolve issues encountered during development by asking testers to perform a test or by searching the knowledge base provided with the platform, which supports adding new issues and contributing solutions to existing ones. We expect these capabilities to improve the testing and development of mobile apps by drawing on the diverse mobile devices and OS versions held by the crowd.
Mobile devices compatibility testing strategy via crowdsourcing
Purpose - This paper aims to support small mobile application development teams or companies in testing across a large variety of operating system versions and mobile devices to ensure their apps work seamlessly. Design/methodology/approach - This paper proposes a “hybrid crowdsourcing” method that leverages the power of public crowd testers. It yields a novel crowdtesting workflow, Developer/Tester-Crowdtesting (DT-CT), that places developers and crowd testers at the center of the testing process without intermediaries such as managers or leaders. The workflow is realized in a novel crowdtesting platform (AskCrowd2Test), which enables compatibility testing of mobile devices and applications at two levels: high-level (device characteristics) and low-level (code). Additionally, a “crowd-powered knowledge base” has been developed that stores testing results, relevant issues, and their solutions. Findings - A comparison of the DT-CT workflow with common and recent crowdtesting workflows showed that DT-CT may positively impact the testing process, reducing time and budget thanks to the direct interaction between developers and crowd testers. Originality/value - To the authors’ knowledge, this paper is the first to propose a crowdtesting workflow based on developers and public crowd testers without crowd managers or leaders, lighting the way for future research in this field. It is also the first to allow crowd testers with limited experience to participate in testing, which helps in studying how end-users behave and interact with apps and yields more concrete results.
Taming Android Fragmentation through Lightweight Crowdsourced Testing
Android fragmentation refers to the overwhelming diversity of Android devices and OS versions, which makes it impossible to test an app on every supported device and leaves compatibility bugs scattered across the community, resulting in poor user experiences. To mitigate this, researchers have designed various approaches to automatically detect such compatibility issues. However, the current state-of-the-art tools can only detect specific kinds of compatibility issues (i.e., those caused by API signature evolution), leaving many other essential types unrevealed. For example, customized OS versions on real devices and semantic changes in the OS can lead to serious compatibility issues that are non-trivial to detect statically. To this end, we propose a novel, lightweight, crowdsourced testing approach, LAZYCOW, to fill this research gap and enable taming Android fragmentation through crowdsourced efforts. Crowdsourced testing is an emerging alternative to conventional mobile testing mechanisms that allows developers to test their products on real devices to pinpoint platform-specific issues. Experimental results on thousands of test cases on real-world Android devices show that LAZYCOW is effective in automatically identifying and verifying API-induced compatibility issues. Also, after investigating the user experience through qualitative metrics, users' satisfaction provides strong evidence that LAZYCOW is useful and welcomed in practice.
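The abstract above describes distributing small API probes to real devices and aggregating the outcomes. A minimal sketch of that aggregation step, with all names (CompatReport, aggregate) illustrative rather than LAZYCOW's actual API:

```python
# Sketch of a crowdsourced compatibility probe in the spirit of the approach
# above: each participating device runs a small test case against an API and
# reports the outcome keyed by its model and OS version, so API-induced
# differences surface across the device pool. Names here are hypothetical.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class CompatReport:
    device_model: str
    os_version: str
    api_name: str
    passed: bool

def aggregate(reports):
    """Group pass/fail outcomes per (API, OS version) to flag versions
    where an API behaves inconsistently across real devices."""
    outcomes = defaultdict(set)
    for r in reports:
        outcomes[(r.api_name, r.os_version)].add(r.passed)
    # An API/OS pair with both True and False outcomes suggests a
    # device-specific (e.g. vendor-customized) compatibility issue.
    return [key for key, results in outcomes.items() if len(results) > 1]

reports = [
    CompatReport("PixelX", "12", "Storage.open", True),
    CompatReport("VendorY", "12", "Storage.open", False),
    CompatReport("PixelX", "13", "Storage.open", True),
]
print(aggregate(reports))  # [('Storage.open', '12')]
```

Keying on (API, OS version) rather than on device model alone is what lets vendor-customized builds stand out: the same OS version passing on one device and failing on another points at a customization, not at API evolution.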
Overcoming the Digital Divide: SMS-Powered Crowdfunding Models for Marginalized Regions
Crowdfunding literature largely assumes the phenomenon is internet-based. Given the untapped potential of crowdfunding in marginalized regions, little is known about the viability of non-internet-based crowdfunding models in explaining crowdfunding success, or how they compare with internet-based models. Non-internet-based crowdfunding models proliferate where the digital divide limits internet access. This research leverages fit-viability perspectives and the crowdfunding literature to explain the significant differences in utilizing either model for crowdfunding. Based on our analysis, SMS-powered crowdfunding models offer a more equitable opportunity for success in terms of both social and economic readiness compared with internet-based models. We offer theoretical and practical implications to support our analysis.
Managing big data experiments on smartphones
The explosive growth of smartphones with ever-increasing sensing and computing capabilities has brought a paradigm shift to many traditional domains of computing. Re-programming smartphones and instrumenting them for application testing and data gathering at scale is currently a tedious, time-consuming process that poses significant logistical challenges. Next-generation smartphone applications are expected to be much larger in scale and more complex, demanding evaluation and testing under different real-world datasets, devices, and conditions. In this paper, we present an architecture for managing such large-scale data management experiments on real smartphones. We describe the building blocks of our architecture, which encompass smartphone sensor data collected by the crowd and organized in our big data repository. The datasets can then be replayed on our testbed, comprising real and simulated smartphones accessible to developers through a web-based interface. We demonstrate the applicability of our architecture through a case study evaluating individual components of a complex indoor positioning system for smartphones, coined Anyplace, which we have developed over the years. The study shows how our architecture lets us derive novel insights into the performance of our algorithms and applications by simplifying the management of large-scale data on smartphones.
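The replay step described above, feeding recorded crowd-collected sensor traces back through an algorithm under test, can be sketched as follows. This is an illustrative stand-in, not the actual Anyplace or testbed API; the trace format and field names are assumptions:

```python
# Illustrative sketch of trace replay: timestamped sensor readings recorded
# by the crowd are fed to a positioning estimator, and the per-step error
# against the recorded ground truth is accumulated. Field names
# ("wifi_rss", "ground_truth") are hypothetical, not the real repository schema.
def replay(trace, estimator):
    """Feed each recorded reading to an estimator and return the mean
    Manhattan error against the recorded ground-truth position."""
    errors = []
    for reading in trace:
        predicted = estimator(reading["wifi_rss"])
        gt = reading["ground_truth"]
        errors.append(abs(predicted[0] - gt[0]) + abs(predicted[1] - gt[1]))
    return sum(errors) / len(errors)

# Toy trace with two readings; a trivial estimator that always answers
# (0, 0) yields the mean error of its two guesses.
trace = [
    {"wifi_rss": [-40, -70], "ground_truth": (1.0, 2.0)},
    {"wifi_rss": [-60, -50], "ground_truth": (3.0, 0.0)},
]
print(replay(trace, lambda rss: (0.0, 0.0)))  # 3.0
```

The value of replay in such an architecture is that two competing estimators can be scored against exactly the same real-world inputs, which live experiments on physical devices cannot guarantee.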
Leveraging the Power of Crowds: Automated Test Report Processing for The Maintenance of Mobile Applications
Crowdsourcing is an emerging distributed problem-solving model combining human and machine computation. It collects intelligence and knowledge from a large and diverse workforce to complete complex tasks. In the software engineering domain, crowdsourced techniques have been adopted to facilitate various tasks such as design, testing, debugging, and development. Specifically, in crowdsourced testing, crowd workers are given testing tasks to perform and submit their feedback in the form of test reports. A key advantage of crowdsourced testing is that it provides software engineers with domain knowledge and feedback from a large number of real users. Thanks to the diverse software and hardware settings of these users, engineers can find bugs that are not caught by traditional quality assurance techniques. Such benefits are particularly valuable for mobile application testing, which requires rapid development-and-deployment iterations and must support diverse execution environments. However, crowdsourced testing naturally generates an overwhelming number of test reports, and inspecting such a large number of reports becomes a time-consuming yet inevitable task. This dissertation presents a series of techniques, tools, and experiments to assist in crowdsourced report processing. These techniques improve the task in multiple respects: 1. prioritizing crowdsourced reports to help engineers find as many unique bugs as possible, as quickly as possible; 2. grouping crowdsourced reports to help engineers identify representative ones in a short time; 3. summarizing duplicate reports to provide engineers with a concise and accurate understanding of a group of reports. In the first step, I present a text-analysis-based technique to prioritize test reports for manual inspection.
This technique leverages two key strategies: (1) a diversity strategy to help developers inspect a wide variety of test reports and avoid duplicates and wasted effort on falsely classified faulty behavior, and (2) a risk-assessment strategy to help developers identify test reports that are more likely to be fault-revealing based on past observations. Together, these two strategies form our technique to prioritize test reports in crowdsourced testing. Moreover, in the mobile testing domain, test reports often consist of more screenshots and shorter descriptive text, so text-analysis-based techniques may be ineffective or inapplicable. The shortage and ambiguity of natural-language text and the well-defined screenshots of activity views within mobile applications motivate a novel technique based on image understanding for multi-objective test-report prioritization. This technique employs Spatial Pyramid Matching (SPM) to measure the similarity of screenshots and applies natural-language processing to measure the distance between the text of test reports. Next, I design and implement CTRAS: a novel approach that leverages duplicates to enrich the content of bug descriptions and improve the efficiency of inspecting these reports. CTRAS automatically aggregates duplicates based on both textual information and screenshots, and further summarizes the duplicate test reports into a comprehensive and comprehensible report. I validate all of these techniques on industrial data by collaborating with several companies. The results show that my techniques improve both the efficiency and effectiveness of crowdsourced test report processing. I also suggest settings for different usage scenarios and discuss future research directions.
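The diversity strategy above can be sketched as a farthest-first ordering over a combined text-plus-screenshot distance: each next report shown to the inspector is the one least similar to those already seen, so duplicates sink to the bottom. The distance functions below are simple stand-ins (Jaccard over words, a scalar screenshot signature) for the SPM and NLP measures the dissertation actually uses:

```python
# Hedged sketch of diversity-based report prioritization: greedily pick the
# report farthest (by a combined text + screenshot distance) from those
# already selected. Jaccard distance and the scalar "img_sig" are crude
# stand-ins for the real NLP distance and SPM screenshot similarity.
def text_distance(a, b):
    """Jaccard distance over word sets."""
    wa, wb = set(a.split()), set(b.split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def prioritize(reports, alpha=0.5):
    """Order reports so each next pick maximizes its minimum combined
    distance to the already-selected ones (farthest-first traversal)."""
    remaining = list(reports)
    ordered = [remaining.pop(0)]  # seed with the first submitted report
    while remaining:
        def score(r):
            return min(
                alpha * text_distance(r["text"], s["text"])
                + (1 - alpha) * abs(r["img_sig"] - s["img_sig"])
                for s in ordered
            )
        best = max(remaining, key=score)
        remaining.remove(best)
        ordered.append(best)
    return [r["id"] for r in ordered]

reports = [
    {"id": 1, "text": "app crashes on login", "img_sig": 0.1},
    {"id": 2, "text": "app crashes on login", "img_sig": 0.1},  # duplicate of 1
    {"id": 3, "text": "map tiles fail to render", "img_sig": 0.9},
]
print(prioritize(reports))  # [1, 3, 2]
```

Note how the exact duplicate (report 2) is pushed to the end: its minimum distance to an already-selected report is zero, so any genuinely different report outscores it.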
Unified platform for M2M Telco Providers
Although many environments are powered by M2M solutions, users do not have a simple way to gather their collective knowledge and program devices’ behaviour. Also, Telco providers still lack proper components for enabling integrated services over their networks. We present the final architecture of the APOLLO project, which delivers an enhanced M2M platform encompassing sensors, management, and an applications platform for a major Telco provider. APOLLO builds on top of the ETSI M2M specifications and rich service execution environments, providing easy orchestration of services to end-users.
An ANFIS-based compatibility scorecard for IoT integration in websites
Cyber-physical systems and the Internet of Things (IoT) form two different levels of vertical digital integration. Integrating websites with IoT-connected devices has compelled the creation of new web design and development strategies in which websites are designed with the permutations of smart devices in mind. The design should be seamless across devices, and the website design company or web designer should be well informed about the different considerations for design with IoT interactions. In this work, we expound the effectiveness of IoT integration in website design. To position an IoT-powered IT ecosystem as an essential technology for improving customer experience, a strength-weakness-opportunity-threat analysis is performed. Further, to assess the integration support an existing GUI front end may provide to a smart device, an ANFIS model is proposed to determine the compatibility of an e-commerce website for integration with IoT devices. A dataset of 600 e-commerce websites from the .com domain is used to train and test the learning model. Seven features (page loading speed, broken links, browser compatibility, resolution, total size, privacy and security, and interface and typography) that impact the compatibility of IoT integration in websites have been used, and evaluation criteria for scoring each feature have been identified. Finally, the compatibility score, IoTScore_site, which evaluates a website's integration capabilities and support for IoT devices, is generated by adding all the feature scores. The preliminary results from the prediction model clearly determine a website's worthiness for IoT integration.
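The final scoring step above, summing one score per feature into an overall compatibility score, can be sketched as follows. The per-feature values and scale here are illustrative; in the paper each feature's score comes from the trained ANFIS (adaptive neuro-fuzzy) model, not from hand-assigned numbers:

```python
# Sketch of the additive scorecard step: each of the seven features named in
# the abstract receives a score, and the overall compatibility score is their
# sum. Scores below are illustrative placeholders for ANFIS model outputs.
FEATURES = [
    "page_loading_speed", "broken_links", "browser_compatibility",
    "resolution", "total_size", "privacy_and_security",
    "interface_and_typography",
]

def iot_score(feature_scores):
    """Sum the per-feature scores; fail loudly if any of the seven is missing."""
    missing = [f for f in FEATURES if f not in feature_scores]
    if missing:
        raise ValueError(f"missing feature scores: {missing}")
    return sum(feature_scores[f] for f in FEATURES)

site = {f: 0.7 for f in FEATURES}
site["broken_links"] = 0.2  # many dead links drag the compatibility score down
print(round(iot_score(site), 2))  # 4.4
```

An additive scorecard keeps the result interpretable: a designer can see exactly which of the seven features is pulling a site's IoT-readiness down.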