30 research outputs found

    A system to secure websites and educate students about cyber security through crowdsourcing

    Get PDF
    Startups are innovative companies with ideas for the betterment of society. However, due to limited resources and highly expensive testing procedures, they invest little time and money in securing their websites and web applications. Furthermore, cyber security education often fails to integrate practical knowledge with theoretical course materials. Recognizing the need to educate both startups and students about cyber security, this report presents Secure Startup, a novel system that aims to give startups a platform to protect their websites in a cost-effective manner while teaching students real-world cyber skills. The system finds potential security problems in startup websites and provides effective solutions through a crowdtesting framework. Secure Startup recruits its testers (security experts and students) through social media, using Twitter bots. The basic question behind this report is whether such a system can help students learn the necessary cyber skills while running successful tests and generating quality results for the startups. The results presented in this report show that the system achieves a high learning rate and a high task-effectiveness rate, which helps detect and remediate as many vulnerabilities as possible. These results were generated by analyzing the performance of the testers and the learning capabilities of the students, based on their feedback, training and task performance. They are promising evidence of the system's value, which lies in enhancing the security of a startup's website and providing a new approach to practical cyber security education.
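    The report describes recruiting testers through Twitter bots. As a rough illustration only, the sketch below shows how such an announcement bot could be written with the Tweepy library; the credentials, wording, hashtags and signup URL are placeholders, and this is not the Secure Startup implementation.

```python
# Hypothetical recruitment bot: announces a new crowdtesting task on Twitter.
# Credentials, hashtags and the signup URL are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="CONSUMER_KEY",
    consumer_secret="CONSUMER_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

def announce_task(task_title: str, signup_url: str) -> None:
    """Post a call for student and expert testers for a newly opened test cycle."""
    text = (
        f"New crowdtesting task: {task_title}. "
        f"Students and security experts welcome, sign up here: {signup_url} "
        "#cybersecurity #crowdtesting"
    )
    client.create_tweet(text=text)

announce_task("XSS sweep of a startup landing page", "https://example.com/signup")
```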

    A data-driven game theoretic strategy for developers in software crowdsourcing: a case study

    Get PDF
    Crowdsourcing has the advantages of being cost-effective and saving time, and it is a typical embodiment of collective wisdom and collaborative development by community workers. However, this development paradigm has not been widely adopted in software engineering. One important reason is that requesters have limited knowledge of crowd workers' professional skills and qualities. Another is that crowd workers competing for a task may not receive an appropriate reward, which hurts their motivation. To address this problem, this paper proposes a method for maximizing reward based on workers' crowdsourcing ability, so that they can choose tasks that match their abilities and obtain appropriate bonuses. The method consists of two steps. First, it puts forward a way to evaluate crowd workers' ability and then analyzes the intensity of competition for tasks on Topcoder.com, an open community crowdsourcing platform, on the basis of that ability. Second, following dynamic programming ideas, it builds complete-information game models for different cases and derives a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. The experiments use crowdsourcing data from Topcoder.com. The results show that the distribution of workers' crowdsourcing ability is uneven and that, to some extent, it reflects how active crowdsourcing tasks are. Meanwhile, by following the reward-maximization strategy, a crowd worker can obtain the theoretical maximum reward.
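    The core of the strategy is solving a mixed-strategy Nash equilibrium over task choices. The sketch below illustrates that computation for a toy two-worker, two-task game using the indifference principle; the payoff values are hypothetical and are not taken from the paper's model or the Topcoder data.

```python
# Toy 2x2 "task entry" game: each of two workers either enters a contested
# high-reward task ("hard") or takes a safe low-reward task ("easy").
# Payoffs are hypothetical and purely illustrative.
import numpy as np

# Row worker's payoffs; rows = row worker's action, columns = opponent's action,
# both ordered (hard, easy).  Entering alone pays 4, sharing pays 1, easy pays 2.
A = np.array([[1.0, 4.0],
              [2.0, 2.0]])
B = A.T  # symmetric game: column worker's payoffs are the transpose

# Fully mixed Nash equilibrium via the indifference principle: each worker
# mixes so that the other is indifferent between "hard" and "easy".
q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])  # column plays "hard" w.p. q
p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])  # row plays "hard" w.p. p

expected_reward = q * A[0, 0] + (1 - q) * A[0, 1]  # row worker's equilibrium payoff
print(f"P(enter contested task): row={p:.2f}, column={q:.2f}; expected reward={expected_reward:.2f}")
```

    With payoff entries estimated from measured worker ability and task competition intensity, the same indifference calculation would tell a worker how often entering a contested task is worthwhile in expectation.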

    Given Enough Eyeballs, all Bugs are Shallow - A Literature Review for the Use of Crowdsourcing in Software Testing

    Get PDF
    In recent years, the use of crowdsourcing has gained a lot of attention in the domain of software engineering. One key aspect of software development is the testing of software. The literature suggests that crowdsourced software testing (CST) is a reliable and feasible tool for many kinds of testing. Research in CST has made great strides; however, it is mostly unstructured and not linked to traditional software testing practice and terminology. By conducting a literature review of traditional and crowdsourced software testing literature, this paper delivers two major contributions. First, it synthesizes the fields of crowdsourcing research and traditional software testing. Second, it gives a comprehensive overview of findings in CST research and provides a classification into different software testing types.

    Thesis title: Crowdsourced Testing Approach For Mobile Compatibility Testing

    Get PDF
    The frequent release of new mobile devices and operating system versions brings several compatibility issues to mobile applications. This thesis addresses fragmentation-induced compatibility issues and comprises three main phases. The first is an in-depth review of the relevant literature that identifies the main challenges of existing compatibility testing approaches. The second is an in-depth exploratory study of Android/iOS developers in academia and industry, conducted to gain further insight into their actual needs in testing environments while gauging their willingness to work with public testers of varied experience. The third is the implementation of a new manual crowdtesting approach that supports large-scale distribution of tests and their execution by public testers and real users on a large number of devices in a short time. The approach is designed around a direct crowdtesting workflow that bridges the communication gap between developers and testers, supports all three dimensions of compatibility testing, and helps explore the different behaviours of an app and its users in order to identify compatibility issues. Two empirical evaluation studies were conducted with iOS/Android developers and testers to gauge their perspectives on the benefits, satisfaction, and effectiveness of the proposed approach. Our findings show that the approach is effective and improves on current state-of-the-art approaches, and that it met several previously unmet needs of different groups of developers and testers. The evaluation showed that these groups were satisfied with the approach; notably, satisfaction was especially high in small and medium-sized enterprises, which have limited access to the traditional testing infrastructures present in large enterprises. This is the first research to provide insights for future work into the actual needs of each group of developers and testers.

    Crowdsourced network measurements: Benefits and best practices

    Get PDF
    Network measurements are of high importance both for the operation of networks and for the design and evaluation of new management mechanisms. Therefore, several approaches exist for running network measurements, ranging from analyzing live traffic traces from campus or Internet Service Provider (ISP) networks to performing active measurements on distributed testbeds, e.g., PlanetLab, or with volunteers. However, each method falls short, offering only a partial view of the network. For instance, the scope of passive traffic traces is limited to an ISP's network and its customers' habits, whereas active measurements may be biased by the population or node locations involved. To complement these techniques, we propose using (commercial) crowdsourcing platforms for network measurements. They offer a controllable, diverse and realistic view of the Internet and provide better control than measurements with voluntary participants. In this study, we compare crowdsourcing with traditional measurement techniques, describe possible pitfalls and limitations, and present best practices to overcome these issues. The contribution of this paper is a guideline for researchers on when and how to exploit crowdsourcing for network measurements.
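    To make the setup concrete, the sketch below shows the kind of lightweight active measurement a crowd worker might be asked to run and report back through a crowdsourcing platform; the target hosts, the TCP-connect metric and the JSON result format are illustrative assumptions rather than the paper's methodology.

```python
# Hypothetical worker-side task: measure TCP connection setup time to a few
# targets and print a JSON blob to paste back into the crowdsourcing form.
import json
import socket
import time
from typing import Optional

TARGETS = [("example.com", 443), ("example.org", 443)]  # placeholder endpoints

def tcp_connect_time_ms(host: str, port: int, timeout: float = 3.0) -> Optional[float]:
    """Return the TCP handshake time in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None

results = {f"{host}:{port}": tcp_connect_time_ms(host, port) for host, port in TARGETS}
print(json.dumps(results, indent=2))
```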

    Deepfake detection: humans vs. machines

    Full text link
    Deepfake videos, in which a person's face is automatically swapped with someone else's, are becoming easier to generate, with increasingly realistic results. In response to the threat such manipulations pose to our trust in video evidence, several large datasets of deepfake videos and many methods to detect them have been proposed recently. However, it is still unclear how realistic deepfake videos appear to an average person and whether the algorithms are significantly better than humans at detecting them. In this paper, we present a subjective study, conducted in a crowdsourcing-like scenario, which systematically evaluates how hard it is for humans to tell whether a video is a deepfake. For the evaluation, we used 120 different videos (60 deepfakes and 60 originals) manually pre-selected from the Facebook deepfake database provided for Kaggle's Deepfake Detection Challenge 2020. For each video, a simple question, "Is the face of the person in the video real or fake?", was answered on average by 19 naïve subjects. The results of the subjective evaluation were compared with the performance of two state-of-the-art deepfake detection methods, based on Xception and EfficientNet (B4 variant) neural networks, which were pre-trained on two other large public databases: the Google subset of FaceForensics++ and the recent Celeb-DF dataset. The evaluation demonstrates that, although human perception is very different from machine perception, both are successfully fooled by deepfakes, albeit in different ways. Specifically, the algorithms struggle to detect deepfake videos that human subjects found very easy to spot.
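    As a rough illustration of how such a human-versus-machine comparison can be scored, the sketch below computes AUC for per-video human vote fractions and for a detector's scores, then flags clips that humans find easy but the model misses; all arrays are made-up placeholders, not data from the study, and AUC is only one of several possible metrics.

```python
# Placeholder comparison of human votes and detector scores on the same clips.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 0, 0, 1, 0])                         # 1 = deepfake, 0 = original
human_fake_votes = np.array([0.9, 0.3, 0.1, 0.2, 0.85, 0.6])  # fraction of subjects voting "fake"
model_scores = np.array([0.7, 0.95, 0.2, 0.1, 0.4, 0.3])      # detector's P(fake)

print(f"human AUC={roc_auc_score(labels, human_fake_votes):.2f}, "
      f"model AUC={roc_auc_score(labels, model_scores):.2f}")

# The mismatch the paper highlights: deepfakes that are easy for humans
# (most subjects vote "fake") but that the detector scores below 0.5.
easy_for_humans = (human_fake_votes > 0.8) & (labels == 1)
missed_by_model = (model_scores < 0.5) & (labels == 1)
print("easy for humans but missed by the model:",
      np.where(easy_for_humans & missed_by_model)[0])
```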

    Large-Scale Study of Perceptual Video Quality

    Get PDF
    The great variation in videographic skills, camera designs, compression and processing protocols, and displays leads to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions. This is true in part because available video quality assessment databases contain very limited content at fixed resolutions, were captured using a small number of camera devices by a few videographers, and have been subjected to only a modest number of distortions. As such, these databases fail to adequately represent real-world videos, which contain very different kinds of content obtained under highly diverse imaging conditions and are subject to authentic, often commingled distortions that are impossible to simulate. As a result, NR video quality predictors tested on real-world video data often perform poorly. Towards advancing NR video quality prediction, we constructed a large-scale video quality assessment database containing 585 videos of unique content, captured by a large number of users, with a wide range of levels of complex, authentic distortions. We collected a large number of subjective video quality scores via crowdsourcing: a total of 4776 unique participants took part in the study, yielding more than 205,000 opinion scores, an average of 240 recorded human opinions per video. We demonstrate the value of the new resource, which we call the LIVE Video Quality Challenge Database (LIVE-VQC), by comparing leading NR video quality predictors on it. This is the largest video quality assessment study ever conducted along several key dimensions: number of unique contents, capture devices, distortion types and combinations of distortions, study participants, and recorded subjective scores. The database is available for download at http://live.ece.utexas.edu/research/LIVEVQC/index.html.
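    As a small illustration of how crowdsourced opinion scores are typically turned into per-video mean opinion scores (MOS), the sketch below aggregates raw ratings and applies a simple per-subject screening step; the file layout, column names and rejection threshold are assumptions for illustration and may differ from the processing used for LIVE-VQC.

```python
# Aggregate crowdsourced ratings into per-video MOS with a simple subject screen.
# Assumed CSV layout: one row per rating with columns subject_id, video_id, score.
import pandas as pd

ratings = pd.read_csv("opinion_scores.csv")  # placeholder file name

# First-pass MOS per video from all ratings.
mos = ratings.groupby("video_id")["score"].mean()

# Keep only subjects whose ratings correlate reasonably with the crowd consensus.
def subject_consistency(group: pd.DataFrame) -> float:
    return group["score"].corr(group["video_id"].map(mos))

consistency = ratings.groupby("subject_id").apply(subject_consistency)
kept_subjects = consistency[consistency > 0.5].index   # hypothetical threshold
screened = ratings[ratings["subject_id"].isin(kept_subjects)]

# Final MOS from the retained subjects only.
final_mos = screened.groupby("video_id")["score"].mean()
print(final_mos.describe())
```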