81 research outputs found

    TAPCHA: An Invisible CAPTCHA Scheme

    TAPCHA is a universal CAPTCHA scheme designed for touch-enabled smart devices such as smartphones, tablets and smartwatches. The main difference between TAPCHA and other CAPTCHA schemes is that TAPCHA retains its security by making the CAPTCHA test ‘invisible’ to bots. It then uses context effects to keep the instruction readable for human users, which in turn preserves the usability of the scheme. Two reference designs, TAPCHA SHAPE & SHADE and TAPCHA MULTI, are developed to demonstrate the use of this scheme.
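
    As a rough illustration of the ‘invisible instruction’ idea, the sketch below renders a hypothetical tap instruction as a lightly degraded image, so the page carries no machine-readable copy of the instruction while the full-sentence context helps humans read it. The rendering choices (default font, random occluding shapes, mild blur) are assumptions for illustration, not TAPCHA's reference designs.

    from PIL import Image, ImageDraw, ImageFilter, ImageFont
    import random

    def render_instruction(text: str, size=(400, 120)) -> Image.Image:
        """Render the instruction as an image only, with mild visual noise."""
        img = Image.new("RGB", size, "white")
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()
        # The instruction exists only as pixels; the surrounding sentence (context)
        # helps humans read it even when characters are partially obscured.
        draw.text((10, 40), text, fill="black", font=font)
        # Partially occlude the text with random shapes ('shape and shade').
        for _ in range(30):
            x, y = random.randint(0, size[0]), random.randint(0, size[1])
            draw.ellipse([x, y, x + 8, y + 8], outline="gray")
        return img.filter(ImageFilter.GaussianBlur(0.5))

    challenge = render_instruction("Tap the screen twice, then swipe left")
    challenge.save("tapcha_instruction.png")
    # The server stores only the expected touch gesture; the markup sent to the
    # client never contains the instruction as text.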

    BlogForever: D2.5 Weblog Spam Filtering Report and Associated Methodology

    This report is written as a first attempt to define the BlogForever spam detection strategy. It comprises a survey of weblog spam technology and approaches to its detection. While the report was written to help identify possible approaches to spam detection as a component within the BlogForever software, the discussion has been extended to include observations related to the historical, social and practical value of spam, and proposals for other ways of dealing with spam within the repository without necessarily removing it. It contains a general overview of spam types, ready-made anti-spam APIs available for weblogs, possible methods that have been suggested for preventing the introduction of spam into a blog, and research related to spam, focusing on spam that appears in the weblog context, concluding with a proposal for a spam detection workflow that might form the basis of the spam detection component of the BlogForever software.
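
    The workflow proposed in the report is not reproduced here, but a minimal sketch of a weblog spam-detection pipeline of the same general shape (cheap heuristics first, then a trained text classifier) might look as follows; the heuristic, the toy corpus and the threshold are illustrative assumptions.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def heuristic_flags(comment: str) -> bool:
        """Fast rule-based check that catches obvious spam before classification."""
        return comment.lower().count("http") > 3   # link-stuffed comments

    # Toy corpus standing in for a labelled weblog comment dataset.
    comments = ["Great post, thanks for sharing!",
                "Buy cheap meds at http://example.com now",
                "I disagree with your second point.",
                "Win money fast, click http://example.org"]
    labels = [0, 1, 0, 1]   # 1 = spam

    classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
    classifier.fit(comments, labels)

    def is_spam(comment: str) -> bool:
        if heuristic_flags(comment):                     # stage 1: rules
            return True
        return bool(classifier.predict([comment])[0])    # stage 2: classifier

    print(is_spam("Cheap pills http://a http://b http://c http://d"))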

    Using machine learning to identify common flaws in CAPTCHA design: FunCAPTCHA case analysis

    Human Interactive Proofs (HIPs, or CAPTCHAs) have become a first-level security measure on the Internet to avoid automatic attacks or minimize their effects. All the most widespread, successful or interesting CAPTCHA designs put to scrutiny have been successfully broken, and many of these attacks have been side-channel attacks. New designs are proposed to tackle these security problems while improving the human interface. FunCAPTCHA is the first commercial implementation of a gender-classification CAPTCHA, with reported improvements in conversion rates. This article finds weaknesses in the security of FunCAPTCHA and uses simple machine learning (ML) analysis to test them. It presents a side-channel attack that leverages these flaws and successfully solves FunCAPTCHA on 90% of occasions without using meaningful image analysis. This simple yet effective security analysis can be applied, with minor modifications, to other HIP proposals, allowing one to check whether they leak enough information to enable similar side-channel attacks.
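
    The concrete features that FunCAPTCHA leaked are not listed in this abstract, so the sketch below only illustrates the general shape of such a side-channel analysis: train a simple ML model on challenge metadata (here, made-up byte sizes and candidate positions) and check whether it predicts the answer far above chance without examining image content.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    # Hypothetical harvested challenges: [response_byte_size, candidate_position]
    X = rng.normal(loc=[2000.0, 3.0], scale=[150.0, 1.5], size=(500, 2))
    # Stand-in ground truth: in this toy setup the answer correlates with size.
    y = (X[:, 0] > 2000).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    # Accuracy well above chance means the metadata alone gives away the answer,
    # which is exactly the kind of leak a side-channel attack exploits.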

    PROCESS OPTIMIZATION AND AUTOMATION IN E-COMMERCE BUSINESS OPERATION

    Mister Sandman is an e-commerce start-up located in the heart of Berlin, Germany. It is an online mattress and bedding company, selling products both through its own shop and on 17 other marketplaces across Europe. I successfully completed a six-month internship with the company, and the experience of working and learning there was informative, interesting, and valuable on every level. I was entrusted with various projects and tasks and actively worked on data collection, cleaning, manipulation, preprocessing, visualization, analysis, and automation of tasks across the company's e-commerce platform and marketplaces. At the beginning, I was trained to understand the end-to-end workings of day-to-day operations. My goals and areas of contribution were laid out clearly, which gave me focus and a clear vision. I then drew on my university knowledge and past work experience at Amazon to support the team efficiently. I analysed pricing, rebates, shipping, ratings and reviews, inventories, visibility, and orders and sales, and worked to optimize and automate these processes using Python. Depending on the requirements, I also learnt and used other technical skills and tools alongside Python, such as SQL, macros, Tableau, and Power BI. In addition, I produced weekly and monthly orders and sales reports using various analysis and visualization tools, and helped identify development and improvement areas for the business to grow and continue serving customers in the best way.
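
    As one small example of the kind of reporting automation described, the sketch below aggregates a toy orders table into a weekly orders-and-sales summary per marketplace with pandas; the column names and figures are assumptions, not the company's data.

    import pandas as pd

    # Toy stand-in for an orders export; real column names will differ.
    orders = pd.DataFrame({
        "order_id": [1001, 1002, 1003, 1004],
        "order_date": pd.to_datetime(["2023-05-01", "2023-05-03",
                                      "2023-05-09", "2023-05-10"]),
        "marketplace": ["own shop", "Amazon DE", "Amazon DE", "own shop"],
        "order_total": [299.0, 349.0, 499.0, 259.0],
    })

    weekly = (orders
              .assign(week=orders["order_date"].dt.to_period("W").astype(str))
              .groupby(["week", "marketplace"], as_index=False)
              .agg(orders=("order_id", "count"),
                   revenue=("order_total", "sum")))

    print(weekly)   # in practice exported to Excel, Tableau or Power BI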

    A case study of the robustness and the usability of CAPTCHA

    Websites and network applications have experienced explosive growth in the past two decades. As smartphones and mobile communication networks have evolved, the smartphone user experience has improved considerably and more and more people prefer to use smartphones. However, this technical development does not only improve the user experience; it also brings threats of cracking and poses potential threats to website security. As a result, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has become one of the methods used to impede spamming attacks. As the definition of CAPTCHA indicates, a CAPTCHA should be easily recognized by humans while not being recognizable by computers. These two attributes can be considered as usability and robustness. Some CAPTCHAs are difficult for computers to recognize, but humans may also find them difficult to recognize. The purpose of this thesis is therefore to find the balance between the usability and the robustness of CAPTCHA. Related research on the usability and the robustness of CAPTCHA is reviewed, and the process of automatic CAPTCHA recognition is worked out and implemented by the author. The implementation is based on existing algorithms and a case study. The findings are factors for improving CAPTCHA robustness, derived from each step of a specific automatic CAPTCHA recognition process. These factors are then compared with the issues identified in the related usability research. The discussion derives some possible ways, such as adding confusing characters and increasing the data's diversity, to improve robustness while preserving usability, according to the derived factors.
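
    To make the recognition process concrete, the sketch below runs the usual preprocess-segment-classify pipeline on a synthetically rendered, noise-free string using naive template matching; real CAPTCHAs require stronger preprocessing and classifiers, and this is not the thesis's implementation.

    from PIL import Image, ImageDraw, ImageFont
    import numpy as np

    FONT = ImageFont.load_default()

    def render(text: str) -> np.ndarray:
        """Render text with generous spacing (a stand-in for a real CAPTCHA image)."""
        img = Image.new("L", (20 * len(text), 20), 255)
        draw = ImageDraw.Draw(img)
        for i, ch in enumerate(text):
            draw.text((20 * i + 5, 4), ch, fill=0, font=FONT)
        return np.array(img)

    def binarize(img: np.ndarray) -> np.ndarray:
        return (img < 128).astype(np.uint8)        # preprocessing: thresholding

    def segment(binary: np.ndarray) -> list:
        """Split characters wherever the column projection drops to zero."""
        cols = binary.sum(axis=0)
        chars, start = [], None
        for x, c in enumerate(cols):
            if c and start is None:
                start = x
            elif not c and start is not None:
                chars.append(binary[:, start:x])
                start = None
        if start is not None:
            chars.append(binary[:, start:])
        return chars

    # Classification: nearest template by pixel difference.
    templates = {d: segment(binarize(render(d)))[0] for d in "0123456789"}

    def classify(char: np.ndarray) -> str:
        def diff(t):
            w = min(char.shape[1], t.shape[1])
            return np.abs(char[:, :w].astype(int) - t[:, :w].astype(int)).sum()
        return min(templates, key=lambda d: diff(templates[d]))

    captcha = binarize(render("2718"))
    print("".join(classify(c) for c in segment(captcha)))   # expected: 2718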

    Addressing the new generation of spam (Spam 2.0) through Web usage models

    New Internet collaborative media introduce new ways of communicating that are not immune to abuse. A fake eye-catching profile in social networking websites, a promotional review, a response to a thread in online forums with unsolicited content or a manipulated Wiki page are examples of the new generation of spam on the web, referred to as Web 2.0 Spam or Spam 2.0. Spam 2.0 is defined as the propagation of unsolicited, anonymous, mass content to infiltrate legitimate Web 2.0 applications. The current literature does not address Spam 2.0 in depth and the outcome of efforts to date is inadequate. The aim of this research is to formalise a definition for Spam 2.0 and provide Spam 2.0 filtering solutions. Early detection, extendibility, robustness and adaptability are key factors in the design of the proposed method. This dissertation provides a comprehensive survey of state-of-the-art web spam and Spam 2.0 filtering methods to highlight the unresolved issues and open problems, while at the same time effectively capturing the knowledge in the domain of spam filtering. This dissertation proposes three solutions in the area of Spam 2.0 filtering: (1) characterising and profiling Spam 2.0, (2) an Early-Detection based Spam 2.0 Filtering (EDSF) approach, and (3) an On-the-Fly Spam 2.0 Filtering (OFSF) approach. All the proposed solutions are tested against real-world datasets and their performance is compared with that of existing Spam 2.0 filtering methods. This work has coined the term ‘Spam 2.0’, provided insight into the nature of Spam 2.0, and proposed filtering mechanisms to address this new and rapidly evolving problem.
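
    A minimal sketch of the web-usage idea behind such filters is given below: classify a visitor from behavioural features of the session (how quickly it posts, how it navigates) rather than from the submitted content. The three features and the tiny training set are illustrative assumptions, not the EDSF or OFSF models.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each session: [seconds_until_first_post, pages_viewed, avg_seconds_per_page]
    human_sessions = np.array([[120, 8, 35], [300, 15, 60], [90, 5, 40]])
    bot_sessions = np.array([[2, 1, 1], [1, 1, 2], [3, 2, 1]])

    X = np.vstack([human_sessions, bot_sessions])
    y = np.array([0, 0, 0, 1, 1, 1])          # 1 = automated (Spam 2.0) session

    model = RandomForestClassifier(random_state=0).fit(X, y)

    new_session = np.array([[2, 1, 1]])       # posts almost immediately, views one page
    print("spam bot" if model.predict(new_session)[0] else "human")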

    Selected Computing Research Papers Volume 1 June 2012

    An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku), p. 1
    A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown), p. 7
    An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns), p. 13
    An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine), p. 19
    A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains), p. 29
    An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh), p. 39
    An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock), p. 45
    An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh), p. 51
    A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig), p. 5

    Hardening Tor Hidden Services

    Tor is an overlay anonymization network that provides anonymity for clients surfing the web but also allows hosting anonymous services called hidden services. These enable whistleblowers and political activists to express their opinions and resist censorship. Administering a hidden service is not trivial and requires extensive knowledge, because Tor uses a comprehensive protocol and relies on volunteers, while attackers can spend significant resources to decloak hidden services. This thesis aims to improve the security of hidden services by providing practical guidelines and a theoretical architecture. First, vulnerabilities specific to hidden services are analyzed by conducting an academic literature review. To model realistic real-world attackers, court documents are analyzed to determine their procedures. Both reviews classify the identified vulnerabilities into general categories. Afterwards, a risk assessment process is introduced, and the existing risks for hidden services and their operators are determined. The main contributions of this thesis are practical guidelines for hidden service operators and a theoretical architecture. The former provide operators with a good overview of practices to mitigate attacks. The latter is a comprehensive infrastructure that significantly increases the security of hidden services and alleviates problems in the Tor protocol. Afterwards, limitations and the transfer into practice are analyzed. Finally, future research possibilities are identified.
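
    One widely cited operational guideline of this kind (whether it appears among the thesis's guidelines is not stated in the abstract) is that the service behind a hidden service should listen only on the loopback interface, so it cannot be reached and fingerprinted directly over the public Internet. A rough self-check might look as follows; the port and the placeholder public address are assumptions.

    import socket

    def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    PORT = 8080                    # port the hidden service forwards to (assumed)
    PUBLIC_IP = "203.0.113.10"     # placeholder for the host's public address

    if reachable(PUBLIC_IP, PORT):
        print("WARNING: backend answers on its public address and can be probed directly")
    elif reachable("127.0.0.1", PORT):
        print("OK: backend is only reachable via the loopback interface")
    else:
        print("backend not reachable on this port from this host")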