TAPCHA: An Invisible CAPTCHA Scheme
TAPCHA is a universal CAPTCHA scheme designed for touch-enabled smart devices such as
smartphones, tablets and smartwatches. The main difference between TAPCHA and other
CAPTCHA schemes is that TAPCHA retains its security by making the CAPTCHA test ‘invisible’ to
the bot. It then utilises context effects to maintain the readability of the instruction for human users,
which ultimately guarantees the usability of the scheme. Two reference designs, TAPCHA
SHAPE & SHADE and TAPCHA MULTI, are developed to demonstrate the use of this scheme
BlogForever: D2.5 Weblog Spam Filtering Report and Associated Methodology
This report is written as a first attempt to define the BlogForever spam detection strategy. It comprises a survey of weblog spam technology and approaches to its detection. While the report was written to help identify possible approaches to spam detection as a component within the BlogForever software, the discussion has been extended to include observations on the historical, social and practical value of spam, and proposals for other ways of dealing with spam within the repository without necessarily removing it. It contains a general overview of spam types, ready-made anti-spam APIs available for weblogs, possible methods that have been suggested for preventing the introduction of spam into a blog, and research on spam focusing on the kinds that appear in the weblog context, concluding with a proposal for a spam detection workflow that might form the basis of the spam detection component of the BlogForever software
Using machine learning to identify common flaws in CAPTCHA design: FunCAPTCHA case analysis
Human Interactive Proofs (HIPs, or CAPTCHAs) have become a first-level security measure on the Internet to avoid automatic attacks or minimize their effects. All the most widespread, successful or interesting CAPTCHA designs put to scrutiny have been successfully broken, many of them by side-channel attacks. New designs are proposed to tackle these security problems while improving the human interface. FunCAPTCHA is the first commercial implementation of a gender classification CAPTCHA, with reported improvements in conversion rates. This article finds weaknesses in the security of FunCAPTCHA and uses simple machine learning (ML) analysis to test them. It shows a side-channel attack that leverages these flaws and successfully solves FunCAPTCHA on 90% of occasions without using meaningful image analysis. This simple yet effective security analysis can be applied, with minor modifications, to other HIP proposals, allowing one to check whether they leak enough information to enable simple side-channel attacks
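The kind of side-channel attack described here — solving the CAPTCHA without meaningful image analysis — can be illustrated with a simulation. Below, a hypothetical non-image feature (say, a response size) correlates with the correct option; the feature, the distributions and the leak magnitude are all invented for illustration and are not FunCAPTCHA's actual flaw:

```python
import random

random.seed(42)

def make_round(n_options=8):
    """Simulate one CAPTCHA round. Each option carries a measurable
    side-channel feature (e.g., a response size); hypothetically, the
    correct option's feature is drawn from a slightly shifted
    distribution -- that shift is the leak."""
    correct = random.randrange(n_options)
    features = []
    for i in range(n_options):
        value = random.gauss(100, 5)
        if i == correct:
            value += 12  # invented leak magnitude, for illustration
        features.append(value)
    return features, correct

def side_channel_guess(features):
    # No image analysis at all: pick the option whose leaked
    # feature value is largest.
    return max(range(len(features)), key=lambda i: features[i])

rounds = [make_round() for _ in range(1000)]
hits = sum(side_channel_guess(f) == c for f, c in rounds)
rate = hits / len(rounds)
print(f"success rate: {rate:.2f}")  # well above the 1/8 random baseline
```

Even a weak statistical leak like this defeats the scheme entirely, which is why the article argues that HIP designs should be checked for information leakage before deployment.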
Process Optimization and Automation in E-Commerce Business Operation
Mister Sandman is an e-commerce start-up located in the heart of Berlin, Germany. It is an online mattress and bedding company, selling products both through its own shop and on 17 other marketplaces across Europe. I successfully completed a six-month internship with the company. It was a great experience to work and learn there, and the journey was informative, interesting, and important on all scales.
I was entrusted with various projects and tasks, and actively worked on data collection, cleaning, manipulation, preprocessing, visualization, analysis, and the automation of various tasks across the company's e-commerce platform and marketplaces. In the beginning, I was trained to understand the end-to-end workings of day-to-day operations. My goals and areas of contribution were clearly set out for me, which gave me focus and a clear vision. I then applied my knowledge from university and my past work experience at Amazon to support the team efficiently.
I analysed pricing, rebates, shipping, ratings and reviews, inventories, visibility, and orders and sales, and worked to optimize and automate these processes using various Python techniques. Depending on the requirements, I also learnt and used other technical skills and tools alongside Python, such as SQL, macros, Tableau, and Power BI. I also produced weekly and monthly orders-and-sales reports using various analysis and visualization tools, and helped identify development and improvement areas so that the business can grow and continue serving our customers in the best way
A case study of the robustness and the usability of CAPTCHA
Websites and network applications have experienced explosive growth over the past two decades. As smartphones and mobile communication networks have evolved, the smartphone user experience has improved considerably, and more and more people prefer to use smartphones. However, technical progress does not only improve the user experience; it also brings new threats of cracking and poses potential risks to website security. As a result, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has become one of the methods used to impede spamming attacks.
As the definition of CAPTCHA indicates, a CAPTCHA should be easily recognized by humans but not by computers. These two attributes can be regarded as usability and robustness. Some CAPTCHAs are difficult for computers to recognize, but humans may also find them difficult. The purpose of this thesis is therefore to find the balance between the usability and robustness of CAPTCHA. To that end, related research on both attributes is reviewed, and the process of automatic CAPTCHA recognition is worked out and implemented by the author, based on existing algorithms and a case study.
The findings are factors for improving the robustness of CAPTCHA, derived from each step of a specific automatic recognition process. These factors are then compared with issues identified in the related usability research. The discussion derives possible ways, such as adding confusing characters and increasing the diversity of the data, to improve robustness while preserving usability
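The automatic recognition process such a study implements can be sketched on a toy scale as a preprocess → segment → recognise pipeline. The ASCII "glyphs", the blank-column segmentation rule and the template scores below are illustrative stand-ins, not the thesis's actual implementation:

```python
# Toy pipeline: binarise -> segment -> recognise. Characters are 3x3
# ASCII glyphs; a real attack would operate on pixel bitmaps.
TEMPLATES = {
    "I": ["#",
          "#",
          "#"],
    "7": ["###",
          "  #",
          "  #"],
    "L": ["#  ",
          "#  ",
          "###"],
}

def binarise(grid, ink="#"):
    # Thresholding step: everything that is not ink becomes background.
    return [[1 if ch == ink else 0 for ch in row] for row in grid]

def segment(bitmap):
    # Split the image on fully blank columns -- the simplest
    # segmentation rule, defeated by overlapping characters.
    segments, current = [], []
    for col in range(len(bitmap[0])):
        if any(row[col] for row in bitmap):
            current.append(col)
        elif current:
            segments.append((current[0], current[-1]))
            current = []
    if current:
        segments.append((current[0], current[-1]))
    return [[row[a:b + 1] for row in bitmap] for a, b in segments]

def recognise(glyph):
    # Nearest template by per-cell agreement, padding both shapes
    # to a common width before comparing.
    def score(name):
        tmpl = binarise(TEMPLATES[name])
        width = max(len(tmpl[0]), len(glyph[0]))
        pad = lambda rows: [(r + [0] * width)[:width] for r in rows]
        g, t = pad(glyph), pad(tmpl)
        cells = [g[r][c] == t[r][c]
                 for r in range(len(t)) for c in range(width)]
        return sum(cells) / len(cells)
    return max(TEMPLATES, key=score)

captcha = ["### # #  ",
           "  # # #  ",
           "  # # ###"]
decoded = "".join(recognise(g) for g in segment(binarise(captcha)))
print(decoded)  # -> 7IL
```

Each stage corresponds to a robustness factor: making thresholding, segmentation or template matching harder (e.g., by overlapping characters or diversifying glyph shapes) raises the cost of an automatic attack.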
Addressing the new generation of spam (Spam 2.0) through Web usage models
New Internet collaborative media introduce new ways of communicating that are not immune to abuse. A fake eye-catching profile on a social networking website, a promotional review, a response to a thread in an online forum with unsolicited content, or a manipulated Wiki page are examples of the new generation of spam on the web, referred to as Web 2.0 Spam or Spam 2.0. Spam 2.0 is defined as the propagation of unsolicited, anonymous, mass content to infiltrate legitimate Web 2.0 applications. The current literature does not address Spam 2.0 in depth, and the outcome of efforts to date is inadequate. The aim of this research is to formalise a definition of Spam 2.0 and provide Spam 2.0 filtering solutions; early detection, extendibility, robustness and adaptability are key factors in the design of the proposed method. This dissertation provides a comprehensive survey of state-of-the-art web spam and Spam 2.0 filtering methods to highlight the unresolved issues and open problems, while at the same time effectively capturing the knowledge in the domain of spam filtering. It proposes three solutions in the area of Spam 2.0 filtering: (1) characterising and profiling Spam 2.0, (2) an Early-Detection based Spam 2.0 Filtering (EDSF) approach, and (3) an On-the-Fly Spam 2.0 Filtering (OFSF) approach. All the proposed solutions are tested against real-world datasets and their performance is compared with that of existing Spam 2.0 filtering methods. This work has coined the term ‘Spam 2.0’, provided insight into the nature of Spam 2.0, and proposed filtering mechanisms to address this new and rapidly evolving problem
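The early-detection idea — judging a submission by *how* it was made rather than by what it says — can be illustrated with a toy usage-based classifier. The features and thresholds below are illustrative assumptions, not the EDSF method itself:

```python
# Minimal sketch of usage-based spam detection: classify a web session
# from its navigation behaviour, not its posted content.
def usage_features(session):
    """session: chronological list of (event, timestamp_seconds) pairs."""
    events = [e for e, _ in session]
    times = [t for _, t in session]
    return {
        "duration": times[-1] - times[0],           # bots finish fast
        "page_views": events.count("page_view"),    # bots skip browsing
        "mouse_moves": events.count("mouse_move"),  # bots rarely move a mouse
    }

def is_spam_session(session, votes_needed=2):
    # Each illustrative rule casts one vote; thresholds are invented.
    f = usage_features(session)
    score = 0
    if f["duration"] < 5:
        score += 1
    if f["page_views"] <= 1:
        score += 1
    if f["mouse_moves"] == 0:
        score += 1
    return score >= votes_needed

bot = [("page_view", 0.0), ("form_submit", 1.2)]
human = [("page_view", 0.0), ("mouse_move", 3.0), ("page_view", 8.0),
         ("mouse_move", 15.0), ("form_submit", 41.0)]
print(is_spam_session(bot), is_spam_session(human))  # -> True False
```

Because such features are available before any content is posted, a filter of this shape can reject a spammer at submission time, which is the "early detection" property the dissertation emphasises.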
Selected Computing Research Papers Volume 1 June 2012
An Evaluation of Anti-phishing Solutions (Arinze Bona Umeaku) ................................. 1
A Detailed Analysis of Current Biometric Research Aimed at Improving Online Authentication Systems (Daniel Brown) ................................. 7
An Evaluation of Current Intrusion Detection Systems Research (Gavin Alexander Burns) ................................. 13
An Analysis of Current Research on Quantum Key Distribution (Mark Lorraine) ................................. 19
A Critical Review of Current Distributed Denial of Service Prevention Methodologies (Paul Mains) ................................. 29
An Evaluation of Current Computing Methodologies Aimed at Improving the Prevention of SQL Injection Attacks in Web Based Applications (Niall Marsh) ................................. 39
An Evaluation of Proposals to Detect Cheating in Multiplayer Online Games (Bradley Peacock) ................................. 45
An Empirical Study of Security Techniques Used In Online Banking (Rajinder D G Singh) ................................. 51
A Critical Study on Proposed Firewall Implementation Methods in Modern Networks (Loghin Tivig) ................................. 5
Hardening Tor Hidden Services
Tor is an overlay anonymization network that provides anonymity for clients surfing the web, but also allows hosting anonymous services called hidden services. These enable whistleblowers and political activists to express their opinions and resist censorship. Administering a hidden service is not trivial and requires extensive knowledge, because Tor uses a comprehensive protocol and relies on volunteers. Meanwhile, attackers can spend significant resources to decloak them. This thesis aims to improve the security of hidden services by providing practical guidelines and a theoretical architecture. First, vulnerabilities specific to hidden services are analyzed through an academic literature review. To model realistic real-world attackers, court documents are analyzed to determine their procedures. Both reviews classify the identified vulnerabilities into general categories.
Afterward, a risk assessment process is introduced, and existing risks for hidden services and their operators are determined. The main contributions of this thesis are practical guidelines for hidden service operators and a theoretical architecture. The former provides operators with a good overview of practices to mitigate attacks. The latter is a comprehensive infrastructure that significantly increases the security of hidden services and alleviates problems in the Tor protocol. Afterward, limitations and the transfer into practice are analyzed. Finally, future research possibilities are determined
MapReduce based RDF assisted distributed SVM for high throughput spam filtering
This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. Electronic mail has become deeply embedded in our everyday lives: billions of legitimate emails are sent on a daily basis. The widely established underlying infrastructure, its widespread availability and its ease of use have all acted as catalysts for such pervasive proliferation. Unfortunately, the same can be said of unsolicited bulk email, or spam. Various methods, as well as enabling architectures, are available to try to mitigate spam permeation. In this respect, this dissertation complements existing survey work in the area by contributing an extensive literature review of traditional and emerging spam filtering approaches. Techniques, approaches and architectures employed for spam filtering are appraised, critically assessing their respective strengths and weaknesses.
Velocity, volume and variety are key characteristics of the spam challenge. MapReduce (M/R) has become increasingly popular as an Internet scale, data intensive processing platform. In the context of machine learning based spam filter training, support vector machine (SVM) based techniques have been proven effective. SVM training is however a computationally intensive process. In this dissertation, a M/R based distributed SVM algorithm for scalable spam filter training, designated MRSMO, is presented. By distributing and processing subsets of the training data across multiple participating computing nodes, the distributed SVM reduces spam filter training time significantly. To mitigate the accuracy degradation introduced by the adopted approach, a Resource Description Framework (RDF) based feedback loop is evaluated. Experimental results demonstrate that this improves the accuracy levels of the distributed SVM beyond the original sequential counterpart.
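The split/map/reduce structure of partitioned SVM training can be sketched with a Pegasos-style linear SVM trained per chunk and weight averaging as the reduce step. This is an illustrative stand-in for MRSMO (which builds on SMO, not SGD); all names, data and parameters here are assumptions:

```python
import random

random.seed(0)

def sgd_svm(chunk, dim, lam=0.01, epochs=20):
    """One 'map' task: train a linear SVM on a single data partition
    with Pegasos-style stochastic gradient descent."""
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in chunk:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # Regularisation shrink plus a hinge-loss step when the
            # example violates the margin.
            w = [(1 - eta * lam) * wi + (eta * y * xi if margin < 1 else 0.0)
                 for wi, xi in zip(w, x)]
    return w

def reduce_weights(models):
    """The 'reduce' step: average the per-partition weight vectors."""
    return [sum(m[i] for m in models) / len(models)
            for i in range(len(models[0]))]

# Toy linearly separable data: label is the sign of the first feature.
points = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
data = [(p, 1 if p[0] > 0 else -1) for p in points]

n_parts = 4
chunks = [data[i::n_parts] for i in range(n_parts)]  # split the training set
models = [sgd_svm(c, dim=2) for c in chunks]         # map: one SVM per chunk
w = reduce_weights(models)                           # reduce: merge models

acc = sum((sum(wi * xi for wi, xi in zip(w, x)) > 0) == (y > 0)
          for x, y in data) / len(data)
print(f"averaged-model accuracy: {acc:.2f}")
```

The per-chunk training is what makes the scheme scale (each map task sees only its subset), and the merged model's residual accuracy loss is the degradation that the dissertation's RDF feedback loop is designed to recover.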
Effectively exploiting large scale, ‘Cloud’ based, heterogeneous processing capabilities for M/R in what can be considered a non-deterministic environment requires the consideration of a number of perspectives. In this work, gSched, a Hadoop M/R based, heterogeneous aware task to node matching and allocation scheme is designed. Using MRSMO as a baseline, experimental evaluation indicates that gSched improves on the performance of the out-of-the box Hadoop counterpart in a typical Cloud based infrastructure.
The focal contribution to knowledge is a scalable, heterogeneous infrastructure and machine learning based spam filtering scheme, able to capitalize on collaborative accuracy improvements through RDF based end-user feedback