
    AMP: A Science-driven Web-based Application for the TeraGrid

    The Asteroseismic Modeling Portal (AMP) provides a web-based interface for astronomers to run and view simulations that derive the properties of Sun-like stars from observations of their pulsation frequencies. In this paper, we describe the architecture and implementation of AMP, highlighting the lightweight design principles and tools used to produce a functional, fully custom web-based science application in less than a year. Targeted as a TeraGrid science gateway, AMP's architecture and implementation are intended to simplify its orchestration of TeraGrid computational resources. AMP's web-based interface was developed as a traditional standalone database-backed web application using the Python-based Django web development framework, allowing us to leverage the framework's capabilities while cleanly separating the user-interface development from the grid-interface development. We have found this combination of tools flexible and effective for rapid gateway development and deployment. Comment: 7 pages, 2 figures, in Proceedings of the 5th Grid Computing Environments Workshop
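
    As a hedged illustration of the separation the abstract describes, the following sketch shows a hypothetical Django model that records simulation requests; the web tier only reads and writes this table, while a separate grid-interface process polls it and drives the TeraGrid submission. Model and field names are illustrative, not AMP's actual schema.

    from django.db import models

    class SimulationRequest(models.Model):
        """One stellar-model run requested through the web interface."""
        STATUS_CHOICES = [("queued", "Queued"), ("running", "Running"), ("done", "Done")]
        frequencies_file = models.FileField(upload_to="observations/")  # observed pulsation frequencies
        status = models.CharField(max_length=16, choices=STATUS_CHOICES, default="queued")
        submitted_at = models.DateTimeField(auto_now_add=True)

    A daemon outside Django would then claim "queued" rows and handle the grid submission, which is the kind of decoupling that keeps user-interface code separate from grid-interface code.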

    Hyp3rArmor: reducing web application exposure to automated attacks

    Web applications (webapps) are constantly subjected to automated, opportunistic attacks from autonomous robots (bots) engaged in reconnaissance to discover victims that may be vulnerable to specific exploits. This is typical behavior in botnet recruitment, worm propagation, large-scale fingerprinting, and vulnerability scanning. Most anti-bot techniques are deployed at the application layer, thus leaving the network stack of the webapp’s server exposed. In this paper we present a mechanism called Hyp3rArmor that addresses this vulnerability by minimizing the webapp’s attack surface exposed to automated opportunistic attackers, for JavaScript-enabled web browser clients. Our solution uses port knocking to eliminate the webapp’s visible network footprint. Clients of the webapp are directed to a visible static web server to obtain JavaScript that authenticates the client to the webapp server (using port knocking) before making any requests to the webapp. Our implementation of Hyp3rArmor, which is compatible with all webapp architectures, has been deployed and used to defend single- and multi-page websites on the Internet for 114 days. During this period the static web server observed 964 attempted attacks that were deflected from the webapp, which was accessed only by authenticated clients. Our evaluation shows that in most cases client-side overheads were negligible and that server-side overheads were minimal. Hyp3rArmor is ideal for critical systems and legacy applications that must be accessible on the Internet. Additionally, Hyp3rArmor is composable with other security tools, adding another layer to a defense-in-depth approach. This work has been supported by the National Science Foundation (NSF) awards #1430145, #1414119, and #1012798.
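
    To make the port-knocking idea concrete, here is a minimal Python sketch of a client-side knock. The paper delivers this logic as JavaScript served from the static web server; the port sequence, hostname, and TCP-connect knock style below are illustrative assumptions.

    import socket

    KNOCK_SEQUENCE = [7000, 8000, 9000]  # assumed secret port sequence

    def knock(host: str) -> None:
        # One connection attempt (a SYN) to each port in order; the server's
        # firewall watches for this sequence and then opens the webapp port
        # for this client's IP address only.
        for port in KNOCK_SEQUENCE:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(0.5)
            try:
                s.connect_ex((host, port))
            finally:
                s.close()

    knock("webapp.example.org")  # hypothetical host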

    Incorporating Privacy and Security Features in an Open Source Search Engine

    The aim of this project was to explore and implement various privacy and security features in an open-source search engine and enhance the security and privacy capabilities of Yioop. Yioop, an open-source PHP search engine released under the GPLv3 license, is designed and developed by Dr. Chris Pollett. We have enabled crawling, searching, and indexing of hidden services, giving Yioop access to the Tor network. We have extended Yioop's previously supported text CAPTCHA functionality by implementing a hash CAPTCHA and providing the ability to toggle between the two. To enable the user to log in to his or her Yioop account without sharing the password over the network, we have incorporated zero-knowledge authentication, in which Yioop stores not the user's real password but a numerical value derived from it.
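
    As a hedged sketch of what zero-knowledge authentication involves, the following is a toy Fiat-Shamir identification round in Python, not Yioop's actual protocol; the modulus is deliberately tiny for readability, and a real deployment would use vetted parameters and libraries.

    import secrets

    n = 3233                        # toy modulus (61 * 53); real systems use large RSA-style moduli
    secret = 123                    # would be derived from the user's password
    public = pow(secret, 2, n)      # the server stores only this value, never the password

    def prove_round() -> bool:
        r = secrets.randbelow(n - 1) + 1
        commitment = pow(r, 2, n)                        # prover -> verifier
        challenge = secrets.randbelow(2)                 # verifier -> prover (0 or 1)
        response = (r * pow(secret, challenge, n)) % n   # prover -> verifier
        return pow(response, 2, n) == (commitment * pow(public, challenge, n)) % n

    # Repeating the round k times bounds an impostor's success probability at 2**-k.
    assert all(prove_round() for _ in range(20))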

    Enhancing Web Browsing Security

    Web browsing has become an integral part of our lives, and we use browsers to perform many important activities almost every day and everywhere. However, due to the vulnerabilities in Web browsers and Web applications, and also due to Web users' lack of security knowledge, browser-based attacks are rampant over the Internet and have caused substantial damage to both Web users and service providers. Enhancing Web browsing security is therefore of great need and importance. This dissertation concentrates on enhancing Web browsing security through exploring and experimenting with new approaches and software systems. Specifically, we have systematically studied four challenging Web browsing security problems: HTTP cookie management, phishing, insecure JavaScript practices, and browsing on untrusted public computers. We have proposed new approaches to address these problems, and built unique systems to validate our approaches. To manage HTTP cookies, we have proposed an approach to automatically validate the usefulness of HTTP cookies at the client side on behalf of users. By automatically removing useless cookies, our approach helps a user strike an appropriate balance between maximizing usability and minimizing security risks. To protect against phishing attacks, we have proposed an approach to transparently feed a relatively large number of bogus credentials into a suspected phishing site. Using those bogus credentials, our approach conceals victims' real credentials and enables a legitimate website to identify stolen credentials in a timely manner. To identify insecure JavaScript practices, we have proposed an execution-based measurement approach and performed a large-scale measurement study. Our work sheds light on insecure JavaScript practices and especially reveals the severity and nature of insecure JavaScript inclusion and dynamic generation practices on the Web. To achieve secure and convenient Web browsing on untrusted public computers, we have proposed a simple approach that enables an extended browser on a mobile device and a regular browser on a public computer to collaboratively support a Web session. A user can securely perform sensitive interactions on the mobile device and conveniently perform other browsing interactions on the public computer.
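
    The cookie-validation idea lends itself to a short sketch: drop one cookie at a time, replay the request, and flag cookies whose absence leaves the response unchanged. The exact-match comparison and the requests library below are simplifying assumptions, not the dissertation's implementation.

    import requests

    def useless_cookies(url: str, cookies: dict[str, str]) -> list[str]:
        baseline = requests.get(url, cookies=cookies).text
        useless = []
        for name in cookies:
            trimmed = {k: v for k, v in cookies.items() if k != name}
            page = requests.get(url, cookies=trimmed).text
            if page == baseline:      # response unchanged without this cookie
                useless.append(name)
        return useless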

    Drag and Drop Image CAPTCHA

    The massive and automated access to Web resources through robots has made it essential for Web service providers to determine whether a user is a human or a robot. A Human Interaction Proof (HIP), like the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), offers a way to make such a distinction. CAPTCHA is essentially a modern implementation of the Turing test, which carries out its job through a particular text-based, image-based, or audio-based challenge-response system. In this paper we present a new image-based CAPTCHA technique. The proposed technique offers all of the benefits of image-based CAPTCHAs, grants improved security over the usual text-based techniques, and at the same time improves the user-friendliness of the Web page. Further, the paper briefly reviews various other existing CAPTCHA techniques.
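
    A minimal server-side check for a drag-and-drop image CAPTCHA might look like the sketch below; the coordinates and tolerance are illustrative assumptions, not the paper's parameters.

    import math
    import secrets

    def new_challenge(width: int = 400, height: int = 300) -> tuple[int, int]:
        # Random target position where the draggable piece must be dropped.
        return secrets.randbelow(width), secrets.randbelow(height)

    def verify(drop: tuple[int, int], target: tuple[int, int], tol: float = 12.0) -> bool:
        # A human drops the piece near the target; a bot replaying arbitrary
        # coordinates will almost always miss.
        return math.dist(drop, target) <= tol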

    Addressing the new generation of spam (Spam 2.0) through Web usage models

    New Internet collaborative media introduce new ways of communicating that are not immune to abuse. A fake eye-catching profile in social networking websites, a promotional review, a response to a thread in online forums with unsolicited content, or a manipulated Wiki page are examples of the new generation of spam on the Web, referred to as Web 2.0 Spam or Spam 2.0. Spam 2.0 is defined as the propagation of unsolicited, anonymous, mass content to infiltrate legitimate Web 2.0 applications. The current literature does not address Spam 2.0 in depth, and the outcomes of efforts to date are inadequate. The aim of this research is to formalise a definition for Spam 2.0 and provide Spam 2.0 filtering solutions. Early detection, extendibility, robustness, and adaptability are key factors in the design of the proposed method. This dissertation provides a comprehensive survey of state-of-the-art web spam and Spam 2.0 filtering methods to highlight the unresolved issues and open problems, while at the same time effectively capturing the knowledge in the domain of spam filtering. This dissertation proposes three solutions in the area of Spam 2.0 filtering: (1) characterising and profiling Spam 2.0, (2) an Early-Detection based Spam 2.0 Filtering (EDSF) approach, and (3) an On-the-Fly Spam 2.0 Filtering (OFSF) approach. All the proposed solutions are tested against real-world datasets and their performance is compared with that of existing Spam 2.0 filtering methods. This work has coined the term ‘Spam 2.0’, provided insight into the nature of Spam 2.0, and proposed filtering mechanisms to address this new and rapidly evolving problem.
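
    The kind of Web-usage features such a filter relies on can be sketched briefly; the feature names and thresholds below are illustrative assumptions, not the dissertation's EDSF or OFSF parameters.

    from dataclasses import dataclass

    @dataclass
    class Session:
        requests_per_minute: float
        seconds_on_form: float
        mouse_events: int

    def looks_like_spam_bot(s: Session) -> bool:
        # Bots tend to submit forms almost instantly, generate few or no
        # mouse events, and sustain unusually high request rates.
        return s.requests_per_minute > 30 or (s.seconds_on_form < 2 and s.mouse_events == 0)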

    Interbank money transfer system for PC and mobile

    Applied project submitted to the Department of Computer Science, Ashesi University College, in partial fulfillment of the Bachelor of Science degree in Computer Science, [April] 2010. "Money makes the world go round": the importance of money has made it a very powerful tool man cannot live without. Thus, it is very important for people to have money when they need it. The mobile phone has developed drastically from a device just for making and receiving calls to one that can be used to take pictures, send emails, connect to the Internet, and do many other things that at one point in time were dedicated to specific devices such as computers. Mobile phone users in Ghana have increased over the years and are still increasing, with some people having more than one phone. The introduction of mobile phones that can access the Internet has given rise to applications developed solely for the mobile platform, as well as websites built to fit mobile constraints such as screen size, page size, image size, and processing power.

    A New Heuristic Based Phishing Detection Approach Utilizing Selenium Webdriver

    Phishing is a nontrivial problem involving deceptive emails and webpages that trick unsuspecting users into willingly revealing their confidential information. In this paper, we focus on detecting login phishing pages: pages that contain forms with email and password fields to allow for authorization to personal/restricted content. We present the design, implementation, and evaluation of our phishing detection tool “SeleniumPhishGuard”, a novel heuristic-based approach to detect phishing login pages. First, we discuss and evaluate the finest existing technologies and techniques that use similar heuristics. The methodology introduced in our paper identifies fraudulent websites by submitting incorrect credentials and analyzing the responses. We also propose a mechanism for analyzing the server's responses to those submissions to determine the legitimacy of a given website. The application was implemented in the Python programming language utilizing the Selenium web testing library, hence “Selenium” in the name of our tool. To test the application, a dataset from the Alexa top 500 and Phishtank was collected. All pages with login forms from the Alexa 500 and Phishtank were analyzed. The application works with any authentication technology based on an exchange of credentials. Our current prototype is developed for sites supporting both HTTP and HTTPS authentication and accepting an email and password pair as login credentials. Our algorithm is developed as a separate module which in the future can be integrated with browser plugins through an API. We also discuss the design and evaluation of several URL analysis techniques we utilized to reduce false positives and improve the overall performance. Our experiments show that SeleniumPhishGuard is excellent at detecting login phishing forms, correctly classifying approximately 96% of login phishing pages.
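
    The core heuristic translates naturally into Selenium code. The sketch below submits bogus credentials and inspects the response; the CSS selectors and the rejection test are simplifying assumptions rather than SeleniumPhishGuard's exact logic.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def probe_login_form(url: str) -> str:
        driver = webdriver.Firefox()
        try:
            driver.get(url)
            driver.find_element(By.CSS_SELECTOR, "input[type=email]").send_keys("bogus@example.com")
            driver.find_element(By.CSS_SELECTOR, "input[type=password]").send_keys("wrong-password-123")
            driver.find_element(By.CSS_SELECTOR, "form").submit()
            page = driver.page_source.lower()
            # A legitimate site rejects bogus credentials; a phishing page that
            # appears to accept anything is suspicious.
            return "legitimate" if ("incorrect" in page or "invalid" in page) else "suspicious"
        finally:
            driver.quit()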

    Using Visual Analytics to Discover Bot Traffic

    With the advance of technology, the Internet has become a medium used for many malicious activities. Bot traffic has increased greatly, causing significant problems for businesses and organisations; examples include spam bots, scraper bots, distributed denial-of-service bots, and adaptive bots that aim to exploit the vulnerabilities of a website. Discriminating bot traffic from legitimate flash crowds remains an open challenge to date. In order to address the above issues and enhance security awareness, this thesis proposes an interactive visual analytics system for discovering bot traffic. The system provides an interactive visualisation, with details-on-demand capabilities, which enables knowledge discovery from very large datasets. It enables an analyst to understand comprehensive details without being constrained by large datasets. The system has a dashboard view that represents legitimate and bot traffic by adopting a quadtree data structure and Voronoi diagrams. The main contribution of this thesis is a novel visual analytics system that is capable of discovering bot traffic. This research conducted a literature review in order to gain a systematic understanding of the research area. Furthermore, the research was conducted by utilising experiment and simulation approaches. The experiment was conducted by capturing website traffic, identifying browser fingerprints, simulating bot attacks, and analysing the mouse dynamics, such as movements and events, of participants. Data were captured as the participants performed a list of tasks, such as responding to the banner. The data collection is transparent to the participants and only requires JavaScript to be activated on the client side. This study involved 10 participants who are familiar with the Internet. To analyse the data, Weka 3.6.10 was used to perform classification based on a training dataset. The test dataset of all participants was evaluated using a built-in decision tree algorithm. The results of classifying the test dataset were promising, and the model was able to identify ten participants and six simulated bot attacks with an accuracy of 86.67%. Finally, the visual analytics design was formulated in order to assist an analyst in discovering bot presence.
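
    The classification step can be sketched with scikit-learn in place of Weka; the mouse-dynamics features (mean speed, click count, path curvature) and the tiny training set below are illustrative assumptions.

    from sklearn.tree import DecisionTreeClassifier

    X_train = [
        [0.42, 5, 0.31],   # human: moderate speed, curved mouse paths
        [0.38, 7, 0.27],
        [2.10, 1, 0.01],   # bot: fast, nearly straight-line movement
        [1.95, 0, 0.02],
    ]
    y_train = ["human", "human", "bot", "bot"]

    clf = DecisionTreeClassifier().fit(X_train, y_train)
    print(clf.predict([[0.40, 6, 0.30]]))  # -> ['human']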