
    Technological Impediments to B2C Electronic Commerce: An Update

    In 1999, Rose et al. identified six categories of technological impediments inhibiting the growth of electronic commerce: (1) download delays, (2) interface limitations, (3) search problems, (4) inadequate measures of Web application success, (5) security, and (6) a lack of Internet standards. This paper updates the findings of the original paper by surveying the practitioner literature for the five-year period from June 1999 to June 2004. We identify how advances in technology both partially resolve concerns with the original technological impediments and inhibit their full resolution. We find that, despite five years of technological progress, the six categories of technological impediments remain relevant. Furthermore, the maturation of e-Commerce has increased the Internet's complexity, making these impediments harder to address. Two kinds of complexity are especially relevant: evolutionary complexity and skill complexity. Evolutionary complexity refers to the need to preserve the existing Internet while resolving impediments simultaneously. Unfortunately, because the Internet consists of multiple incompatible technologies, philosophies, and attitudes, additions to the Internet infrastructure are difficult to integrate. Skill complexity refers to the skill sets necessary for managing e-Commerce change. As the Internet evolves, more skills become relevant. Unfortunately, individuals, companies, and organizations are unable to master and integrate all the necessary skills. As a result, new features added to the Internet do not consider all relevant factors and are thus sub-optimal.

    A Malware Analysis and Artifact Capture Tool

    Malware authors attempt to obfuscate and hide their execution objectives in their program's static and dynamic states. This paper provides a novel approach to aid analysis by introducing a malware analysis tool that is quicker to set up and use than existing tools. The tool allows for the interception and capture of malware artifacts while providing dynamic control of process flow. Capturing malware artifacts allows an analyst to understand malware behavior and obfuscation techniques more quickly and comprehensively, and doing so interactively allows multiple code paths to be explored. The faster malware can be analyzed, the sooner the systems and data it has compromised can be identified and its infection stopped. This research proposes an instantiation of an interactive malware analysis and artifact capture tool.
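    The tool itself is not reproduced in the abstract, so the following Python sketch is only a loose, hypothetical analogy for the capture-then-continue idea it describes (the names `run_sample`, `captured`, and `dropper` are invented for illustration): a system entry point is wrapped so that each artifact is recorded before execution is allowed to proceed.

```python
import builtins
import os
import tempfile

# Records (path, mode) for every file artifact the analyzed code touches.
captured = []

_real_open = builtins.open

def _hooked_open(path, mode="r", *args, **kwargs):
    # Intercept the call, log the artifact, then let execution continue,
    # mirroring the capture-then-continue flow the abstract describes.
    captured.append((str(path), mode))
    return _real_open(path, mode, *args, **kwargs)

def run_sample(sample, *sample_args):
    """Run an untrusted callable with open() hooked, restoring it afterwards."""
    builtins.open = _hooked_open
    try:
        return sample(*sample_args)
    finally:
        builtins.open = _real_open

# A toy "sample" that drops a file, a common malware artifact.
def dropper(path):
    with open(path, "w") as fh:
        fh.write("payload")

target = os.path.join(tempfile.gettempdir(), "artifact.txt")
run_sample(dropper, target)
print(captured)
```

    A real analysis tool would hook operating-system APIs rather than a Python builtin, but the shape is the same: the analyst sees the artifact list while the sample still runs to completion.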

    Enhancing Web Browsing Security

    Web browsing has become an integral part of our lives, and we use browsers to perform many important activities almost every day and everywhere. However, due to the vulnerabilities in Web browsers and Web applications, and due to Web users' lack of security knowledge, browser-based attacks are rampant over the Internet and have caused substantial damage to both Web users and service providers. Enhancing Web browsing security is therefore of great need and importance.

    This dissertation concentrates on enhancing Web browsing security by exploring and experimenting with new approaches and software systems. Specifically, we have systematically studied four challenging Web browsing security problems: HTTP cookie management, phishing, insecure JavaScript practices, and browsing on untrusted public computers. We have proposed new approaches to address these problems and built unique systems to validate our approaches.

    To manage HTTP cookies, we have proposed an approach to automatically validate the usefulness of HTTP cookies at the client side on behalf of users. By automatically removing useless cookies, our approach helps a user strike an appropriate balance between maximizing usability and minimizing security risks. To protect against phishing attacks, we have proposed an approach to transparently feed a relatively large number of bogus credentials into a suspected phishing site. Using those bogus credentials, our approach conceals victims' real credentials and enables a legitimate website to identify stolen credentials in a timely manner. To identify insecure JavaScript practices, we have proposed an execution-based measurement approach and performed a large-scale measurement study. Our work sheds light on insecure JavaScript practices and especially reveals the severity and nature of insecure JavaScript inclusion and dynamic generation practices on the Web. To achieve secure and convenient Web browsing on untrusted public computers, we have proposed a simple approach that enables an extended browser on a mobile device and a regular browser on a public computer to collaboratively support a Web session. A user can securely perform sensitive interactions on the mobile device and conveniently perform other browsing interactions on the public computer.
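    The dissertation's cookie-validation system is not specified in the abstract; as a hypothetical sketch of the underlying idea, assuming a cookie's usefulness is judged by whether withholding it visibly changes the page (the functions `cookie_is_useful`, `prune_cookies`, and the `fetch` callback are invented for illustration), the comparison could look like this:

```python
from difflib import SequenceMatcher

def cookie_is_useful(resp_with, resp_without, threshold=0.95):
    """Heuristic: a cookie is useful if withholding it visibly changes the page.

    resp_with / resp_without are response bodies fetched with and without the
    cookie attached; above `threshold` similarity the cookie is deemed to have
    no observable effect and can be discarded.
    """
    similarity = SequenceMatcher(None, resp_with, resp_without).ratio()
    return similarity < threshold

def prune_cookies(cookies, fetch):
    """Keep only cookies whose removal changes the response.

    `fetch(cookie_subset)` is assumed to return the page body for a request
    carrying exactly that subset of cookies.
    """
    baseline = fetch(cookies)
    return {
        name: value
        for name, value in cookies.items()
        if cookie_is_useful(
            baseline,
            fetch({k: v for k, v in cookies.items() if k != name}),
        )
    }

# Toy "server": only the session cookie affects the page it returns.
def fake_fetch(cs):
    return "welcome back" if "session" in cs else "please log in"

kept = prune_cookies({"session": "abc", "tracker": "xyz"}, fake_fetch)
print(kept)  # → {'session': 'abc'}
```

    The tracking cookie is dropped because the page is byte-identical without it, while the session cookie survives; a client-side agent could apply this test transparently on the user's behalf.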

    Untangling the Web: A Guide To Internet Research

    [Excerpt] Untangling the Web for 2007 is the twelfth edition of a book that started as a small handout. After more than a decade of researching, reading about, using, and trying to understand the Internet, I have come to accept that it is indeed a Sisyphean task. Sometimes I feel that all I can do is push the rock up to the top of that virtual hill, then stand back and watch as it rolls down again. The Internet, in all its glory of information and misinformation, is for all practical purposes limitless, which of course means we can never know it all, see it all, understand it all, or even imagine all it is and will be. The more we know about the Internet, the more acute is our awareness of what we do not know. The Internet emphasizes the depth of our ignorance because our knowledge can only be finite, while our ignorance must necessarily be infinite. My hope is that Untangling the Web will add to our knowledge of the Internet and the world while recognizing that the rock will always roll back down the hill at the end of the day.

    Framework For Modeling Attacker Capabilities with Deception

    In this research we built a custom experimental range using open-source emulated honeypots and custom-built pure honeypots designed to detect or capture attacker activity. The focus is to test the effectiveness of a deception, specifically its ability to evade detection by attackers of varying skill levels. The range consists of three zones accessible via virtual private networking. The first zone houses varying configurations of open-source emulated honeypots, custom-built pure honeypots, and real SSH servers. The second zone acts as a point of presence for attackers. The third zone is for administration and monitoring. Using the range, both a control experiment and a participant-based experiment were conducted. We conducted control experiments to baseline and empirically explore honeypot detectability amongst other systems through adversarial testing, executing a series of tests such as network service sweeps, enumeration scanning, and finally manual execution. We also selected participants of varying skill levels, each with unique tactics, techniques, and procedures, to serve as cyber attackers against the experimental range in attempting to detect the honeypots. Having concluded the experiments and performed the data analysis, we measure the anticipated threat by presenting the Attacker Bias Perception Profile model. Using this model, each participant is ranked based on their overall threat classification and impact; applying the model to the participants' results helps align the threat to the likelihood and impact of a honeypot being detected. The results indicate that pure honeypots are significantly difficult to detect, while emulated honeypots group into different categories based on their detectability and the skills of the attackers. Finally, we developed a framework abstracting the deceptive process, the interaction with system elements, the use of intelligence, and the relationship with attackers. The framework is illustrated by our experimental case studies, covering attacker actions, their effects on the system, and their impact on the success of the deception.
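    The abstract does not define how the Attacker Bias Perception Profile combines its inputs, so the following is purely an illustrative assumption: a sketch that ranks participants by a threat score formed as detection likelihood times impact (the function `rank_attackers` and both field names are hypothetical).

```python
def rank_attackers(participants):
    """Rank participants by an assumed threat score (likelihood * impact).

    Each participant is assumed to carry a honeypot-detection likelihood in
    [0, 1] and an impact rating in [1, 5]; combining them multiplicatively
    is an illustrative choice, not the model from the paper.
    """
    scored = [(p["name"], p["likelihood"] * p["impact"]) for p in participants]
    return sorted(scored, key=lambda t: t[1], reverse=True)

participants = [
    {"name": "novice", "likelihood": 0.2, "impact": 2},
    {"name": "expert", "likelihood": 0.8, "impact": 5},
]
print(rank_attackers(participants))  # → [('expert', 4.0), ('novice', 0.4)]
```

    Whatever the exact scoring, the output of such a ranking is what lets the threat classification be aligned with the likelihood and impact of a honeypot being detected.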

    ECHO Information sharing models

    As part of the ECHO project, the Early Warning System (E-EWS) is one of four technologies under development. The E-EWS will provide the capability to share up-to-date information with all constituents involved in the E-EWS. The development of the E-EWS will be rooted in a comprehensive review of information sharing and trust models from within the cyber domain as well as models from other domains.