3 research outputs found

    ANALYTICAL MODELS FOR THE INTERACTION BETWEEN BOTMASTERS AND HONEYPOTS

    Honeypots are traps designed to resemble easy-to-compromise computer systems in order to tempt attackers into invading them. When attackers target a honeypot, all their actions, tools, and techniques are recorded and analyzed to help security professionals in their fight against attackers and botmasters. However, botmasters may be able to detect honeypots. In particular, they can command compromised machines to perform illicit actions, using the targeted victims as sensors that measure each machine's willingness to carry out these actions. If honeypots are designed to completely ignore such commands, they can be easily detected by botmasters. On the other hand, full participation by honeypots in such activities has associated costs and may lead to legal liability. This raises the need to find the optimal response strategy that allows honeypots to prolong their stay within botnets without exposing them to liability. In this work, we show that current honeypot architectures and operational limitations may allow botmasters to uncover honeypots in their botnets. In particular, we show how botmasters can systematically collect, combine, and analyze evidence about the true nature of the machines they compromise using Dempster-Shafer theory. To determine the optimal response currently available to honeypots, we provide a Bayesian game-theoretic framework that models the interaction between honeypots and botmasters as a non-zero-sum noncooperative game with uncertainty. The solution of the game shows, however, that botmasters always have the upper hand in the conflict with honeypots, since botmasters can update their belief about the true nature of their opponents and then act optimally based on the new belief value. This motivated us to investigate a better strategy that enables honeypots to maximize their outcome by optimally responding to the botmasters' probes. In particular, we provide a Markov Decision Process model that helps security professionals determine the optimal strategy that enables honeypots to prolong their stay in botnets while minimizing the cost of possible legal liability. Throughout this thesis, we also provide different scenarios that illustrate and support our proposed analysis and solutions.
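
    To make the decision problem concrete, the sketch below sets up a toy Markov Decision Process in Python and solves it with value iteration. The states, actions, transition probabilities, rewards, and discount factor are all illustrative assumptions, not parameters taken from the thesis; they only show how an optimal response policy (e.g., partial compliance with a botmaster's probe) could be computed once such a model has been specified.

    # Toy MDP sketch (hypothetical parameters, not the thesis model).
    # States: the honeypot is either still inside the botnet or has been evicted.
    STATES = ["in_botnet", "evicted"]
    # Actions available when the botmaster issues an illicit probe command.
    ACTIONS = ["comply", "partially_comply", "refuse"]

    # P[s][a] -> list of (next_state, probability); illustrative numbers only.
    P = {
        "in_botnet": {
            "comply":           [("in_botnet", 0.95), ("evicted", 0.05)],
            "partially_comply": [("in_botnet", 0.70), ("evicted", 0.30)],
            "refuse":           [("in_botnet", 0.20), ("evicted", 0.80)],
        },
        "evicted": {a: [("evicted", 1.0)] for a in ACTIONS},
    }

    # Immediate reward = assumed intelligence value of staying engaged
    # minus an assumed liability cost of the chosen response.
    R = {
        "in_botnet": {"comply": 1.0 - 2.0,
                      "partially_comply": 1.0 - 0.5,
                      "refuse": 1.0 - 0.0},
        "evicted": {a: 0.0 for a in ACTIONS},
    }

    GAMMA = 0.9  # discount factor

    def value_iteration(eps=1e-6):
        """Return the optimal state values and the greedy policy."""
        V = {s: 0.0 for s in STATES}
        while True:
            delta = 0.0
            for s in STATES:
                best = max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                           for a in ACTIONS)
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                break
        policy = {s: max(ACTIONS,
                         key=lambda a: R[s][a] +
                         GAMMA * sum(p * V[s2] for s2, p in P[s][a]))
                  for s in STATES}
        return V, policy

    values, policy = value_iteration()
    print(values, policy)

    With these placeholder numbers, value iteration selects partial compliance while the honeypot is still in the botnet, i.e., the policy trades some liability for a longer stay; an actual deployment would need its own state space, transition, and cost estimates.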

    Dempster-Shafer Evidence Combining for (Anti)-Honeypot Technologies

    Honeypots are network surveillance architectures designed to resemble easy-to-compromise computer systems. They are deployed to trap hackers and help security professionals capture, control, and analyze malicious Internet attacks and other hacker activities. A botnet is an army of compromised computers controlled by a bot herder and used for illicit financial gain; botnets have become quite popular in recent Internet attacks. Since honeypots have been deployed in many defense systems, attackers constructing and maintaining botnets are forced to find ways to avoid honeypot traps. In fact, some researchers have even suggested equipping normal machines with misleading evidence so that they appear to be honeypots and scare away rational attackers. In this paper, we address some aspects of the problem of honeypot detection by botmasters. In particular, we show that current honeypot architectures and operational limitations may allow attackers to systematically collect, combine, and analyze evidence about the true nature of the machines they compromise. We then show how a systematic evidence-combining technique such as Dempster-Shafer theory can allow botmasters to determine the true nature of compromised machines with relatively high certainty. The obtained results demonstrate inherent limitations of current honeypot designs. We also aim to draw the attention of security professionals to enhancing the discussed features of honeypots in order to prevent them from being abused by botmasters.
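
    As a rough illustration of the evidence-combining step, the following Python sketch applies Dempster's rule of combination over the two-element frame {honeypot, real}. The mass assignments attached to the two hypothetical probes are made-up numbers, not evidence weights from the paper; the point is only that combining several individually weak pieces of evidence can raise a botmaster's belief that a compromised machine is a honeypot.

    # Dempster's rule of combination over the frame {honeypot, real}.
    # The mass values below are made-up placeholders, not weights from the paper.
    from itertools import product

    HONEYPOT = frozenset({"honeypot"})
    REAL = frozenset({"real"})
    THETA = HONEYPOT | REAL  # total ignorance

    def combine(m1, m2):
        """Combine two basic mass assignments with Dempster's rule."""
        combined = {}
        conflict = 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence cannot be combined")
        return {a: v / (1.0 - conflict) for a, v in combined.items()}

    # Hypothetical evidence from two probes, e.g. a refused illicit command
    # and a suspiciously clean file system:
    probe1 = {HONEYPOT: 0.6, REAL: 0.1, THETA: 0.3}
    probe2 = {HONEYPOT: 0.5, REAL: 0.2, THETA: 0.3}

    belief = combine(probe1, probe2)
    print({tuple(sorted(k)): round(v, 3) for k, v in belief.items()})

    With these illustrative masses, roughly 0.76 of the combined mass ends up on the honeypot hypothesis, showing how repeated probes can accumulate into a fairly confident verdict about a machine's true nature.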

    Illegal Roaming and File Manipulation on Target Computers: Assessing the Effect of Sanction Threats on System Trespassers’ Online Behaviors

    Research Summary: The results of previous research indicate that presenting deterring situational stimuli in an attacked computing environment shapes system trespassers' online avoidance behaviors during the progression of a system trespassing event. Nevertheless, none of these studies investigated whether deterring cues influence system trespassers' activities on the system, and no prior research has explored whether the effect of deterring cues is consistent across different types of system trespassers. We examine whether situational deterring cues in an attacked computer system influence the likelihood that system trespassers engage in active online behaviors on the attacked system, and whether this effect varies with the level of administrative privileges held by the trespassers. Using data from a randomized experiment, we find that a situational deterring cue reduced the probability that system trespassers with fewer privileges on the attacked computer system (nonadministrative users) entered activity commands. In contrast, the presence of these cues in the attacked system did not affect the probability that system trespassers with the highest level of privileges (administrative users) entered these commands.

    Policy Implications: In developing policies to curtail malicious online behavior by system trespassers, a "one-policy-fits-all" approach is often employed by information technology (IT) teams to protect their organizations. Our results suggest that although the use of a warning banner is effective in reducing the number of harmful commands entered into a computer system by nonadministrative users, such a policy is ineffective in deterring trespassers who take over a network with administrative privileges. Accordingly, it is important to recognize that the effectiveness of deterring stimuli in cyberspace depends largely on the level of administrative privileges held by the system trespasser when breaking into the system. These findings point to the need to develop and implement flexible policies for deterring system trespassers.