
    Even Turing Should Sometimes Not Be Able To Tell: Mimicking Humanoid Usage Behavior for Exploratory Studies of Online Services

    Online services such as social networks, online shops, and search engines deliver different content to users depending on their location, browsing history, or client device. Since these services have a major influence on opinion forming, understanding their behavior from a social science perspective is of the utmost importance. In addition, technical aspects of services such as security or privacy are becoming more and more relevant for users, providers, and researchers. Due to the lack of essential data sets, automatic black-box testing of online services is currently the only way for researchers to investigate these services in a methodical and reproducible manner. However, automatic black-box testing of online services is difficult, since many of them try to detect and block automated requests to prevent bots from accessing them. In this paper, we introduce a testing tool that allows researchers to create and automatically run experiments for exploratory studies of online services. The testing tool performs programmed user interactions in such a manner that it can hardly be distinguished from a human user. To evaluate our tool, we conducted, among other things, a large-scale research study on Risk-based Authentication (RBA), which required human-like behavior from the client. With these experiments, we were able to circumvent the bot detection of the investigated online services. As this demonstrates the potential of the presented testing tool, it remains the responsibility of its users to balance the conflicting interests between researchers and service providers, as well as to check whether their research programs remain undetected.
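    The abstract describes performing programmed user interactions so that they can hardly be distinguished from a human user. One common ingredient of such an approach is randomized, human-plausible timing between input events. The sketch below is purely illustrative and is not the paper's tool; the function name and parameters are hypothetical assumptions.

    ```python
    import random

    def humanlike_delays(text, mean=0.12, jitter=0.05, pause_chars=".,!? "):
        """Generate one inter-keystroke delay (in seconds) per character,
        mimicking human typing: a jittered base interval, with longer
        pauses after punctuation and spaces. Hypothetical sketch, not
        the tool described in the paper."""
        delays = []
        for ch in text:
            d = max(0.02, random.gauss(mean, jitter))  # floor at 20 ms
            if ch in pause_chars:
                d += random.uniform(0.05, 0.25)  # brief "thinking" pause
            delays.append(d)
        return delays

    # Example: timing profile for typing a search query
    delays = humanlike_delays("risk-based authentication")
    ```

    A browser-automation driver would then sleep for each delay between keystrokes instead of submitting the whole string at once, which is one of the timing signals naive bots expose.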

    Testing Cyber Security with Simulated Humans

    Human error is one of the most common causes of vulnerability in a secure system. However, it is often overlooked when these systems are tested, partly because human tests are costly and very hard to repeat. We have developed a community of agents that test secure systems by running standard Windows software while performing collaborative group tasks, mimicking realistic patterns of communication and traffic, as well as human fatigue and errors. This system is being deployed on a large cyber testing range. One key attribute of humans is flexibility of response in order to achieve their goals when unexpected events occur. Our agents use reactive planning within a BDI architecture to flexibly replan if needed. Since the agents are goal-oriented, we are able to measure the impact of cyber attacks on mission accomplishment, a more salient measure of protection than raw penetration. We show experimentally how the agent teams can be resilient under attacks that are partly successful, and also how an organizational structure can lead to emergent properties of the traffic in the network.
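    The abstract's core idea, goal-oriented agents that replan reactively when an action fails, can be sketched minimally as follows. This is an assumption-laden illustration of a BDI-style control loop, not the paper's system; all class and action names are hypothetical.

    ```python
    class Agent:
        """Minimal BDI-style agent sketch: holds beliefs, pursues a goal
        via plans from a plan library, and falls back to an alternative
        plan when an action fails (e.g., a resource disabled by an attack)."""

        def __init__(self, goal, plan_library):
            self.goal = goal
            self.plan_library = plan_library  # goal -> list of alternative plans
            self.beliefs = set()

        def run(self, execute):
            """Try each plan for the goal in order; a plan succeeds only
            if every action in it succeeds. Returns mission accomplishment."""
            for plan in self.plan_library[self.goal]:
                if all(execute(action, self.beliefs) for action in plan):
                    self.beliefs.add(self.goal)  # goal achieved
                    return True
            return False  # attack impact: mission not accomplished

    # Simulated partly-successful attack: the email channel is down,
    # so the agent replans and delivers its report via a shared drive.
    def execute(action, beliefs):
        if action == "send_email":
            return False  # channel disabled by the attack
        beliefs.add(action)
        return True

    agent = Agent("report_delivered",
                  {"report_delivered": [["send_email"],
                                        ["upload_to_share", "notify_team"]]})
    accomplished = agent.run(execute)
    ```

    Measuring `accomplished` across many agents and attack scenarios corresponds to the mission-level metric the abstract contrasts with raw penetration counts.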