
    INTRUSION DETECTION SYSTEM USING DYNAMIC AGENT SELECTION AND CONFIGURATION

    Intrusion detection is the process of monitoring the events occurring in a computer system or network and analysing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices. An intrusion detection system (IDS) monitors network traffic for suspicious activity and alerts the system or network administrator. It identifies unauthorized use, misuse, and abuse of computer systems by both insiders and external penetrators. Intrusion detection systems are essential components of a secure network environment, allowing early detection of malicious activities and attacks. Using the information provided by an IDS, appropriate countermeasures can be applied to mitigate attacks that would otherwise seriously undermine network security. However, increasing traffic and the necessity of stateful analysis impose strong computational requirements on network intrusion detection systems (NIDS) and motivate the need for architectures with multiple dynamic sensors. Under high traffic with heavy-tailed characteristics, static rules for dispatching traffic slices among sensors cause severe load imbalance. Current high volumes of network traffic overwhelm most IDS techniques, requiring new approaches that can handle large volumes of log and packet analysis while still maintaining high throughput. This paper shows that the use of dynamic agents has practical advantages for intrusion detection. Our approach features unsupervised adjustment of its configuration and dynamic adaptation to the changing environment, which significantly improves the performance of the IDS.
    KEYWORDS—Intrusion Detection System, Agent Based IDS, Dynamic Sensor Selection.
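    The core idea of the abstract, replacing static traffic-slicing rules with load-aware dynamic dispatch among sensor agents, can be illustrated with a small sketch. The paper's algorithm is not given here, so the hash-based static rule, the least-loaded selection policy, and all names below are illustrative assumptions, not the authors' implementation.

        import heapq
        import random
        from collections import defaultdict

        class DynamicDispatcher:
            """Illustrative load-aware dispatcher: each new flow goes to the
            sensor agent with the smallest current load (assumed policy)."""

            def __init__(self, n_sensors):
                # Min-heap of (load, sensor_id): the least-loaded sensor pops first.
                self.heap = [(0.0, s) for s in range(n_sensors)]
                heapq.heapify(self.heap)

            def dispatch(self, flow_size):
                load, sensor = heapq.heappop(self.heap)
                heapq.heappush(self.heap, (load + flow_size, sensor))
                return sensor

        def static_dispatch(flow_id, n_sensors):
            # Static slicing rule: hash the flow ID to a fixed sensor, ignoring load.
            return hash(flow_id) % n_sensors

        if __name__ == "__main__":
            random.seed(1)
            n_sensors, n_flows = 4, 10_000
            # Heavy-tailed flow sizes (Pareto), mimicking the traffic the abstract describes.
            flows = [(i, random.paretovariate(1.2)) for i in range(n_flows)]

            static_load = defaultdict(float)
            for fid, size in flows:
                static_load[static_dispatch(fid, n_sensors)] += size

            dyn = DynamicDispatcher(n_sensors)
            dynamic_load = defaultdict(float)
            for fid, size in flows:
                dynamic_load[dyn.dispatch(size)] += size

            for name, loads in [("static", static_load), ("dynamic", dynamic_load)]:
                vals = [loads[s] for s in range(n_sensors)]
                print(f"{name:8s} max/min sensor load ratio: {max(vals) / min(vals):.2f}")

    In a real NIDS the dispatcher cannot know a flow's size in advance, and the paper's agents instead adapt their configuration online; the comparison is only meant to convey why static slicing breaks down under heavy-tailed traffic while a load-aware policy keeps sensors balanced.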

    Delegation to autonomous agents promotes cooperation in collective-risk dilemmas

    Home assistant chatbots, self-driving cars, drones and automated negotiation systems are just a few examples of the autonomous (artificial) agents that have pervaded our society. These agents enable the automation of many tasks, saving time and (human) effort. However, their presence in social settings raises the need for a better understanding of their effect on social interactions and of how they may be used to enhance cooperation towards the public good rather than hinder it. To this end, we present an experimental study of human delegation to autonomous agents and of hybrid human-agent interactions, centered on a public goods dilemma shaped by a collective risk. Our aim is to understand experimentally whether the presence of autonomous agents has a positive or negative impact on social behaviour, fairness and cooperation in such a dilemma. Our results show that cooperation increases when participants delegate their actions to an artificial agent that plays on their behalf. Yet, this positive effect is reduced when humans interact in hybrid human-agent groups. Finally, we show that humans hold biased expectations about agents' behaviour, assuming that agents will contribute less to the collective effort.
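    The collective-risk public goods dilemma underlying the experiment can be stated compactly in code. The group size, endowments, risk probability and the two contributor strategies below are illustrative assumptions in the style of standard collective-risk games, not the study's actual protocol.

        import random

        def collective_risk_game(players, endowment=40, rounds=10,
                                 threshold_frac=0.5, risk=0.9):
            """One collective-risk public goods game (illustrative parameters).

            Each player holds an endowment and contributes some of it over
            several rounds. If total contributions miss the group threshold,
            everyone loses their remaining endowment with probability `risk`."""
            n = len(players)
            remaining = [float(endowment)] * n
            total = 0.0
            threshold = threshold_frac * endowment * n
            for _ in range(rounds):
                for i, strategy in enumerate(players):
                    c = min(strategy(total, threshold, rounds), remaining[i])
                    remaining[i] -= c
                    total += c
            if total < threshold and random.random() < risk:
                return [0.0] * n          # threshold missed: collective loss
            return remaining              # target reached (or a lucky escape)

        def human(total, threshold, rounds):
            # Illustrative "human" strategy: noisy, sometimes free-rides.
            return random.choice([0, 2, 4])

        def agent(total, threshold, rounds):
            # Illustrative delegated agent: contributes its fair share steadily
            # (threshold / (n_players * rounds) = 2 under the defaults above).
            return 2

        if __name__ == "__main__":
            random.seed(0)
            trials = 2000
            for name, group in [("all human", [human] * 6),
                                ("all agent", [agent] * 6),
                                ("hybrid", [human] * 3 + [agent] * 3)]:
                wins = sum(sum(collective_risk_game(group)) > 0 for _ in range(trials))
                print(f"{name:10s} group success rate: {wins / trials:.2f}")

    Running this sketch, the all-agent group reaches the target reliably while the noisy human group succeeds only part of the time, with the hybrid group in between; the study's contribution is measuring how real participants actually behave across exactly such human, agent and hybrid conditions.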