
    Mitigation Model for DDoS Attack in Wireless Sensor Networks

    A Denial-of-Service (DoS) attack is one in which attackers send crafted messages to a target system or server with the intention of shutting it down. These messages make the victim's services unavailable or unresponsive to legitimate users. When a DoS attack is launched at large scale, it is referred to as a Distributed Denial-of-Service (DDoS) attack. Here, the attackers use a large number of controlled bots, called zombies, and reflectors, which are innocent computers exploited to generate the attack. There are various kinds of DDoS attacks that deplete network bandwidth as well as network resources. We focus in particular on flooding attacks, in which a server's capacity is exhausted by a huge number of unwanted requests aimed at defeating the server's processing efficiency: there is a limit to the number of packet requests a server can process effectively, and if that limit is exceeded, server performance degrades. In this thesis, we follow an approach for mitigating DoS/DDoS attacks based on a rate-limiting algorithm, applied at the server side to mitigate the flooding that produces the attack. Packet filtering is performed on the basis of the legitimate TTL values of incoming packets, followed by ordering of the packets to be sent to the server. Ordering is performed with two approaches, the existing FCFS approach and a priority-queue approach, and server performance under each is compared. The implementation is carried out in the simulation tool MATLAB. The results show a considerable decrease in two host- and network-based performance metrics, packet drop and response time, under DoS and DDoS attacks. When only legitimate packets are passed to the server after packet filtering, response time and throughput improve, and after packet scheduling they improve further.
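    The pipeline the abstract describes (TTL-based filtering, then rate limiting, then FCFS or priority ordering) can be sketched as below. This is a minimal illustration, not the thesis's MATLAB implementation; the TTL whitelist, the per-source limit, and the packet fields (`src`, `ttl`, `prio`) are all hypothetical choices made for the example.

```python
# Hypothetical whitelist of initial TTL values commonly set by real OS stacks;
# spoofed flood packets often arrive with TTLs outside such a set.
LEGIT_TTL = {32, 60, 64, 128, 255}

def filter_packets(packets):
    """Keep only packets whose TTL matches a legitimate initial value."""
    return [p for p in packets if p["ttl"] in LEGIT_TTL]

def rate_limit(packets, limit=3):
    """Allow at most `limit` packets per source address (illustrative window)."""
    seen = {}
    allowed = []
    for p in packets:
        seen[p["src"]] = seen.get(p["src"], 0) + 1
        if seen[p["src"]] <= limit:
            allowed.append(p)
    return allowed

def order_fcfs(packets):
    """First-come-first-served: preserve arrival order."""
    return list(packets)

def order_priority(packets):
    """Priority queue: lower `prio` value is served first; ties keep arrival order."""
    return [p for _, _, p in sorted((p["prio"], i, p) for i, p in enumerate(packets))]
```

    A server-side mitigator would chain these: `order_priority(rate_limit(filter_packets(incoming)))`, so that only filtered, rate-capped traffic reaches the scheduler.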

    Programming with process groups: Group and multicast semantics

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
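    The core idea of a process group with a multicast primitive can be sketched in a few lines. This toy stands in for what systems like Isis provide; real group multicast adds membership agreement and ordering/causality guarantees across machines, which the single-process sketch below only mimics by delivering to every member in the same order. The class and method names are invented for the example.

```python
class ProcessGroup:
    """Toy process group: members join, and multicast delivers to every member."""

    def __init__(self, name):
        self.name = name
        self.members = []  # callables standing in for member processes

    def join(self, member):
        """Add a member; `member` is any callable that accepts a message."""
        self.members.append(member)

    def multicast(self, msg):
        # Deliver to all members in the same sequence, a stand-in for the
        # delivery-order guarantees a real group-multicast primitive provides.
        for deliver in self.members:
            deliver(msg)
```

    A client would join a named group and multicast to it without enumerating the members itself, which is what makes groups a convenient addressing abstraction.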

    Blazes: Coordination Analysis for Distributed Programs

    Distributed consistency is perhaps the most discussed topic in distributed systems today. Coordination protocols can ensure consistency, but in practice they cause undesirable performance unless used judiciously. Scalable distributed architectures avoid coordination whenever possible, but under-coordinated systems can exhibit behavioral anomalies under fault, which are often extremely difficult to debug. This raises significant challenges for distributed system architects and developers. In this paper we present Blazes, a cross-platform program analysis framework that (a) identifies program locations that require coordination to ensure consistent executions, and (b) automatically synthesizes application-specific coordination code that can significantly outperform general-purpose techniques. We present two case studies, one using annotated programs in the Twitter Storm system, and another using the Bloom declarative language. (Comment: updated to include additional materials from the original technical report: derivation rules, output stream label)
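    The kind of analysis Blazes performs, deciding from component annotations where coordination is needed, can be caricatured as follows. This is a loose sketch of the idea, not the Blazes label algebra: here a stage is either "confluent" (output independent of input order) or "order-sensitive", and an asynchronous network hop is assumed to destroy ordering downstream. All field names are invented for the example.

```python
def needs_coordination(pipeline):
    """Return names of order-sensitive stages that may receive unordered
    input and therefore need coordination inserted upstream."""
    flagged = []
    ordered = True  # assume the source emits an ordered stream
    for stage in pipeline:
        if stage["label"] == "order-sensitive" and not ordered:
            flagged.append(stage["name"])
        # an asynchronous boundary (e.g. a network shuffle) loses ordering
        if stage.get("async_boundary"):
            ordered = False
    return flagged
```

    A confluent-only pipeline needs no coordination regardless of asynchrony, which is exactly why coordination-avoiding designs favor confluent operators.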

    Rethinking State-Machine Replication for Parallelism

    State-machine replication, a fundamental approach to designing fault-tolerant services, requires commands to be executed in the same order by all replicas. Moreover, command execution must be deterministic: each replica must produce the same output upon executing the same sequence of commands. These requirements usually result in single-threaded replicas, which hinders service performance. This paper introduces Parallel State-Machine Replication (P-SMR), a new approach to parallelism in state-machine replication. P-SMR scales better than previous proposals since no component plays a centralizing role in the execution of independent commands, that is, those that can be executed concurrently, as defined by the service. The paper introduces P-SMR, describes a "commodified architecture" to implement it, and compares its performance to other proposals using a key-value store and a networked file system.
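    The central idea, running independent commands concurrently while keeping same-object commands in order, can be sketched for a key-value store. This is an illustration of the principle, not P-SMR's actual protocol: it partitions commands by the key they touch, runs partitions on a thread pool, and applies commands within a partition sequentially. All names are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def execute_parallel(commands, state, workers=4):
    """Commands on different keys are independent and may run concurrently;
    commands on the same key stay in their original order, so every replica
    executing this way reaches the same final state."""
    by_key = {}
    for cmd in commands:
        by_key.setdefault(cmd["key"], []).append(cmd)

    def run_partition(cmds):
        for c in cmds:
            state[c["key"]] = c["value"]  # apply in order within the partition

    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(run_partition, by_key.values()))
    return state
```

    Because no single component serializes the independent partitions, throughput can grow with the number of workers, which is the scaling argument the abstract makes.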