8 research outputs found

    Intrusion Detection and Countermeasure of Virtual Cloud Systems - State of the Art and Current Challenges

    Clouds are distributed Internet-based platforms that provide highly resilient and scalable environments to be used by enterprises in a multitude of ways. Cloud computing offers enterprises technology innovation that business leaders and IT infrastructure managers can choose to apply based on how and to what extent it helps them fulfil their business requirements. It is crucial that all technical consultants have a rigorous understanding of the ramifications of cloud computing, as its influence is likely to spread across the entire IT landscape. Security is one of the major concerns of practical interest to decision makers when they make critical strategic and operational decisions. Distributed Denial of Service (DDoS) attacks have become more frequent and effective over the past few years, since the widely publicised DDoS attacks on the financial services industry that came to light in September and October 2012 and resurfaced in the past two years. In this paper, we introduce advanced cloud security technologies and practices as a series of concepts and technology architectures, from an industry-centric point of view. This is followed by a classification of intrusion detection and prevention mechanisms that can be part of an overall strategy to help understand, identify, and mitigate potential DDoS attacks on business networks. The paper establishes solid coverage of security issues related to DDoS and virtualisation, with a focus on structure, clarity, and well-defined building blocks for mainstream cloud computing security solutions and platforms. In doing so, we aim to provide industry technologists, who are not necessarily cloud or security experts, with an effective tool to help them understand the security implications associated with cloud adoption in their transition towards more knowledge-based systems.
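    As one illustration of the kind of detection building block this survey classifies (not a technique taken from the paper itself), the sketch below flags traffic sources whose request rate over a sliding window exceeds a fixed threshold, a common first-line heuristic against volumetric DDoS attacks. The class name, window length, and threshold are illustrative assumptions.

```python
from collections import deque
import time

class RateAnomalyDetector:
    """Flags a source IP whose request rate over a sliding window
    exceeds a fixed threshold. A toy volumetric-DDoS heuristic;
    the window and threshold values here are illustrative only."""

    def __init__(self, window_s=10.0, max_requests=100):
        self.window_s = window_s
        self.max_requests = max_requests
        self.events = {}  # source IP -> deque of request timestamps

    def observe(self, src_ip, now=None):
        now = time.time() if now is None else now
        q = self.events.setdefault(src_ip, deque())
        q.append(now)
        # Drop timestamps that have fallen out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        return len(q) > self.max_requests  # True -> suspicious source

detector = RateAnomalyDetector(window_s=10.0, max_requests=100)
if detector.observe("203.0.113.7"):
    print("possible DDoS source: 203.0.113.7")
```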

    Swarm intelligence–based energy efficient clustering with multihop routing protocol for sustainable wireless sensor networks

    © The Author(s) 2020. Wireless sensor networks are a hot research topic with massive applications in different domains. Generally, a wireless sensor network comprises hundreds to thousands of sensor nodes, which communicate with one another by the use of radio signals. Some of the challenges that exist in the design of wireless sensor networks are restricted computation power, storage, battery capacity, and transmission bandwidth. To resolve these issues, clustering and routing processes have been presented. Clustering and routing are considered optimization problems in wireless sensor networks, which can be resolved by the use of swarm intelligence–based approaches. This article presents a novel swarm intelligence–based clustering and multihop routing protocol for wireless sensor networks. Initially, an improved particle swarm optimization technique is applied to choose the cluster heads and organize the clusters proficiently. Then, a grey wolf optimization algorithm–based routing process takes place to select the optimal paths in the network. The presented improved particle swarm optimization–grey wolf optimization approach incorporates the benefits of both the clustering and routing processes, which leads to maximum energy efficiency and network lifetime. The proposed model is simulated under an extensive set of experiments, and the results are validated under several measures. The obtained experimental outcomes demonstrate the superior characteristics of the improved particle swarm optimization–grey wolf optimization technique under all the test cases.
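    The abstract does not give the protocol's equations, so the following is only a minimal sketch of a standard particle swarm optimization loop applied to cluster-head placement: each particle encodes k candidate cluster-head coordinates, and fitness is the mean node-to-nearest-head distance. The field size, swarm parameters, and the omission of the residual-energy term that an "improved" PSO for WSNs would typically include are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy field: 100 sensor nodes in a 100 m x 100 m area, k cluster heads.
nodes = rng.uniform(0, 100, size=(100, 2))
k, n_particles, n_iter = 5, 20, 100

def fitness(ch_positions):
    """Mean distance from each node to its nearest cluster head.
    (The paper's improved PSO also weighs node energy; omitted here.)"""
    d = np.linalg.norm(nodes[:, None, :] - ch_positions[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Each particle encodes k candidate CH coordinates, flattened to 2k dims.
pos = rng.uniform(0, 100, size=(n_particles, k * 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([fitness(p.reshape(k, 2)) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)        # keep particles inside the field
    f = np.array([fitness(p.reshape(k, 2)) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best cluster-head layout:\n", gbest.reshape(k, 2))
```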

    Unmanned Ground Vehicle for Data Collection in Wireless Sensor Networks: Mobility-aware Sink Selection

    Several recent studies have demonstrated the benefits of using Wireless Sensor Network (WSN) technology in large-scale monitoring applications, such as planetary exploration and battlefield surveillance. Sensor nodes generate continuous streams of data, which must be processed and delivered to end users in a timely manner. This is a very challenging task due to constraints in the sensor nodes' hardware resources. Mobile Unmanned Ground Vehicles (UGVs) have been put forward as a solution to increase network lifetime and improve the system's Quality of Service (QoS). UGVs are mobile devices that can move closer to data sources to reduce the bridging distance to the sink. They gather and process sensory data before transmitting it over a long-range communication technology. In large-scale monitored physical environments, the deployment of multiple UGVs is essential to deliver consistent QoS across different parts of the network. However, data sink mobility causes intermittent connectivity and high re-connection overhead, which may introduce considerable data delivery delay. Consequently, frequent network reconfigurations in multiple data sink networks must be managed in an effective way. In this paper, we contribute an algorithm that allows nodes to choose between multiple available UGVs, with the primary objective of reducing the network reconfiguration and signalling overhead. This is realised by assigning each node to the mobile sink that offers the longest connectivity time. The proposed algorithm takes into account the UGV's mobility parameters, including its movement direction and velocity, to achieve a longer connectivity period. Experimental results show that the proposed algorithm can reduce end-to-end delay and improve the packet delivery ratio, while maintaining low sink discovery and handover overhead. When compared to its best rivals in the literature, the proposed approach improves the packet delivery ratio by up to 22%, end-to-end delay by up to 28%, and energy consumption by up to 58%, and doubles the network lifetime.
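    As a sketch of the core idea, assigning a node to the sink with the longest remaining connectivity time reduces, under a constant-velocity model, to solving |sink_pos + t·sink_vel − node| = comm_range for the exit time t. The function names and the 2-D constant-velocity assumption below are illustrative, not the paper's exact formulation.

```python
import math

def remaining_connectivity_time(node, sink_pos, sink_vel, comm_range):
    """Time (s) until a sink moving with constant velocity leaves the
    node's communication range; 0 if it is already out of range.
    Solves |sink_pos + t*sink_vel - node| = comm_range for t."""
    dx, dy = sink_pos[0] - node[0], sink_pos[1] - node[1]
    vx, vy = sink_vel
    a = vx * vx + vy * vy
    b = 2 * (dx * vx + dy * vy)
    c = dx * dx + dy * dy - comm_range ** 2
    if c > 0:            # sink currently outside the range
        return 0.0
    if a == 0:           # sink is stationary and in range
        return math.inf
    disc = b * b - 4 * a * c                    # >= 0 whenever c <= 0
    t_exit = (-b + math.sqrt(disc)) / (2 * a)   # larger root: exit time
    return max(t_exit, 0.0)

def select_sink(node, sinks, comm_range):
    """Pick the sink (UGV) offering the longest remaining connectivity."""
    return max(sinks, key=lambda s: remaining_connectivity_time(
        node, s["pos"], s["vel"], comm_range))

sinks = [{"id": "ugv-1", "pos": (10.0, 0.0), "vel": (5.0, 0.0)},
         {"id": "ugv-2", "pos": (-20.0, 5.0), "vel": (1.0, 0.0)}]
best = select_sink((0.0, 0.0), sinks, comm_range=50.0)
print("attach to", best["id"])
```

    A faster UGV close to the node can thus lose to a slower one that will stay in range longer, which is exactly the trade-off the mobility-aware selection targets.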

    End-To-End Loss Based TCP Congestion Control Mechanism as a Secured Communication Technology for Smart Healthcare Enterprises

    Many smart healthcare centers are deploying long-distance, high-bandwidth networks in their computer network infrastructure and operations. The Transmission Control Protocol (TCP) is responsible for reliable and secure communication of data in these medical infrastructure networks. TCP is reliable and secure due to its congestion control mechanism, which is responsible for detecting and reacting to congestion in the network. Many TCP congestion control mechanisms have been developed previously for different operating systems. TCP CUBIC, TCP Compound, and TCP Fusion are the default congestion control mechanisms in the Linux, Microsoft Windows, and Sun Solaris operating systems, respectively. The earliest congestion control mechanism, Standard TCP, serves as the benchmark congestion control mechanism. The exponential growth of the congestion window (cwnd) in the slow start phase of TCP CUBIC causes burst losses of packets, and TCP flows do not share the available link bandwidth fairly. The prime aim of this paper is to enhance the performance of TCP CUBIC for long-distance, high-bandwidth secured networks to achieve better performance in medical infrastructure, concerning packet loss rate, protocol fairness, and convergence time. In this paper, a congestion control module for slow start is proposed, which reduces the effect of the exponential growth of cwnd by designing new limits on the cwnd size in the slow start phase, which in turn decreases the packet loss rate in healthcare networks. NS-2 is used to simulate the experiments with the enhanced TCP CUBIC and state-of-the-art congestion control mechanisms. Results show that the enhanced TCP CUBIC outperforms the state-of-the-art congestion control mechanisms by 18%.
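    The abstract does not specify the proposed cwnd limits, so the sketch below only illustrates the general idea: grow cwnd exponentially (one MSS per ACK) up to an illustrative cap, then damp growth to congestion-avoidance-like increments to avoid burst losses near the end of slow start. The cap value and damping rule are assumptions, not the paper's exact design.

```python
def slow_start_step(cwnd, ssthresh, cap):
    """One per-ACK congestion-window update during slow start.
    Standard slow start adds 1 MSS per ACK (doubling cwnd each RTT);
    here growth is damped once cwnd reaches an illustrative cap,
    mimicking the idea of bounding exponential growth to reduce
    burst losses. Cap and damping rule are assumptions."""
    if cwnd < min(ssthresh, cap):
        return cwnd + 1           # exponential region: +1 MSS per ACK
    return cwnd + 1.0 / cwnd      # damped region: congestion-avoidance-like

cwnd, ssthresh, cap = 1.0, 64.0, 32.0  # all in MSS units, illustrative
for ack in range(200):
    cwnd = slow_start_step(cwnd, ssthresh, cap)
print(f"cwnd after 200 ACKs: {cwnd:.1f} MSS")
```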

    Grid evolution.

    No full text

    Temporal dimension for job submission description language.

    No full text