
    Detection of DDoS attacks in Windows Communication Foundation Services

    The Internet provides many critical services, so monitoring network traffic has become essential to prevent network resources from being depleted by malicious actors. In this paper, we present a mechanism to detect and defend a web server against a Distributed Denial of Service (DDoS) attack. We simulate specific kinds of DDoS attack, namely identity spoofing and a SYN flood, against an application resembling a shopping portal, and present the results to demonstrate the effectiveness of the mechanism. The attack is then observed in the server-side resource monitor, which shows the CPU utilization. Some defense mechanisms to protect the server against such attacks are also presented. DOI: 10.17762/ijritcc2321-8169.15029
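    For readers unfamiliar with SYN-flood detection, the following is a minimal, illustrative sketch of a threshold-based detector that flags sources sending an unusually high rate of SYN packets. It is not the mechanism from the paper above; the window length and threshold are assumptions chosen only for the example.

```python
# Illustrative sketch: a simple per-source SYN-rate detector.
# WINDOW_SECONDS and SYN_THRESHOLD are assumed values, not from the paper.
from collections import defaultdict
import time

SYN_THRESHOLD = 100   # assumed: max SYN packets per source per window
WINDOW_SECONDS = 10   # assumed: length of the observation window

class SynFloodDetector:
    def __init__(self):
        self.syn_counts = defaultdict(int)     # source IP -> SYNs in current window
        self.window_start = time.monotonic()

    def observe_syn(self, src_ip: str) -> bool:
        """Record one SYN packet; return True if src_ip exceeds the threshold."""
        now = time.monotonic()
        if now - self.window_start > WINDOW_SECONDS:
            self.syn_counts.clear()            # start a new observation window
            self.window_start = now
        self.syn_counts[src_ip] += 1
        return self.syn_counts[src_ip] > SYN_THRESHOLD

# Example: a burst of SYNs from one (possibly spoofed) source trips the detector.
detector = SynFloodDetector()
print(any(detector.observe_syn("203.0.113.7") for _ in range(150)))  # True
```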

    Leveraging Software-Defined Networking and Virtualization for a One-to-One Client-Server Model

    Modern computer networks allow server resources to be shared. While this multiplexing is the unsung hero of scalability and performance, the fact that clients share resources, and that each client’s traffic is transmitted within the larger pool of total network traffic, poses distinct challenges for security. By adopting multiplexing so broadly, the networking and systems communities have implicitly favored performance over security. When a server multiplexing clients is compromised, the attack can spread by exploiting the unsuspecting clients sharing the resource. Drive-by downloads are an example of an attack in which a compromised Web server begins distributing malware to connecting clients. As a result of today’s many-to-one client-server network model, current approaches are inadequate at protecting the network and its resources. We propose a redesign of the modern network infrastructure. Our approach moves from the current many-to-one client-server model to a one-to-one client-server model. In redesigning the network, we provide better accountability for traffic between clients and servers. With accountability, we can quickly determine which client is responsible for an attack, which allows us to quickly repair the affected entities. To accomplish this accountability, we separate each client’s communication into separate flows. A flow is identified by various network features, such as IP addresses and ports. Further, instead of allowing multiple clients to be multiplexed at the same server, we use a technique that allows each client to communicate with a server that is logically separate from all other clients. Accordingly, a server compromise affects only a single client. We create the one-to-one client-server model using virtualization techniques and OpenFlow, a software-defined networking (SDN) protocol. We complete our model in three phases. In the first, we deploy a physical SDN using physical machines and a commodity network switch that supports OpenFlow to gain an initial understanding of SDNs. The next phase involves implementing Choreographer, a DNS access control mechanism, in a virtualized SDN environment for better scalability over our physical configuration. Finally, we leverage Choreographer to dynamically instantiate a server for each client and create network flows that allow each client to reach its requested server.
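    The sketch below illustrates the one-to-one idea in the simplest possible terms: on a client's first flow (identified by IP addresses and ports), a dedicated, logically separate server instance is assigned to it. The class and function names (OneToOneMapper, spawn_server_instance) are hypothetical placeholders; a real deployment would install OpenFlow rules via an SDN controller and boot per-client virtual servers, as the thesis does with Choreographer.

```python
# Conceptual sketch only: per-client server assignment in a one-to-one model.
# All names here are hypothetical and stand in for controller/virtualization logic.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowKey:
    client_ip: str
    client_port: int
    server_ip: str
    server_port: int

class OneToOneMapper:
    def __init__(self):
        self.assignments = {}      # FlowKey -> dedicated server instance id
        self._next_instance = 0

    def spawn_server_instance(self) -> str:
        # Hypothetical: stands in for booting a per-client VM/container and
        # installing the flow rules that steer this client's traffic to it.
        self._next_instance += 1
        return f"server-instance-{self._next_instance}"

    def handle_new_flow(self, key: FlowKey) -> str:
        if key not in self.assignments:
            self.assignments[key] = self.spawn_server_instance()
        return self.assignments[key]

mapper = OneToOneMapper()
flow_a = FlowKey("198.51.100.10", 51000, "192.0.2.1", 443)
flow_b = FlowKey("198.51.100.11", 52000, "192.0.2.1", 443)
print(mapper.handle_new_flow(flow_a))  # server-instance-1
print(mapper.handle_new_flow(flow_b))  # server-instance-2, isolated from flow_a
```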

    Investigations into micromobility issues in IP networks

    Master's thesis (Master of Science)

    Net Neutrality

    This book is available as open access through the Bloomsbury Open Access programme on www.bloomsburycollections.com. 'Chris Marsden maneuvers through the hype articulated by Network Neutrality advocates and opponents. He offers a clear-headed analysis of the high stakes in this debate about the Internet's future, and fearlessly refutes the misinformation and misconceptions that abound' (Professor Rob Freiden, Penn State University). Net Neutrality is a heated and contested policy principle concerning content providers' access to the Internet end-user, and potential discrimination in that access where the end-user's ISP (or another ISP) blocks that access in part or in whole. The suggestion has been that the problem can be resolved either by introducing greater competition or by closely policing conditions for vertically integrated services such as VoIP. However, that is not the whole story: ISPs as a whole have incentives to discriminate between content, whether for network management of matters such as spam, to secure and maintain customer experience at current levels, or for economic benefit from new Quality of Service standards. This includes offering a 'priority lane' on the network for premium content types such as video and voice services. The author considers market developments and policy responses in Europe and the United States, draws conclusions, and proposes regulatory recommendations.

    Anonymity, hacking and cloud computing forensic challenges

    Cloud computing is growing and becoming more complex with the daily addition of new technologies, and huge amounts of data transit through Cloud networks. In the case of a cyber-attack, it can be difficult to analyze every single aspect of the Cloud. Legal challenges also exist because of where Cloud servers are physically located. This research paper aims to alleviate the challenges of Cloud computing forensics and to sensitize businesses and governments to several solutions. The results of this research are relevant not only to cyber forensic analysts but also to network administrators, and can be used during the preliminary stages of creating a Cloud computing environment. A complete test has been created using ethical hacking tools and cyber forensics to understand the steps of an investigation in a single service that could be implemented in a Cloud. The paper goes on to present frameworks that have been developed to maintain integrity and repeatability. Ultimately, legal aspects and shortcomings in the implementation of the technical structure represent the main challenges of Cloud computing forensics.

    Network Traffic Measurements, Applications to Internet Services and Security

    The Internet has become over the years a pervasive network interconnecting billions of users and now plays the role of collector for a multitude of tasks, ranging from professional activities to personal interactions. From a technical standpoint, novel architectures, e.g., cloud-based services and content delivery networks, innovative devices, e.g., smartphones and connected wearables, and security threats, e.g., DDoS attacks, are posing new challenges in understanding network dynamics. In such a complex scenario, network measurements play a central role to guide traffic management, improve network design, and evaluate application requirements. In addition, increasing importance is devoted to the quality of experience provided to final users, which requires thorough investigations of both the transport network and the design of Internet services. In this thesis, we stress the importance of users’ centrality by focusing on the traffic they exchange with the network. To do so, we design methodologies complementing passive and active measurements, as well as post-processing techniques belonging to the machine learning and statistics domains. Traffic exchanged by Internet users can be classified in three macro-groups: (i) outbound, produced by users’ devices and pushed to the network; (ii) unsolicited, part of malicious attacks threatening users’ security; and (iii) inbound, directed to users’ devices and retrieved from remote servers. For each of the above categories, we address specific research topics consisting in the benchmarking of personal cloud storage services, the automatic identification of Internet threats, and the assessment of quality of experience in the Web domain, respectively. Results comprise several contributions in the scope of each research topic. In short, they shed light on (i) the interplay among design choices of cloud storage services, which severely impact the performance provided to end users; (ii) the feasibility of designing a general-purpose classifier to detect malicious attacks, without chasing threat specificities; and (iii) the relevance of appropriate means to evaluate the perceived quality of Web page delivery, strengthening the need for users’ feedback for a factual assessment.
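    To make the three-way split of user traffic concrete, here is a minimal sketch that labels packets as outbound, inbound, or unsolicited. The heuristic used (an inbound packet with no matching user-initiated connection is treated as unsolicited) and the address prefix are assumptions for illustration only, not the thesis' actual methodology.

```python
# Minimal sketch of the outbound / unsolicited / inbound traffic split.
# USER_PREFIX and the "no matching connection => unsolicited" rule are assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_ip: str

USER_PREFIX = "10.0.0."   # assumed: address space of the monitored users

def classify(packet: Packet, open_connections: set[tuple[str, str]]) -> str:
    if packet.src_ip.startswith(USER_PREFIX):
        return "outbound"                      # produced by users' devices
    if (packet.dst_ip, packet.src_ip) in open_connections:
        return "inbound"                       # reply on a user-initiated flow
    return "unsolicited"                       # e.g. scans or attack traffic

conns = {("10.0.0.5", "93.184.216.34")}        # user 10.0.0.5 contacted a server
print(classify(Packet("10.0.0.5", "93.184.216.34"), conns))    # outbound
print(classify(Packet("93.184.216.34", "10.0.0.5"), conns))    # inbound
print(classify(Packet("198.51.100.99", "10.0.0.5"), conns))    # unsolicited
```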

    The InfoSec Handbook

    Computer science
