
    ALGORITHMIZATION, REQUIREMENTS ANALYSIS AND ARCHITECTURAL CHALLENGES OF TRACONDA

    Globally, information security threats on the Internet are so numerous that even when data is encrypted, there is no guarantee that a copy will not become available to a third party and eventually be decrypted. This paper therefore proposes a trusted routing mechanism that prevents data in transit (encrypted or not) from becoming available to third parties. The algorithmization, requirements analysis, and architectural challenges involved in its development are presented.

    IP and ATM integration: A New paradigm in multi-service internetworking

    ATM is a widespread technology adopted by many operators to support advanced data communication, in particular the efficient provision of Internet services. The expected challenges of multimedia communication, together with the increasingly massive utilization of IP-based applications, urgently require a redesign of networking solutions in terms of both new functionalities and enhanced performance. However, the networking context is affected by so many changes, and to some extent chaotic growth, that any approach based on a structured and complex top-down architecture is unlikely to be applicable. Instead, an approach based on finding the best match between realistic service requirements and the pragmatic, intelligent use of technical opportunities made available by the product market seems more appropriate. By following this approach, innovations and improvements can be introduced at different times, not necessarily complying with each other according to a coherent overall design. With the aim of pursuing feasible innovations in the different networking aspects, we look at both IP and ATM internetworking in order to investigate a few of the most crucial issues related to the IP and ATM integration perspective. This research also addresses various means of internetworking the Internet Protocol (IP) and Asynchronous Transfer Mode (ATM), with the objective of identifying the best possible means of delivering the Quality of Service (QoS) requirements of multi-service applications by exploiting the meritorious features that IP and ATM have to offer. Although IP and ATM have often been viewed as competitors, their complementary strengths and limitations form a natural alliance that combines the best aspects of both technologies. For instance, one limitation of ATM networks has been the relatively large gap between the speed of the network paths and the control operations needed to configure those data paths to meet changing user needs. IP's greatest strength, on the other hand, is its inherent flexibility and its capacity to adapt rapidly to changing conditions. These complementary strengths and limitations make it natural to combine IP with ATM to obtain the best that each has to offer. Over time, many models and architectures have evolved for IP/ATM internetworking, and they have shaped the fundamental thinking on internetworking IP and ATM. These technologies, architectures, models, and implementations are reviewed in detail to address the issues in integrating these architectures in a multi-service enterprise network, with the objective of recommending the best means of interworking the two so that each exploits the salient features of the other to provide a faster, more reliable, scalable, robust, QoS-aware network in the most economical manner. How IP will be carried over ATM when a commercial worldwide ATM network is deployed is not addressed, since the details of such a network remain in too great a state of flux to specify anything concrete. Our research findings culminated in a strong recommendation that the best model to adopt, in light of the impending integrated service requirements of future multi-service environments, is an ATM core with IP at the edges, realizing the best of both technologies in delivering QoS guarantees in a seamless manner to any node in the enterprise.
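One integration point the abstract alludes to is mapping IP-level service classes onto ATM service categories at the edge of an ATM core. The sketch below shows one plausible (not standardized, and not taken from this research) policy mapping DiffServ traffic classes to ATM categories; the class names and mapping choices are illustrative assumptions.

```python
# Hypothetical edge-router policy: map IP DiffServ classes to ATM service
# categories. This is one plausible mapping, not a standard or the paper's.

DSCP_TO_ATM = {
    "EF":  "CBR",      # expedited forwarding -> constant bit rate
    "AF4": "rt-VBR",   # high-priority assured forwarding -> real-time VBR
    "AF1": "nrt-VBR",  # lower assured forwarding -> non-real-time VBR
    "BE":  "UBR",      # best effort -> unspecified bit rate
}

def atm_category(dscp_class: str) -> str:
    """Pick the ATM service category for an IP traffic class (default UBR)."""
    return DSCP_TO_ATM.get(dscp_class, "UBR")

assert atm_category("EF") == "CBR"
assert atm_category("unknown") == "UBR"  # unrecognized classes fall back
```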

    A Survey of Satellite Communications System Vulnerabilities

    The U.S. military’s increasing reliance on commercial and military communications satellites to enable widely dispersed, mobile forces to communicate makes these space assets increasingly vulnerable to attack by adversaries. Attacks on these satellites could cause military communications to become unavailable at critical moments during a conflict. This research dissected a typical satellite communications system in order to provide an understanding of the possible attacker entry points into the system, to determine the vulnerabilities associated with each of these access points, and to analyze the possible impacts of these vulnerabilities on U.S. military operations. By understanding these vulnerabilities of U.S. communications satellite systems, methods can be developed to mitigate the threats and protect future systems. The research concluded that the satellite antenna is the most vulnerable component of the satellite communications system’s space segment. The antenna makes the satellite vulnerable to intentional attacks such as RF jamming, spoofing, meaconing, and deliberate physical attack. The most vulnerable Earth segment component was found to be the Earth station network, which incorporates both Earth station and NOC vulnerabilities; Earth segment vulnerabilities include RF jamming, deliberate physical attack, and Internet connection vulnerabilities. The most vulnerable user segment components were found to be the SSPs and PoPs. SSPs are subject to the vulnerabilities of the services offered, of Internet connectivity, and of operating the VSAT central hub. PoPs are susceptible to the vulnerabilities of the PoP routers, of Internet and intranet connectivity, and of cellular network access.

    Block the Root Takeover: Validating Devices Using Blockchain Protocol

    This study addresses a vulnerability in the trust-based STP protocol that allows malicious users to target an Ethernet LAN with an STP Root-Takeover Attack. The subject is relevant because an STP Root-Takeover attack is a gateway to unauthorized control over the entire network stack of a personal or enterprise network. This study aims to address the problem with a potentially trustless research solution called the STP DApp. The STP DApp combines a kernel/net modification called stpverify with a Hyperledger Fabric blockchain framework in a NodeJS runtime environment in userland. The STP DApp works as an intrusion prevention system (IPS) by intercepting Ethernet traffic and blocking forged Ethernet frames sent by STP Root-Takeover attackers. The study’s research methodology is a quantitative pre-experimental design that provides conclusive results through empirical data and analysis using experimental control groups. Data collection was based on active RAM utilization and CPU usage during a performance evaluation of the STP DApp, which blocks an STP Root-Takeover Attack launched by the Yersinia attack tool installed on a virtual machine running the Kali operating system. The research solution uses a test blockchain framework built on Hyperledger Fabric: an experimental test network of nodes on a host virtual machine that validates the Ethernet frames extracted by stpverify.
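The core of a root-takeover attack is a forged BPDU advertising a "superior" (numerically lower) bridge ID than the elected root, forcing a re-election. A minimal sketch of the detection idea, not the STP DApp's actual implementation, might flag any post-convergence BPDU that claims a better root than the one already learned; all names and the field handling here are illustrative assumptions based on IEEE 802.1D semantics.

```python
# Hypothetical sketch: flag STP root-takeover bids by watching for a BPDU
# that advertises a lower (i.e. superior) bridge ID than the learned root.
# Not the stpverify implementation; names and logic are illustrative.

def bridge_id(priority: int, mac: str) -> int:
    """Combine the 16-bit priority and 48-bit MAC into a comparable 64-bit ID."""
    return (priority << 48) | int(mac.replace(":", ""), 16)

class RootGuard:
    """Remembers the elected root and flags frames claiming a better one."""

    def __init__(self):
        self.current_root = None  # lower bridge ID wins the STP election

    def inspect(self, priority: int, mac: str) -> bool:
        """Return True if the BPDU should be blocked as a takeover attempt."""
        claimed = bridge_id(priority, mac)
        if self.current_root is None:
            self.current_root = claimed  # learn the initial root
            return False
        # A strictly superior claim after convergence is suspicious.
        return claimed < self.current_root

guard = RootGuard()
guard.inspect(32768, "aa:bb:cc:00:00:01")        # legitimate root learned
blocked = guard.inspect(0, "de:ad:be:ef:00:01")  # priority 0: takeover bid
```

A real inspector would also track topology-change timers and per-port roles; this sketch only captures the "superior claim after convergence" heuristic.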

    A Logically Centralized Approach for Control and Management of Large Computer Networks

    Management of large enterprise and Internet Service Provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these limitations, the networking research community has been pursuing the vision of simplifying the functional role of a router to its primary task of packet forwarding. This enables centralizing network control at a decision plane where network-wide state can be maintained and network control can be centrally and consistently enforced. However, scalability and fault-tolerance concerns with physical centralization motivate the need for a more flexible and customizable approach. This dissertation is an attempt at bridging the gap between the extremes of distribution and centralization of network control. We present a logically centralized approach to the design of the network decision plane that can be realized by a set of physically distributed controllers in a network. The approach aims to give network designers the ability to customize the level of control and management centralization according to the scalability, fault-tolerance, and responsiveness requirements of their networks. Our thesis is that logical centralization provides a robust, reliable, and efficient paradigm for the management of large networks, and we present several contributions to prove this thesis. For network planning, we describe techniques for optimizing the placement of network controllers and provide guidance on the physical design of logically centralized networks. For network operation, we present algorithms for maintaining dynamic associations between the decision plane and network devices, along with a protocol that allows a set of network controllers to coordinate their decisions and present a unified interface to the managed network devices. Furthermore, we study the trade-offs in decision plane application design and provide guidance on application state and logic distribution. Finally, we present results of extensive numerical and simulation-based analysis of the feasibility and performance of our approach. The results show that logical centralization can provide better scalability and fault tolerance while maintaining performance comparable to the traditional distributed approach.
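Controller placement of the kind described above is commonly cast as a k-center problem: choose k controller sites minimizing the worst-case node-to-controller latency. The sketch below shows the classic greedy 2-approximation under a made-up latency matrix; it illustrates the problem shape, not the dissertation's actual optimization technique.

```python
# Illustrative controller placement as a k-center problem: greedily pick k
# sites so the worst node-to-controller latency is small. The latency
# matrix is invented for the example; this is not the dissertation's method.

def place_controllers(latency, k):
    """latency[i][j]: delay between nodes i and j; returns k site indices."""
    n = len(latency)
    sites = [0]  # seed with an arbitrary node
    while len(sites) < k:
        # Pick the node farthest from its nearest already-chosen site.
        farthest = max(range(n),
                       key=lambda v: min(latency[v][s] for s in sites))
        sites.append(farthest)
    return sites

latency = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(place_controllers(latency, 2))  # -> [0, 3]
```

With two controllers at nodes 0 and 3, every node is within latency 3 of a controller; a single controller could not do better than 9 on this matrix.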

    Redundancy and load balancing at IP layer in access and aggregation networks

    Mobile communications trends point towards the convergence of the mobile telephone network and the Internet. People's usage of mobile telecommunications is evolving to resemble their usage of fixed broadband devices. Mobile operators therefore need to evolve their legacy networks in order to support new services and to offer availability and reliability similar to the rest of the Internet. The emergence of all-IP standards, such as Long Term Evolution, is pushing this evolution to its final step. The challenging and highly variable access and aggregation networks are the scope of such improvements. This thesis presents in detail different methods for increasing availability on high-end switches, analyzing their strengths and weaknesses. It finally evaluates the implementation of an enhanced VRRP as a high-availability solution, testing the feature on a real network.
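The redundancy mechanism VRRP provides can be sketched as priority-based master election: the live router with the highest priority owns the virtual gateway IP, and a backup takes over when it fails. The following is a simplified illustration of RFC 5798 semantics (ties in real VRRP are broken by the higher primary IP address; a name string stands in for that here), not the thesis's enhanced VRRP.

```python
# Simplified VRRP-style master election: highest live priority owns the
# virtual IP. Illustrative only; real VRRP breaks ties by primary IP.

def elect_master(routers):
    """routers: list of (name, priority, alive) tuples.
    Highest live priority wins; name breaks ties as an IP stand-in."""
    live = [r for r in routers if r[2]]
    if not live:
        return None  # no router left to own the virtual IP
    return max(live, key=lambda r: (r[1], r[0]))[0]

routers = [("r1", 200, True), ("r2", 150, True), ("r3", 100, True)]
assert elect_master(routers) == "r1"

routers[0] = ("r1", 200, False)       # master fails
assert elect_master(routers) == "r2"  # backup takes over the virtual IP
```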

    Static Web content distribution and request routing in a P2P overlay

    The significance of collaboration over the Internet has become a cornerstone of modern computing, as the essence of information processing and content management has shifted to networked and Web-based systems. As a result, effective and reliable access to networked resources has become a critical commodity in any modern infrastructure. In order to cope with the limitations of the traditional client-server networking model, most popular Web-based services employ separate Content Delivery Networks (CDNs) to distribute the server-side resource consumption. Since Web applications are often latency-critical, CDNs are additionally adopted to optimize the content delivery latencies perceived by Web clients. Because of the prevalent connection model, Web content delivery has grown into a notable industry. The rapid growth in the number of mobile devices further adds to the resources required from the originating server, as content is also accessible on the go. While the Web has become one of the most utilized sources of information and digital content, the openness of the Internet is simultaneously being reduced by organizations and governments preventing access to undesired resources. Access to information may be regulated or altered to suit political interests or organizational benefits, conflicting with the initial design principle of an unrestricted and independent information network. This thesis contributes to the development of a more efficient and open Internet by combining a feasibility study with a preliminary design of a peer-to-peer based Web content distribution and request routing mechanism. The suggested design addresses both the challenges related to the effectiveness of the current client-server networking model and the openness of information distributed over the Internet. Based on the properties of existing peer-to-peer implementations, the suggested overlay design is intended to provide low-latency access to any Web content without sacrificing end-user privacy. The overlay is additionally designed to increase the cost of censorship by forcing a successful blockade to isolate the censored network from the rest of the Internet.
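Request routing in peer-to-peer overlays of the kind surveyed above is often built on consistent hashing, so each content key maps to a stable peer and node churn only remaps a small fraction of keys. The sketch below illustrates that general technique with a hash ring; the peer names and URL are invented, and this is not the thesis's specific design.

```python
# Illustrative consistent-hash request routing on a hash ring: each URL
# maps to the first peer whose ring position is >= the URL's hash.
# Peer names and the URL are made up; not the thesis's actual mechanism.

import hashlib

def ring_hash(key: str) -> int:
    """Stable position on the ring, derived from SHA-1 of the key."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Overlay:
    def __init__(self, peers):
        self.ring = sorted((ring_hash(p), p) for p in peers)

    def route(self, url: str) -> str:
        """Return the peer responsible for the URL's content."""
        target = ring_hash(url)
        for point, peer in self.ring:
            if point >= target:
                return peer
        return self.ring[0][1]  # wrap around the ring

overlay = Overlay(["peer-a", "peer-b", "peer-c"])
peer = overlay.route("http://example.org/index.html")
assert peer in {"peer-a", "peer-b", "peer-c"}
```

Because only the segment of the ring adjacent to a joining or leaving peer is remapped, churn disturbs a small share of the content placement, a property that matters for both latency and censorship resistance.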