
    Information Sharing Solutions for NATO Headquarters

    Get PDF
    NATO is an Alliance of 26 nations that operates on a consensus basis, not a majority basis. Thorough and timely information exchange between nations is fundamental to the business process. Current technology and practices at NATO HQ are inadequate to meet modern-day requirements, despite the availability of demonstrated and accredited cross-domain technology solutions. This lack of integration between networks grows more complicated with time, as nations continue to invest in IT while ignoring the requirements for inter-networked gateways. This contributes to inefficiencies and fosters an atmosphere in which shortcuts are taken in order to get the job done. The author recommends that NATO HQ improve its presence on the Internet, building on the desired tenets of availability and security.

    Security Policy Management for a Cooperative Firewall

    Get PDF
    The increasing popularity of Internet services and the growing number of connected devices, along with the introduction of IoT, are making society ever more dependent on the availability of Internet services. We therefore need to ensure a minimum level of security and reliability of services. Ultra-Reliable Communication (URC) refers to the availability of life- and business-critical services nearly 100 percent of the time. These requirements are an integral part of the upcoming 5th generation (5G) mobile networks. 5G is the future mobile network, which at the same time is part of the future Internet. As an extension to the conventional communication architecture, 5G needs to provide ultra-high reliability of services: it needs to perform better than currently available solutions in terms of security, confidentiality, integrity and reliability, and it should mitigate the risks of Internet attacks and malicious activities. To achieve such requirements, the Customer Edge Switching (CES) architecture is presented. It proposes that the Internet user's agent in the network provider needs prior information about the expected traffic of users, so that it can mitigate most attacks and only allow expected communication between hosts. CES executes the communication security policies of each user or device, acting as the user's agent. The policy describes with fine granularity what traffic is expected by the device. The policies are sourced as automatically as possible but can also be modified by the user. Stored policies follow the mobile user and are executed at the network edge node running Customer Edge Switch functions, stopping all unexpected traffic from entering the mobile network. The state of the art in mobile network architectures utilizes the Quality of Service (QoS) policies of users. This thesis motivates extending the current architecture to accommodate the security and communication policies of end users. The thesis presents an experimental implementation of a policy management system, termed Security Policy Management (SPM), to handle the above-mentioned user policies. We describe the architecture, implementation and integration of SPM with Customer Edge Switching. Additionally, SPM has been evaluated in terms of the performance, scalability, reliability and security offered via 5G customer edge nodes. Finally, the system has been analyzed for feasibility in the 5G architecture.
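    To make the idea of fine-grained, per-device communication policies concrete, the following minimal Python sketch shows how an edge node might match incoming flows against a stored whitelist and drop anything unexpected. The `Rule`, `Flow` and `DevicePolicy` types and their fields are illustrative assumptions, not the actual SPM data model described in the thesis.

```python
# Minimal sketch (assumptions throughout): a per-device policy lists the
# flows a device is expected to use; anything else is dropped at the edge.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    protocol: str      # e.g. "tcp", "udp"
    remote_host: str   # expected peer, "*" for any
    remote_port: int   # expected destination port, 0 for any

@dataclass(frozen=True)
class Flow:
    protocol: str
    remote_host: str
    remote_port: int

class DevicePolicy:
    """Illustrative per-device policy: a whitelist of expected flows."""
    def __init__(self, device_id: str, rules: list[Rule]):
        self.device_id = device_id
        self.rules = rules

    def allows(self, flow: Flow) -> bool:
        return any(
            r.protocol == flow.protocol
            and r.remote_host in ("*", flow.remote_host)
            and r.remote_port in (0, flow.remote_port)
            for r in self.rules
        )

# Example: a sensor that should only ever talk MQTT-over-TLS to one broker.
policy = DevicePolicy("sensor-42", [Rule("tcp", "broker.example.net", 8883)])
print(policy.allows(Flow("tcp", "broker.example.net", 8883)))  # True  -> forwarded
print(policy.allows(Flow("tcp", "198.51.100.7", 22)))          # False -> dropped at edge
```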

    Toward Third Generation Internet Desktop Grids

    Get PDF
    Projects like SETI@home and Folding@home have popularized Internet Desktop Grid (IDG) computing. The first generation of IDG projects scaled to millions of participants but was dedicated to a specific application. BOINC, United Devices and XtremWeb belong to a second generation of IDG platforms. Their architecture was designed to accommodate many applications but has drawbacks such as limited security and a centralized architecture. In this paper we present a new design for Internet Desktop Grids, following a layered approach. The new architecture establishes an overlay network, giving the participating nodes direct communication capabilities. From that basis, many key mechanisms of IDG can be implemented using existing cluster tools and extra IDG-specific software. As a proof of concept, we run a bioinformatics application on a third generation IDG, based on a connectivity service (PVC), an existing job scheduler (Condor), a high performance data transport service (BitTorrent) and a custom result certification mechanism.
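    The abstract does not detail the custom result certification mechanism. A common approach in desktop grids is replication with majority voting, sketched below in Python purely as an illustration of the general idea; the function name, quorum threshold and example values are assumptions, not the mechanism used in the paper.

```python
# Illustrative sketch only: certify a work unit's result by replicating it
# on several untrusted volunteer nodes and accepting a quorum answer.
from collections import Counter
from typing import Optional

def certify_result(replica_results: list[str], quorum: int = 2) -> Optional[str]:
    """Return the result reported by at least `quorum` replicas, else None."""
    if not replica_results:
        return None
    value, count = Counter(replica_results).most_common(1)[0]
    return value if count >= quorum else None

# Three volunteers computed the same work unit; one returned a bad value.
print(certify_result(["a3f9", "a3f9", "0000"]))  # -> "a3f9" (certified)
print(certify_result(["a3f9", "0000"]))          # -> None (no quorum, recompute)
```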

    Support for collaborative component-based software engineering

    Get PDF
    Collaborative system composition during design has been poorly supported by traditional CASE tools (which have usually concentrated on supporting individual projects) and almost exclusively focused on static composition. Little support for maintaining large distributed collections of heterogeneous software components across a number of projects has been developed. The CoDEEDS project addresses the collaborative determination, elaboration, and evolution of design spaces that describe both static and dynamic compositions of software components from sources such as component libraries, software service directories, and reuse repositories. The GENESIS project has focussed, in the development of OSCAR, on the creation and maintenance of large software artefact repositories. The most recent extensions explicitly address the provision of cross-project global views of large software collections and historical views of individual artefacts within a collection. The long-term benefits of such support can only be realised if OSCAR and CoDEEDS are widely adopted, and steps to facilitate this are described.

    This book continues to provide a forum, which a recent book, Software Evolution with UML and XML, started, where expert insights are presented on the subject. In that book, initial efforts were made to link together three current phenomena: software evolution, UML, and XML. In this book, the focus is on the practical side of linking them, that is, how UML and XML and their related methods/tools can assist software evolution in practice. Considering that nowadays software starts evolving before it is delivered, an apparent feature of software evolution is that it happens over all stages and over all aspects. Therefore, all possible techniques should be explored. This book explores techniques based on UML/XML and combinations of them with other techniques (i.e., all techniques from theory to tools). Software evolution happens at all stages. Chapters in this book describe how software evolution issues arise at the stages of software architecting, modelling/specifying, assessing, coding, validating, design recovery, program understanding, and reusing. Software evolution happens in all aspects. Chapters in this book illustrate that software evolution issues are involved in web applications, embedded systems, software repositories, component-based development, object models, development environments, software metrics, UML use case diagrams, system models, legacy systems, safety-critical systems, user interfaces, software reuse, evolution management, and variability modelling. Software evolution needs to be facilitated with all possible techniques. Chapters in this book demonstrate techniques, such as formal methods, program transformation, empirical study, tool development, standardisation, and visualisation, to control system changes to meet organisational and business objectives in a cost-effective way. On the journey of the grand challenge posed by software evolution, the journey that we have to make, the contributing authors of this book have already made further advances.

    Sharing Computer Network Logs for Security and Privacy: A Motivation for New Methodologies of Anonymization

    Full text link
    Logs are one of the most fundamental resources available to any security professional. It is widely recognized by government and industry that it is both beneficial and desirable to share logs for the purpose of security research. However, the sharing is not happening, or not to the degree or magnitude that is desired. Organizations are reluctant to share logs because of the risk of exposing sensitive information to potential attackers. We believe this reluctance remains high because current anonymization techniques are weak and one-size-fits-all (or, better put, one size tries to fit all). We must develop standards and make anonymization available at varying levels, striking a balance between privacy and utility. Organizations have different needs and trust other organizations to different degrees. They must be able to map multiple anonymization levels, with defined risks, to the trust levels they share with (would-be) receivers. It is not until there are industry standards for multiple levels of anonymization that we will be able to move forward and achieve the goal of widespread sharing of logs for security researchers.
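    As a purely illustrative sketch of what trust-dependent anonymization levels could look like (the level names, field and transformations below are assumptions, not a standard proposed by the paper), consider anonymizing a source IP field differently depending on how much the receiving organization is trusted:

```python
# Hypothetical example: anonymize an IP address at a level chosen from the
# trust relationship with the receiver. Levels and transforms are illustrative.
import hashlib
import ipaddress

def anonymize_ip(ip: str, level: str, salt: bytes = b"per-dataset-secret") -> str:
    addr = ipaddress.ip_address(ip)
    if level == "high-trust":       # keep the address as-is
        return str(addr)
    if level == "medium-trust":     # truncate to the /24 prefix, keep locality
        net = ipaddress.ip_network(f"{addr}/24", strict=False)
        return f"{net.network_address}/24"
    if level == "low-trust":        # keyed hash: consistent pseudonym, no recovery
        digest = hashlib.sha256(salt + addr.packed).hexdigest()[:12]
        return f"host-{digest}"
    raise ValueError(f"unknown anonymization level: {level}")

log_line_ip = "192.0.2.44"
for lvl in ("high-trust", "medium-trust", "low-trust"):
    print(lvl, "->", anonymize_ip(log_line_ip, lvl))
```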

    Collaborative enforcement of firewall policies in virtual private networks

    Full text link
    The widely deployed Virtual Private Network (VPN) technology allows roaming users to build an encrypted tunnel to a VPN server, which henceforth allows roaming users to access some resources as if their computer were residing on their home organization's network. Although the VPN technology is very useful, it imposes security threats on the remote network because its firewall does not know what traffic is flowing inside the VPN tunnel. To address this issue, we propose VGuard, a framework that allows a policy owner and a request owner to collaboratively determine whether the request satisfies the policy, without the policy owner knowing the request and the request owner knowing the policy. We first present an efficient protocol, called Xhash, for oblivious comparison, which allows two parties, where each party has a number, to compare whether they have the same number, without disclosing their numbers to each other. Then, we present the VGuard framework that uses Xhash as the basic building block. The basic idea of VGuard is to first convert a firewall policy to non-overlapping numerical rules and then use Xhash to check whether a request matches a rule. Compared with the Cross-Domain Cooperative Firewall (CDCF) framework, which represents the state of the art, VGuard is not only more secure but also orders of magnitude more efficient. On real-life firewall policies, for processing packets, our experimental results show that VGuard is 552 times faster than CDCF on one party and 5035 times faster than CDCF on the other party.
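    The details of Xhash are not given in the abstract. As a rough illustration of how an oblivious equality test can work in principle, the Python sketch below uses a generic commutative cipher (exponentiation modulo a prime): each party encrypts its own value, exchanges the ciphertext, re-encrypts the other's ciphertext with its own key, and the double-encrypted values match only if the original values were equal. This is offered as an assumption for intuition, not the Xhash protocol itself, and the parameters are toy-sized.

```python
# Sketch of an oblivious equality test with a commutative cipher (not Xhash).
# Both parties learn only whether their values are equal.
import hashlib
import math
import secrets

P = (1 << 127) - 1  # Mersenne prime; a real deployment would use a much larger, vetted prime

def keygen() -> int:
    """Pick a secret exponent coprime to P-1 so x -> x^k mod P is a permutation."""
    while True:
        k = secrets.randbelow(P - 3) + 2
        if math.gcd(k, P - 1) == 1:
            return k

def encode(value: bytes) -> int:
    """Hash an arbitrary value into the group."""
    return int.from_bytes(hashlib.sha256(value).digest(), "big") % P

def commute(x: int, key: int) -> int:
    return pow(x, key, P)

# Policy owner holds a rule field, request owner holds a request field.
k_policy, k_request = keygen(), keygen()
c_policy = commute(encode(b"tcp:443"), k_policy)    # sent to the request owner
c_request = commute(encode(b"tcp:443"), k_request)  # sent to the policy owner

# Each side re-encrypts what it received; equal double encryptions <=> equal values.
print(commute(c_policy, k_request) == commute(c_request, k_policy))  # True here
```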

    Ms Pac-Man versus Ghost Team CEC 2011 competition

    Get PDF
    Games provide an ideal test bed for computational intelligence, and significant progress has been made in recent years, most notably in games such as Go, where the level of play is now competitive with expert human play on smaller boards. Recently, a significantly more complex class of games has received increasing attention: real-time video games. These games pose many new challenges, including strict time constraints, simultaneous moves and open-endedness. Unlike in traditional board games, computational play is generally unable to compete with human players. One driving force for improving the overall performance of artificial intelligence players is game competitions, where practitioners may evaluate and compare their methods against those submitted by others, and possibly against human players as well. In this paper we introduce a new competition based on the popular arcade video game Ms Pac-Man: Ms Pac-Man versus Ghost Team. The competition, to be held at the Congress on Evolutionary Computation 2011 for the first time, allows participants to develop controllers for either the Ms Pac-Man agent or for the Ghost Team; unlike previous Ms Pac-Man competitions that relied on screen capture, the players now interface directly with the game engine. In this paper we introduce the competition, including a review of previous work as well as a discussion of several aspects of setting up the game competition itself. © 2011 IEEE