
    Elements of Ion Linear Accelerators, Calm in the Resonances, and Other Tales

    The main part of this book, Elements of Linear Accelerators, outlines in Part 1 a framework for non-relativistic linear accelerator focusing and accelerating channel design, simulation, optimization, and analysis where space charge is an important factor. Part 1 is the most important part of the book; grasping the framework is essential to fully understand and appreciate the elements within it, and the myriad application details of the following parts. The treatment concentrates on linacs of all sizes intended for high-intensity, very-low-beam-loss, factory-type application. The Radio-Frequency Quadrupole (RFQ) is developed in particular depth as a representative and the most complicated linac form (from dc to bunched and accelerated beam), extending to the practical design of long, high-energy linacs, including space-charge resonances and beam-halo formation, and some challenges for future work. A practical method is also presented for designing Alternating-Phase-Focused (APF) linacs with long sequences and high energy gain. Full open-source software is available. The following part, Calm in the Resonances and Other Tales, contains eyewitness accounts of nearly 60 years of participation in accelerator technology. (September 2023) The LINACS codes are released at no cost and, as always, with fully open-source coding. (p.2 & Ch 19.10) Comment: 652 pages; some hundreds of figures, all images (there is no data in the figures).

    Jornadas Nacionales de Investigación en Ciberseguridad: proceedings of the VIII Jornadas Nacionales de Investigación en Ciberseguridad, Vigo, 21-23 June 2023

    Jornadas Nacionales de Investigación en Ciberseguridad (8th, 2023, Vigo); atlanTTic; AMTEGA (Axencia para a Modernización Tecnolóxica de Galicia); INCIBE (Instituto Nacional de Ciberseguridad)

    Quantitative Verification and Synthesis of Resilient Networks


    Security Implications of Insecure DNS Usage in the Internet

    The Domain Name System (DNS) provides the domain-to-address lookup services used by almost all internet applications. Because of this ubiquitous use, attacks against the DNS have become increasingly critical. In the past, however, studies of DNS security have mostly been conducted against individual protocols and applications. In this thesis, we perform the first comprehensive evaluation of DNS-based attacks against a wide range of internet applications, ranging from time synchronisation via NTP, through internet resource management, to security mechanisms. We show how to attack these applications by exploiting various weaknesses in the DNS. These attacks are based both on already known weaknesses, which we adapt into new attacks, and on previously unknown attack vectors found during the course of this thesis. We evaluate our attacks and provide the first taxonomy of DNS applications, to show how adversaries can systematically develop attacks exploiting the DNS. We analyze the attack surface our attacks create in the internet and find that a significant number of applications and systems can be attacked. We work together with the developers of the vulnerable applications to develop patches and general countermeasures that various parties can apply to block our attacks. We also provide conceptual insights into the root causes that allow our attacks, to help with the development of new applications and standards. The findings of this thesis are published in 4 full-paper publications and 2 posters at international academic conferences. Additionally, we disclosed our findings to developers, which has led to the registration of 8 Common Vulnerabilities and Exposures identifiers (CVE IDs) and patches in 10 software implementations. To raise awareness, we also presented our findings at several community meetings and via invited articles.
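
    The attack surface described above exists because applications act on DNS answers without authenticating them. As an illustration only (not an attack from the thesis), the Go sketch below shows the pattern: a client resolves a name and connects to whatever address comes back, so a spoofed or cache-poisoned answer silently redirects the connection. The host name and port are illustrative.

```go
// Illustrative sketch: an NTP-style client that trusts whatever
// address the DNS returns. Nothing here validates the answer (e.g.,
// via DNSSEC), so a poisoned record redirects the client to an
// attacker-controlled server.
package main

import (
	"fmt"
	"net"
)

func main() {
	// The resolution step is the trust boundary.
	addrs, err := net.LookupHost("pool.ntp.org")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// The application proceeds with the first answer, unverified.
	conn, err := net.Dial("udp", net.JoinHostPort(addrs[0], "123"))
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("would exchange NTP packets with", conn.RemoteAddr())
}
```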

    Improving Network Policy Enforcement Using Natural Language Processing and Programmable Networks

    Computer networks are becoming more complex and challenging to operate, manage, and protect. As a result, network policies that define how network operators should manage the network are becoming more complex and nuanced. Unfortunately, network policies are often an undervalued part of network design, leaving network operators to guess at the intent of the policies that are written and to fill in the gaps where policies don't exist. Organizations typically designate Policy Committees to write down the network policies in policy documents using high-level natural language. The policy documents describe both the acceptable and unacceptable uses of the network. Network operators then take responsibility for enforcing the policies and verifying whether the enforcement achieves the expected requirements. Network operators often encounter gaps and ambiguous statements when translating network policies into specific network configurations. An ill-structured network policy document may prevent network operators from implementing the true intent of the policies and thus lead to incorrect enforcement. It is therefore important to know the quality of written network policies and to remove any ambiguity that may confuse the people responsible for reading and implementing them. Moreover, there is a need not only to prevent policy violations from occurring but also to check for any violations that may have occurred (i.e., the prevention mechanisms failed in some way), since unwanted packets or network traffic were somehow allowed to enter the network. In addition, the emergence of programmable networks provides flexible network control, but enforcing network routing policies in an environment that contains both traditional and programmable networks becomes a challenge. This dissertation presents a set of methods designed to improve network policy enforcement. We begin by describing the design and implementation of a new Network Policy Analyzer (NPA), which analyzes the written quality of network policies and outputs a quality report that can be given to Policy Committees to improve their policies; suggestions on how to write good network policies are also provided. We also present the Network Policy Conversation Engine (NPCE), a chatbot that lets network operators ask questions in natural language to check whether there is any policy violation in the network. NPCE takes advantage of recent advances in Natural Language Processing (NLP) and modern database solutions to convert natural-language questions into the corresponding database queries (a toy sketch of this conversion follows below). Next, we discuss our work towards understanding how internet ASes connect with each other at third-party locations such as IXPs, and what their business relationships are; such a connectivity graph is needed to write routing policies and to calculate available routes in the future. Lastly, we present how we successfully manage network policies in a hybrid network composed of both SDN and legacy devices, making network services available over the entire network.
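
    To make the natural-language-to-query step concrete, here is a minimal sketch of the idea, not NPCE's actual pipeline: a pattern table maps recognized question shapes to query templates. A real system would use an NLP model rather than regular expressions, and the flows table and its columns are invented for illustration.

```go
// Toy sketch of converting operator questions into database queries.
package main

import (
	"fmt"
	"regexp"
)

// intent pairs a question pattern with a SQL template; the capture
// group from the pattern fills the template. All names are invented.
type intent struct {
	pattern  *regexp.Regexp
	template string
}

var intents = []intent{
	{regexp.MustCompile(`(?i)did any host talk to ([\w.\-]+)`),
		"SELECT src_ip FROM flows WHERE dst_ip = '%s'"},
	{regexp.MustCompile(`(?i)who used port (\d+)`),
		"SELECT DISTINCT src_ip FROM flows WHERE dst_port = %s"},
}

// toQuery returns the database query for a recognized question.
func toQuery(question string) (string, bool) {
	for _, in := range intents {
		if m := in.pattern.FindStringSubmatch(question); m != nil {
			return fmt.Sprintf(in.template, m[1]), true
		}
	}
	return "", false
}

func main() {
	q, ok := toQuery("Did any host talk to 10.0.0.5?")
	fmt.Println(ok, q)
	// true SELECT src_ip FROM flows WHERE dst_ip = '10.0.0.5'
}
```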

    The Mask: Masking the effects of Edge Nodes being unavailable

    The Arctic tundra is observed in order to collect data for climate research. Data can be collected by cyber-physical computers with sensors. However, the Arctic tundra offers only limited energy; consequently, the nodes rely on batteries and sleep most of the time to extend their battery-limited operational lifetime. In addition, only a few nodes can expect to be in reach of a back-haul wireless data network, so nodes use on-node wireless local-area networks to reach nearby neighbor nodes. To make the data collected by the nodes more available to remote clients, a set of shadow nodes is used. These are always on and always have access to a back-haul network. Data from an edge node on the Arctic tundra propagates to the shadow nodes either directly over a back-haul network or via a neighbor node that has one. The purpose is to make the data produced by an edge node available to a client even when the edge node sleeps or no network access is available. A statistical analysis is done to characterize the system's behavior under a set of edge-node behaviors. To validate the statistical analysis, a prototype system is developed and used in a set of performance-measuring experiments. Experiments are done with 10 to 1,000,000 nodes, different probabilities of nodes being awake, and different probabilities of the back-haul network being available. Edge and shadow nodes are emulated as Go functions and executed on a high-performance computer with thousands of cores; different wireless networks are emulated, albeit in a simplified way, and a run-time simulation system is developed to control the prototype and conduct the experiments (a sketch of this emulation style follows below). The results for the prototype show that if the single synchronization chance is low, or the time to get the latest data should be minimized, an additional data-delivery path should be considered on the edge node's side. Synchronization via the right-neighbor principle adds an extra communication channel, which increases the data-availability level by 50%-100%, but resource demand grows by 30% per unit. The time required to get the latest data from edge nodes decreases by a factor of 1.75. The results for the simulation show that a cumulative network throughput of approximately 2100 MB/s and a generated data amount of approximately 25000 MB/s can be achieved at a cost of approximately 80 KB of RAM per emulated node. The results show that the statistical analysis and the prototype results as used by the simulation system match, but the statistical expectation considers a limited range of factors; statistically derived values can be used as input to the simulation, where they are adjusted to give a more comprehensive result. The conclusions are that the Mask provides instant access to data storage for edge nodes, and that the Mask fronts the edge nodes to clients, which can retrieve the data asynchronously even when edge nodes are offline.
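
    The following is a minimal sketch of the emulation style the abstract describes: edge nodes as concurrent Go functions, each awake with some probability and reaching an always-on shadow node when its back-haul link happens to be up. The probabilities and counter are illustrative; the real prototype also emulates neighbor links and wireless networks.

```go
// One simulation round: each emulated edge node is awake with
// probability pAwake and has back-haul connectivity with probability
// pBackhaul; when both hold, its data reaches a shadow node.
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"sync/atomic"
)

func main() {
	const (
		nodes     = 100000 // node counts in the thesis range up to 1,000,000
		pAwake    = 0.1    // illustrative probability of being awake
		pBackhaul = 0.3    // illustrative probability of back-haul access
	)
	var synced int64
	var wg sync.WaitGroup
	for i := 0; i < nodes; i++ {
		wg.Add(1)
		go func() { // each edge node is a Go function
			defer wg.Done()
			if rand.Float64() < pAwake && rand.Float64() < pBackhaul {
				atomic.AddInt64(&synced, 1) // data reaches the shadow node
			}
		}()
	}
	wg.Wait()
	fmt.Printf("%d of %d edge nodes reached a shadow node this round\n", synced, nodes)
}
```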

    Ensuring compliance with data privacy and usage policies in online services

    Online services collect and process a variety of sensitive personal data that is subject to complex privacy and usage policies. Complying with these policies is critical, and often legally binding, for service providers, but it is challenging because applications are prone to many disclosure threats. We present two compliance systems, Qapla and Pacer, that ensure efficient policy compliance in the face of direct and side-channel disclosures, respectively. Qapla prevents direct disclosures in database-backed applications (e.g., personnel management systems), which are subject to complex access-control, data-linking, and aggregation policies. Conventional methods inline policy checks with application code; Qapla instead specifies policies directly on the database and enforces them in a database adapter, thus separating compliance from the application code (a sketch of this adapter idea follows below). Pacer prevents network side-channel leaks in cloud applications. A tenant's secrets may leak via its network traffic shape, which can be observed at shared network links (e.g., network cards, switches). Pacer implements a cloaked tunnel abstraction, which hides secret-dependent variation in a tenant's traffic shape but allows variation based on non-secret information, enabling secure and efficient use of network resources in the cloud. Both systems require modest development effort and incur moderate performance overheads, demonstrating their usability.
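
    A minimal sketch of the adapter idea, assuming details not in the abstract (Qapla's real policy language and SQL handling are richer): policies are declared per table, and the adapter rewrites every query so the policy predicate is enforced, keeping compliance out of the application code. Table, column, and policy names are invented.

```go
// Sketch of a policy-enforcing database adapter.
package main

import (
	"fmt"
	"strings"
)

// policies maps a table to a predicate every query on it must satisfy;
// {user} is replaced with the authenticated caller. Names are invented.
var policies = map[string]string{
	"salaries": "employee_id = {user} OR is_hr({user})",
	"reviews":  "reviewer_id = {user}",
}

// guard rewrites a SELECT so the table's policy predicate is ANDed in.
// A real adapter would parse arbitrary SQL; this sketch assumes one
// fixed query shape and denies tables with no declared policy.
func guard(table, where, user string) (string, error) {
	pol, ok := policies[table]
	if !ok {
		return "", fmt.Errorf("no policy for table %q: denied", table)
	}
	pol = strings.ReplaceAll(pol, "{user}", "'"+user+"'")
	return fmt.Sprintf("SELECT * FROM %s WHERE (%s) AND (%s)", table, where, pol), nil
}

func main() {
	q, _ := guard("salaries", "year = 2023", "alice")
	fmt.Println(q)
	// SELECT * FROM salaries WHERE (year = 2023) AND (employee_id = 'alice' OR is_hr('alice'))
}
```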

    Internet of Things From Hype to Reality

    The Internet of Things (IoT) has gained significant attention and mindshare in academia and industry, especially over the past few years. The reason behind this interest is the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like, without explicit instructions.

    The future of networking is the future of Big Data

    Scientific domains such as climate science, high-energy particle physics (HEP), genomics, biology, and many others are increasingly moving towards data-oriented workflows, in which each of these communities generates, stores, and uses massive datasets that reach into terabytes and petabytes and are projected soon to reach exabytes. These communities are also increasingly moving towards a global collaborative model in which scientists routinely exchange significant amounts of data. The sheer volume of data, and the complexities of maintaining, transferring, and using it, continue to push the limits of current technologies in multiple dimensions: storage, analysis, networking, and security. This thesis tackles the networking aspect of big-data science. Networking is the glue that binds all the components of modern scientific workflows, and these communities are becoming increasingly dependent on high-speed, highly reliable networks. The network, as the common layer across big-science communities, provides an ideal place for implementing common services. Big-science applications also need to work closely with the network to ensure optimal usage of resources and intelligent routing of requests and data. Finally, as more communities move towards data-intensive, connected workflows, adopting a service model in which the network provides some of the common services reduces not only application complexity but also the need for duplicate implementations. Named Data Networking (NDN) is a new network architecture whose service model aligns better with the needs of these data-oriented applications. NDN's name-based paradigm makes it easier to provide intelligent features at the network layer rather than at the application layer. This thesis shows that NDN can push several standard features into the network. This work is the first attempt to apply NDN in the context of large scientific data; in the process, the thesis touches upon scientific data naming, name discovery, real-world deployment of NDN for scientific data, feasibility studies, and the design of in-network protocols for big-data science.
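
    As a purely illustrative sketch (not the thesis's actual naming scheme), hierarchical NDN-style names can encode catalog metadata so the network can route requests for scientific data by name; the component layout below is an assumption.

```go
// Building hierarchical NDN-style names for scientific datasets.
package main

import (
	"fmt"
	"strings"
)

// datasetName assembles an NDN-like name from catalog metadata:
// /<domain>/<experiment>/<variable>/<time>/<segment>
// This layout is an invented example, not a published convention.
func datasetName(domain, experiment, variable, timestamp string, segment int) string {
	parts := []string{domain, experiment, variable, timestamp, fmt.Sprintf("seg=%d", segment)}
	return "/" + strings.Join(parts, "/")
}

func main() {
	name := datasetName("climate", "cmip5", "surface_temperature", "2019-06", 42)
	fmt.Println(name)
	// /climate/cmip5/surface_temperature/2019-06/seg=42
}
```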

    Bolvedere: a scalable network flow threat analysis system

    Since the advent of the Internet, and its public availability in the late 1990s, there have been significant advancements in network technologies and thus a significant increase in the bandwidth available to network users, both human and automated. Although this growth is of great value to network users, it has led to an increase in malicious network-based activities, and it is theorized that, as more services become available on the Internet, the volume of such activities will continue to grow. Because of this, there is a need to monitor, comprehend, and, where needed, respond to events on networks worldwide. Although this line of thought is simple in its reasoning, undertaking such a task is no small feat. Full packet analysis is a method of network surveillance that seeks out specific characteristics within network traffic that may tell of malicious activity or anomalies in regular network usage. It is carried out within firewalls and implemented through packet classification. In the context of the networks that make up the Internet, this form of packet analysis has become infeasible: the volume of traffic introduced onto these networks every day is so large that there are simply not enough processing resources to perform such a task on every packet in real time. One could combat this problem by performing post-incident forensics, archiving packets and processing them later; however, as one cannot process all incoming packets, the archive will eventually run out of space. Full packet analysis is also hindered by the fact that some existing, commonly used solutions are designed around a single host and a single thread of execution, an outdated approach that is far slower than necessary on current computing technology. This research explores the conceptual design and implementation of a scalable network-traffic analysis system named Bolvedere. The analysis performed by Bolvedere asks whether the existence of a connection, coupled with its associated metadata, is enough to conclude something meaningful about that connection. This idea draws away from the traditional processing of every single byte in every single packet monitored on a network link (Deep Packet Inspection) towards the concept of working with connection flows. Bolvedere performs its work by leveraging the NetFlow version 9 and IPFIX protocols, but is not limited to these (a sketch of parsing the NetFlow v9 header follows below). It is implemented using a modular approach that allows either complete execution of the system on a single host or the horizontal scaling out of subsystems onto multiple hosts. The use of multiple hosts is achieved through Zero Message Queue (ZMQ), whose ease of interprocess communication allows Bolvedere to scale out horizontally, increasing processing resources and thus analysis throughput. Many underlying mechanisms in Bolvedere have been automated to make the system more user-friendly: the user need only tell Bolvedere what information they wish to analyse, and the system then rebuilds itself to achieve the required task. Bolvedere has also been hardware-accelerated through the use of Field-Programmable Gate Array (FPGA) technologies, which more than doubled the total throughput of the system.
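
    As a minimal sketch of the flow-based approach (assumed, not Bolvedere's code), the fixed 20-byte NetFlow v9 packet header can be parsed with a few big-endian reads before records are dispatched to analysis workers. The field layout follows RFC 3954; the sample packet bytes are fabricated.

```go
// Parsing the NetFlow version 9 export packet header (RFC 3954).
package main

import (
	"encoding/binary"
	"fmt"
)

// V9Header is the fixed 20-byte NetFlow v9 export packet header.
type V9Header struct {
	Version   uint16 // always 9
	Count     uint16 // number of FlowSet records in the packet
	SysUptime uint32 // milliseconds since the exporter booted
	UnixSecs  uint32 // export timestamp
	Sequence  uint32 // packet sequence counter
	SourceID  uint32 // exporter observation domain
}

func parseV9Header(b []byte) (V9Header, error) {
	if len(b) < 20 {
		return V9Header{}, fmt.Errorf("short packet: %d bytes", len(b))
	}
	return V9Header{
		Version:   binary.BigEndian.Uint16(b[0:2]),
		Count:     binary.BigEndian.Uint16(b[2:4]),
		SysUptime: binary.BigEndian.Uint32(b[4:8]),
		UnixSecs:  binary.BigEndian.Uint32(b[8:12]),
		Sequence:  binary.BigEndian.Uint32(b[12:16]),
		SourceID:  binary.BigEndian.Uint32(b[16:20]),
	}, nil
}

func main() {
	// A fabricated header for demonstration: version 9, 2 FlowSets.
	pkt := []byte{0, 9, 0, 2, 0, 0, 0x10, 0, 0x5d, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 7}
	h, err := parseV9Header(pkt)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("version=%d flowsets=%d seq=%d source=%d\n", h.Version, h.Count, h.Sequence, h.SourceID)
}
```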