190 research outputs found

    Remote file access over low-speed lines

    A link between a microcomputer and a mainframe can be useful in several ways, even when, as is usually the case, the link is only a normal terminal line. One interesting example is the 'integrated application', which divides a task between microcomputer and mainframe and can offer several benefits, in particular reducing load on the mainframe and permitting a more advanced user interface than is possible on a conventional terminal. Because integrated applications consist of two co-operating programs, they are much more difficult to construct than a single program. It would be much easier to implement integrated applications concerned with the display and/or modification of data in mainframe files if the microcomputer could confine its dealings with the mainframe to a suitable file server. However, file servers do not appear practical for use over terminal lines, which are slow compared to disc access speeds. It was proposed to alleviate the problems caused by the slow link with extended file operations, which would allow time-consuming operations such as searching or copying between files to be done in the file server. Attempting such a system showed that extended file operations are not, by themselves, sufficient; but, allied to a record-based file model and asynchronous operations (i.e. file operations that do not suspend the user program until they complete), useful results could be obtained. This thesis describes FLAP, a file server for use over terminal lines which incorporates these ideas, and MMMS, an inter-application transport protocol used by FLAP for communication between the microcomputer file interface and the mainframe server. Two simple FLAP applications are presented: a customer records maintenance program and a screen editor. Details are given of their construction and response time in use at various line speeds.
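    As an illustrative sketch only (FLAP's actual interface is not given in the abstract; every name below is an assumption), the combination of an extended operation with asynchrony might look like this: the client issues a server-side search and keeps working, collecting the matching records only when they are needed.

```python
import queue
import threading

class FileServerStub:
    """Toy stand-in for a mainframe file server reached over a slow line.
    An 'extended operation' (here, a search) runs entirely server-side,
    so only the request and the matching records cross the link."""

    def __init__(self, records):
        self.records = records  # record-based file model: a list of records

    def search_async(self, predicate, result_queue):
        # asynchronous operation: the caller is not suspended; results
        # are delivered to the queue when the server-side scan completes
        def work():
            result_queue.put([r for r in self.records if predicate(r)])
        threading.Thread(target=work).start()

# usage: issue the search, continue working, then collect the result
server = FileServerStub(["smith:42", "jones:17", "smithson:99"])
results = queue.Queue()
server.search_async(lambda r: r.startswith("smith"), results)
# ... the client could continue updating its display here ...
matches = results.get()  # block only when the result is actually needed
```

    The point of the design is visible even in the toy: the slow link carries one request and two records instead of the whole file, and the user program is never suspended while the server scans.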

    Anpassen verteilter eingebetteter Anwendungen im laufenden Betrieb (Adapting Distributed Embedded Applications During Operation)

    The availability of third-party apps is among the key success factors for software ecosystems: users benefit from more features and faster innovation, while third-party solution vendors can leverage the platform to create successful offerings. However, this requires a certain decoupling of the engineering activities of the different parties, which has not yet been achieved for distributed control systems. While late and dynamic integration of third-party components would be required, the resulting control systems must provide high reliability regarding real-time requirements, which leads to integration complexity. Closing this gap would particularly contribute to the vision of software-defined manufacturing, where an ecosystem of modern IT-based control system components could lead to faster innovation due to their higher abstraction and the availability of various frameworks. This thesis therefore addresses the research question: how can we use modern IT technologies to enable independent evolution and easy third-party integration of software components in distributed control systems where deterministic end-to-end reactivity is required, and in particular, how can we apply distributed changes to such systems consistently and reactively during operation? This thesis describes the challenges and related approaches in detail and points out that existing approaches do not fully address this research question. To close this gap, a formal specification of a runtime platform concept is presented in conjunction with a model-based engineering approach. The engineering approach decouples the engineering steps of component definition, integration, and deployment. The runtime platform supports this approach by isolating the components while still offering predictable end-to-end real-time behavior. Independent evolution of software components is supported through a concept for synchronous reconfiguration during full operation, i.e., dynamic orchestration of components.
Time-critical state transfer is supported, too, and leads to at most bounded quality degradation. Reconfiguration planning is supported by analysis concepts, including simulation of a formally specified system and reconfiguration, and analysis of potential quality degradation with the evolving dataflow graph (EDFG) method. A platform-specific realization of the concepts, the real-time container architecture, is described as a reference implementation. The model and the prototype are evaluated for feasibility and applicability of the concepts in two case studies. The first case study is a minimalistic distributed control system used in different setups with different component variants and reconfiguration plans to compare the model and the prototype and to gather runtime statistics. The second case study is a smart factory showcase system with more challenging application components and interface technologies. The conclusion is that the concepts are feasible and applicable, even though both the concepts and the prototype still need further work, for example, to reach shorter cycle times.

    Control System for the Next Generation In-flight Separator Super-FRS applied to New Isotope Search with the FRS

    The construction of the upcoming FAIR facility entails an upgrade of the existing GSI accelerator facility. One of the upgrades is the integration of the new LSA control system framework, licensed and adapted from CERN, to provide a unified control environment for all accelerators, rings and transfer lines at both the GSI and FAIR facilities. As part of this work, the FRS was incorporated as a machine model within LSA by developing and implementing Parameter hierarchies and Makerules to enable streamlined and maximally parallel operations. For this purpose, an FRS General Target Hierarchy was implemented to virtually map targets, target ladders, degraders, degrader disks, degrader ladders, detectors and detector ladders as realistically as possible, with additional Makerules to facilitate automated online energy-loss calculations, secondary beam production within targets, operator-driven magnetic rigidity overwriting and ion-optical target alignment calculations. Additionally, slits, pneumatic drives and stepper motors were introduced into the machine model as well. Benchmarking showed the machine model and LSA to be equivalent to previous control systems, reproducing old experimental settings within accuracies of 10^-4 and 10^-3 for the magnetic rigidity and current, respectively. Contemporary experiments can even be reproduced identically within the measurement and setting precision. Additional testing with Ar-40 and U-238 primary beams demonstrated the machine model's ability to correctly transport primary and secondary beam fragments to the designated experimental station without prior setting calculation via LISE++, proving all functionalities operative.
    This foundation was used during FAIR Phase-0 experiments at GSI. Using the methods described here, it was possible to preliminarily identify up to 21 new isotopes produced with a relativistic Pb-208 primary beam at 1050 AMeV impinging on a beryllium target of 4 g/cm^2 thickness with a niobium stripper backing of 225 mg/cm^2 thickness: Re-200, -201, -202, W-198, -199, Ta-195, -196, -197, Hf-191, -192, -193, Lu-189, -190, -191, Yb-186, -187, Tm-182, -183, -184, -185 and Er-181.
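    The rigidity bookkeeping above rests on the standard relation B*rho = p/q. As an illustrative sketch (not the LSA implementation; the function name and the use of the atomic mass unit for the ion mass are simplifying assumptions), the magnetic rigidity of an ion can be computed from its kinetic energy per nucleon:

```python
import math

U_MEV = 931.494  # atomic mass unit in MeV/c^2

def magnetic_rigidity(a: int, q: int, t_mev_per_u: float) -> float:
    """Magnetic rigidity B*rho in T*m for mass number a, charge state q,
    and kinetic energy t_mev_per_u in MeV per nucleon (AMeV)."""
    gamma = 1.0 + t_mev_per_u / U_MEV        # Lorentz factor
    beta = math.sqrt(1.0 - 1.0 / gamma**2)   # velocity in units of c
    p_mev_c = a * U_MEV * beta * gamma       # total momentum in MeV/c
    return p_mev_c / (299.792458 * q)        # B*rho = p/q, in tesla-metres
```

    For the bare Pb-208 beam at 1050 AMeV quoted above, this approximation gives a rigidity of roughly 14.8 T*m, comfortably within the FRS range.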

    An Improved Modular Addition Checksum Algorithm

    This paper introduces a checksum algorithm that provides a new point in the performance/complexity/effectiveness checksum tradeoff space. It has better fault detection properties than single-sum and dual-sum modular addition checksums. It is also simpler to compute efficiently than a cyclic redundancy check (CRC) due to exploiting commonly available hardware and programming language support for unsigned integer division. The key idea is to compute a single running sum, but introduce a left shift by the size (in bits) of the modulus before performing the modular reduction after each addition step. This approach provides a Hamming Distance of 3 for longer data word lengths than dual-sum approaches such as the Fletcher checksum. Moreover, it provides this capability using a single running sum that is only twice the size of the final computed check value, while providing fault detection capabilities even better than large-block variants of dual-sum approaches that require larger division operations. Comment: 9 pages, 3 figures.
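    The core loop described above is short. A minimal Python sketch of the idea as stated in the abstract (the function name and the exact shift/add/reduce ordering are assumptions read off the abstract, not the paper's reference code):

```python
def shifted_modular_checksum(data: bytes, modulus: int = 253) -> int:
    # single running sum; the sum is left-shifted by the bit size of the
    # modulus before each addition, then reduced modulo the modulus
    shift = modulus.bit_length()  # size of the modulus in bits (8 for 253)
    s = 0
    for byte in data:
        s = ((s << shift) + byte) % modulus
    return s
```

    With this ordering the intermediate value never exceeds (modulus - 1) << shift plus one byte, so the running sum fits in twice the bits of the final check value, matching the size claim in the abstract.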

    Large-Block Modular Addition Checksum Algorithms

    Checksum algorithms are widely employed because a simple algorithm with fast computational speed provides a basic detection capability for corrupted data. This paper describes the benefits of adding the design parameter of increased data block size for modular addition checksums, combined with an empirical approach to modulus selection. A longer processing block size with the right modulus can provide significantly better fault detection performance with no change in the number of bytes used to store the check value. In particular, a large-block dual-sum approach provides Hamming Distance 3-class fault detection performance for many times the data word length capability of previously studied Fletcher and Adler checksums. Moduli of 253 and 65525 are identified as being particularly effective for general-purpose checksum use. Comment: 21 pages, 15 figures.
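    A hedged sketch of the large-block dual-sum idea with the suggested modulus of 65525 (the block width, byte order, zero-padding of the final partial block, and the packing of the two sums into one check value are illustrative assumptions, not the paper's parameters):

```python
def large_block_dual_sum(data: bytes, block_bytes: int = 4, modulus: int = 65525) -> int:
    # dual-sum (Fletcher-style) checksum over multi-byte blocks:
    # sum_a accumulates block values, sum_b accumulates running sum_a,
    # which makes the second sum sensitive to block position
    sum_a, sum_b = 0, 0
    for i in range(0, len(data), block_bytes):
        chunk = data[i:i + block_bytes].ljust(block_bytes, b"\x00")
        block = int.from_bytes(chunk, "big")
        sum_a = (sum_a + block) % modulus
        sum_b = (sum_b + sum_a) % modulus
    return (sum_b << 16) | sum_a  # both sums fit 16 bits since 65525 < 2^16
```

    Swapping two distinct blocks leaves sum_a unchanged but alters sum_b, which is exactly the reordering sensitivity that a single modular sum lacks.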

    Implementation of a flowgraph-based satellite operations software for Earth Observation missions

    This project aims to develop mission-critical software that facilitates the monitoring and automation of the operations plan between the Operation Center and the CubeSats. The software will assist operators in various tasks, including scheduling satellite passes, controlling one or multiple Ground Stations to follow the satellite, preparing execution plans with contingencies for all the different steps in the communication protocol, and automating these processes. To minimize errors introduced by operators, the software will offer an interactive user interface for configuring message sets and information exchange during contact. It will also allow the setup of conditional blocks that depend on received data, creating a seamless and error-free feedback loop. The objective is to gradually reduce the operator's workload, to the point of making their interaction unnecessary. This will enable automated communication with the satellite at any time of day. As part of the operations, all uploaded and downloaded data will be stored for later processing, automated wherever possible. The software will be developed in the Rust programming language, known for its speed, memory safety, and thread safety. The Rust compiler detects many common errors at compile time, which allows the development of a highly reliable, high-performance application. While the project will initially focus on supporting the 3Cat-4 satellite, it will also create the basis for operating any other satellite in the future, such as the RITA Payload.
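    The project itself is in Rust; purely as an illustrative sketch (all type and field names here are assumptions, not the project's API), an execution plan with conditional blocks keyed on received data and a contingency fallback can be modelled as a small flowgraph:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    """One step of an execution plan: send a command, then choose the next
    step from the response (conditional block), with a contingency step
    to fall back to on a timeout."""
    command: str
    on_response: Callable[[str], Optional[str]]  # response -> next step name
    contingency: Optional[str] = None            # step to run on timeout

def run_plan(steps: dict, start: str, transceive: Callable[[str], str]) -> list:
    """Walk the flowgraph until a step yields no successor; return the log."""
    log, name = [], start
    while name is not None:
        step = steps[name]
        try:
            response = transceive(step.command)
            log.append(f"{step.command} -> {response}")
            name = step.on_response(response)
        except TimeoutError:
            log.append(f"{step.command} -> timeout, contingency")
            name = step.contingency
    return log

# usage: a two-step plan driven by a simulated transceiver
plan = {
    "ping": Step("PING", lambda r: "download" if r == "PONG" else None),
    "download": Step("DOWNLOAD", lambda r: None),
}
log = run_plan(plan, "ping", lambda cmd: "PONG" if cmd == "PING" else "DATA")
```

    Because each successor is computed from the received data, the same structure expresses both the nominal path and the contingency branches the abstract describes.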

    Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico

    Conference proceedings info: ICICT 2023: 2023 The 6th International Conference on Information and Computer Technologies, Raleigh, HI, United States, March 24-26, 2023, Pages 529-542, https://doi.org/10.1007/978-981-99-3236-

    We provide a model for systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID-19 testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created in cooperation with different institutions, including the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time to inform supply chains and resource allocation in anticipation of a surge in COVID-19 cases.

    Advances in Information Security and Privacy

    With the recent pandemic emergency, many people spend their days working remotely and have increased their use of digital resources for both work and entertainment. As a result, the amount of digital information handled online has increased dramatically, and we can observe a significant increase in the number of attacks, breaches, and hacks. This Special Issue aims to establish the state of the art in protecting information by mitigating information risks. This objective is reached by presenting both surveys on specific topics and original approaches and solutions to specific problems. In total, 16 papers have been published in this Special Issue.

    Post-quantum security of hash functions

    The research covered in this thesis is dedicated to provable post-quantum security of hash functions. Post-quantum security provides security guarantees against quantum attackers. We focus on analyzing the sponge construction, the cryptographic construction underlying the standardized hash function SHA3. Our main results prove a number of quantum security statements: standard-model security, namely collision resistance and collapsingness, and more idealized notions such as indistinguishability and indifferentiability from a random oracle. All these results concern the quantum security of classical cryptosystems. From a more high-level perspective, we find new applications of, and generalize, several important proof techniques in post-quantum cryptography. We use the polynomial method to prove quantum indistinguishability of the sponge construction. We also develop a framework for quantum game-playing proofs, using the recently introduced techniques of compressed random oracles and the one-way-to-hiding lemma. To establish the usefulness of the new framework, we also prove a number of quantum indifferentiability results for other cryptographic constructions. Along the way, we address an open problem concerning quantum indifferentiability: we disprove a conjecture that forms the basis of a no-go theorem for a version of quantum indifferentiability.
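    For orientation, the sponge construction analyzed here absorbs the message into part of the state (the rate), applies a permutation f between blocks, and then squeezes output from the same part. A toy Python sketch with a deliberately non-cryptographic permutation and simplified padding (SHA3 uses the Keccak-f permutation and pad10*1 padding; this is structural illustration only):

```python
def toy_permutation(state: bytearray) -> bytearray:
    # stand-in for Keccak-f: any bijection on the state works structurally
    # (rotate the bytes, then add each position index; NOT secure)
    rotated = state[1:] + state[:1]
    return bytearray((b + i) % 256 for i, b in enumerate(rotated))

def sponge(message: bytes, rate: int = 8, capacity: int = 8, out_len: int = 16) -> bytes:
    state = bytearray(rate + capacity)
    # simplified padding: append 0x80, then zeros up to a multiple of the rate
    padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % rate)
    # absorbing phase: XOR each block into the rate part, then permute
    for i in range(0, len(padded), rate):
        for j, byte in enumerate(padded[i:i + rate]):
            state[j] ^= byte
        state = toy_permutation(state)
    # squeezing phase: read the rate part, permuting between output blocks
    out = bytearray()
    while len(out) < out_len:
        out += state[:rate]
        state = toy_permutation(state)
    return bytes(out[:out_len])
```

    The capacity bytes are never read or written from outside; the security notions studied in the thesis bound an attacker's advantage in terms of this hidden capacity.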