240 research outputs found

    Technical means for diagnostics and monitoring of onboard information-exchange systems on an aircraft

    Get PDF
    This work is published in accordance with the rector's order No. 311/од of 27.05.2021, "On placing higher-education qualification works in the NAU repository". Thesis supervisor: Oleksandr Petrovych Slobodian, Associate Professor of the Avionics Department.
    Technical progress in aviation, as in any other industry, is closely linked to the automation of technological processes. Today, automation is used to improve reliability, durability, environmental friendliness, resource efficiency and, most importantly, cost-effectiveness and ease of operation. Thanks to the rapid development of computer technologies and microprocessors, more advanced and sophisticated methods can be applied to monitoring and controlling systems in the aviation industry and beyond. Microprocessor-based and electronic computing devices, connected by computing and control networks that use shared databases, follow standards that allow new devices to be modified and integrated, which in turn makes it possible to integrate, improve and manage production processes. Designing a distributed integrated modular avionics (DIMA) system using distributed integration technology, mixed scheduling of critical tasks, redundant real-time scheduling and a time-triggered communication mechanism significantly improves the reliability, safety and performance of an integrated real-time electronic system. DIMA represents the development trend of future avionics systems. This paper studies and discusses the architectural characteristics of DIMA, then examines and analyses in detail the development of the key technologies in a DIMA system, and finally reviews the development trend of DIMA technology
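The time-triggered communication mechanism mentioned in the abstract can be illustrated with a minimal sketch: each task may transmit only within a statically assigned slot of a cyclic major frame. The frame length, slot layout, and task names below are hypothetical, not taken from the work.

```python
# Minimal sketch of a time-triggered communication schedule of the kind
# used in DIMA architectures; all slot assignments here are hypothetical.

MAJOR_FRAME_MS = 20  # hypothetical major-frame length in milliseconds

# Each (offset, duration, task) triple is a statically assigned slot.
SCHEDULE = [
    (0, 5, "flight_control"),   # the critical task gets the first slot
    (5, 5, "navigation"),
    (10, 5, "diagnostics"),
    (15, 5, "maintenance"),
]

def owner_at(t_ms: int) -> str:
    """Return the task allowed to transmit at time t_ms (cyclic schedule)."""
    phase = t_ms % MAJOR_FRAME_MS
    for offset, duration, task in SCHEDULE:
        if offset <= phase < offset + duration:
            return task
    raise ValueError("gap in schedule")

print(owner_at(3))    # flight_control
print(owner_at(27))   # navigation (27 % 20 = 7, inside the 5-10 ms slot)
```

Because bus access is decided purely by the clock, two tasks can never contend for the medium at run time, which is what makes the timing behaviour analysable in advance.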

    Engineering a Low-Cost Remote Sensing Capability for Deep-Space Applications

    Full text link
    Systems engineering (SE) has been a useful tool for providing objective processes for breaking down complex technical problems into simpler tasks, while concurrently generating metrics to provide assurance that the solution is fit for purpose. Tailored forms of SE have also been used by cubesat mission designers to reduce risk by providing iterative feedback and key artifacts that give managers the evidence to adjust resources and tasking for success. Cubesat-sized spacecraft are being planned, built and, in some cases, flown to provide a lower-cost entry point for deep-space exploration. This is particularly important for agencies and countries with smaller space exploration budgets, where specific mission objectives can be used to develop tailored payloads within tighter constraints, while still returning useful scientific results or engineering data. In this work, a tailored SE tradespace approach was used to help determine how a 6 unit (6U) cubesat could be built from commercial-off-the-shelf (COTS)-based components and undertake remote sensing missions near Mars or near-Earth asteroids. The primary purpose of these missions is to carry a hyperspectral sensor sensitive to 600-800 nm wavelengths (hereafter defined as “red-edge”) that will investigate red-edge mineralogy characteristics commonly associated with oxidizing and hydrating environments. Minerals of this type remain of high interest as indicators of present or past habitability for life, or of active geologic processes. Implications of operating in a deep-space environment were considered as part of the engineering constraints of the design, including the potential reduction of available solar energy, changes in the thermal environment and background radiation, and vastly increased communications distances. The engineering tradespace analysis identified realistic COTS options that could satisfy mission objectives for the 6U cubesat bus while also accommodating a reasonable degree of risk. 
The exception was the communication subsystem, for which suitable capability was restricted to one particular option. This analysis was used to support an additional trade investigation into the type of sensors that would be most suitable for building the red-edge hyperspectral payload. This was constrained in part by ensuring not only that readily available COTS sensors were used, but also that affordability was optimized, particularly given a geopolitical environment that was affecting component supply surety and access to manufacturing facilities. It was found that a number of sensor options were available for designing a useful instrument, although the rapid development and life-of-type issues of COTS sensors restricted the ability to obtain useful metrics on their performance in the space environment. Additional engineering testing was conducted by constructing hyperspectral instruments using sensors popular in science, technology, engineering and mathematics (STEM) contexts. Engineering and performance metrics were gathered for the payload containing these sensors, and their performance was assessed in relevant analogue environments. A selection of materials exhibiting spectral phenomenology in the red-edge portion of the spectrum was used to produce metrics on the performance of the sensors. It was found that low-cost cameras were able to distinguish between most minerals, although they required a wider spectral range to do so. Additionally, while Raspberry Pi cameras have been popular in scientific applications, a low-cost camera without a Bayer filter markedly improved spectral sensitivity. Space-environment testing was also trialed in additional experiments using high-altitude balloons to reach the near-space environment. The sensor payloads experienced conditions approximating the surface of Mars, and results were compared with Landsat 7, a heritage Earth sensing satellite, using a popular vegetation index. 
The selected Raspberry Pi cameras were able to provide useful results from near-space that could be compared with space imagery. Further testing incorporated comparative analysis of custom-built sensors using readily available Raspberry Pi and astronomy cameras, and results from the Mastcam and Mastcam-Z instruments currently on the surface of Mars. Two sensor designs were trialed in field settings possessing Mars-analogue materials, and a subset of these materials was analysed using a laboratory-grade spectroradiometer. Results showed the Raspberry Pi multispectral camera would be best suited for broad-scale indications of mineralogy that could then be targeted by the pushbroom sensor. This sensor was found to possess a narrower spectral range than the Mastcam and Mastcam-Z but was sensitive to a greater number of bands within this range. The pushbroom sensor returned data on spectral phenomenology associated with attributes of minerals of the type found on Mars. The actual performance of the payload in appropriate conditions provided critical information that can be used to reduce risk in future designs. Additionally, the successful outcomes of the trials reduced risk for their application in a deep-space environment. The SE and practical performance testing conducted in this thesis could be developed further to design, build and fly a hyperspectral sensor, sensitive to red-edge wavelengths, on a deep-space cubesat mission. Such a mission could be flown at reasonable cost yet return useful scientific and engineering data
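As a small illustration of the index-based comparison described above, and assuming the "popular vegetation index" in question is the NDVI (the reflectance values below are hypothetical, not measurements from the thesis):

```python
def ndvi(red: float, nir: float) -> float:
    """Normalised Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red), in the range [-1, 1]."""
    if red + nir == 0:
        return 0.0  # avoid division by zero for dark pixels
    return (nir - red) / (nir + red)

# Hypothetical reflectance values: healthy vegetation absorbs red light
# and reflects strongly just beyond the red edge, in the near-infrared.
print(round(ndvi(red=0.05, nir=0.50), 2))  # 0.82 -> vigorous vegetation
print(round(ndvi(red=0.30, nir=0.35), 2))  # 0.08 -> bare soil / rock
```

Because the index is a normalised ratio, it allows a low-cost camera and a heritage satellite such as Landsat 7 to be compared despite very different absolute radiometric calibrations.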

    Energy-based control approaches in human-robot collaborative disassembly

    Get PDF

    Verification of RoboChart Models with Neural Network Components

    Get PDF
    Current software engineering frameworks for robotics treat artificial neural network (ANN) components as black boxes, and existing white-box techniques consider either component-level properties or properties involving a specific case study. A method to establish properties that may depend on all components in such a system is, as yet, undefined. Our work consists of defining such a method. First, we developed a component whose behaviour is defined by an ANN and which acts as a robotic controller. Given our application to robotics, we focus on pre-trained ANNs used for control. We define our component in the context of RoboChart, where we define a modelling notation involving a meta-model, well-formedness conditions, and a process-algebraic semantics. To further support our framework, we defined an implementation of this semantics in Java and CSPM, enabling validation and discretised verification. Given these components, we then developed an approach to verifying software systems involving our ANN components. This approach involves replacing existing memoryless, cyclic controller components with ANN components, and proving that the new system does not deviate in behaviour by more than a constant ε from the original system. Moreover, we describe a strategy for automating these proofs based on Isabelle and Marabou, combining ANN-specific verification tools with general verification tools. We demonstrate our framework using a case study involving a Segway robot in which we replace a PID controller with an ANN component. Our contributions can be summarised as follows: we have developed a framework that enables the modelling, validation, and verification of robotic software involving neural network components. This work represents progress towards establishing the safety and reliability of autonomous robotics
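The ε-deviation property described above can be sketched informally. The actual proofs are conducted with Isabelle and Marabou over all inputs; the sketch below merely samples inputs, and the PID gain, the ANN stand-in, and the ε values are all hypothetical.

```python
# Sketch of the epsilon-deviation check: an ANN component replaces a
# memoryless cyclic controller, and we confirm its output never differs
# from the original by more than epsilon on sampled inputs.
# Controllers and epsilon values below are hypothetical stand-ins.

def pid_controller(error: float) -> float:
    kp = 2.0  # proportional gain only, so the controller is memoryless
    return kp * error

def ann_controller(error: float) -> float:
    # Stand-in for a pre-trained network approximating the PID law,
    # with a small constant approximation error.
    return 2.0 * error + 0.001

def deviates_within(eps: float, inputs) -> bool:
    """True iff |pid(x) - ann(x)| <= eps for every sampled input x."""
    return all(abs(pid_controller(x) - ann_controller(x)) <= eps
               for x in inputs)

samples = [i / 100.0 for i in range(-100, 101)]
print(deviates_within(0.01, samples))    # True: 0.001 <= 0.01
print(deviates_within(0.0001, samples))  # False: 0.001 > 0.0001
```

The point of the formal approach is precisely that tools like Marabou establish this bound over the whole input domain, not just over a finite sample as here.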

    Naval Postgraduate School Academic Catalog - February 2023

    Get PDF

    Laws of Timed State Machines

    Get PDF
    State machines are widely used in industry and academia to capture behavioural models of control. They are included in popular notations, such as UML and its variants, and used (sometimes informally) to describe computational artefacts. In this paper, we present laws for state machines that we prove sound with respect to a process algebraic semantics for refinement, and complete, in that they are sufficient to reduce an arbitrary model to a normal form that isolates basic (action and control) elements. We consider two variants of UML-like state machines, both enriched with facilities to deal with time budgets, timeouts and deadlines over triggers and actions. In the first variant, machines are self-contained components, declaring all the variables, events and operations that they require or define. In contrast, in the second variant, machines are open, like in UML for instance. Laws for open state machines do not depend on a specific context of variables, events and operations, and normalization uses a novel operator for open-machine (de)composition. Our laws can be used in behaviour-preservation transformation techniques. Their applications are automated by a model-transformation engine
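A minimal sketch of the timed features mentioned above (time budgets and timeouts over triggers); the states, the trigger name, and the 5-second budget below are hypothetical, not drawn from the paper.

```python
# Sketch of a state machine with a timeout over a trigger, in the spirit
# of the timed UML-like variants described above. All names and timings
# are hypothetical.

class TimedMachine:
    def __init__(self):
        self.state = "Waiting"
        self.entered_at = 0.0  # time at which the current state was entered

    def step(self, now, trigger=None):
        """Advance the machine at time `now` with an optional trigger."""
        if self.state == "Waiting":
            if trigger == "ack":
                self.state = "Active"          # trigger arrived in time
                self.entered_at = now
            elif now - self.entered_at > 5.0:  # time budget on the trigger
                self.state = "Failed"          # timeout transition
                self.entered_at = now
        return self.state

m = TimedMachine()
print(m.step(1.0, None))   # Waiting: still inside the 5.0 s budget
print(m.step(6.5, None))   # Failed: budget exceeded, timeout fires
```

Normalisation laws of the kind the paper proves would let a machine like this be reduced to a normal form isolating its basic action and control elements without changing its observable timed behaviour.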

    D7.5 FIRST consolidated project results

    Get PDF
    The FIRST project commenced in January 2017 and concluded in December 2022, including a 24-month suspension due to the COVID-19 pandemic. Throughout the project, we successfully delivered seven technical reports, conducted three workshops on Key Enabling Technologies for Digital Factories in conjunction with CAiSE (in 2019, 2020, and 2022), produced a number of PhD theses, and published over 56 papers (plus a number of submitted journal papers). The purpose of this deliverable is to provide an updated account of the findings from our previous deliverables and publications. It compiles the original deliverables with the revisions necessary to accurately reflect the final scientific outcomes of the project

    A Framework for the Verification and Validation of Artificial Intelligence Machine Learning Systems

    Get PDF
    An effective verification and validation (V&V) process framework for the white-box and black-box testing of artificial intelligence (AI) machine learning (ML) systems is not readily available. This research uses grounded theory to develop a framework that leads to the most effective and informative white-box and black-box methods for the V&V of AI ML systems. Verification of the system ensures that the system adheres to the requirements and specifications developed and given by the major stakeholders, while validation confirms that the system performs properly with representative users in the intended environment and does not behave in unexpected ways. Beginning with definitions, descriptions, and examples of ML processes and systems, the research results identify a clear and general process to effectively test these systems. The developed framework ensures the most productive and accurate testing results. Formerly, and occasionally still, the system definition and requirements exist in scattered documents that make it difficult to integrate, trace, and test through V&V. Modern systems engineers, together with system developers and stakeholders, collaborate to produce a full system model using model-based systems engineering (MBSE). MBSE employs a Unified Modeling Language (UML) or Systems Modeling Language (SysML) representation of the system and its requirements that readily passes between stakeholders for system information and additional input. The comprehensive and detailed MBSE model allows for direct traceability to the system requirements. To thoroughly test a ML system, one performs either white-box or black-box testing, or both. Black-box testing is a testing method in which the internal model structure, design, and implementation of the system under test are unknown to the test engineer. Testers and analysts simply examine the performance of the system given its inputs and outputs. 
White-box testing is a testing method in which the internal model structure, design, and implementation of the system under test are known to the test engineer. When possible, test engineers and analysts perform both black-box and white-box testing. However, testers sometimes lack authorization to access the internal structure of the system; the framework captures this decision. No two ML systems are exactly alike, and therefore the testing of each system must be customized to some degree. Even with this customization, an effective common process exists. This research includes some specialized methods, based on grounded theory, for testing internal structure and performance. Through the study and organization of proven methods, this research develops an effective ML V&V framework. Systems engineers and analysts can then simply apply the framework across various white-box and black-box V&V testing circumstances
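The black-box discipline described above can be sketched in a few lines: the test suite exercises only the system's input/output behaviour and never inspects its internals. The model, thresholds, and test cases below are hypothetical stand-ins, not part of the research.

```python
# Minimal sketch of black-box testing: only inputs and outputs are
# observed; the internal structure of the system under test is opaque.
# The "model" and its expected behaviour here are hypothetical.

def model(temperature_c: float) -> str:
    # Pretend this is an opaque ML system under test.
    return "alert" if temperature_c > 80.0 else "ok"

def black_box_suite(predict) -> bool:
    """Run input/output test cases against any predictor, treating it
    purely as a function from inputs to outputs."""
    cases = [
        (25.0, "ok"),       # nominal input
        (81.5, "alert"),    # threshold exceeded
        (80.0, "ok"),       # boundary value
        (-40.0, "ok"),      # extreme low input
    ]
    return all(predict(x) == expected for x, expected in cases)

print(black_box_suite(model))  # True
```

A white-box suite, by contrast, would additionally assert on internal quantities (weights, activations, decision paths), which is exactly the access that testers are sometimes not authorized to have.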