    Tracking system study

    A digital computer program was developed that mathematically describes an optimal estimator-controller technique as applied to the control of antenna tracking systems used by NASA. Simulation studies utilizing this program were conducted on the IBM 360/91 computer. The basic ideas of applying optimal estimator-controller techniques to antenna tracking systems are discussed. A survey of existing tracking methods is given, along with their shortcomings and inherent errors, and it is explained how these errors can be considerably reduced if optimal estimation and control are used. The modified programs generated in this project are described and the simulation results are summarized. The new algorithms for direct synthesis and stabilization of the systems, including nonlinearities, are presented.
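
    The report above does not include code; as a rough, hedged illustration of the estimator-controller idea it describes, the sketch below pairs a discrete-time Kalman filter with a linear-quadratic regulator for a toy one-axis antenna model. All matrices, noise levels and gains are illustrative assumptions, not values from the study.

```python
import numpy as np

# Toy one-axis antenna model: state = [pointing angle, angular rate].
# Every number here is an illustrative assumption, not taken from the report.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([[0.0], [dt]])             # torque input
H = np.array([[1.0, 0.0]])              # only the angle is measured
Q = np.diag([1e-5, 1e-4])               # process noise covariance
R = np.array([[1e-2]])                  # measurement noise covariance

def lqr_gain(A, B, Qc, Rc, iters=500):
    """Steady-state discrete LQR gain via Riccati iteration."""
    P = Qc.copy()
    for _ in range(iters):
        K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)
        P = Qc + A.T @ P @ (A - B @ K)
    return K

K = lqr_gain(A, B, np.diag([10.0, 1.0]), np.array([[0.1]]))

# One estimator-controller cycle: predict, update with a noisy angle
# measurement, then command a torque from the *estimated* state.
x_hat, P = np.zeros((2, 1)), np.eye(2)
z = np.array([[0.05]])                  # noisy angle measurement (rad)

x_hat = A @ x_hat                       # time update (previous control omitted)
P = A @ P @ A.T + Q
S = H @ P @ H.T + R
Kf = P @ H.T @ np.linalg.inv(S)         # Kalman gain
x_hat = x_hat + Kf @ (z - H @ x_hat)    # measurement update
P = (np.eye(2) - Kf @ H) @ P
u = -K @ x_hat                          # LQ control from the state estimate
print("commanded torque:", float(u[0, 0]))
```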

    Beyond Transmitting Bits: Context, Semantics, and Task-Oriented Communications

    Communication systems to date primarily aim at reliably communicating bit sequences. Such an approach provides efficient engineering designs that are agnostic to the meanings of the messages or to the goal that the message exchange aims to achieve. Next generation systems, however, can be potentially enriched by folding message semantics and goals of communication into their design. Further, these systems can be made cognizant of the context in which communication exchange takes place, providing avenues for novel design insights. This tutorial summarizes the efforts to date, starting from the early adaptations of these ideas, semantic-aware and task-oriented communications, covering the foundations, algorithms and potential implementations. The focus is on approaches that utilize information theory to provide the foundations, as well as the significant role of learning in semantics and task-aware communications.
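
    The tutorial is survey-style, but the core contrast it draws, transmitting every bit versus transmitting only what the task needs, can be made concrete with a toy back-of-the-envelope sketch. The scenario below (a 28x28 grayscale image and a 10-class task) is an assumption chosen purely for illustration.

```python
import math

# Toy comparison of a bit-oriented link with a task-oriented one.
# Assumed scenario: the receiver only needs a 10-class decision about
# a 28x28 grayscale image, not the image itself.
pixels = 28 * 28
bits_raw = pixels * 8                            # ship every pixel at 8 bits
num_classes = 10
bits_task = math.ceil(math.log2(num_classes))    # ship only the decision

print(f"bit-oriented link : {bits_raw} bits per sample")
print(f"task-oriented link: {bits_task} bits per sample "
      f"(~{bits_raw // bits_task}x fewer when only the decision matters)")
```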

    The design of digital-adaptive controllers for VTOL aircraft

    Design procedures for VTOL automatic control systems have been developed and are presented. Using linear-optimal estimation and control techniques as a starting point, digital-adaptive control laws have been designed for the VALT Research Aircraft, a tandem-rotor helicopter which is equipped for fully automatic flight in terminal area operations. These control laws are designed to interface with velocity-command and attitude-command guidance logic, which could be used in short-haul VTOL operations. Developments reported here include new algorithms for designing non-zero-set-point digital regulators, design procedures for rate-limited systems, and algorithms for dynamic control trim setting
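
    The abstract describes the non-zero-set-point regulator idea only in prose. A minimal sketch of that idea, under assumed model matrices and an assumed pre-computed LQR gain (this is not the VALT helicopter model), is to solve for a steady-state trim pair (x*, u*) that holds the desired output and then regulate deviations about it:

```python
import numpy as np

# Assumed illustrative discrete-time model; not the VALT helicopter model.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
K = np.array([[1.2, 0.6]])             # assumed pre-computed LQR gain

def trim(A, B, C, r):
    """Non-zero set point: solve x* = A x* + B u* and C x* = r for (x*, u*)."""
    n, m = B.shape
    p = C.shape[0]
    M = np.block([[A - np.eye(n), B],
                  [C, np.zeros((p, m))]])
    rhs = np.concatenate([np.zeros(n), np.atleast_1d(float(r))])
    sol = np.linalg.solve(M, rhs)
    return sol[:n].reshape(n, 1), sol[n:].reshape(m, 1)

x_star, u_star = trim(A, B, C, r=2.0)  # hold the commanded output at 2.0

# Regulate deviations from the trim condition: u = u* - K (x - x*).
x = np.zeros((2, 1))                   # current state (or state estimate)
u = u_star - K @ (x - x_star)
print("trim state:", x_star.ravel(), "trim input:", u_star.ravel(), "command:", u.ravel())
```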

    Advances on Uncertainty Quantification Techniques for Dynamical Systems: Theory and Modelling

    Uncertainty quantification comprises the methods and computational techniques whose main objective is to describe the randomness present in problems of many kinds. These methods are useful in the modelling of biological, physical, natural or social processes, since such processes involve aspects that cannot be determined exactly. For example, the contagion rate of an epidemic disease or the growth rate of a tumour volume depend on genetic, environmental or behavioural factors; these can never be fully specified and therefore carry an intrinsic randomness that affects the final outcome. The main objective of this thesis is to extend techniques for quantifying uncertainty in two areas of mathematics: fractional differential calculus and mathematical modelling. Fractional-order derivatives allow behaviours to be modelled that classical derivatives cannot, such as memory effects or the viscoelasticity of some materials. From a theoretical point of view, this thesis extends fractional calculus to a random setting, specifically in the mean square sense. Random fractional initial value problems are presented; computing the solution, obtaining approximations of its mean and variance, and approximating its first probability density function are addressed in the following chapters. Since it is not always possible to obtain the exact solution of a random fractional initial value problem, a chapter is also devoted to a numerical procedure that approximates the solution. From a more applied point of view, computational techniques to quantify the uncertainty in mathematical models are developed. Combining these techniques with appropriate mathematical models, problems in biological dynamics are studied. First, the number of meningococcus carriers in Spain is determined with a random fractional Lotka-Volterra competition model. Next, the volume of breast tumours is modelled by a random logistic model. Finally, using a mathematical model that describes the blood glucose level of a diabetic patient, a recommendation of carbohydrate intake and insulin doses is proposed so that the patient's glucose level stays within a healthy confidence band. It is important to stress that these studies require real data, which may be affected by measurement or processing errors; correctly modelling the problem together with the uncertainty in the data is therefore of vital importance.
    Burgos Simón, C. (2021). Advances on Uncertainty Quantification Techniques for Dynamical Systems: Theory and Modelling [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/166442
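
    The thesis develops its theory in the mean square sense; as a much simpler, hedged stand-in for the kind of output described above (mean and confidence band of a random logistic tumour-growth model), the sketch below propagates assumed parameter distributions through the deterministic logistic solution by plain Monte Carlo sampling. The distributions and numbers are invented for illustration, not the calibrated values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random logistic growth: V(t) = K / (1 + (K/V0 - 1) * exp(-r t)).
# The parameter distributions below are assumptions for illustration only;
# the thesis calibrates its model to real breast-tumour data.
n_samples = 10_000
r = rng.normal(0.20, 0.03, n_samples)      # growth rate (1/day)
K = rng.normal(5.0, 0.5, n_samples)        # carrying capacity (cm^3)
V0 = 0.5                                   # initial volume (cm^3)
t = np.linspace(0.0, 40.0, 81)             # days

V = K[:, None] / (1.0 + (K[:, None] / V0 - 1.0) * np.exp(-r[:, None] * t))

mean = V.mean(axis=0)
lo, hi = np.percentile(V, [2.5, 97.5], axis=0)   # 95% confidence band

print(f"day 40: mean volume {mean[-1]:.2f} cm^3, "
      f"95% band [{lo[-1]:.2f}, {hi[-1]:.2f}] cm^3")
```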

    Proceedings of the 1st Virtual Control Conference VCC 2010


    An introduction to structural optimization problems

    The object of the research described in this thesis is to examine the possibilities of developing analytical and computational procedures for a class of structural optimization problems in the presence of behaviour and side constraints. These are essentially optimal control problems based on the maximum principle of Pontryagin and the dynamic programming formalism of Bellman. They are characterised by inequality constraints on the state and control variables, giving rise to systems of highly complex differential equations which present formidable difficulties both in the construction of the appropriate boundary conditions and in the subsequent development of solution procedures for these boundary value problems. Therefore an alternative approach is used whereby the problem is discretised, leading to a non-linear programming approximation. The associated non-linear programs are characterised by non-analytic "black box" type representations for the behaviour constraints. The solutions are based on a "steepest descent–alternate step" mode of travel in design space. [Continues.]
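
    The abstract describes the method only in prose; the snippet below is a loose, hedged interpretation of a "steepest descent / alternate step" loop on a toy discretised problem, treating the behaviour constraint as a black box and differentiating it by finite differences. The objective, constraint and step sizes are all invented for illustration.

```python
import numpy as np

# Toy discretised design problem (assumed): minimise member weight f(x)
# subject to a "black box" behaviour constraint g(x) <= 0 and side constraints.
def weight(x):                      # objective: total weight (toy)
    return x.sum()

def behaviour(x):                   # black-box behaviour constraint, g(x) <= 0
    return 1.0 - x.prod()           # e.g. a minimum-stiffness requirement (toy)

def fd_grad(fun, x, h=1e-6):
    """Central finite-difference gradient, since the constraint is non-analytic."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2 * h)
    return g

x = np.array([2.0, 2.0])            # initial design; side constraints x >= 0.1
step = 0.05
for _ in range(200):
    if behaviour(x) <= 0.0:
        d = -fd_grad(weight, x)     # steepest-descent move while feasible
    else:
        d = -fd_grad(behaviour, x)  # "alternate step" back toward feasibility
    x = np.clip(x + step * d / np.linalg.norm(d), 0.1, None)

print("design:", x, "weight:", weight(x), "behaviour:", behaviour(x))
```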

    Federated Machine Learning in Edge Computing

    Machine Learning (ML) is transforming the way that computers are used to solve problems in computer vision, natural language processing, scientific modelling, and much more. The rising number of devices connected to the Internet generates huge quantities of data that can be used for ML purposes. Traditionally, organisations require user data to be uploaded to a single location (i.e., a cloud datacentre) for centralised ML. However, public concerns regarding data privacy are growing, and in some domains, such as healthcare, strict laws govern access to data. The computational power and connectivity of devices at the network edge are also increasing: edge computing is a paradigm designed to move computation from the cloud to the edge to reduce latency and traffic. Federated Learning (FL) is a new and swiftly developing field that has huge potential for privacy-preserving ML. In FL, edge devices collaboratively train a model without users sharing their personal data with any other party. However, there are multiple challenges in designing useful FL algorithms, including the heterogeneity of data across participating clients; the low computing power, intermittent connectivity and unreliability of clients at the network edge compared to the datacentre; and the difficulty of limiting information leakage whilst still training high-performance models. This thesis proposes new methods for improving the process of FL in edge computing and hence making it more practical for real-world deployments. First, a novel approach is designed that accelerates the convergence of the FL model through adaptive optimisation, reducing the time taken to train a model, whilst lowering the total quantity of information uploaded from edge clients to the coordinating server through two new compression strategies. Next, a Multi-Task FL framework is proposed that allows participating clients to train unique models that are tailored to their own heterogeneous datasets whilst still benefiting from FL, improving model convergence speed and generalisation performance across clients. Then, the principle of decreasing the total work that clients perform during the FL process is explored. A theoretical analysis (and subsequent experimental evaluation) suggests that this approach can reduce the time taken to reach a desired training error whilst lowering the total computational cost of FL and improving communication efficiency. Lastly, an algorithm is designed that applies adaptive optimisation to FL in a novel way, through the use of a statistically-biased optimiser whose values are kept fixed on clients. This algorithm can leverage the convergence guarantees of centralised algorithms, with the addition of FL-related error terms. Furthermore, it shows excellent performance on benchmark FL datasets whilst possessing lower computation and upload costs compared to competing adaptive-FL algorithms.
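
    The abstract describes the thesis algorithms only at a high level; the sketch below illustrates the generic pattern they build on (clients train locally on heterogeneous data, the server averages their updates and applies an adaptive step) using FedAvg with an Adam-style server optimiser on a toy linear model. It is a hedged, generic sketch of adaptive federated optimisation, not the specific algorithms proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heterogeneous clients: each holds its own linear-regression data.
def make_client(shift, n=50, d=5):
    X = rng.normal(size=(n, d))
    w_true = np.ones(d) + shift               # per-client shift = data heterogeneity
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

clients = [make_client(s) for s in (-0.5, 0.0, 0.5, 1.0)]

def local_update(w, X, y, lr=0.05, epochs=5):
    """Client-side gradient descent on local data; returns the updated model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Server state: global model plus Adam-style moment estimates.
w = np.zeros(5)
m, v = np.zeros(5), np.zeros(5)
beta1, beta2, eta, eps = 0.9, 0.99, 0.1, 1e-8

for rnd in range(50):
    deltas = [local_update(w, X, y) - w for X, y in clients]   # local training
    avg_delta = np.mean(deltas, axis=0)                        # FedAvg aggregation
    m = beta1 * m + (1 - beta1) * avg_delta                    # adaptive server step
    v = beta2 * v + (1 - beta2) * avg_delta ** 2
    w = w + eta * m / (np.sqrt(v) + eps)

loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])
print(f"after 50 rounds: mean client loss {loss:.4f}")
```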