8,291 research outputs found

    UMSL Bulletin 2023-2024

    Get PDF
    The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.

    Graduate Catalog of Studies, 2023-2024

    Get PDF

    Towards A Practical High-Assurance Systems Programming Language

    Full text link
    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task. It requires considerable expertise in both systems programming and formal verification. The development can be extremely costly due to the sheer complexity of the systems and the nuances in them, if not assisted by appropriate tools that provide abstraction and automation. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof with a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
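The property-based testing idea described above can be sketched in a few lines: randomly generated inputs are used to check that an implementation agrees with its purely functional specification, surfacing refinement violations before (or instead of) a full proof. The word-count example and all function names below are hypothetical illustrations, not part of the Cogent framework.

```python
import random

def spec_wordcount(s: str) -> int:
    """Abstract functional specification: the purely functional model
    that equational reasoning would target."""
    return len(s.split())

def impl_wordcount(s: str) -> int:
    """Stand-in for the low-level implementation under test."""
    count, in_word = 0, False
    for ch in s:
        if ch.isspace():
            in_word = False
        elif not in_word:
            in_word = True
            count += 1
    return count

def check_refinement(trials: int = 1000, seed: int = 0) -> bool:
    """Property-based test: on random inputs the implementation
    must agree with (refine) its functional specification."""
    rng = random.Random(seed)
    alphabet = "ab \t\n"
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(40)))
        if impl_wordcount(s) != spec_wordcount(s):
            print("counterexample:", repr(s))
            return False
    return True
```

A failing run prints a concrete counterexample, which is exactly the kind of feedback that lets developers adopt verification progressively.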

    Beam scanning by liquid-crystal biasing in a modified SIW structure

    Get PDF
    A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW), modified to work as a Groove Gap Waveguide with radiating slots etched on the upper broad wall, so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to place several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
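The scanning mechanism can be sketched numerically: biasing the LC changes its effective permittivity, which shifts the guided-mode phase constant and hence the leaky-wave beam angle. The simple filled-waveguide TE10 dispersion, and all frequencies, dimensions, and permittivity values below, are illustrative assumptions, not the paper's design data.

```python
import math

C0 = 299_792_458.0  # speed of light, m/s

def beam_angle_deg(freq_hz: float, eps_r: float, a_m: float) -> float:
    """Main-beam angle from broadside of a leaky-wave antenna whose
    guided mode follows the TE10 dispersion of a dielectric-filled
    waveguide of width a: beta = sqrt(eps_r*k0^2 - (pi/a)^2),
    sin(theta) = beta/k0 (fast-wave radiation condition)."""
    k0 = 2 * math.pi * freq_hz / C0
    beta_sq = eps_r * k0**2 - (math.pi / a_m) ** 2
    if beta_sq <= 0:
        raise ValueError("mode below cutoff")
    ratio = math.sqrt(beta_sq) / k0
    if ratio >= 1:
        raise ValueError("slow wave: no leaky radiation")
    return math.degrees(math.asin(ratio))

# biasing the LC from eps_r = 3.25 to 3.30 steers the beam (toy numbers)
theta_lo = beam_angle_deg(28e9, 3.25, 3e-3)
theta_hi = beam_angle_deg(28e9, 3.30, 3e-3)
```

The higher permittivity yields a larger phase constant and thus a beam angle farther from broadside, which is the fixed-frequency scanning effect the abstract describes.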

    A survey on reconfigurable intelligent surfaces: wireless communication perspective

    Get PDF
    Using reconfigurable intelligent surfaces (RISs) to improve the coverage and the data rate of future wireless networks is a viable option. These surfaces consist of a large number of passive and nearly passive elements that interact with incident signals in a smart way, for example by reflecting them, to increase the wireless system's performance; as a result, the notion of a smart radio environment comes to fruition. This survey reviews RIS-assisted wireless communication, starting with the principles of RIS, including the hardware architecture, the control mechanisms, and a discussion of previously held views about the channel model and path loss. A performance analysis considering different performance parameters, analytical approaches, and metrics is then presented to describe the performance improvements of RIS-assisted wireless networks. Despite its enormous promise, RIS faces new hurdles in integrating efficiently into wireless networks due to its passive nature. Consequently, channel estimation for both fully and nearly passive RIS, as well as RIS deployments, are compared under various wireless communication models and for single and multiple users. Lastly, the challenges and potential future research areas for RIS-aided wireless communication systems are outlined.
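The basic performance gain behind such surveys can be illustrated with a standard back-of-the-envelope model: if an RIS with N elements applies optimal phase shifts, the N reflected paths add coherently and received power scales as N², whereas unconfigured random phases give only order-N power. The unit-amplitude path model below is a textbook simplification, not a result from this survey.

```python
import cmath, math, random

def received_power(n_elements: int, aligned: bool, seed: int = 1) -> float:
    """Sum of N unit-amplitude paths reflected by RIS elements.
    With optimally configured phase shifts all paths add coherently,
    so |sum| = N and received power scales as N^2; with random
    (unconfigured) phases the expected power is only about N."""
    rng = random.Random(seed)
    total = 0j
    for _ in range(n_elements):
        phase = 0.0 if aligned else rng.uniform(0.0, 2.0 * math.pi)
        total += cmath.exp(1j * phase)
    return abs(total) ** 2

p_smart = received_power(64, aligned=True)    # 64^2 = 4096
p_random = received_power(64, aligned=False)  # on the order of 64
```

This N² scaling is what makes the passive, nearly-zero-power elements attractive, and also why accurate channel estimation (needed to compute the aligning phases) is the central hurdle the survey discusses.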

    Turbulence closure with small, local neural networks: Forced two-dimensional and β-plane flows

    Full text link
    We parameterize sub-grid scale (SGS) fluxes in sinusoidally forced two-dimensional turbulence on the β-plane at high Reynolds numbers (Re ≈ 25,000) using simple 2-layer Convolutional Neural Networks (CNNs) having only O(1000) parameters, two orders of magnitude smaller than recent studies employing deeper CNNs with 8-10 layers; we obtain stable, accurate, and long-term online or a posteriori solutions at 16× downscaling factors. Our methodology significantly improves training efficiency and the speed of online Large Eddy Simulation (LES) runs, while offering insights into the physics of closure in such turbulent flows. Our approach benefits from extensive hyperparameter searching in learning rate and weight decay coefficient space, as well as the use of cyclical learning rate annealing, which leads to more robust and accurate online solutions compared to fixed learning rates. Our CNNs use either the coarse velocity or the vorticity and strain fields as inputs, and output the two components of the deviatoric stress tensor. We minimize a loss between the SGS vorticity flux divergence (computed from the high-resolution solver) and that obtained from the CNN-modeled deviatoric stress tensor, without requiring energy or enstrophy preserving constraints. The success of shallow CNNs in accurately parameterizing this class of turbulent flows implies that the SGS stresses have a weak non-local dependence on coarse fields; it also aligns with our physical conception that small scales are locally controlled by larger scales such as vortices and their strained filaments. Furthermore, 2-layer CNN parameterizations are more likely to be interpretable and generalizable because of their intrinsic low dimensionality. (27 pages, 13 figures)
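The O(1000) parameter count is easy to verify for a 2-layer architecture of the kind described (two input channels for the coarse velocity, two output channels for the deviatoric stress). The hidden width and kernel size below are assumptions for illustration, not the paper's exact values.

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of one k x k convolutional layer: weights + biases."""
    return c_in * c_out * k * k + c_out

# hypothetical 2-layer closure net: coarse (u, v) in, two deviatoric
# stress components out, one hidden layer of 8 channels, 5x5 kernels
# (the hidden width and kernel size are assumptions, not the paper's)
total_params = conv_params(2, 8, 5) + conv_params(8, 2, 5)
# total_params == 810, i.e. O(1000) as claimed
```

A deep 8-10 layer CNN with tens of channels per layer would reach O(100,000) parameters by the same arithmetic, which is the two-orders-of-magnitude gap the abstract highlights.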

    Efficient resilience analysis and decision-making for complex engineering systems

    Get PDF
    Modern societies around the world are increasingly dependent on the smooth functionality of progressively more complex systems, such as infrastructure systems, digital systems like the internet, and sophisticated machinery. They form the cornerstones of our technologically advanced world and their efficiency is directly related to our well-being and the progress of society. However, these important systems are constantly exposed to a wide range of threats of natural, technological, and anthropogenic origin. The emergence of global crises such as the COVID-19 pandemic and the ongoing threat of climate change have starkly illustrated the vulnerability of these widely ramified and interdependent systems, as well as the impossibility of predicting threats entirely. The pandemic, with its widespread and unexpected impacts, demonstrated how an external shock can bring even the most advanced systems to a standstill, while the ongoing climate change continues to produce unprecedented risks to system stability and performance. These global crises underscore the need for systems that can not only withstand disruptions, but also recover from them efficiently and rapidly. The concept of resilience and related developments encompass these requirements: analyzing, balancing, and optimizing the reliability, robustness, redundancy, adaptability, and recoverability of systems -- from both technical and economic perspectives. This cumulative dissertation, therefore, focuses on developing comprehensive and efficient tools for resilience-based analysis and decision-making of complex engineering systems. The newly developed resilience decision-making procedure is at the core of these developments.
It is based on an adapted systemic risk measure, a time-dependent probabilistic resilience metric, as well as a grid search algorithm, and represents a significant innovation as it enables decision-makers to identify an optimal balance between different types of resilience-enhancing measures, taking into account monetary aspects. Increasingly, system components have significant inherent complexity, requiring them to be modeled as systems themselves. Thus, this leads to systems-of-systems with a high degree of complexity. To address this challenge, a novel methodology is derived by extending the previously introduced resilience framework to multidimensional use cases and synergistically merging it with an established concept from reliability theory, the survival signature. The new approach combines the advantages of both original components: a direct comparison of different resilience-enhancing measures from a multidimensional search space leading to an optimal trade-off in terms of system resilience, and a significant reduction in computational effort due to the separation property of the survival signature. It ensures that once a subsystem structure has been computed -- a typically computationally expensive process -- any characterization of the probabilistic failure behavior of components can be validated without having to recompute the structure. In reality, measurements, expert knowledge, and other sources of information are laden with multiple uncertainties. For this purpose, an efficient method based on the combination of survival signature, fuzzy probability theory, and non-intrusive stochastic simulation (NISS) is proposed. This results in an efficient approach to quantify the reliability of complex systems, taking into account the entire uncertainty spectrum.
The new approach, which synergizes the advantageous properties of its original components, achieves a significant decrease in computational effort due to the separation property of the survival signature. In addition, it attains a dramatic reduction in sample size due to the adapted NISS method: only a single stochastic simulation is required to account for uncertainties. The novel methodology not only represents an innovation in the field of reliability analysis, but can also be integrated into the resilience framework. For a resilience analysis of existing systems, the consideration of continuous component functionality is essential. This is addressed in a further novel development. By introducing the continuous survival function and the concept of the Diagonal Approximated Signature as a corresponding surrogate model, the existing resilience framework can be usefully extended without compromising its fundamental advantages. In the context of the regeneration of complex capital goods, a comprehensive analytical framework is presented to demonstrate the transferability and applicability of all developed methods to complex systems of any type. The framework integrates the previously developed resilience, reliability, and uncertainty analysis methods. It provides decision-makers with the basis for identifying resilient regeneration paths in two ways: first, in terms of regeneration paths with inherent resilience, and second, regeneration paths that lead to maximum system resilience, taking into account technical and monetary factors affecting the complex capital good under analysis. In summary, this dissertation offers innovative contributions to efficient resilience analysis and decision-making for complex engineering systems. It presents universally applicable methods and frameworks that are flexible enough to consider system types and performance measures of any kind. 
This is demonstrated in numerous case studies, ranging from arbitrary flow networks and functional models of axial compressors to substructured infrastructure systems with several thousand individual components.
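The separation property of the survival signature described above can be shown concretely: the structural term is computed once, after which any component failure model can be swapped in without recomputing the structure. The 2-out-of-3 toy system below is an illustration, not one of the dissertation's case studies; the formula shown is the standard survival-signature expression for exchangeable components.

```python
from math import comb

def system_reliability(phi: list[float], p: float) -> float:
    """Survival-signature reliability for m exchangeable components:
    R = sum_l phi[l] * C(m, l) * p**l * (1 - p)**(m - l),
    where phi[l] is the probability the system functions given that
    exactly l components function. phi is computed once from the
    system structure and can then be reused with any component
    reliability p -- the separation property."""
    m = len(phi) - 1
    return sum(
        phi[l] * comb(m, l) * p**l * (1 - p) ** (m - l) for l in range(m + 1)
    )

# toy 2-out-of-3 system: functions iff at least two components do
phi = [0.0, 0.0, 1.0, 1.0]
r = system_reliability(phi, 0.9)  # 3 * 0.9^2 * 0.1 + 0.9^3 = 0.972
```

Re-evaluating the same `phi` for a different `p` (or an uncertain, fuzzy `p`, as in the NISS-based method) costs only the cheap sum, which is why the structural computation dominates and needs to be done only once.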

    Augmented Behavioral Annotation Tools, with Application to Multimodal Datasets and Models: A Systematic Review

    Get PDF
    Annotation tools are an essential component in the creation of datasets for machine learning purposes. Annotation tools have evolved greatly since the turn of the century, and now commonly include collaborative features to divide labor efficiently, as well as automation employed to amplify human efforts. Recent developments in machine learning models, such as Transformers, allow for training upon very large and sophisticated multimodal datasets and enable generalization across domains of knowledge. These models also herald an increasing emphasis on prompt engineering to provide qualitative fine-tuning of the model itself, adding a novel emerging layer of direct machine learning annotation. These capabilities enable machine intelligence to recognize, predict, and emulate human behavior with much greater accuracy and nuance, a noted shortfall that has contributed to algorithmic injustice in previous techniques. However, the scale and complexity of training data required for multimodal models present engineering challenges. Best practices for conducting annotation for large multimodal models in the safest and most ethical, yet efficient, manner have not been established. This paper presents a systematic literature review of crowd- and machine-learning-augmented behavioral annotation methods to distill practices that may have value in multimodal implementations, cross-correlated across disciplines. Research questions were defined to provide an overview of the evolution of augmented behavioral annotation tools in the past, in relation to the present state of the art. (Contains five figures and four tables.)

    Structured machine learning models for robustness against different factors of variability in robot control

    Get PDF
    An important feature of human sensorimotor skill is our ability to learn to reuse it across different environmental contexts, in part due to our understanding of attributes of variability in these environments. This thesis explores how the structure of models used within learning for robot control could similarly help autonomous robots cope with variability, hence achieving skill generalisation. The overarching approach is to develop modular architectures that judiciously combine different forms of inductive bias for learning. In particular, we consider how models and policies should be structured in order to achieve robust behaviour in the face of different factors of variation - in the environment, in objects and in other internal parameters of a policy - with the end goal of more robust, accurate and data-efficient skill acquisition and adaptation. At a high level, variability in skill is determined by variations in constraints presented by the external environment, and in task-specific perturbations that affect the specification of optimal action. A typical example of environmental perturbation would be variation in lighting and illumination, affecting the noise characteristics of perception. An example of task perturbations would be variation in object geometry, mass or friction, and in the specification of costs associated with speed or smoothness of execution. We counteract these factors of variation by exploring three forms of structuring: utilising separate data sets curated according to the relevant factor of variation, building neural network models that incorporate this factorisation into the very structure of the networks, and learning structured loss functions. The thesis is comprised of four projects exploring this theme within robotics planning and prediction tasks. Firstly, in the setting of trajectory prediction in crowded scenes, we explore a modular architecture for learning static and dynamic environmental structure.
We show that factorising the prediction problem from the individual representations allows for robust and label-efficient forward modelling, and relaxes the need for full model re-training in new environments. This modularity explicitly allows for a more flexible and interpretable adaptation of trajectory prediction models using pre-trained state-of-the-art models. We show that this results in more efficient motion prediction and allows for performance comparable to state-of-the-art supervised 2D trajectory prediction. Next, in the domain of contact-rich robotic manipulation, we consider a modular architecture that combines model-free learning from demonstration, in particular dynamic movement primitives (DMP), with modern model-free reinforcement learning (RL), using both on-policy and off-policy approaches. We show that factorising the skill learning problem into skill acquisition and error correction through policy adaptation strategies such as residual learning can help improve the overall performance of policies in the context of contact-rich manipulation. Our empirical evaluation demonstrates how best to do this with DMPs, and we propose "residual Learning from Demonstration" (rLfD), a framework that combines DMPs with RL to learn a residual correction policy. Our evaluations, performed both in simulation and on a physical system, suggest that applying residual learning directly in task space and operating on the full pose of the robot can significantly improve the overall performance of DMPs. We show that rLfD offers a solution that is gentle on the joints and improves the task success and generalisation of DMPs. Last but not least, our study shows that the extracted correction policies can be transferred to different geometries and frictions through few-shot task adaptation.
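The residual-learning decomposition above can be sketched in one dimension: the executed action is the base demonstration-derived policy plus a learned correction, a = a_DMP(s) + a_res(s). Everything below is a toy stand-in (a proportional controller for the DMP, a hand-written residual in place of the RL-learned one, and an invented plant bias), chosen only to show why the residual helps.

```python
def dmp_action(s: float, g: float) -> float:
    """Stand-in for a DMP rollout step: proportional pull toward goal g."""
    return 0.5 * (g - s)

def rollout(residual, g: float = 1.0, steps: int = 20) -> float:
    """Run the base policy plus a residual correction in task space
    against a plant with a systematic bias the demonstrations did not
    capture. Returns the final task-space error |g - s|."""
    s = 0.0
    for _ in range(steps):
        s += dmp_action(s, g) + residual(s, g)
        s -= 0.2 * (g - s)  # plant bias: systematic undershoot
    return abs(g - s)

err_dmp_only = rollout(lambda s, g: 0.0)
# hand-written residual standing in for the RL-learned correction policy
err_rlfd = rollout(lambda s, g: 0.2 * (g - s))
# err_rlfd < err_dmp_only: the residual compensates the unmodeled bias
```

Because the base policy already gets close to the goal, the residual only has to correct a small error, which is what makes the factorised problem easier for RL than learning the full policy from scratch.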
Third, we employ meta learning to learn time-invariant reward functions, wherein both the objectives of a task (i.e., the reward functions) and the policy for performing that task optimally are learnt simultaneously. We propose a novel inverse reinforcement learning (IRL) formulation that allows us to 1) vary the length of execution by learning time-invariant costs, and 2) relax the temporal alignment requirements for learning from demonstration. We apply our method to two different types of cost formulations and evaluate their performance in the context of learning reward functions for simulated placement and peg-in-hole tasks executed on a 7DoF Kuka IIWA arm. Our results show that our approach enables learning temporally invariant rewards from misaligned demonstrations that can also generalise spatially to out-of-distribution tasks. Finally, we employ our observations to evaluate adversarial robustness in the context of transfer learning from a source network trained on CIFAR-100 to a target network trained on CIFAR-10. Specifically, we study the effects of using robust optimisation in the source and target networks. This allows us to identify transfer learning strategies under which adversarial defences are successfully retained, in addition to revealing potential vulnerabilities. We study the extent to which adversarially robust features can preserve their defence properties against black- and white-box attacks under three different transfer learning strategies. Our empirical evaluations give insight into how well adversarial robustness under transfer learning can generalise.
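The white-box robustness evaluation mentioned above is typically built on gradient-based attacks such as the fast gradient sign method (FGSM), which perturbs each input coordinate in the direction that increases the loss. The linear logistic classifier below is a toy stand-in for the transferred CIFAR networks, with invented weights and input; it only demonstrates the attack mechanics, not the thesis's experiments.

```python
import math

def margin(x, w, b, y):
    """Signed classification margin y * (w . x + b); positive = correct."""
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(x, w, b, y, eps):
    """FGSM step for a logistic loss on a linear classifier: move each
    coordinate by eps in the sign of d(loss)/dx_i, which for this model
    is sign(-y * w_i)."""
    return [xi + eps * math.copysign(1.0, -y * wi) for xi, wi in zip(x, w)]

# hypothetical weights and input standing in for the transferred network
w, b = [1.0, -2.0, 0.5], 0.1
x, y = [0.3, -0.4, 1.0], +1
x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
# the attack shrinks the margin: margin(x_adv, ...) < margin(x, ...)
```

A robustly optimised model keeps its margin large under such perturbations; measuring how much of that margin survives after transfer is the kind of evaluation the abstract describes.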