UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri-St. Louis.
Technology for Low Resolution Space Based RSO Detection and Characterisation
Space Situational Awareness (SSA) refers to all activities to detect, identify, and track objects in Earth orbit. SSA is critical to all current and future space activities and protects space assets by providing access control, conjunction warnings, and monitoring of the status of active satellites. Current SSA methods and infrastructure are not sufficient to account for the proliferation of space debris. In response to the need for better SSA, many different areas of research have sought to improve it, most of them requiring dedicated ground- or space-based infrastructure. In this thesis, a novel approach for the characterisation of RSOs (Resident Space Objects) from passive low-resolution space-based sensors is presented, together with all the background work performed to enable this novel method. Low-resolution space-based sensors are common on current satellites; since many of these sensors are already in space, using them passively to detect RSOs can greatly augment SSA without expensive infrastructure or long lead times. One of the largest hurdles for research in this area is the lack of publicly available labelled data with which to test and confirm results. To overcome this hurdle, a simulation software package, ORBITALS, was created. To verify and validate the ORBITALS simulator, it was compared against images from the Fast Auroral Imager, one of the only publicly available sources of low-resolution space-based images with auxiliary data. During the development of the ORBITALS simulator it was found that generating these simulated images is computationally intensive when propagating the entire space catalog. To overcome this, the propagation method in use, the Simplified General Perturbations model (SGP4), was upgraded to run in parallel, reducing the computational time required to propagate entire catalogs of RSOs. From the results it was found that the standard facet model with a particle swarm optimisation performed best, estimating an RSO's attitude with 0.66 degrees RMSE across a sequence and roughly 1% MAPE for the optical properties. This accomplished the thesis goal of demonstrating the feasibility of low-resolution passive RSO characterisation from space-based platforms in a simulated environment.
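As an aside on the parallelization step described above: propagating a catalog with SGP4 is embarrassingly parallel across objects, so a process pool is the simplest way to exploit it. The snippet below is a minimal illustrative sketch (not the ORBITALS implementation), assuming the open-source python-sgp4 package; the single TLE and the chosen epoch are placeholders for a full catalog.

```python
# Minimal sketch: parallel SGP4 propagation of a TLE catalog.
# Assumes the open-source python-sgp4 package; the TLE and epoch are placeholders.
from multiprocessing import Pool
from sgp4.api import Satrec, jday

def propagate(tle):
    """Propagate one object to a fixed epoch; returns (position_km, velocity_km_s) or None."""
    line1, line2 = tle
    sat = Satrec.twoline2rv(line1, line2)
    jd, fr = jday(2019, 12, 9, 12, 0, 0.0)      # placeholder epoch
    err, r, v = sat.sgp4(jd, fr)                # err == 0 means success
    return None if err else (r, v)

if __name__ == "__main__":
    catalog = [
        ("1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997",
         "2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482"),
        # ... thousands more TLEs in a real catalog
    ]
    with Pool() as pool:                        # one worker per CPU core
        states = pool.map(propagate, catalog)
    print(sum(s is not None for s in states), "objects propagated")
```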
Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially-available software, were developed to support the design of MPDSMs under fracture conditions. The refined design method and rules were experimentally validated using a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are not experts in advanced solid mechanics or process-tailored materials was developed from the results of this project.
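For readers unfamiliar with how trace layouts become machine instructions, the sketch below shows, in very reduced form, what a simple g-code generator for one rectilinear FDM raster layer can look like. It is purely illustrative and is not the dissertation's tool (which built on commercially-available software); bead spacing, feed rate, and extrusion factor are made-up placeholder values.

```python
# Illustrative sketch only: emit G-code for one rectilinear FDM raster layer.
# Bead spacing, feed rate, and extrusion factor are placeholder values,
# not the dissertation's validated process parameters.
def raster_layer(width_mm, height_mm, z_mm, bead_spacing=0.4,
                 feed=1800, extrusion_per_mm=0.033):
    lines = [f"G1 Z{z_mm:.3f} F{feed}"]
    e = 0.0          # cumulative extrusion
    y = 0.0
    direction = 1    # alternate +x / -x passes
    while y <= height_mm:
        x_start, x_end = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        lines.append(f"G0 X{x_start:.3f} Y{y:.3f}")                 # travel move
        e += width_mm * extrusion_per_mm
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f} E{e:.4f} F{feed}")  # print move
        y += bead_spacing
        direction *= -1
    return "\n".join(lines)

if __name__ == "__main__":
    print(raster_layer(width_mm=20.0, height_mm=5.0, z_mm=0.2))
```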
Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models
Stochastic simulation has been widely applied in many domains. More recently, however, the rapid surge of sophisticated problems such as safety evaluation of intelligent systems has posed various challenges to conventional statistical methods. Motivated by these challenges, in this thesis, we develop novel methodologies with theoretical guarantees and numerical applications to tackle them from different perspectives.
In particular, our works can be categorized into two areas: (1) rare-event estimation (Chapters 2 to 5) where we develop approaches to estimating the probabilities of rare events via simulation; (2) model calibration (Chapters 6 and 7) where we aim at calibrating the simulation model so that it is close to reality.
In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery in large deviations and sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstration of our approach.
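For intuition about the dominating-point machinery mentioned in Chapter 2: in the Gaussian case the dominating point is the most likely point of the rare set, and mean-shifting the sampling distribution to it is the textbook importance sampling change of measure. The toy sketch below (a standard illustration, not the chapter's mixed-integer-programming approach) estimates P(X >= a) for a standard normal by shifting the mean to the dominating point a.

```python
# Toy sketch of mean-shift importance sampling toward a single dominating point.
# Estimates P(X >= a) for X ~ N(0, 1); 'a' itself is the dominating point here.
import numpy as np

rng = np.random.default_rng(0)
a, n = 4.0, 100_000

x = rng.normal(loc=a, scale=1.0, size=n)          # sample from the shifted proposal
weights = np.exp(-a * x + 0.5 * a**2)             # likelihood ratio N(0,1)/N(a,1)
estimate = np.mean((x >= a) * weights)

naive = np.mean(rng.normal(size=n) >= a)          # crude Monte Carlo for comparison
print(f"IS estimate:    {estimate:.3e}")
print(f"crude estimate: {naive:.3e}")             # usually 0 at this sample size
```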
In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to account for all of them via mixing to achieve efficiency. We argue that the traditional analysis recipe can suffer from intrinsic looseness when relative error is used as the efficiency criterion, and we propose the new efficiency notion to tighten this gap. In particular, we show that under the standard Gärtner-Ellis large deviations regime, an importance sampler that uses only the most significant dominating points is sufficient to attain this notion of efficiency.
In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivate one to look at naive estimators. In this chapter we construct confidence intervals for the target probability using this naive estimator from various techniques, and then analyze their validity as well as tightness respectively quantified by the coverage probability and relative half-width.
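As a concrete reference point for Chapter 4's comparison, the sketch below computes two textbook binomial confidence intervals for a rare-event proportion estimated by crude Monte Carlo: the normal-approximation (Wald) interval and the exact Clopper-Pearson interval. These are standard constructions offered only for illustration; they are not necessarily the exact set of intervals analyzed in the thesis.

```python
# Standard binomial confidence intervals for a rare-event proportion estimated
# by crude Monte Carlo; illustrative of the comparison, not the thesis's exact list.
import numpy as np
from scipy import stats

def wald_interval(k, n, alpha=0.05):
    p_hat = k / n
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(p_hat * (1 - p_hat) / n)
    return max(p_hat - half, 0.0), min(p_hat + half, 1.0)

def clopper_pearson(k, n, alpha=0.05):
    lo = 0.0 if k == 0 else stats.beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else stats.beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

k, n = 3, 1_000_000          # e.g. 3 hits out of a million crude Monte Carlo runs
print("Wald:           ", wald_interval(k, n))
print("Clopper-Pearson:", clopper_pearson(k, n))
```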
In Chapter 5, we propose the use of extreme value analysis, in particular the peak-over-threshold method which is popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data to fit on a generalized Pareto distribution. We test this idea on several numerical examples. The results show that in the absence of efficient variance reduction schemes, it appears to offer potential benefits to enhance crude Monte Carlo estimates.
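To make the peak-over-threshold idea in Chapter 5 concrete, here is a minimal sketch on synthetic data (not the thesis's test cases): treat crude Monte Carlo outputs as data, keep the exceedances above a high threshold u, fit a generalized Pareto distribution to them, and extrapolate the tail via P(X > x) ≈ P(X > u) * (1 - F_GPD(x - u)).

```python
# Minimal peak-over-threshold sketch on synthetic crude Monte Carlo output:
# fit a generalized Pareto distribution to exceedances over threshold u and
# extrapolate a tail probability beyond the bulk of the observed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
samples = rng.standard_normal(50_000)            # stand-in for simulation outputs

u = np.quantile(samples, 0.95)                   # high threshold
exceedances = samples[samples > u] - u
shape, _, scale = stats.genpareto.fit(exceedances, floc=0.0)

def tail_prob(x):
    """P(X > x) for x > u via the POT approximation."""
    p_exceed = len(exceedances) / len(samples)   # empirical P(X > u)
    return p_exceed * stats.genpareto.sf(x - u, shape, loc=0.0, scale=scale)

x = 4.0
print(f"POT estimate of P(X > {x}): {tail_prob(x):.2e}")
print(f"true normal tail:           {stats.norm.sf(x):.2e}")
```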
In Chapter 6, we investigate a framework for developing calibration schemes in parametric settings that satisfies rigorous frequentist statistical guarantees via a basic notion we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature-extraction-then-aggregation approach to construct these sets, targeting multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to the calibration of a limit order book market simulator.
In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling. The main computational machinery in this integrated methodology amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, along with numerical performance on parameter calibration and downstream decision and risk evaluation tasks.
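The "sampled linear programs" referred to in Chapter 7 are, in spirit, linear programs whose constraints are indexed by Monte Carlo samples. The toy sketch below is a generic stand-in, not the challenge-specific formulation: it maximizes a rare-event probability over probability weights on samples subject to sampled moment constraints, with arbitrary placeholder bounds.

```python
# Toy "sampled LP" sketch: optimize over probability weights on Monte Carlo
# samples subject to sampled moment constraints (placeholder bounds), in the
# spirit of distributionally robust optimization; not the NASA-challenge model.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
x = rng.normal(size=500)                 # Monte Carlo samples of an input
g = (x > 1.5).astype(float)              # quantity whose worst-case mean we bound

c = -g                                   # maximize sum_i w_i * g_i  <=>  minimize -g @ w
A_eq = np.ones((1, len(x)))              # weights sum to one
b_eq = [1.0]
A_ub = np.vstack([x, x**2])              # sampled moment constraints (placeholders):
b_ub = [0.1, 1.2]                        #   E_w[x] <= 0.1,  E_w[x^2] <= 1.2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print("worst-case P(x > 1.5):", -res.fun)
```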
A Discourse-Analytic Approach to the Study of Information Disorders: How Online Communities Legitimate Social Bonds When Communing Around Misinformation and Disinformation
Information disorders have become prevalent concerns in current social media research. This thesis is focused on the interpersonal dimension of information disorders, in other words, how we can trace, through linguistic and multimodal analysis, the social bonding that occurs when online communities commune around misinformation and disinformation, and how these social bonds are legitimated to enhance perceived credibility. Social bonding in this thesis refers to a social semiotic perspective on the shared values that communities use to construe alignment with others. False information can spread when groups have a shared vested interest, and so information disorders need to be elucidated through an investigation of sociality and bonding, rather than via logical points alone. The term 'information disorder' encompasses the spectrum of false information ranging from misinformation (misleading content) to disinformation (deliberately false content), and it is within this landscape of information disorders that this thesis emerges. Two key forms of social semiotic discourse analysis were applied to a dataset of YouTube videos (n=30) and comments (n=1500): affiliation (analysis of social bonding) and legitimation (analysis of resources used to construct legitimacy). The dataset constituted two contrasting case studies. The first was non-politically motivated misinformation in the form of an internet hoax leveraging moral panic about children using technologies. The second was politically motivated conspiracy theories relating to the Notre Dame Cathedral fire. The key findings of this thesis include the multimodal congruence of affiliation and legitimation across YouTube videos, the emergence of technological authority as a key legitimation strategy in online discourse, and the notion of textual personae for investigating the complex array of identities that engage with information disorders in comment threads. Additionally, six macro-categories of communicative strategies were identified from comment threads: scepticism, criticism, education and expertise, nationalism, hate speech, and storytelling and conspiracy. This shows not only how information disorders are spread, but also how they can be countered. The method outlined in this thesis can be applied to future interdisciplinary analyses of political propaganda and current global concerns to develop linguistic and multimodal profiles of various communities engaging with information disorders.
2023-2024 Boise State University Undergraduate Catalog
This catalog is primarily for and directed at students. However, it serves many audiences, such as high school counselors, academic advisors, and the public. In this catalog you will find an overview of Boise State University and information on admission, registration, grades, tuition and fees, financial aid, housing, student services, and other important policies and procedures. However, most of this catalog is devoted to describing the various programs and courses offered at Boise State.
Design and Advanced Model Predictive Control of Wide Bandgap Based Power Converters
The field of power electronics (PE) is experiencing a revolution by harnessing the superior technical characteristics of wide-bandgap (WBG) materials, namely Silicon Carbide (SiC) and Gallium Nitride (GaN). Semiconductor devices built from WBG materials enable high-temperature operation at a reduced footprint, offer higher blocking voltages, and operate at much higher switching frequencies than their conventional Silicon (Si) based counterparts. These characteristics are highly desirable as they allow converter designs for challenging applications such as the more-electric aircraft (MEA), the electric vehicle (EV) power train, and the like. This dissertation presents designs of WBG-based power converters for a 1 MW, 1 MHz ultra-fast off-board EV charger and a 250 kW integrated modular motor drive (IMMD) for an MEA application. The goal of these designs is to demonstrate the superior power density and efficiency achievable by leveraging SiC and GaN semiconductors. Ultra-fast EV charging is expected to alleviate the challenge of range anxiety, which is currently hindering the mass adoption of EVs in the automotive market. The power converter design presented in the dissertation utilizes SiC MOSFETs embedded in a topology that is a modification of the conventional three-level (3L) active neutral-point clamped (ANPC) converter. A novel phase-shifted modulation scheme presented alongside the design allows converter operation at a switching frequency of 1 MHz, thereby miniaturizing the grid-side filter to enhance the power density. IMMDs combine the power electronic drive and the electric machine into a single unit, and are thus an efficient solution for realizing the electrification of aircraft. The IMMD design presented in the dissertation uses GaN devices embedded in a stacked modular full-bridge converter topology to individually drive each of the motor coils. Various issues and solutions pertaining to the paralleling of GaN devices to meet the high-current requirements are also addressed in the thesis. Experimental prototypes of the SiC ultra-fast EV charger and the GaN IMMD were built, and the results confirm the efficacy of the proposed designs. Model predictive control (MPC) is a nonlinear control technique that has been widely investigated for various power electronic applications in the past decade. MPC exploits the discrete nature of power converters to make control decisions using a cost function. The controller offers various advantages over, e.g., linear PI controllers in terms of fast dynamic response, identical performance at a reduced switching frequency, and ease of applicability to MIMO applications. This dissertation also investigates MPC for key power electronic applications, such as a grid-tied VSC with an LCL filter and a multilevel VSI with an LC filter. By implementing high-performance MPC controllers on WBG-based power converters, it is possible to formulate designs capable of fast dynamic tracking, high-power operation at reduced THD, and increased power density.
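To illustrate how MPC "exploits the discrete nature of power converters": a finite-control-set MPC controller enumerates the admissible switching states at each sampling instant, predicts the next-step state for each, and applies the state minimizing a cost function. The sketch below does this for a generic first-order current-tracking model; the voltage levels, R, L, and sampling time are illustrative placeholders rather than the dissertation's converter designs.

```python
# Finite-control-set MPC sketch for a generic single-phase current-tracking
# plant: di/dt = (v - R*i)/L. Voltage levels, R, L, and Ts are placeholders
# chosen for illustration, not the dissertation's converter designs.
import numpy as np

R, L, Ts = 0.5, 2e-3, 1e-5                 # resistance [ohm], inductance [H], step [s]
levels = np.array([-400.0, 0.0, 400.0])    # admissible converter output voltages [V]

def predict(i_now, v):
    """One-step forward-Euler prediction of the inductor current."""
    return i_now + Ts * (v - R * i_now) / L

def fcs_mpc_step(i_now, i_ref):
    """Enumerate switching states and pick the one minimizing |i_ref - i_pred|."""
    costs = [abs(i_ref - predict(i_now, v)) for v in levels]
    return levels[int(np.argmin(costs))]

# Track a 50 Hz sinusoidal current reference for two fundamental periods.
i, history = 0.0, []
for k in range(4000):
    i_ref = 10.0 * np.sin(2 * np.pi * 50 * k * Ts)
    v = fcs_mpc_step(i, i_ref)
    i = predict(i, v)                      # plant modeled identically to the predictor
    history.append((i_ref, i))
err = np.mean([abs(r - y) for r, y in history])
print(f"mean tracking error: {err:.3f} A")
```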
In-situ dust mass distribution measurements from Giotto encounter with comet P/Halley.
On the night of 13/14 March 1986 the European Space Agency's Giotto spacecraft passed within 600 km of the nucleus of comet P/Halley. On board the spacecraft was an impressive array of experiments designed to study all aspects of the cometary coma and provide high-resolution images of the nucleus. The principal experiment designed to measure the coma dust mass distribution was the Dust Impact Detection System (DIDSY). The design, operation, and performance of this experiment are considered, and details are provided of the post-mission recalibration, including the development of a software simulation to aid in the interpretation of the returned data. The faulty operation of one of the DIDSY sensors resulted in the use of the front-end channels of the Particulate Impact Analyser experiment (PIA) to provide information on particles of mass 10^-19 kg < m < 10^-15 kg. The techniques used to extract the required information and calibrate the sensor using encounter data, and the inter-relationship between different operating modes, are described. The analysis of impacts which caused multiple detections by two or more of the DIDSY sensors is described, and the results from these multi-sensor events are used to extend the measured mass range up to 10^-5 kg. A mass distribution representative of the coma passed through by Giotto was constructed, and this is combined with a simple model to obtain the dust production rate and dust-to-gas ratio.
Some models are useful, but how do we know which ones? Towards a unified Bayesian model taxonomy
Probabilistic (Bayesian) modeling has experienced a surge of applications in almost all quantitative sciences and industrial areas. This development is driven by a combination of several factors, including better probabilistic estimation algorithms, flexible software, increased computing power, and a growing awareness of the benefits of probabilistic learning. However, a principled Bayesian model building workflow is far from complete and many challenges remain. To aid future research and applications of a principled Bayesian workflow, we ask and provide answers for what we perceive as two fundamental questions of Bayesian modeling, namely (a) "What actually is a Bayesian model?" and (b) "What makes a good Bayesian model?". As an answer to the first question, we propose the PAD model taxonomy that defines four basic kinds of Bayesian models, each representing some combination of the assumed joint distribution of all (known or unknown) variables (P), a posterior approximator (A), and training data (D). As an answer to the second question, we propose ten utility dimensions according to which we can evaluate Bayesian models holistically, namely, (1) causal consistency, (2) parameter recoverability, (3) predictive performance, (4) fairness, (5) structural faithfulness, (6) parsimony, (7) interpretability, (8) convergence, (9) estimation speed, and (10) robustness. Further, we propose two example utility decision trees that describe hierarchies and trade-offs between utilities depending on the inferential goals that drive model building and testing
- …