9,420 research outputs found
UMSL Bulletin 2023-2024
The 2023-2024 Bulletin and Course Catalog for the University of Missouri St. Louis.
A Preliminary Study of the Effect of the Chirped Rotating Wall on a Positron Cloud
The density of the positron cloud is a crucial parameter in many applications of accumulated positrons. Previous work has shown that adjusting the frequency of the rotating wall potential following positron accumulation can be used to control the density of positron clouds. In this work, positron clouds were studied after being compressed using a linear rotating wall frequency sweep under a selection of rotating wall drive amplitudes and cooling gas pressures, following an initial static frequency compression. This was performed for SF6, CF4, and briefly for CO. The effect of changing the cooling gas appears congruent to that shown in the static frequency case. The results are in qualitative agreement with previous work by Deller et al., and compare briefly but favourably to a simplistic numerical model.
Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing: the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall, which radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models
Stochastic simulation has been widely applied in many domains. More recently, however, the rapid surge of sophisticated problems such as safety evaluation of intelligent systems has posed various challenges to conventional statistical methods. Motivated by these challenges, in this thesis, we develop novel methodologies with theoretical guarantees and numerical applications to tackle them from different perspectives.
In particular, our works can be categorized into two areas: (1) rare-event estimation (Chapters 2 to 5) where we develop approaches to estimating the probabilities of rare events via simulation; (2) model calibration (Chapters 6 and 7) where we aim at calibrating the simulation model so that it is close to reality.
In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery in large deviations and sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstration of our approach.
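The chapter itself concerns hitting sets defined by machine learning models, but the core idea of dominating-point importance sampling can be illustrated in a minimal sketch. The example below is not the thesis's method: it uses the textbook one-dimensional case where, for a standard normal variable and the set {x >= b}, the dominating point is b itself, so the sampling distribution is shifted there and each sample is reweighted by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rare event: standard normal X exceeds level b; P(X >= 4) is about 3.2e-5.
b = 4.0
n = 100_000

# Dominating point of {x >= b} under N(0, 1) is x* = b; sample from N(b, 1).
x = rng.normal(loc=b, size=n)
indicator = x >= b

# Likelihood ratio phi(x; 0) / phi(x; b) = exp(-b*x + b^2 / 2)
weights = np.exp(-b * x + b**2 / 2)
estimate = np.mean(indicator * weights)
```

With crude Monte Carlo, 100,000 samples would typically see only a handful of hits; the shifted sampler hits the set roughly half the time and the weights correct the bias, which is why the relative error stays controlled as b grows.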
In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to account for all of them via mixing to achieve efficiency. We argue that the traditional analysis recipe can suffer from intrinsic looseness when relative error is used as the efficiency criterion, and we propose the new efficiency notion to close this gap. In particular, we show that under the standard Gartner-Ellis large deviations regime, an importance sampling scheme that uses only the most significant dominating points is sufficient to attain this efficiency notion.
In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivate one to look at naive estimators. In this chapter we construct confidence intervals for the target probability using this naive estimator from various techniques, and then analyze their validity as well as tightness respectively quantified by the coverage probability and relative half-width.
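The chapter compares several interval constructions; as one standard example (not necessarily one analyzed in the thesis), the Wilson score interval below can be applied to a naive sample proportion and behaves better than the plain normal (Wald) interval when the target probability is tiny.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion at ~95% confidence.

    Unlike the Wald interval, it never collapses to zero width when the
    observed count is 0, which matters for rare-event probabilities.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Example: 3 hits of the rare event in 10^6 crude Monte Carlo samples
lo, hi = wilson_interval(3, 10**6)
```

The coverage probability and relative half-width mentioned in the abstract are exactly the two quantities one would tabulate for such intervals across different true probabilities and sample sizes.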
In Chapter 5, we propose the use of extreme value analysis, in particular the peak-over-threshold method which is popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data to fit on a generalized Pareto distribution. We test this idea on several numerical examples. The results show that in the absence of efficient variance reduction schemes, it appears to offer potential benefits to enhance crude Monte Carlo estimates.
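A minimal sketch of the peak-over-threshold idea described above, using SciPy's generalized Pareto distribution; the exponential "simulation output" and the specific threshold quantile are illustrative choices, not taken from the thesis.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# Stand-in for crude Monte Carlo output; tail probabilities are known here.
samples = rng.exponential(scale=1.0, size=100_000)

u = np.quantile(samples, 0.99)          # threshold at the 99th percentile
exceedances = samples[samples > u] - u  # peaks over threshold

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0)
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# Extrapolated tail estimate: P(X > x) ~= P(X > u) * GPD survival at (x - u)
p_u = np.mean(samples > u)
x = 10.0
p_tail = p_u * genpareto.sf(x - u, shape, loc=0, scale=scale)
```

The point of the extrapolation step is that x = 10.0 lies far beyond the threshold, where crude Monte Carlo with this sample size would likely observe no hits at all, yet the fitted tail still yields a usable estimate.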
In Chapter 6, we investigate a framework to develop calibration schemes in parametric settings which satisfies rigorous frequentist statistical guarantees via a basic notion that we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature-extraction-then-aggregation approach to construct these sets targeting multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to the calibration of a limit order book market simulator.
In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling. The main computational machinery in this integrated methodology amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, and numerical performance including parameter calibration and downstream decision and risk evaluation tasks.
Pediatric and Adolescent Nephrology Facing the Future: Diagnostic Advances and Prognostic Biomarkers in Everyday Practice
The Special Issue entitled "Pediatric and adolescent nephrology facing the future: diagnostic advances and prognostic biomarkers in everyday practice" contains articles written in the era when COVID-19 had not yet become a major clinical problem in children. Now that we know its multifaceted clinical course, its complications concerning the kidneys, and the childhood-specific post-COVID pediatric inflammatory multisystem syndrome (PIMS), the value of diagnostic and prognostic biomarkers in the pediatric area should be appreciated, and their importance ought to increase.
Contested environmental futures: rankings, forecasts and indicators as sociotechnical endeavours
In a world where numbers and science are often taken as the voice of truth and reason, Quantitative Devices (QDs) represent the epitome of policy driven by facts rather than hunches. Despite the scholarly interest in understanding the role of quantification in policy, the actual production of rankings, forecasts, indexes and other QDs has, to a great extent, been left unattended. While appendixes and technical notebooks offer an explanation of how these devices are produced, they exclude aspects of their making that are arbitrarily considered "mundane." It is in the everyday performances at research centres that the micropolitics of knowledge production, imaginaries, and frustrations merge. These are vital dimensions to understand the potential, limitations and ethical consequences of QDs.
Using two participant observations as the starting point, this thesis offers a comprehensive critical analysis of the processes through which university-based research centres create QDs that represent the world. It addresses how researchers conceive quantitative data. It pays attention to the discourses of hope and expectation embedded in the devices. Finally, it considers the ethics of creating devices that cannot be replicated independently of their place of production.
Two QDs were analysed: the Violence Early Warning System (ViEWS) and the Environmental Performance Index (EPI). At Uppsala University, researchers created ViEWS to forecast the probability of drought-driven conflicts within the next 100 years. The EPI, produced at the Yale Centre for Environmental Law and Policy, ranks the performance of countries' environmental policies. This thesis challenges existing claims within Science and Technology Studies and the Sociology of Quantification that QDs co-produce knowledge within their realms. I argue that these devices act as vehicles for sociotechnical infrastructures to be consolidated with little debate among policymakers, given their perception as scientific and objective tools. Moreover, for an indicator to be incorporated within a QD, it must be deemed relevant by those making the devices and also valuable enough to have been previously quantified by data providers. Furthermore, existing sociotechnical inequalities, power relations and epistemic injustices could impede the ability of disadvantaged communities (e.g., in the Global South) to challenge metrics originating in centres in the Global North. This thesis therefore demonstrates how the future QDs propose is unilateral and does not acknowledge the myriad possibilities that might arise from a diversity of worldviews. In other words, they cast a future designed to fit the current status quo.
In sum, through two QDs focused on environment-related issues, this thesis launches an inquiry into the elements that make up the imaginaries they propose by following the everyday life of their producers. To achieve this, I discuss two core elements. First, the role of tacit knowledge and sociotechnical inequalities in reinforcing power relations between those with the means to quantify and those who might only accommodate proposed futures. Second, the dynamics between research centres and data providers in relation to what is quantified. By scrutinising mundanity, this work is a step forward in understanding the construction of sociotechnical imaginaries and infrastructures.
Evaluating footwear "in the wild": Examining wrap and lace trail shoe closures during trail running
Trail running participation has grown over the last two decades. As a result, there have been an increasing number of studies examining the sport. Despite these increases, there is a lack of understanding regarding the effects of footwear on trail running biomechanics in ecologically valid conditions. The purpose of our study was to evaluate how a Wrap vs. Lace closure (on the same shoe) impacts running biomechanics on a trail. Thirty subjects ran a trail loop in each shoe while wearing a global positioning system (GPS) watch, heart rate monitor, inertial measurement units (IMUs), and plantar pressure insoles. The Wrap closure reduced peak foot eversion velocity (measured via IMU), which has been associated with fit. The Wrap closure also increased heel contact area, which is also associated with fit. This increase may be associated with the subjective preference for the Wrap. Lastly, runners had a small but significant increase in running speed in the Wrap shoe, with no differences in heart rate or subjective exertion. In total, the Wrap closure fit better than the Lace closure on a variety of terrain. This study demonstrates the feasibility of detecting meaningful biomechanical differences between footwear features in the wild using statistical tools and study design. Evaluating footwear in ecologically valid environments often creates additional variance in the data. This variance should not be treated as noise; instead, it is critical to capture this additional variance and the challenges of ecologically valid terrain if we hope to use biomechanics to impact the development of new products.
Deliberation, Democracy, and Mechanisms for Cooperation
This thesis explores group decision-making and mechanisms to encourage cooperation through three experimental studies.
Study one uses a public goods game (PGG) with informal and formal sanction mechanisms to understand how team decision-making differs from individual decision-making in a democratic institutional setting. Teams consistently outperform individuals when sanctioning schemes are available, by selecting higher sanction rates when choosing the formal scheme and pro-socially targeting punishment toward low-cooperators when using the informal scheme. This improved decision-making appears to be a result of deliberation and has implications for using team decision-making to overcome moral hazards.
Building on this, study two examines team behaviour in a real effort experiment to understand the impact of democratic decision-making. Specifically, in one treatment teams may vote on whether to implement a policy that reduces the returns from free-riding within their group, while in the other treatment this policy is randomly implemented. Teams exhibit significantly higher productivity when they are able to democratically decide whether to implement the policy, regardless of the vote outcome. While teams in these treatments also spend more time free-riding, the higher productivity compensates for this, so it does not harm overall production. As in the first study, this highlights the benefits of autonomous team decision-making in improving cooperation.
Study three explores how a group may encourage cooperation to prevent a more costly problem in a two-stage PGG. Subjects complete real effort tasks that either reward them directly or improve the payoff schedule in the following stage, forming a second-order social dilemma. Free-riding does not dominate the pre-stage, nor does cooperation decline as strongly as observed in other PGGs, demonstrating how leveraging fewer resources to overcome related social dilemmas can make cooperation easier. Further, providing a simple cost- and ramification-free feedback mechanism considerably increases the observed level of cooperation.
1-D broadside-radiating leaky-wave antenna based on a numerically synthesized impedance surface
A newly developed deterministic numerical technique for the automated design of metasurface antennas is applied here for the first time to the design of a 1-D printed Leaky-Wave Antenna (LWA) for broadside radiation. The surface impedance synthesis process does not require any a priori knowledge of the impedance pattern, and starts from a mask constraint on the desired far field and practical bounds on the unit cell impedance values. The designed reactance surface for broadside radiation exhibits a non-conventional patterning; this highlights the merit of using an automated design process for a design well known to be challenging for analytical methods. The antenna is physically implemented with an array of metal strips with varying gap widths, and simulation results show very good agreement with the predicted performance.