Fast and Accurate Reduced-Order Modeling of a MOOSE-based Additive Manufacturing Model with Operator Learning
One predominant challenge in additive manufacturing (AM) is achieving
specific material properties by manipulating manufacturing process parameters
at runtime. Such manipulation tends to increase the computational load
imposed on existing simulation tools employed in AM. The goal of the present
work is to construct a fast and accurate reduced-order model (ROM) for an AM
model developed within the Multiphysics Object-Oriented Simulation Environment
(MOOSE) framework, ultimately reducing the time/cost of AM control and
optimization processes. Our adoption of the operator learning (OL) approach
enabled us to learn a family of differential equations produced by altering
process variables in the laser's Gaussian point heat source. More specifically,
we used the Fourier neural operator (FNO) and deep operator network (DeepONet)
to develop ROMs for time-dependent responses. Furthermore, we benchmarked the
performance of these OL methods against a conventional deep neural network
(DNN)-based ROM. Ultimately, we found that OL methods offer comparable
performance and, in terms of accuracy and generalizability, even outperform DNN
at predicting scalar model responses. The DNN-based ROM afforded the fastest
training time. Furthermore, all the ROMs were faster than the original MOOSE
model yet still provided accurate predictions. FNO had a smaller mean
prediction error than DeepONet, with a larger variance for time-dependent
responses. Unlike DNN, both FNO and DeepONet were able to simulate time series
data without the need for dimensionality reduction techniques. The present work
can help facilitate the AM optimization process by enabling faster execution of
simulation tools while still preserving evaluation accuracy.
Comment: 28 pages, 18 figures, 4 tables
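As a rough illustration of the mechanism behind the FNO (not the paper's actual model, whose architecture and weights are not given here), a Fourier layer applies learned complex weights to a truncated set of Fourier modes and discards the rest. A minimal NumPy sketch with identity weights on the retained modes:

```python
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """Core operation of a Fourier layer: transform to frequency space,
    multiply the lowest n_modes by complex weights, zero the remaining
    modes, and transform back to physical space."""
    u_hat = np.fft.rfft(u)                   # real FFT of the input signal
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights[:n_modes]
    return np.fft.irfft(out_hat, n=len(u))   # back to physical space

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)

# Identity weights on the first 4 modes: the layer keeps the two sine
# components (modes 1 and 3) and would discard anything above mode 3.
w = np.ones(4, dtype=complex)
v = spectral_conv_1d(u, w, n_modes=4)
print(np.allclose(u, v, atol=1e-10))  # -> True: u only contains modes 1 and 3
```

In a trained FNO the weights are learned per mode (and per channel), and several such layers are stacked with pointwise nonlinearities in between.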
Introduction to Facial Micro Expressions Analysis Using Color and Depth Images: A Matlab Coding Approach (Second Edition, 2023)
The book offers a gentle introduction to the field of Facial Micro
Expressions Recognition (FMER) using color and depth images, with the aid
of the MATLAB programming environment. FMER is a subset of image processing
and a multidisciplinary topic, so its analysis requires familiarity with
other areas of Artificial Intelligence (AI) such as machine learning, digital
image processing, and psychology. This makes it a great opportunity to write a
book that covers all of these topics for readers ranging from beginners to
professionals in the field of AI, even those without an AI background. Our
goal is to provide a standalone introduction to FMER analysis, in the form of
theoretical descriptions for readers with no background in image processing,
together with reproducible MATLAB practical examples. We also describe the
basic definitions for FMER analysis and the MATLAB libraries used in the text,
which helps the reader apply the experiments to real-world applications. We
believe that this book is suitable for students, researchers, and professionals
alike who need to develop practical skills, along with a basic understanding
of the field. We expect that, after reading this book, the reader will feel
comfortable with key stages such as color and depth image processing,
color and depth image representation, classification, machine learning, facial
micro-expressions recognition, feature extraction, and dimensionality reduction.
Comment: This is the second edition of the book
Gaussian Control Barrier Functions : A Gaussian Process based Approach to Safety for Robots
In recent years, the need for safety of autonomous and intelligent robots has increased. Today, as robots are being deployed in ever closer proximity to humans, there is an exigency for safety since human lives may be at risk, e.g., in self-driving vehicles or surgical robots. The objective of this thesis is to present a safety framework for dynamical systems that leverages tools from control theory and machine learning. More formally, the thesis presents a data-driven framework for designing safety function candidates which ensure properties of forward invariance. The results presented in this thesis are expected to benefit applications such as safe exploration, collision avoidance, manipulation tasks, and planning, to name a few.
We utilize Gaussian processes (GPs) to place a prior on the desired safety function candidate, which is to be utilized as a control barrier function (CBF). The resultant formulations are called Gaussian CBFs, and they reside in a reproducing kernel Hilbert space. A key concept behind Gaussian CBFs is the incorporation of both safety belief and safety uncertainty, which former barrier function formulations did not consider. This is achieved by using robust posterior estimates from a GP, where the posterior mean and variance serve as surrogates for the safety belief and uncertainty, respectively. We synthesize safe controllers by formulating a convex optimization problem in which the kernel-based representation of GPs allows the required derivatives to be computed analytically in closed form.
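As a minimal sketch of the underlying mechanism (assuming an RBF kernel and illustrative 1-D data, not the thesis's actual formulation or kernel choice), the GP posterior mean and variance that serve as safety belief and uncertainty can be computed as:

```python
import numpy as np

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 ell^2))."""
    d = A[:, None] - B[None, :]
    return np.exp(-d**2 / (2 * ell**2))

def gp_posterior(X, y, Xs, noise=1e-4):
    """Posterior mean (safety belief) and variance (safety uncertainty)
    of a zero-mean GP at test points Xs, conditioned on data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    Kss = rbf(Xs, Xs)
    mean = Ks @ np.linalg.solve(K, y)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Hypothetical 1-D example: h(x) >= 0 means "safe"; observations of h
# are safe near x = 0 and unsafe near x = 2.
X = np.array([-0.5, 0.0, 0.5, 1.8, 2.2])
y = np.array([ 0.9, 1.0, 0.8, -0.7, -0.9])
mean, var = gp_posterior(X, y, np.array([0.0, 1.0, 2.0]))
print(mean.round(2), var.round(3))
```

The variance is largest in the gap between the two data clusters (around x = 1), which is exactly where a safety-aware controller should act conservatively.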
Finally, in addition to the theoretical and algorithmic frameworks in this thesis, we rigorously test our methods in hardware on a quadrotor platform. The platform used is a Crazyflie 2.1, a versatile palm-sized quadrotor. We provide our insights and detailed discussions on the hardware implementations, which will be useful for large-scale deployment of the techniques presented in this dissertation.
Ph.D. dissertation
Predictive Demand Response Modeling for Logistic Systems Innovation and Optimization
In the ever-increasing dynamics of global business markets, logistic systems must optimize the usage of all possible resources to continually innovate. Scenario-based demand prediction plays an important role in the effective economic operation and planning of logistics. However, many uncertainties and much demand variability are associated with innovative changes, which complicates demand forecasting and exposes system operators to the risk of failing to meet demand. This dissertation presents new approaches for predictively exploring how customer preferences will change and, consequently, how demand will respond to the new setup of services caused by an innovative transformation of the logistic layout. The critical challenge is that the responses of customers in particular, and of demand in general, to the innovative changes and corresponding adjustments are uncertain and unknown in practice, and there is no historical data to learn from and directly support a predictive model.
In this dissertation, we develop three different predictive demand response modeling approaches, which jointly shape a new methodological pathway. Chapter 1 provides a novel approach for predictively modeling probabilistic customer behavior in response to new service offers, much faster than previously possible, based on the case of a large Chinese parcel-delivery service provider. Chapter 2 introduces an approach for predicting scenario-based erection-site demand schedules under the uncertainty of disruptive events in construction projects whose logistics have been transformed from a traditional to a modular style, based on the case of a US-based innovative leader in modular building production. For such a leader to advance its logistics design innovations and associated capacity adjustments, and to enhance its capability to take more market share, it is crucial to estimate potential future demand for modular construction and the corresponding probable projects in terms of their potential location, size, and characteristics. For this purpose, Chapter 3 introduces a methodological approach for estimating scenario-based future demand for modular construction projects to be implemented across US metropolitan statistical areas.
Ph.D. dissertation
Fuzzy Natural Logic in IFSA-EUSFLAT 2021
The present book contains five papers accepted and published in the Special Issue, “Fuzzy Natural Logic in IFSA-EUSFLAT 2021”, of the journal Mathematics (MDPI). These papers are extended versions of the contributions presented at the conference “The 19th World Congress of the International Fuzzy Systems Association and the 12th Conference of the European Society for Fuzzy Logic and Technology jointly with the AGOP, IJCRS, and FQAS conferences”, which took place in Bratislava (Slovakia) from September 19 to September 24, 2021. Fuzzy Natural Logic (FNL) is a system of mathematical fuzzy logic theories that enables us to model natural language terms and rules while accounting for their inherent vagueness, and allows us to reason and argue using the tools developed in them. FNL includes, among others, the theory of evaluative linguistic expressions (e.g., small, very large, etc.), the theory of fuzzy and intermediate quantifiers (e.g., most, few, many, etc.), and the theory of fuzzy/linguistic IF–THEN rules and logical inference. The papers in this Special Issue use the various aspects and concepts of FNL mentioned above and apply them to a wide range of problems, both theoretically and practically oriented. This book will be of interest to researchers working in the areas of fuzzy logic, applied linguistics, generalized quantifiers, and their applications.
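To make the quantifier idea concrete, a simple Zadeh-style sigma-count evaluation of a statement like "most values are small" can be sketched as follows (the trapezoidal membership parameters here are illustrative assumptions, not FNL's actual theory of intermediate quantifiers, which is considerably richer):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Illustrative memberships (parameters are assumptions for this sketch):
small = lambda x: trapezoid(x, -1, -1, 2, 5)    # evaluative expression "small"
most = lambda r: trapezoid(r, 0.5, 0.8, 1, 2)   # quantifier "most" on a ratio

data = [1, 2, 3, 1.5, 4, 8, 2.5, 1]

# Sigma-count: average membership in "small", then apply "most" to the ratio.
ratio = sum(small(x) for x in data) / len(data)
truth = most(ratio)
print(f"'Most values are small' holds to degree {truth:.2f}")
```

The statement thus receives a graded truth value rather than a crisp true/false, which is the essential feature fuzzy quantifier theories formalize.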
Economic and Social Consequences of the COVID-19 Pandemic in Energy Sector
The purpose of the Special Issue was to collect the results of research and experience on the consequences of the COVID-19 pandemic for the energy sector and the energy market, broadly understood, that were visible after a year. In particular, the impact of COVID-19 on the energy sector in the EU, including Poland, and in the US was examined. The topics concerned various issues, e.g., the situation of energy companies, including those listed on the stock exchange, mining companies, and those dealing with renewable energy. Topics related to the development of electromobility, managerial competences, energy expenditure of local government units, sustainable development of energy, and energy poverty during a pandemic were also discussed.
Rare-Event Estimation and Calibration for Large-Scale Stochastic Simulation Models
Stochastic simulation has been widely applied in many domains. More recently, however, the rapid surge of sophisticated problems such as safety evaluation of intelligent systems has posed various challenges to conventional statistical methods. Motivated by these challenges, in this thesis, we develop novel methodologies with theoretical guarantees and numerical applications to tackle them from different perspectives.
In particular, our work can be categorized into two areas: (1) rare-event estimation (Chapters 2 to 5), where we develop approaches for estimating the probabilities of rare events via simulation; and (2) model calibration (Chapters 6 and 7), where we aim to calibrate the simulation model so that it is close to reality.
In Chapter 2, we study rare-event simulation for a class of problems where the target hitting sets of interest are defined via modern machine learning tools such as neural networks and random forests. We investigate an importance sampling scheme that integrates the dominating point machinery in large deviations and sequential mixed integer programming to locate the underlying dominating points. We provide efficiency guarantees and numerical demonstration of our approach.
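The dominating-point idea can be illustrated in a much simpler setting than the chapter's machine-learning-defined sets: for a one-dimensional Gaussian tail, the dominating point of {x > c} is x = c, so importance sampling shifts the sampling distribution there and reweights by the likelihood ratio. A minimal sketch under these assumptions:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
c = 4.0            # estimate p = P(X > c) for X ~ N(0, 1)
n = 100_000

# Sample from N(c, 1) instead of N(0, 1): failures are now common, and each
# sample is reweighted by the likelihood ratio dN(0,1)/dN(c,1).
z = rng.standard_normal(n) + c
lr = np.exp(-c * z + c**2 / 2)      # likelihood ratio evaluated at z
p_is = np.mean((z > c) * lr)

print(p_is, norm.sf(c))             # IS estimate vs. exact tail probability
```

With crude Monte Carlo, 100,000 samples would typically see only a handful of hits on an event of probability ~3e-5; the shifted sampler hits it on roughly half the draws, which is why the variance collapses.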
In Chapter 3, we propose a new efficiency criterion for importance sampling, which we call probabilistic efficiency. Conventionally, an estimator is regarded as efficient if its relative error is sufficiently controlled. It is widely known that when a rare-event set contains multiple "important regions" encoded by the dominating points, importance sampling needs to account for all of them via mixing to achieve efficiency. We argue that the traditional analysis recipe could suffer from intrinsic looseness when using relative error as an efficiency criterion, and we propose the new efficiency notion to tighten this gap. In particular, we show that under the standard Gärtner-Ellis large deviations regime, an importance sampling scheme that uses only the most significant dominating points is sufficient to attain this efficiency notion.
In Chapter 4, we consider the estimation of rare-event probabilities using sample proportions output by crude Monte Carlo. Due to the recent surge of sophisticated rare-event problems, efficiency-guaranteed variance reduction may face implementation challenges, which motivates a closer look at naive estimators. In this chapter, we construct confidence intervals for the target probability using this naive estimator with various techniques, and then analyze their validity and tightness, quantified respectively by the coverage probability and the relative half-width.
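For instance, a Clopper-Pearson interval, one standard technique of the kind such an analysis covers (the hit count and sample size below are hypothetical), can be computed directly from the crude Monte Carlo output:

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (conservative) binomial confidence interval for a proportion,
    given k hits of the rare event out of n crude Monte Carlo samples."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Hypothetical run: 3 hits out of 1,000,000 samples.
lo, hi = clopper_pearson(3, 1_000_000)
print(f"95% CI: [{lo:.2e}, {hi:.2e}]")
```

Note how wide the interval is relative to the point estimate 3e-6: with so few hits, the relative half-width is large, which is precisely the tightness issue the chapter quantifies.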
In Chapter 5, we propose the use of extreme value analysis, in particular the peak-over-threshold method popularly employed for extremal estimation of real datasets, in the simulation setting. More specifically, we view crude Monte Carlo samples as data on which to fit a generalized Pareto distribution. We test this idea on several numerical examples, and the results show that, in the absence of efficient variance reduction schemes, it appears to offer potential benefits for enhancing crude Monte Carlo estimates.
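A minimal sketch of the peak-over-threshold recipe on an illustrative standard-normal example (the threshold choice and sample sizes here are assumptions for the sketch, not the chapter's setup):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
x = rng.standard_normal(200_000)        # crude Monte Carlo output

# Peak-over-threshold: fit a generalized Pareto distribution (GPD) to the
# exceedances above a high threshold u, then extrapolate into the tail.
u = np.quantile(x, 0.99)
exc = x[x > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0.0)

# Tail estimate: P(X > c) = P(X > u) * P(exceedance > c - u) under the fit.
c = 4.0
p_tail = (len(exc) / len(x)) * genpareto.sf(c - u, xi, loc=0.0, scale=sigma)
print(p_tail)                           # exact value is norm.sf(4) ~ 3.17e-5
```

The appeal is that the target level c lies far beyond where crude Monte Carlo has meaningful resolution, yet the GPD fit to the observable exceedances supplies a principled extrapolation.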
In Chapter 6, we investigate a framework for developing calibration schemes in parametric settings that satisfy rigorous frequentist statistical guarantees via a basic notion we call the eligibility set, designed to bypass non-identifiability through set-based estimation. We investigate a feature extraction-then-aggregation approach to construct these sets targeting multivariate outputs. We demonstrate our methodology on several numerical examples, including an application to the calibration of a limit order book market simulator.
In Chapter 7, we study a methodology to tackle the NASA Langley Uncertainty Quantification Challenge, a model calibration problem under both aleatory and epistemic uncertainties. Our methodology is based on an integration of distributionally robust optimization and importance sampling. The main computational machinery in this integrated methodology amounts to solving sampled linear programs. We present theoretical statistical guarantees of our approach via connections to nonparametric hypothesis testing, and numerical performance including parameter calibration and downstream decision and risk evaluation tasks.
2023-2024 Boise State University Undergraduate Catalog
This catalog is primarily for and directed at students. However, it serves many audiences, such as high school counselors, academic advisors, and the public. In this catalog you will find an overview of Boise State University and information on admission, registration, grades, tuition and fees, financial aid, housing, student services, and other important policies and procedures. However, most of this catalog is devoted to describing the various programs and courses offered at Boise State.
A Deep Learning Approach to Analyzing Continuous-Time Systems
Scientists often use observational time series data to study complex natural
processes, but standard regression analyses assume simplistic dynamics. Recent
advances in deep learning have yielded startling improvements to the
performance of models of complex processes, but deep learning is generally not
used for scientific analysis. Here we show that deep learning can be used to
analyze complex processes, providing flexible function approximation while
preserving interpretability. Our approach relaxes standard simplifying
assumptions (e.g., linearity, stationarity, and homoscedasticity) that are
implausible for many natural systems and may critically affect the
interpretation of data. We evaluate our model on incremental human language
processing, a domain with complex continuous dynamics. We demonstrate
substantial improvements on behavioral and neuroimaging data, and we show that
our model enables discovery of novel patterns in exploratory analyses, controls
for diverse confounds in confirmatory analyses, and opens up research questions
that are otherwise hard to study.
Comment: Main article: 12 pages, 1 table, 3 figures; Supplementary Information: 54 pages, 6 tables, 30 figures