385 research outputs found
Dynamics and Modelling of the 2015 Calbuco eruption Volcanic Debris Flows (Chile). From field evidence to a primary lahar model
The Calbuco volcanic eruption of 2015 was characterized by two explosive phases with partial and major column collapses that triggered lahars on many of the flanks of the volcano. Large lahar flows descended the southern flank, where highly fractured ice bodies were emplaced on steep slopes. In this study, we present a chronology of the volcanic flows based on a multi-parameter data set that includes social media, reports of authoritative institutions, instrumental monitoring data and published research literature on the eruption. Our review established that lahars in the Amarillo river began during the first phase of the eruption due to the sustained emplacement of pyroclastic flows in its catchment. In contrast, we propose that the lahars in the Blanco – Correntoso river system and the Este river were likely triggered by a sudden mechanical collapse of the glacier, which produced mixed avalanches that transitioned into lahars downstream. Our observations include inundation cross-sections, estimates of flow speeds, and characterization of the morphology, grain sizes, and componentry of deposits. Field measurements are used together with instrumental data to calibrate a dynamic, physics-based lahar model, Laharflow. We model flows in the Blanco – Correntoso river system and explore the influence of the model parameters on flow predictions in an ensemble of simulations. We develop a calibration that accounts for the substantial epistemic uncertainties in our observations and in the model formulation, and that seeks to determine plausible ranges for the model parameters, including those representing the lahar source. Our approach highlights the parameters that have a dominant effect on the ability of the model to match observations, indicating where further development and additional observations could improve model predictions.
The simulations in our ensemble that provide plausible matches to the observations are combined to produce flow inundation maps.
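The ensemble calibration described above can be sketched as a simple plausibility filter (a history-matching-style approach): parameters are drawn from prior ranges, the model is run, and only draws whose predictions fall within observational uncertainty are kept. The forward model, parameter names, and numbers below are illustrative stand-ins, not the authors' Laharflow workflow.

```python
import random

random.seed(0)

# Hypothetical prior ranges for two illustrative parameters:
# source volume (m^3) and a friction coefficient (dimensionless).
PRIORS = {"volume": (1e5, 5e6), "friction": (0.01, 0.2)}

# Observed inundation width at one cross-section, with +/- uncertainty (m).
OBS_WIDTH, OBS_TOL = 120.0, 30.0

def forward_model(volume, friction):
    """Stand-in for a lahar simulation: returns a predicted width (m)."""
    return 40.0 * (volume / 1e6) ** 0.4 / friction ** 0.2

def sample_prior():
    """Draw one parameter set uniformly from the prior ranges."""
    return {k: random.uniform(*r) for k, r in PRIORS.items()}

# Run the ensemble and keep only simulations plausible under the observation.
ensemble = [sample_prior() for _ in range(1000)]
plausible = [p for p in ensemble
             if abs(forward_model(**p) - OBS_WIDTH) <= OBS_TOL]

# Plausible parameter ranges implied by the observation.
vols = [p["volume"] for p in plausible]
print(len(plausible), min(vols), max(vols))
```

The retained parameter sets can then be re-run (or their stored outputs combined) to build probabilistic inundation maps, as the abstract describes.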
2020 GREAT Day Program
SUNY Geneseo’s Fourteenth Annual GREAT Day.
CHARACTERISTICS OF REFRACTIVITY AND SEA STATE IN THE MARINE ATMOSPHERIC SURFACE LAYER AND THEIR INFLUENCE ON X-BAND PROPAGATION
Predictions of environmental conditions within the marine atmospheric surface layer (MASL) are important to X-band radar system performance. Anomalous propagation occurs in conditions of non-standard atmospheric refractivity, driven by the virtually permanent presence of evaporation ducts (ED) in marine environments. Evaporation ducts are commonly characterized by the evaporation duct height (EDH), evaporation duct strength, and the gradients below the EDH, known as the evaporation duct curvature. Refractivity, and subsequent features, are estimated in the MASL primarily using four methods: in-situ measurements, numerical weather and surface layer modeling, boundary layer theory, and inversion methods.
The existing refractivity estimation techniques often assume steady homogeneous conditions, and discrepancies between measured and simulated propagation predictions exist. These discrepancies could be attributed to the exclusion of turbulent fluctuations of the refractive index, exclusion of spatially heterogeneous refractive environments, and inaccurate characterization of the sea surface in propagation simulations. Due to the associated complexity and modeling challenges, unsteady inhomogeneous refractivity and rough sea surfaces are often omitted from simulations.
This dissertation first investigates techniques for steady homogeneous refractivity and characterizes refractivity predictions using EDH and profile curvature, examining their effects on X-band propagation. Observed differences between techniques are explored with respect to prevailing meteorological conditions. Significant characteristics are then utilized in refractivity inversions for mean refractivity based on point-to-point EM measurements. The inversions are compared to the other previously examined techniques. Differences between refractivity estimation methods are generally observed in relation to EDH, which produces the largest variations in propagation; the most significant EDH discrepancies occur in stable conditions. Further, discrepancies among the refractivity estimation methods (in-situ, numerical models, theory, and inversion) when conditions are unstable and the mean EDH is similar could be attributed to the neglect of spatial heterogeneity of EDH and of turbulent fluctuations in the refractive index. To address this, a spectral-based turbulent refractive index fluctuation model (TRIF) is applied to emulate refractive index fluctuations. TRIF is verified against in-situ meteorological measurements and integrated with a heterogeneous EDH model to estimate a comprehensive propagation environment. Lastly, a global sensitivity analysis is applied to evaluate the leading-order effects and non-linear interactions between the parameters of the comprehensive refractivity model and the sea surface in a parabolic wave equation propagation simulation under different atmospheric stability regimes (stable, neutral, and unstable). In neutral and stable regimes, mean evaporation duct characteristics (EDH and refractive gradients below the EDH) have the greatest impact on propagation, particularly beyond the geometric horizon. In unstable conditions, turbulence also plays a significant role.
Regardless of atmospheric stability, forward scattering from the rough sea surface has a substantial effect on propagation predictions, especially within the lowest 10 m of the atmosphere.
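The evaporation duct characterization discussed above can be illustrated with the widely used log-linear model for modified refractivity, in which the profile's minimum sits at the evaporation duct height (EDH). The constants below are commonly quoted defaults and serve only as assumptions here; they are not the dissertation's exact formulation.

```python
import math

# Log-linear evaporation duct model for modified refractivity:
#   M(z) = M0 + c0 * (z - h_d * ln((z + z0) / z0))
# where h_d is the evaporation duct height (EDH) and z0 a roughness length.
C0 = 0.13       # M-units per metre (neutral-stability slope); assumed default
Z0 = 1.5e-4     # aerodynamic roughness length (m); assumed default
M0 = 330.0      # surface modified refractivity (M-units); assumed default

def modified_refractivity(z, edh):
    """M(z) in M-units for height z (m) and duct height edh (m)."""
    return M0 + C0 * (z - edh * math.log((z + Z0) / Z0))

# The profile decreases with height below the EDH (trapping gradient) and
# increases above it, so its minimum lies essentially at the duct height.
edh = 15.0
zs = [0.5 * k for k in range(1, 101)]          # heights 0.5 m .. 50 m
z_min = min(zs, key=lambda z: modified_refractivity(z, edh))
print(z_min)  # close to the 15 m duct height
```

Varying `edh` and the gradient below it reproduces the duct features (height, strength, curvature) that the abstract identifies as the dominant influences on X-band propagation.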
Elements of Ion Linear Accelerators, Calm in the Resonances, Other Tales
The main part of this book, Elements of Linear Accelerators, outlines in Part
1 a framework for non-relativistic linear accelerator focusing and accelerating
channel design, simulation, optimization and analysis where space charge is an
important factor. Part 1 is the most important part of the book; grasping the
framework is essential to fully understand and appreciate the elements within
it, and the myriad application details of the following Parts. The treatment
concentrates on all linacs, large or small, intended for high-intensity, very
low beam loss, factory-type application. The Radio-Frequency-Quadrupole (RFQ)
is especially developed as a representative and the most complicated linac form
(from dc to bunched and accelerated beam), extending to practical design of
long, high energy linacs, including space charge resonances and beam halo
formation, and some challenges for future work. A practical method is also
presented for designing Alternating-Phase-Focused (APF) linacs with long
sequences and high energy gain. Full open-source software is available. The
following part, Calm in the Resonances and Other Tales, contains eyewitness
accounts of nearly 60 years of participation in accelerator technology.
(September 2023) The LINACS codes are released at no cost and, as always, with
fully open-source coding. (p.2 & Ch. 19.10)
Comment: 652 pages. Some hundreds of figures, all images; there is no data in
the figures.
Statistical Learning for Structured Models: Tree Based Methods and Neural Networks
In this thesis, we consider estimation in regression and classification problems that include
low-dimensional structures. The underlying question is the following: how well do statistical
learning methods perform for models with low-dimensional structures? We approach this question
using various algorithms in various settings. For our first main contribution, we prove optimal
convergence rates in a classification setting using neural networks. While non-optimal rates
existed for this problem, we are the first to prove optimal ones. Secondly, we introduce a new
tree-based algorithm named random planted forest. It adapts particularly well to models which
consist of low-dimensional structures. We examine its performance in simulation studies and
include some theoretical backing by proving optimal convergence rates, in certain settings, for a
modification of the algorithm. Additionally, a generalized version of the algorithm is included,
which can be used in classification settings. In a further contribution, we prove optimal
convergence rates for the local linear smooth backfitting algorithm. While such rates have already
been established, we bring a new, simpler perspective to the problem which leads to better
understanding and easier interpretation. Additionally, given an estimator in a regression setting,
we propose a constraint which leads to a unique decomposition. This decomposition is useful for
visualising and interpreting the estimator, in particular if it consists of low-dimensional structures.
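A standard way to make such a decomposition unique, sketched below for intuition, is to force each component to average to zero over the data (a functional-ANOVA-style centering). This is an illustration of the general idea only; the thesis proposes its own constraint.

```python
# Decompose the values of an estimator f(x1, x2) on a grid into
#   f = c + f1(x1) + f2(x2) + f12(x1, x2)
# with each component centered to mean zero, which identifies it uniquely.

def decompose(grid):
    """grid[i][j] = f(x1_i, x2_j). Returns (c, f1, f2, f12)."""
    n1, n2 = len(grid), len(grid[0])
    c = sum(sum(row) for row in grid) / (n1 * n2)          # overall mean
    f1 = [sum(grid[i]) / n2 - c for i in range(n1)]        # centered row means
    f2 = [sum(grid[i][j] for i in range(n1)) / n1 - c      # centered col means
          for j in range(n2)]
    f12 = [[grid[i][j] - c - f1[i] - f2[j] for j in range(n2)]
           for i in range(n1)]                             # interaction remainder
    return c, f1, f2, f12

# Example: f(x1, x2) = 2 + x1 + 3*x2 on a small grid. The function is purely
# additive, so the interaction component should vanish.
xs = [0.0, 1.0, 2.0]
grid = [[2 + x1 + 3 * x2 for x2 in xs] for x1 in xs]
c, f1, f2, f12 = decompose(grid)
print(c)                                         # overall mean
print(max(abs(v) for row in f12 for v in row))   # ~0: no interaction
```

The centered components `f1` and `f2` are exactly the kind of low-dimensional pieces that can be plotted individually when visualising an estimator.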
Modeling, Simulation and Prediction of Vehicle Crashworthiness in Full Frontal Impact
Vehicle crashworthiness assessment is critical to help reduce road accident fatalities and ensure safer vehicles for road users. Techniques to assess crashworthiness include physical tests and mathematical modeling and simulation of crash events; the latter is generally preferred because mathematical modeling is cheaper to perform than physical testing. The most common mathematical modeling technique used for crashworthiness assessment is nonlinear Finite Element (FE) modeling. However, a problem with the use of Finite Element Models (FEM) for crashworthiness assessment is their inaccessibility to individual researchers, public bodies, small universities and engineering companies, due to the need for detailed CAD data and software licence costs, along with high computational demands. This thesis investigates modeling strategies which are affordable, computationally and labour inexpensive, and could be used by the above-mentioned groups. The use of Lumped Parameter Models (LPM) capable of capturing the vehicle parameters contributing to crashworthiness is proposed as an alternative to adopting FEM, while the latter is used to validate the LPMs developed in this thesis.
The main crash scenario analysed is a full frontal impact against a rigid barrier. The parameters focused on are front-end deformation, which can be used to measure crash energy absorption, and pitching, which could lead to occupant injuries in a frontal crash event. The thesis investigates two types of vehicle: a vehicle with its initial structure intact is defined as the baseline vehicle, while a vehicle that underwent unprofessional repairs on its structural members made of Ultra High Strength Steel (UHSS) is defined as the modified vehicle.
The proposed novel LPM for a baseline vehicle impact is inspired by pendulum motion and expresses the system using a Lagrangian formulation to predict the two phases of impact: front-end deformation and vehicle pitching.
Changes in the crashworthiness performance of a modified vehicle were investigated with a FEM; tensile tests on UHSS coupons were conducted to generate material inputs for this FEM. Further, a full-scale crash test was conducted to validate the FE simulations. An LPM to conduct crashworthiness assessment of a modified vehicle has been proposed; it is based on a double pendulum with a torsional spring representing the vehicle undergoing a full frontal impact.
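The appeal of lumped parameter models is that even a very small ODE system captures front-end crush. The sketch below is a single-degree-of-freedom mass on a linear spring-damper hitting a rigid barrier, far simpler than the thesis's double-pendulum Lagrangian formulation, and all numbers are illustrative assumptions.

```python
# Minimal lumped-parameter sketch of a full frontal impact against a rigid
# barrier: one mass, one linear crush spring, one damper. Illustrative values.
M = 1200.0    # vehicle mass (kg)
K = 6.0e5     # front-end crush stiffness (N/m)
C = 1.0e4     # damping coefficient (N*s/m)
V0 = 15.6     # impact speed (m/s), roughly 56 km/h
DT = 1e-5     # time step (s)

def simulate():
    """Semi-implicit Euler; returns peak front-end deformation (m)."""
    x, v = 0.0, V0          # x: crush depth, v: velocity into the barrier
    x_max = 0.0
    while v > 0.0:          # integrate until the structure stops crushing
        a = -(K * x + C * v) / M    # spring + damper restoring force
        v += a * DT
        x += v * DT
        x_max = max(x_max, x)
    return x_max

peak = simulate()
print(round(peak, 3))  # peak deformation in metres
```

Such a model runs in milliseconds, which is the affordability argument the thesis makes; the double-pendulum LPM adds a second generalized coordinate (pitch angle) and a torsional spring to the same Lagrangian machinery.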
Towards Real-World Data Streams for Deep Continual Learning
Continual Learning deals with Artificial Intelligence agents striving to learn from a never-ending
stream of data. Recently, Deep Continual Learning has focused on the design of new strategies to
endow Artificial Neural Networks with the ability to learn continuously without forgetting previous
knowledge. In fact, the learning process of any Artificial Neural Network model is well known to
lack sufficient stability to preserve existing knowledge when learning new information. This
phenomenon, called catastrophic forgetting or simply forgetting, is considered one of the main
obstacles for the design of effective Continual Learning agents. However, existing strategies designed
to mitigate forgetting have been evaluated on a restricted set of Continual Learning scenarios. The
most used one is, by far, the Class-Incremental scenario applied to object detection tasks. Even
though it drove interest in Continual Learning, Class-Incremental scenarios strongly constrain the
properties of the data stream, thus limiting their ability to model real-world environments.
The core of this thesis concerns the introduction of three Continual Learning data streams, whose
design is centered around properties of specific real-world environments. First, we propose the
Class-Incremental with Repetition scenario, which builds a data stream including both the introduction
of new concepts and the repetition of previous ones. Repetition is naturally present in many
environments and it constitutes an important source of information. Second, we formalize the
Continual Pre-Training scenario, which leverages a data stream of unstructured knowledge to keep
a pre-trained model updated over time. One important objective of this scenario is to study how to
continuously build general, robust representations that do not strongly depend on the specific task
to be solved. This is a fundamental property of real-world agents, which build cross-task knowledge
and then adapt it to specific needs. Third, we study Continual Learning scenarios where data
streams are composed of temporally correlated data. Temporal correlation is ubiquitous and lies
at the foundation of most environments we, as humans, experience during our life. We leverage
Recurrent Neural Networks as our main model, due to their intrinsic ability to model temporal
correlations. We discovered that, when applied to recurrent models, Continual Learning strategies
behave in an unexpected manner. This highlights the limits of the current experimental validation,
mostly focused on Computer Vision tasks.
Ultimately, the introduction of new data streams has deepened our understanding of
how Artificial Neural Networks learn continuously. We find that forgetting strongly depends
on the properties of the data stream, and we observed large changes from one data stream to
another. Moreover, when forgetting is mild, we were able to mitigate it effectively with simple
strategies, or even without any specific ones. Loosening the focus on forgetting allows us to turn
our attention to other interesting problems, outlined in this thesis, such as (i) the separation
between continual representation learning and quick adaptation to novel tasks, (ii) robustness to
unbalanced data streams, and (iii) the ability to continuously learn temporal correlations. These
objectives currently defy existing strategies and will likely represent the next challenge for
Continual Learning research.
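The Class-Incremental with Repetition idea above can be sketched as a stream generator in which each experience introduces a few new classes and re-samples some previously seen ones. Parameter names and proportions are illustrative, not the thesis's exact protocol.

```python
import random

random.seed(0)

def cir_stream(num_classes, new_per_exp, rep_per_exp):
    """Yield experiences: a few new class ids plus repeated old ones."""
    seen = []
    for start in range(0, num_classes, new_per_exp):
        new = list(range(start, min(start + new_per_exp, num_classes)))
        # Repetition: old classes re-enter the stream alongside new ones,
        # unlike the plain Class-Incremental scenario.
        rep = random.sample(seen, min(rep_per_exp, len(seen)))
        seen.extend(new)
        yield sorted(new + rep)

stream = list(cir_stream(num_classes=10, new_per_exp=2, rep_per_exp=2))
for i, exp in enumerate(stream):
    print(f"experience {i}: classes {exp}")
```

Setting `rep_per_exp=0` recovers the standard Class-Incremental stream, which makes the scenario convenient for isolating the effect of repetition on forgetting.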
Advancing Robot Autonomy for Long-Horizon Tasks
Autonomous robots have real-world applications in diverse fields, such as
mobile manipulation and environmental exploration, and many such tasks benefit
from a hands-off approach in terms of human user involvement over a long task
horizon. However, the level of autonomy achievable by a deployment is limited
in part by the problem definition or task specification required by the system.
Task specifications often require technical, low-level information that is
unintuitive to describe and may result in generic solutions, burdening the user
technically both before and after task completion. In this thesis, we aim to
advance task specification abstraction toward the goal of increasing robot
autonomy in real-world scenarios. We do so by tackling problems that address
several different angles of this goal. First, we develop a way for the
automatic discovery of optimal transition points between subtasks in the
context of constrained mobile manipulation, removing the need for the human to
hand-specify these in the task specification. We further propose a way to
automatically describe constraints on robot motion by using demonstrated data
as opposed to manually-defined constraints. Then, within the context of
environmental exploration, we propose a flexible task specification framework that requires
just a set of quantiles of interest from the user and allows the robot to directly suggest
locations in the environment for the user to study.
We next systematically study the effect of including a robot team in the task
specification and show that multirobot teams have the ability to improve
performance under certain specification conditions, including enabling
inter-robot communication. Finally, we propose methods for a communication
protocol that autonomously selects useful but limited information to share with
the other robots.
Comment: PhD dissertation, 160 pages.
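The quantile-based specification idea from the exploration work above can be sketched as follows: given measurements at candidate locations and the user's quantiles of interest, suggest the locations whose values best match those quantiles. This illustrates only the specification concept, not the thesis's algorithm, and the data are hypothetical.

```python
def quantile(sorted_vals, q):
    """Linear-interpolation quantile of pre-sorted values, 0 <= q <= 1."""
    pos = q * (len(sorted_vals) - 1)
    lo, frac = int(pos), pos - int(pos)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def suggest_locations(measurements, quantiles):
    """measurements: {location: value}. Returns one location per quantile."""
    ordered = sorted(measurements.values())
    picks = {}
    for q in quantiles:
        target = quantile(ordered, q)
        # Suggest the location whose measured value is closest to the
        # requested quantile of the observed distribution.
        picks[q] = min(measurements,
                       key=lambda loc: abs(measurements[loc] - target))
    return picks

# Hypothetical sensor readings at grid cells of a survey area.
readings = {(0, 0): 1.2, (0, 1): 3.4, (1, 0): 2.8, (1, 1): 9.1, (2, 0): 5.5}
picks = suggest_locations(readings, quantiles=[0.1, 0.5, 0.9])
print(picks)
```

The user states only which parts of the value distribution matter (e.g. extremes and the median), and the robot translates that into concrete places to visit, which is the abstraction gain the abstract describes.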
Ionosphere Monitoring with Remote Sensing
This book focuses on the characterization of the physical properties of the Earth’s ionosphere, contributing to unveiling the nature of several processes responsible for a plethora of space weather-related phenomena taking place in a wide range of spatial and temporal scales. This is made possible by the exploitation of a huge amount of high-quality data derived from both remote sensing and in situ facilities such as ionosondes, radars, satellites and Global Navigation Satellite Systems receivers.
- …