Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints; three major literature reviews; the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials; new experimental equipment; and a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated using a series of case studies (involving both the design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide was developed from the results of this project for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials.
Quantifying and Explaining Machine Learning Uncertainty in Predictive Process Monitoring: An Operations Research Perspective
This paper introduces a comprehensive, multi-stage machine learning
methodology that effectively integrates information systems and artificial
intelligence to enhance decision-making processes within the domain of
operations research. The proposed framework adeptly addresses common
limitations of existing solutions, such as the neglect of data-driven
estimation for vital production parameters, exclusive generation of point
forecasts without considering model uncertainty, and a lack of explanations
regarding the sources of such uncertainty. Our approach employs Quantile
Regression Forests for generating interval predictions, alongside both local
and global variants of SHapley Additive Explanations for the examined
predictive process monitoring problem. The practical applicability of the
proposed methodology is substantiated through a real-world production planning
case study, emphasizing the potential of prescriptive analytics in refining
decision-making procedures. This paper accentuates the imperative of addressing
these challenges to fully harness the extensive and rich data resources
accessible for well-informed decision-making.
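The interval-forecasting idea above can be sketched with a common approximation of Quantile Regression Forests: scikit-learn's RandomForestRegressor does not expose Meinshausen-style leaf-level quantiles directly, so per-tree predictions are used here as a stand-in. The data, model settings, and quantile levels are illustrative assumptions, not the paper's production-planning setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic "production parameter" data: one driver plus noise.
X = rng.uniform(0, 10, size=(500, 1))
y = X[:, 0] + rng.normal(0, 1, 500)

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0)
forest.fit(X, y)

X_test = np.array([[2.0], [5.0], [8.0]])

# Collect each tree's prediction, then take empirical quantiles across trees
# to form an interval prediction instead of a single point forecast.
per_tree = np.stack([tree.predict(X_test) for tree in forest.estimators_])
lower = np.quantile(per_tree, 0.05, axis=0)   # 5th percentile across trees
upper = np.quantile(per_tree, 0.95, axis=0)   # 95th percentile across trees
point = per_tree.mean(axis=0)                 # usual point forecast
```

The interval width then serves as a per-prediction uncertainty measure, which local or global SHAP analyses (as in the paper) can subsequently attribute to input features.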
CoRe-Sleep: A Multimodal Fusion Framework for Time Series Robust to Imperfect Modalities
Sleep abnormalities can have severe health consequences. Automated sleep
staging, i.e. labelling the sequence of sleep stages from the patient's
physiological recordings, could simplify the diagnostic process. Previous work
on automated sleep staging has achieved great results, mainly relying on the
EEG signal. However, often multiple sources of information are available beyond
EEG. This can be particularly beneficial when the EEG recordings are noisy or
even missing completely. In this paper, we propose CoRe-Sleep, a Coordinated
Representation multimodal fusion network that is particularly focused on
improving the robustness of signal analysis on imperfect data. We demonstrate
how appropriately handling multimodal information can be the key to achieving
such robustness. CoRe-Sleep tolerates noisy or missing modality segments,
allowing training on incomplete data. Additionally, it shows state-of-the-art
performance when testing on both multimodal and unimodal data using a single
model on SHHS-1, the largest publicly available study that includes sleep stage
labels. The results indicate that training the model on multimodal data does
positively influence performance when tested on unimodal data. This work aims
at bridging the gap between automated analysis tools and their clinical
utility.

Comment: 10 pages, 4 figures, 2 tables
UniverSeg: Universal Medical Image Segmentation
While deep learning models have become the predominant method for medical
image segmentation, they are typically not capable of generalizing to unseen
segmentation tasks involving new anatomies, image modalities, or labels. Given
a new segmentation task, researchers generally have to train or fine-tune
models, which is time-consuming and poses a substantial barrier for clinical
researchers, who often lack the resources and expertise to train neural
networks. We present UniverSeg, a method for solving unseen medical
segmentation tasks without additional training. Given a query image and example
set of image-label pairs that define a new segmentation task, UniverSeg employs
a new Cross-Block mechanism to produce accurate segmentation maps without the
need for additional training. To achieve generalization to new tasks, we have
gathered and standardized a collection of 53 open-access medical segmentation
datasets with over 22,000 scans, which we refer to as MegaMedical. We used this
collection to train UniverSeg on a diverse set of anatomies and imaging
modalities. We demonstrate that UniverSeg substantially outperforms several
related methods on unseen tasks, and thoroughly analyze and draw insights about
important aspects of the proposed system. The UniverSeg source code and model
weights are freely available at https://universeg.csail.mit.edu

Comment: Victor and Jose Javier contributed equally to this work. Project website: https://universeg.csail.mit.edu
Vegetation responses to variations in climate: A combined ordinary differential equation and sequential Monte Carlo estimation approach
Vegetation responses to variation in climate are a current research priority in the context of accelerated shifts generated by climate change. However, the interactions between environmental and biological factors still represent one of the largest uncertainties in projections of future scenarios, since the relationship between drivers and ecosystem responses is complex and nonlinear. We aimed to develop a model to study the dynamic response of vegetation primary productivity to temporal variations in climatic conditions as measured by rainfall, temperature and radiation. Thus, we propose a new way to estimate the vegetation response to climate via a non-autonomous version of a classical growth curve, in which the growth-rate and carrying-capacity parameters vary in time according to the climate variables. Sequential Monte Carlo estimation was used to account for complexities in the climate-vegetation relationship while minimizing the number of parameters. The model was applied to six key sites identified in a previous study, comprising different arid and semiarid rangelands of North Patagonia, Argentina. For each site, we selected the time series of MODIS NDVI and climate data from the ERA5 Copernicus hourly reanalysis from 2000 to 2021. After calculating the time series of the posterior distribution of the parameters, we analyzed the explanatory capacity of the model in terms of the linear coefficient of determination and the variation of the parameter distributions. Results showed that most rangelands recorded changes over time in their sensitivity to climatic factors, but vegetation responses were heterogeneous and influenced by different drivers. Differences in this climate-vegetation relationship were recorded among cases: (1) a marginal and decreasing sensitivity to temperature and radiation, respectively, but a high sensitivity to water availability; (2) high and increasing sensitivity to temperature and water availability, respectively; and (3) a case with an abrupt shift in vegetation dynamics driven by a progressively decreasing sensitivity to water availability, without any changes in the sensitivity to either temperature or radiation. Finally, we also found that the time scale over which the ecosystem integrated rainfall, in terms of the width of the window function used to convolve the rainfall series into a water-availability variable, was itself variable in time. This approach allows us to estimate the degree of connection between ecosystem productivity and climatic variables. The capacity of the model to identify changes over time in the vegetation-climate relationship might inform decision-makers about ecological transitions and the differential impact of climatic drivers on ecosystems.
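A minimal sketch of the modelling idea: a non-autonomous logistic growth curve whose growth rate and carrying capacity follow a climate driver, with a bootstrap particle filter standing in for the Sequential Monte Carlo estimation. The synthetic rainfall series, noise levels, and parameter forms below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np

def logistic_step(n, r, k, dt=1.0):
    # Euler step of the non-autonomous logistic ODE dN/dt = r(t) N (1 - N / K(t))
    return n + dt * r * n * (1.0 - n / k)

rng = np.random.default_rng(1)
T = 100
rain = 50 + 10 * np.sin(2 * np.pi * np.arange(T) / 12)  # synthetic seasonal rainfall
r_true = 0.1 + 0.002 * (rain - 50)   # growth rate tracks the climate driver
k_true = 0.6 + 0.004 * rain          # carrying capacity tracks it too

# Simulate an NDVI-like productivity series and noisy observations of it.
ndvi = np.empty(T)
ndvi[0] = 0.3
for t in range(1, T):
    ndvi[t] = logistic_step(ndvi[t - 1], r_true[t], k_true[t])
obs = ndvi + rng.normal(0, 0.02, T)

# Bootstrap particle filter over the time-varying growth rate r_t.
P = 1000
particles = rng.uniform(0.0, 0.3, P)      # particles for r_t
state = np.full(P, obs[0])                # one productivity trajectory per particle
for t in range(1, T):
    particles = particles + rng.normal(0, 0.005, P)   # random-walk evolution of r_t
    state = logistic_step(state, particles, k_true[t])
    w = np.exp(-0.5 * ((obs[t] - state) / 0.02) ** 2) + 1e-300
    w /= w.sum()
    idx = rng.choice(P, P, p=w)           # multinomial resampling
    particles, state = particles[idx], state[idx]
```

The resampled particle cloud at each step approximates the posterior distribution of the growth-rate parameter, whose drift over time is what the study reads as changing climate sensitivity.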
Diffusion Schr\"odinger Bridge Matching
Solving transport problems, i.e. finding a map transporting one given
distribution to another, has numerous applications in machine learning. Novel
mass transport methods motivated by generative modeling have recently been
proposed; e.g., Denoising Diffusion Models (DDMs) and Flow Matching Models
(FMMs) implement such a transport through a Stochastic Differential Equation
(SDE) or an Ordinary Differential Equation (ODE). However, while it is
desirable in many applications to approximate the deterministic dynamic Optimal
Transport (OT) map which admits attractive properties, DDMs and FMMs are not
guaranteed to provide transports close to the OT map. In contrast,
Schr\"odinger bridges (SBs) compute stochastic dynamic mappings which recover
entropy-regularized versions of OT. Unfortunately, existing numerical methods
approximating SBs either scale poorly with dimension or accumulate errors
across iterations. In this work, we introduce Iterative Markovian Fitting, a
new methodology for solving SB problems, and Diffusion Schr\"odinger Bridge
Matching (DSBM), a novel numerical algorithm for computing IMF iterates. DSBM
significantly improves over previous SB numerics and recovers as
special/limiting cases various recent transport methods. We demonstrate the
performance of DSBM on a variety of problems.
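For context on the entropy-regularized OT that Schrödinger bridges recover: the static version of that problem is solvable with classical Sinkhorn iterations. The sketch below shows only that baseline, not the paper's Iterative Markovian Fitting or DSBM algorithm; the histograms, cost matrix, and regularization strength are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iter=200):
    # Entropy-regularized OT between histograms a and b with cost matrix C:
    # alternately rescale the Gibbs kernel K = exp(-C/eps) to match both marginals.
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # the transport plan

n = 5
a = np.full(n, 1 / n)                    # source histogram
b = np.full(n, 1 / n)                    # target histogram
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2       # squared-distance cost
plan = sinkhorn(a, b, C)
```

As eps shrinks, the plan approaches the deterministic OT map the abstract mentions; Schrödinger bridges lift this static coupling to a stochastic dynamic mapping.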
Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review
Globally, the external Internet is increasingly being connected to the
contemporary industrial control system. As a result, there is an immediate need
to protect the network from several threats. The key infrastructure of
industrial activity may be protected from harm by using an intrusion detection
system (IDS), a preventive measure mechanism, to recognize new kinds of
dangerous threats and hostile activities. The most recent artificial
intelligence (AI) techniques used to create IDS in many kinds of industrial
control networks are examined in this study, with a particular emphasis on
IDS-based deep transfer learning (DTL). The latter can be seen as a type of
information fusion that merges and/or adapts knowledge from multiple domains to
enhance the performance of the target task, particularly when the labeled data
in the target domain is scarce. Publications issued after 2015 were taken into
account. These selected publications were divided into three categories:
DTL-only and IDS-only papers, which are covered in the introduction and
background, and DTL-based IDS papers, which form the core of this review.
Researchers will be able to have a better grasp of the current state of DTL
approaches used in IDS in many different types of networks by reading this
review paper. Other useful information, such as the datasets used, the sort of
DTL employed, the pre-trained network, IDS techniques, the evaluation metrics
including accuracy/F-score and false alarm rate (FAR), and the improvement
gained, were also covered. The algorithms and methods used in several studies
that deeply and clearly illustrate the principles of each DTL-based IDS
subcategory are also presented to the reader.
Full Resolution Deconvolution of Complex Faraday Spectra
Polarized synchrotron emission from multiple Faraday depths can be separated
by calculating the complex Fourier transform of the Stokes parameters as a
function of the wavelength squared, a technique known as Faraday synthesis. As
commonly implemented, the transform introduces an additional phase term, which
broadens the real and imaginary spectra but not the amplitude spectrum. We use
idealized tests to investigate whether additional information can be recovered
with a clean-process restoring beam set to the narrower width of the peak in
the real "full-resolution" spectrum, obtained with this term set to zero. We
find that this choice makes no difference, except for the use of a smaller
restoring beam. With this smaller beam, the accuracy and phase stability are
unchanged for single Faraday components. However, using the smaller restoring
beam for multiple Faraday components, we find (a) better discrimination of the
components, (b) significant reductions in the blending of structures in
tomography images, and (c) a reduction of spurious features in the Faraday
spectra and tomography maps. We also discuss the limited accuracy of
information on scales comparable to the width of the amplitude-spectrum peak,
and note a clean bias that reduces the recovered amplitudes. We present
examples using MeerKAT L-band data. We also revisit the maximum width in
Faraday depth to which surveys are sensitive, and introduce a variable defined
as the width for which the power drops by a factor of 2. We find that most
surveys cannot resolve continuous Faraday distributions unless the narrower
full restoring beam is used.

Comment: 17 pages, 23 figures, accepted for publication in MNRAS
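The transform at the heart of Faraday synthesis can be sketched numerically. This toy version recovers the Faraday depth of a single component from its complex polarization as a function of wavelength squared; the frequency coverage, depth grid, and reference wavelength squared are illustrative assumptions, not the paper's survey parameters or its clean algorithm.

```python
import numpy as np

# Faraday synthesis: Fourier transform of the complex polarization P(lambda^2)
# into the Faraday dispersion function F(phi).
c = 299_792_458.0
freq = np.linspace(0.9e9, 1.67e9, 256)       # L-band-like frequency coverage (Hz)
lam2 = (c / freq) ** 2                       # wavelength squared (m^2)

phi0 = 30.0                                  # single Faraday component (rad m^-2)
P = np.exp(2j * phi0 * lam2)                 # unit-amplitude polarized signal

# Reference lambda^2: setting it to zero corresponds to the "full resolution"
# convention; a nonzero value broadens only the real and imaginary spectra.
lam2_0 = lam2.mean()

phi = np.linspace(-200, 200, 801)            # Faraday depth grid
F = (P[None, :] * np.exp(-2j * np.outer(phi, lam2 - lam2_0))).mean(axis=1)

peak = phi[np.argmax(np.abs(F))]             # amplitude spectrum peaks at phi0
```

Because the reference term enters only as a phase factor, the amplitude spectrum |F| peaks at the true depth regardless of the choice, consistent with the abstract's point that the choice matters only through the restoring-beam width.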
Identifying and responding to people with mild learning disabilities in the probation service
It has long been recognised that, like many other individuals, people with learning disabilities find their way into the criminal justice system. This fact is not disputed. What has been disputed, however, is the extent to which those with learning disabilities are represented within the various agencies of the criminal justice system and the ways in which the criminal justice system (and society) should address this. Recently, social and legislative confusion over the best way to deal with offenders with learning disabilities and mental health problems has meant that the waters have become even more muddied. Despite current government uncertainty concerning the best way to support offenders with learning disabilities, the probation service is likely to continue to play a key role in the supervision of such offenders. The three studies contained herein aim to clarify the extent to which those with learning disabilities are represented in the probation service, to examine the effectiveness of probation for them, and to explore some of the ways in which probation could be adapted to fit their needs.

Study 1 and Study 2 showed that around 10% of offenders on probation in Kent appeared to have an IQ below 75, putting them in the bottom 5% of the general population. Study 3 was designed to assess some of the support needs of those with learning disabilities in the probation service, finding that many of the materials used by the probation service are likely to be too complex for those with learning disabilities to use effectively. To address this, a model for service provision is tentatively suggested. This is based on the findings of the three studies and a pragmatic assessment of what the probation service is likely to be capable of achieving in the near future.
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater import is
placed on the factors that constitute the uncertainty. Reducing the effects of these
uncertainties requires an understanding of their causes. The research presented
in this thesis aims to illuminate how uncertainties on simulation modelling are
determined and proffers novel techniques in deriving them.
The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS
calorimeter at a rate far exceeding that of the nominal detector simulation,
Geant4, is described. The integration of a method that allows the toolbox to
emulate the accordion geometry of the liquid-argon calorimeters is detailed. This tool allows
for the production of larger samples while using significantly fewer computing
resources.
A measurement of the total Higgs boson production cross-section multiplied
by the diphoton branching ratio (σ × Bγγ) is presented, where this value was
determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement
with the Standard Model prediction. The signal and background shape modelling
is described, and the contribution of the background modelling uncertainty to the
total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production
mechanism.
A method for estimating the number of events in a Monte Carlo background
sample required to model the shape is detailed. It was found that the size of
the nominal γγ background event sample required an increase by
a factor of 3.60 to adequately model the background with a confidence level of
68 %, or a factor of 7.20 for a confidence level of 95 %. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of estimating the tHqb production cross-section. Then this multivariate
reweighting technique is used to estimate the generator modelling uncertainties
on background V γγ samples for the first time. The estimated uncertainties were
found to be covered by the currently assumed background modelling uncertainty.
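One common way to implement such multivariate reweighting is a classifier-based density-ratio estimate: train a classifier to separate events from the two generators, then weight each event by p/(1-p). This is a generic sketch of that idea, not necessarily the exact tool used in the thesis; the two-Gaussian toy "generators" and model settings are purely illustrative.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)

# Toy events from two "generators" that differ in one observable's mean.
gen_a = rng.normal(0.0, 1.0, size=(4000, 2))
gen_b = rng.normal([0.5, 0.0], 1.0, size=(4000, 2))

X = np.vstack([gen_a, gen_b])
y = np.concatenate([np.zeros(4000), np.ones(4000)])

# The classifier's probability output estimates p(gen B | event), so
# p / (1 - p) approximates the density ratio between the two generators.
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
p = clf.predict_proba(gen_a)[:, 1]
weights = p / (1.0 - p)          # per-event weights mapping generator A -> B

# The reweighted distribution of generator A should now resemble generator B,
# e.g. its first observable's mean should shift toward 0.5.
reweighted_mean = np.average(gen_a[:, 0], weights=weights)
```

Comparing nominal and reweighted distributions then gives an estimate of the generator-difference uncertainty without producing a second full simulated sample.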