Regularized linear system identification using atomic, nuclear and kernel-based norms: the role of the stability constraint
Inspired by ideas from the machine learning literature, new
regularization techniques have recently been introduced in linear system
identification. In particular, all the adopted estimators solve a regularized
least squares problem, differing in the nature of the penalty term assigned to
the impulse response. Popular choices include atomic and nuclear norms (applied
to Hankel matrices) as well as norms induced by the so-called stable spline
kernels. In this paper, a comparative study of estimators based on these
different types of regularizers is reported. Our findings reveal that stable
spline kernels outperform approaches based on atomic and nuclear norms since
they suitably embed information on impulse response stability and smoothness.
This point is illustrated using the Bayesian interpretation of regularization.
We also design a new class of regularizers defined by "integral" versions of
stable spline/TC kernels. Under quite realistic experimental conditions, the
new estimators outperform classical prediction error methods even when the
latter are equipped with an oracle for model order selection.
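To make the comparison concrete, the estimator common to these approaches can be sketched in a few lines. Below is a minimal Python illustration (not code from the paper) of kernel-based regularized least squares with the TC/stable spline kernel K[i, j] = alpha**max(i, j); the toy system, signal lengths and hyper-parameter values are illustrative assumptions.

```python
import numpy as np

def tc_kernel(n, alpha):
    """TC (first-order stable spline) kernel: K[i, j] = alpha**max(i, j).
    It encodes stability (exponential decay) and smoothness of g."""
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def regularized_ls(u, y, n, alpha, gamma):
    """Solve min_g ||y - Phi g||^2 + gamma * g' K^{-1} g via the
    closed form g = K Phi' (Phi K Phi' + gamma I)^{-1} y."""
    N = len(y)
    # Toeplitz regression matrix built from the input signal u
    Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                    for t in range(N)])
    K = tc_kernel(n, alpha)
    return K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + gamma * np.eye(N), y)

# Toy example: recover a decaying impulse response from noisy data
rng = np.random.default_rng(0)
g_true = 0.8 ** np.arange(1, 51)
u = rng.standard_normal(200)
y = np.convolve(u, g_true)[:200] + 0.1 * rng.standard_normal(200)
g_hat = regularized_ls(u, y, n=50, alpha=0.8, gamma=0.1)
```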
Maximum Entropy Vector Kernels for MIMO system identification
Recent contributions have framed linear system identification as a
nonparametric regularized inverse problem. Relying on ℓ2-type
regularization which accounts for the stability and smoothness of the impulse
response to be estimated, these approaches have been shown to be competitive
w.r.t. classical parametric methods. In this paper, adopting Maximum Entropy
arguments, we derive a new penalty based on a vector-valued
kernel; to do so we exploit the structure of the Hankel matrix, thus
simultaneously controlling the complexity (measured by the McMillan degree),
stability and smoothness of the identified models. As a special case we recover
the nuclear norm penalty on the squared block Hankel matrix. In contrast with
previous literature on reweighted nuclear norm penalties, our kernel is
described by a small number of hyper-parameters, which are iteratively updated
through marginal likelihood maximization; constraining the structure of the
kernel acts as a (hyper)regularizer which helps control the effective
degrees of freedom of our estimator. To optimize the marginal likelihood we
adapt a Scaled Gradient Projection (SGP) algorithm which is proved to be
significantly computationally cheaper than other first and second order
off-the-shelf optimization methods. The paper also contains an extensive
comparison with many state-of-the-art methods on several Monte-Carlo studies,
which confirms the effectiveness of our procedure.
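As an illustration of the penalty recovered as a special case, the nuclear norm of a Hankel matrix built from an impulse response can be computed directly. This is a minimal sketch for the SISO case; the sizes and the example response are chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def hankel_nuclear_norm(g, rows):
    """Nuclear norm (sum of singular values) of the Hankel matrix
    H[i, j] = g[i + j]; a convex surrogate for the McMillan degree."""
    cols = len(g) - rows + 1
    H = np.array([[g[i + j] for j in range(cols)] for i in range(rows)])
    return np.linalg.norm(H, ord='nuc')

g = 0.5 ** np.arange(20)                 # first-order system: Hankel rank 1
print(hankel_nuclear_norm(g, rows=10))   # close to the single singular value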
Quantifying and Reducing Uncertainty in Metal-Based Additive Manufacturing Laser Powder-Bed Fusion Processes
Laser Powder-Bed Fusion processes for metallic materials are a set of relatively
new and emerging Additive Manufacturing technologies that offer attractive potential and
capabilities (e.g., design freedom, part consolidation and reduced material waste). Although they
provide an exceptional advantage that cannot be matched by traditional manufacturing processes,
the path to widespread use of these new technologies still includes obstacles stemming from
the limited understanding of the intricate problems the manufacturing process presents, such as
low repeatability and low part quality compared to their conventional manufacturing counterparts.
This dissertation presents one of the first applications of formal tools and frameworks
from a combination of scientific fields, including Uncertainty Quantification, Statistics, Probability
and Data Science, to different problems within Additive Manufacturing Laser Powder-Bed
Fusion processes. Specifically, modeling techniques such as Gaussian Processes and generalized
Polynomial Chaos Expansions are employed to optimize porosity in printed parts, calibrate and
validate different computer simulation models, and identify processing regions for satisfactory
manufacturing. These techniques are carefully analyzed and validated, yielding informed
perspectives that improve understanding of the manufacturing process. In turn, these new
insights translate into improvement and advancement of Additive Manufacturing, and contribute
towards its further growth and consolidation as a competitive and qualified technology within
the manufacturing industry.
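As a hypothetical illustration of the kind of surrogate modeling described above, a Gaussian Process can map process parameters to a measured response such as porosity. The data, parameter ranges and kernel settings below are synthetic placeholders, not results from the dissertation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in data: laser power [W] and scan speed [mm/s] vs. porosity
rng = np.random.default_rng(1)
X = rng.uniform([100, 400], [400, 1600], size=(30, 2))
porosity = (0.01 + 0.05 * np.abs(X[:, 0] / X[:, 1] - 0.25)
            + 0.002 * rng.standard_normal(30))

# GP surrogate: anisotropic RBF kernel plus a noise term
gp = GaussianProcessRegressor(kernel=RBF([50.0, 200.0]) + WhiteKernel(1e-5),
                              normalize_y=True).fit(X, porosity)

# Predictive mean and uncertainty at a candidate processing point
mean, std = gp.predict([[250.0, 1000.0]], return_std=True)
```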
Evaluation of Generative Models for Predicting Microstructure Geometries in Laser Powder Bed Fusion Additive Manufacturing
In-situ process monitoring for metals additive manufacturing is paramount to the successful build of an object for application in extreme or high-stress environments. In selective laser melting additive manufacturing, the process by which a laser melts metal powder during the build will dictate the internal microstructure of that object once the metal cools and solidifies. The difficulty is that obtaining enough variety of data to quantify the internal microstructures and evaluate their physical properties is problematic, as the laser passes at high speeds over powder grains at a micrometer scale. Imaging the process in-situ is complex and cost-prohibitive. However, generative models can provide new artificially generated data. Generative adversarial networks synthesize new computationally derived data through a process that learns the underlying features corresponding to the different laser process parameters in a generator network, then improves upon those artificial renderings by evaluating them through the discriminator network. While this technique was effective at delivering high-quality images, conditioning the network on the process parameters showed improved capabilities at creating these new images. Using multiple evaluation metrics, it has been shown that generative models can be used to create new data for various laser process parameter combinations, thereby allowing a more comprehensive evaluation of ideal laser conditions for any particular build.
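The conditioning idea can be sketched as a minimal conditional GAN in which both networks receive the laser process parameters alongside their usual inputs. The layer sizes, image resolution and two-parameter condition below are illustrative assumptions, not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

LATENT, COND, IMG = 64, 2, 32 * 32  # noise dim, condition dim, flat image size

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT + COND, 256), nn.ReLU(),
                                 nn.Linear(256, IMG), nn.Tanh())
    def forward(self, z, c):
        # Concatenate noise with the process-parameter condition
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(IMG + COND, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x, c):
        # Score image/condition pairs so fakes must match their parameters
        return self.net(torch.cat([x, c], dim=1))

G, D = Generator(), Discriminator()
z = torch.randn(8, LATENT)
c = torch.rand(8, COND)    # normalized laser power and scan speed
fake = G(z, c)             # 8 synthetic micrographs for chosen parameters
score = D(fake, c)         # discriminator logits used in the adversarial loss
```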
Improvements in the robustness and accuracy of bioluminescence tomographic reconstructions of distributed sources within small animals
High quality three-dimensional bioluminescence tomographic (BLT) images, if available, would constitute a major advance and provide much more useful information than the two-dimensional bioluminescence images that are frequently used today. To date, high quality BLT images have not been available, largely because of the poor quality of the data being input into the reconstruction process. Many significant confounds are not routinely corrected for, and the noise in these data is unnecessarily large and poorly distributed. Moreover, many of the design choices affecting image quality are not well considered, including choices regarding the number and type of filters used when making multispectral measurements and choices regarding the frequency and uniformity of the sampling of both the range and domain of the BLT inverse problem.
Within this dissertation, I address all of these issues. I develop a Cerenkov-based gold-standard wherein a Positron Emission Tomography (PET) image can be used to gauge improvements in the accuracy of BLT reconstruction algorithms. In the process of creating this reference, I discover and describe corrections for several confounds that, if left uncorrected, would introduce artifacts into the BLT images. These include corrections for the angle of the animal’s skin surface relative to the camera, for the height of each point on the skin surface relative to the focal plane, and for the variation in bioluminescence intensity as a function of luciferin concentration over time. With these corrections applied, I go on to derive equations and algorithms that, when employed, minimize the noise in the final images under the constraints of a multispectral BLT data acquisition. These equations and algorithms allow an optimal choice of filters to be made and the acquisition time to be optimally distributed among those filtered measurements. These optimizations make use of Barrett’s and Moore-Penrose pseudoinverse matrices, which also come into play in a paradigm I describe that can be used to guide choices regarding sampling of the domain and range.
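To illustrate the role of the pseudoinverse in such a reconstruction, a minimal sketch follows; the forward matrix is a random placeholder rather than a real photon-transport model, and the truncation level is an arbitrary choice.

```python
import numpy as np

# Toy linear inverse problem: surface measurements y = A x + noise,
# with x a source distribution over voxels and A the forward model
rng = np.random.default_rng(2)
A = rng.random((120, 60))                 # 120 measurements, 60 voxels
x_true = np.zeros(60)
x_true[25:30] = 1.0                       # a small distributed source
y = A @ x_true + 0.01 * rng.standard_normal(120)

# Moore-Penrose pseudoinverse; rcond truncates small singular values,
# which regularizes the reconstruction against measurement noise
x_hat = np.linalg.pinv(A, rcond=1e-3) @ y
```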