5,062 research outputs found
Approach to a Decision Support Method for Feature Engineering of a Classification of Hydraulic Directional Control Valve Tests
Advancing digitalization and high computing power are drivers for the progressive use of machine learning (ML) methods on manufacturing data. Using ML for predictive quality control of product characteristics contributes to preventing defects and streamlining future manufacturing processes. Challenging decisions must be made before implementing ML applications. Production environments are dynamic systems whose boundary conditions change continuously. Accordingly, extensive feature engineering of the volatile database is required to guarantee high generalizability of the prediction model; all subsequent stages of the ML pipeline can then be optimized on a cleaned database. Various ML methods, such as gradient boosting, have so far achieved promising results in industrial hydraulic use cases. For every prediction task, there is the challenge of choosing the most appropriate method and the hyperparameters that yield the best predictions. The goal of this work is to develop a method for selecting the best combination of feature engineering methods and hyperparameters of a predictive model for a dataset with temporal variability, treating both as equivalent parameters and optimizing them simultaneously. The optimization is carried out via a workflow that includes a random search. Applying this method yields a structured procedure that achieves significant leaps in performance metrics when predicting hydraulic test steps of directional valves.
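The simultaneous treatment of feature-engineering choices and model hyperparameters can be sketched as a joint random search. The search space, parameter names, and scoring function below are illustrative assumptions, not the paper's actual setup:

```python
import random

# Hypothetical joint search space: the feature-engineering method is sampled
# exactly like any other hyperparameter (the "equivalent parameters" idea).
SEARCH_SPACE = {
    "feature_method": ["rolling_mean", "z_score", "none"],  # assumed options
    "learning_rate": (0.01, 0.3),                           # continuous range
    "max_depth": (2, 8),                                    # integer range
}

def sample_configuration(rng):
    """Draw one joint configuration at random."""
    return {
        "feature_method": rng.choice(SEARCH_SPACE["feature_method"]),
        "learning_rate": rng.uniform(*SEARCH_SPACE["learning_rate"]),
        "max_depth": rng.randint(*SEARCH_SPACE["max_depth"]),
    }

def random_search(evaluate, n_trials=50, seed=0):
    """Return the best-scoring configuration found by random search."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = sample_configuration(rng)
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy stand-in for training and validating a gradient-boosting model;
# a real evaluate() would fit the model and return a validation metric.
def toy_evaluate(cfg):
    bonus = 0.1 if cfg["feature_method"] == "z_score" else 0.0
    return bonus - (cfg["learning_rate"] - 0.1) ** 2 - 0.01 * cfg["max_depth"]
```

In practice `evaluate` would wrap a full train/validate cycle on the volatile production data, so each trial prices the feature-engineering choice and the hyperparameters together.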
Sparse Modeling for Image and Vision Processing
In recent years, a large amount of multi-disciplinary research has been
conducted on sparse models and their applications. In statistics and machine
learning, the sparsity principle is used to perform model selection---that is,
automatically selecting a simple model among a large collection of them. In
signal processing, sparse coding consists of representing data with linear
combinations of a few dictionary elements. Subsequently, the corresponding
tools have been widely adopted by several scientific communities such as
neuroscience, bioinformatics, or computer vision. The goal of this monograph is
to offer a self-contained view of sparse modeling for visual recognition and
image processing. More specifically, we focus on applications where the
dictionary is learned and adapted to data, yielding a compact representation
that has been successful in various contexts.
Comment: 205 pages, to appear in Foundations and Trends in Computer Graphics and Vision
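The sparse-coding idea described above, representing data as a linear combination of a few dictionary elements, can be illustrated with a minimal matching-pursuit sketch (the dictionary and signal here are toy stand-ins, and a learned dictionary would replace the fixed one):

```python
# Greedy sparse approximation of a signal over a dictionary of unit-norm
# atoms: at each step, pick the atom most correlated with the residual.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_atoms=2):
    """Return a sparse code {atom index: coefficient} and the residual."""
    residual = list(signal)
    code = {}
    for _ in range(n_atoms):
        # atom most correlated with the current residual
        idx = max(range(len(dictionary)),
                  key=lambda i: abs(dot(residual, dictionary[i])))
        coeff = dot(residual, dictionary[idx])
        code[idx] = code.get(idx, 0.0) + coeff
        residual = [r - coeff * a for r, a in zip(residual, dictionary[idx])]
    return code, residual

# Toy orthonormal dictionary in R^3; the signal uses only two atoms,
# so two pursuit steps recover it exactly.
D = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
x = (3.0, 0.0, -2.0)
code, residual = matching_pursuit(x, D, n_atoms=2)
```

Dictionary learning, the monograph's focus, additionally optimizes the atoms themselves so that such codes stay sparse across a whole dataset.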
An optimally concentrated Gabor transform for localized time-frequency components
Gabor analysis is one of the most common instances of time-frequency signal
analysis. Choosing a suitable window for the Gabor transform of a signal is
often a challenge for practical applications, in particular in audio signal
processing. Many time-frequency (TF) patterns of different shapes may be
present in a signal, and they cannot all be sparsely represented in the same
spectrogram. We propose several algorithms, which provide optimal windows for a
user-selected TF pattern with respect to different concentration criteria. We
base our optimization algorithm on ℓ^p-norms as a measure of TF spreading. For
a given number of sampling points in the TF plane we also propose optimal
lattices to be used with the obtained windows. We illustrate the potentiality
of the method on selected numerical examples.
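The concentration criterion can be illustrated with a small sketch: with the ℓ² energy normalized, a smaller ℓ^p value for 0 < p < 2 indicates that the energy sits in fewer time-frequency coefficients. The exact criteria in the paper may differ; the magnitude lists below are toy stand-ins for spectrogram coefficients:

```python
# l^p quasi-norm as a sparsity / TF-concentration measure (0 < p < 2):
# smaller value => energy concentrated in fewer coefficients.

def lp_measure(coeffs, p=0.5):
    """l^p measure of an l^2-energy-normalized magnitude list."""
    energy = sum(c * c for c in coeffs) ** 0.5
    normalized = [abs(c) / energy for c in coeffs]
    return sum(c ** p for c in normalized) ** (1.0 / p)

# Two toy magnitude "spectrograms" with equal l^2 energy:
concentrated = [10.0, 0.0, 0.0, 0.0]  # all energy in one coefficient
spread = [5.0, 5.0, 5.0, 5.0]         # energy spread evenly
```

A window optimizer in this spirit would evaluate such a measure on the spectrogram produced by each candidate window and keep the window minimizing it for the selected TF pattern.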
Microstructure modeling and crystal plasticity parameter identification for predicting the cyclic mechanical behavior of polycrystalline metals
Computational homogenization makes it possible to capture the influence of the microstructure on the cyclic mechanical behavior of polycrystalline metals. In this work we investigate methods to compute Laguerre tessellations as computational cells of polycrystalline microstructures, propose a new method to assign crystallographic orientations to the Laguerre cells, and use Bayesian optimization to find suitable parameters for the underlying micromechanical model from macroscopic experiments.
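The parameter-identification step, tuning a micromechanical parameter so the simulated macroscopic response matches experiment, can be sketched as follows. The paper uses Bayesian optimization; this sketch substitutes plain random sampling for the surrogate-guided search, and the model, parameter, and measured value are invented stand-ins:

```python
import random

def simulated_stress(hardening_modulus):
    """Toy stand-in for a crystal-plasticity simulation's peak stress."""
    return 100.0 + 0.8 * hardening_modulus

MEASURED_PEAK_STRESS = 180.0  # hypothetical experimental value

def discrepancy(h):
    """Squared mismatch between simulation and experiment."""
    return (simulated_stress(h) - MEASURED_PEAK_STRESS) ** 2

def calibrate(n_trials=200, lo=0.0, hi=200.0, seed=1):
    """Minimize the discrepancy by random sampling of the parameter range."""
    rng = random.Random(seed)
    return min((rng.uniform(lo, hi) for _ in range(n_trials)),
               key=discrepancy)
```

Bayesian optimization replaces the blind sampling with a surrogate model of `discrepancy`, which matters when each evaluation is an expensive full-field simulation.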
emgr - The Empirical Gramian Framework
System Gramian matrices are a well-known encoding for properties of
input-output systems such as controllability, observability or minimality.
These so-called system Gramians were developed in linear system theory for
applications such as model order reduction of control systems. Empirical
Gramians are an extension of the system Gramians to parametric and nonlinear
systems as well as a data-driven method of computation. The empirical Gramian
framework - emgr - implements the empirical Gramians in a uniform and
configurable manner, with applications such as Gramian-based (nonlinear) model
reduction, decentralized control, sensitivity analysis, parameter
identification and combined state and parameter reduction.
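The data-driven flavor of empirical Gramians can be sketched on a toy linear system: the controllability Gramian is accumulated from simulated trajectories instead of being solved for algebraically. The system, integrator, and step sizes below are illustrative, and this is a sketch of the idea, not emgr's interface:

```python
# Empirical controllability Gramian for x' = A x + B u, estimated from
# simulated impulse responses: W_C ≈ sum over inputs of ∫ x(t) x(t)^T dt.

def simulate_impulse(A, b, dt=0.01, steps=500):
    """Euler-integrate the response to a unit impulse on one input column."""
    n = len(A)
    x = list(b)  # an impulse at t = 0 sets the state to the input column
    states = []
    for _ in range(steps):
        states.append(list(x))
        x = [x[i] + dt * sum(A[i][j] * x[j] for j in range(n))
             for i in range(n)]
    return states

def empirical_gramian(A, B, dt=0.01, steps=500):
    """Accumulate the Gramian from per-input trajectory snapshots."""
    n = len(A)
    W = [[0.0] * n for _ in range(n)]
    for col in range(len(B[0])):
        b = [B[i][col] for i in range(n)]
        for x in simulate_impulse(A, b, dt, steps):
            for i in range(n):
                for j in range(n):
                    W[i][j] += x[i] * x[j] * dt
    return W

# Stable 2x2 diagonal system with a single input; the exact Gramian entries
# are W11 = 1/2, W22 = 1/4, W12 = 1/3.
A = [[-1.0, 0.0], [0.0, -2.0]]
B = [[1.0], [1.0]]
W = empirical_gramian(A, B)
```

Because only trajectory data enters the sum, the same recipe applies when the simulator is nonlinear or parametric, which is the generalization the framework exploits.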
Euclidean Distance Matrices: Essential Theory, Algorithms and Applications
Euclidean distance matrices (EDMs) are matrices of squared distances between
points. The definition is deceptively simple; thanks to their many useful
properties they have found applications in psychometrics, crystallography,
machine learning, wireless sensor networks, acoustics, and more. Despite the
usefulness of EDMs, they seem to be insufficiently known in the signal
processing community. Our goal is to rectify this mishap in a concise tutorial.
We review the fundamental properties of EDMs, such as rank or
(non)definiteness. We show how various EDM properties can be used to design
algorithms for completing and denoising distance data. Along the way, we
demonstrate applications to microphone position calibration, ultrasound
tomography, room reconstruction from echoes and phase retrieval. By spelling
out the essential algorithms, we hope to fast-track the readers in applying
EDMs to their own problems. Matlab code for all the described algorithms, and
to generate the figures in the paper, is available online. Finally, we suggest
directions for further research.
Comment: 17 pages, 12 figures, to appear in IEEE Signal Processing Magazine; change of title in the last revision
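One of the fundamental properties the tutorial reviews, that an EDM built from points in R^d has rank at most d + 2 regardless of the number of points, can be checked directly. The points and the rank routine below are illustrative:

```python
# Build a Euclidean distance matrix (squared pairwise distances) and verify
# its low-rank property: for points in R^d, rank(EDM) <= d + 2.

def edm(points):
    """Matrix of squared pairwise distances."""
    return [[sum((a - b) ** 2 for a, b in zip(p, q)) for q in points]
            for p in points]

def matrix_rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for col in range(cols):
        pivot = max(range(rank, rows),
                    key=lambda r: abs(M[r][col]), default=None)
        if pivot is None or abs(M[pivot][col]) < tol:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rank + 1, rows):
            f = M[r][col] / M[rank][col]
            M[r] = [M[r][c] - f * M[rank][c] for c in range(cols)]
        rank += 1
    return rank

# Five points in the plane (d = 2): a 5x5 EDM whose rank is at most 4.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0), (4.0, 1.0)]
D = edm(pts)
```

This rank deficiency is exactly what the completion and denoising algorithms in the tutorial exploit: a noisy or partially observed distance matrix is projected toward the low-rank EDM structure.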
Application of General Semi-Infinite Programming to Lapidary Cutting Problems
We consider a volume maximization problem arising in the gemstone cutting industry. The problem is formulated as a general semi-infinite program (GSIP) and solved using an interior-point method developed by Stein. It is shown that the convexity assumption needed for the convergence of the algorithm can be satisfied by appropriate modelling. Clustering techniques are used to reduce the number of container constraints, which is necessary to make the subproblems practically tractable. An iterative process consisting of GSIP optimization and adaptive refinement steps is then employed to obtain an optimal solution that is also feasible for the original problem. Some numerical results based on real-world data are also presented.
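The alternation between optimizing a discretized problem and adaptively refining the (infinitely many) container constraints can be sketched on a toy analogue: fitting the largest disk into a unit square, with the containment constraint sampled at finitely many boundary angles. The container, the sampling, and the crude grid search are stand-ins for the real GSIP machinery:

```python
import math

# Toy analogue of the cutting problem: maximize the radius of a disk (the
# "gemstone") inside a fixed container (the unit square). The semi-infinite
# constraint "every boundary point of the disk lies in the container" is
# replaced by finitely many sampled boundary angles.

def feasible(cx, cy, r, n_angles):
    """Check containment at n_angles sampled points of the disk boundary."""
    for k in range(n_angles):
        t = 2 * math.pi * k / n_angles
        x, y = cx + r * math.cos(t), cy + r * math.sin(t)
        if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
            return False
    return True

def solve(n_angles, grid=11):
    """Grid search over center and radius under discretized constraints."""
    best = (0.0, 0.5, 0.5)  # (radius, cx, cy)
    for i in range(grid):
        for j in range(grid):
            cx, cy = i / (grid - 1), j / (grid - 1)
            r = 0.0
            while feasible(cx, cy, r + 0.01, n_angles):
                r += 0.01
            if r > best[0]:
                best = (r, cx, cy)
    return best

# A coarse discretization overestimates the radius (parts of the disk poke
# out between sample points); refinement drives the solution back toward
# feasibility for the original, semi-infinite problem.
coarse = solve(n_angles=3)
fine = solve(n_angles=64)
```

The coarse solution is infeasible for the true problem, mirroring why the paper interleaves GSIP optimization with adaptive refinement until the result is feasible for the original constraints; clustering plays the complementary role of keeping the number of sampled constraints tractable.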
- …