
    Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces

    Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search, as they achieve good predictive performance with little or no manual tuning, naturally handle discrete feature spaces, and are relatively insensitive to outliers in the training data. Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function. To address both points simultaneously, we propose using the kernel interpretation of tree ensembles as a Gaussian process prior to obtain model variance estimates, and we develop a compatible optimization formulation for the acquisition function. The latter further allows us to seamlessly integrate known constraints to improve sampling efficiency by considering domain knowledge in engineering settings and modeling search-space symmetries, e.g., hierarchical relationships in neural architecture search. Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints. (Comment: 27 pages, 9 figures, 4 tables)
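As a rough illustration of the kernel interpretation mentioned above, the sketch below (not the authors' implementation) builds the standard tree-agreement kernel — K(x, x′) is the fraction of trees that route x and x′ to the same leaf — from a scikit-learn random forest and plugs it into the usual Gaussian-process posterior-variance formula. The forest size, toy data, and noise level are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def tree_kernel(A, B):
    # K[i, j] = fraction of trees placing A[i] and B[j] in the same leaf
    la, lb = forest.apply(A), forest.apply(B)   # (n, n_trees) leaf indices
    return (la[:, None, :] == lb[None, :, :]).mean(axis=2)

def gp_posterior_var(X_train, X_test, noise=1e-3):
    # Standard GP posterior variance with the tree kernel as covariance
    K = tree_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = tree_kernel(X_test, X_train)
    Kss = tree_kernel(X_test, X_test)
    v = np.linalg.solve(K, Ks.T)
    return np.diag(Kss - Ks @ v)

X_new = rng.uniform(-2, 2, size=(5, 2))
var = gp_posterior_var(X, X_new)
```

Because the kernel is piece-wise constant in x, the resulting acquisition function is as well, which is exactly why the paper pairs these variance estimates with a dedicated optimization formulation.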

    Multi-Class Image Segmentation via Convex and Biconvex Optimization

    This thesis is divided into two parts. Both deal with multi-class image segmentation and utilize non-smooth optimization algorithms. The topic of the first part, namely unsupervised segmentation, is the application of clustering to image pixels. We therefore start with an introduction of the biconvex center-based clustering algorithms c-means and fuzzy c-means, where c denotes the number of classes. We show that fuzzy c-means can be seen as an approximation of c-means in terms of power means. Since noise is omnipresent in our image data, these simple clustering models are not suitable for its segmentation. To this end, we introduce a general, finite-dimensional segmentation model that consists of a data term stemming from the aforementioned clustering models plus a continuous regularization term. We tackle this optimization model via an alternating minimization approach called regularized c-centers (RcC), where we fix the centers and optimize the segment memberships of the pixels, and vice versa. In this general setting, we prove convergence in the sense of set-valued algorithms using Zangwill's theory [172]. Further, we present a segmentation model with a total variation regularizer. While updating the cluster centers is straightforward for fixed segment memberships of the pixels, updating the segment memberships can be solved iteratively via non-smooth, convex optimization. Here we do not iterate a convex optimization algorithm until convergence; instead, to increase efficiency, we stop as soon as we have obtained a certain amount of decrease in the objective functional. This algorithm is a particular implementation of RcC, and the corresponding convergence theory applies. Moreover, we show the good performance of our method in various examples such as simulated 2D images of brain tissue and 3D volumes of two materials, namely a multi-filament composite superconductor and a carbon-fiber-reinforced silicon carbide ceramic. Here we exploit, in our adapted model, the property of the latter material that two of its components share no common boundary. The second part of the thesis is concerned with supervised segmentation. We leave the area of center-based models and investigate convex approaches related to graph p-Laplacians and reproducing kernel Hilbert spaces (RKHSs). We study the effect of different weights used to construct the graph. In practical experiments we show, on the one hand, image types that are better segmented by the p-Laplacian model and, on the other hand, images that are better segmented by the RKHS-based approach. This is due to the fact that the p-Laplacian approach provides smoother results, while the RKHS approach often provides more accurate and detailed segmentations. Finally, we propose a novel combination of both approaches to benefit from the advantages of both models and study its performance on challenging medical image data.
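The c-means/fuzzy c-means alternation discussed above can be sketched as the standard textbook scheme (a generic version, not the thesis code): alternate between updating fuzzy memberships from distances and recomputing centers as membership-weighted means. The 1-D "pixel intensity" data, fuzzifier m=2, and initial centers below are made up for illustration.

```python
import numpy as np

def fuzzy_c_means(X, init_centers, m=2.0, n_iter=50):
    """Standard fuzzy c-means: alternate membership and center updates."""
    centers = init_centers.copy()
    for _ in range(n_iter):
        # Distances of every point to every center (small eps avoids 0-division)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)), rows sum to 1
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # Center update: membership-weighted means with weights u_ik^m
        centers = (U**m).T @ X / (U**m).sum(axis=0)[:, None]
    return centers, U

# Two well-separated intensity clusters around 0.1 and 0.9
rng = np.random.default_rng(1)
X = np.concatenate([np.full((20, 1), 0.1), np.full((20, 1), 0.9)])
X = X + 0.01 * rng.standard_normal(X.shape)

centers, U = fuzzy_c_means(X, init_centers=np.array([[0.0], [1.0]]))
```

Setting m → 1 recovers hard c-means assignments, which is the power-mean connection the thesis makes precise.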

    Energieoptimale Steuerung von Industrierobotern (Energy-Optimal Control of Industrial Robots)



    Supervised and Transductive Multi-Class Segmentation Using p-Laplacians and RKHS methods

    This paper considers supervised multi-class image segmentation: from a labeled set of pixels in one image, we learn the segmentation and apply it to the rest of the image or to other similar images. We study approaches with p-Laplacians, (vector-valued) Reproducing Kernel Hilbert Spaces (RKHSs) and combinations of both. In all approaches we construct segment membership vectors. In the p-Laplacian model the segment membership vectors have to fulfill a certain probability simplex constraint. Interestingly, we prove that for p=2 this is not really a constraint, as it is fulfilled automatically. While the 2-Laplacian model gives a good general segmentation, the 1-Laplacian tends to neglect smaller segments. The RKHS approach has the benefit of fast computation. This direction is motivated by image colorization, where a given dab of color is extended to a nearby region of similar features or to another image. The connection between colorization and multi-class segmentation is explored in this paper with an application to medical image segmentation. We further consider an improvement using a combined method. Each model is carefully validated with numerical experiments, followed by medical image segmentation at the end.
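A minimal sketch of the p=2 case on a toy graph (assumed weights, not the paper's image pipeline): fix one-hot membership vectors at the labeled nodes and compute the harmonic extension, i.e., minimize the quadratic Laplacian energy over the unlabeled nodes by solving one linear system per class.

```python
import numpy as np

# Tiny weighted graph: chain 0-1-2-3-4-5 with a weak link in the middle;
# nodes 0 and 5 are labeled with classes 0 and 1.
n = 6
W = np.zeros((n, n))
for i, j, w in [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 0.1), (3, 4, 1.0), (4, 5, 1.0)]:
    W[i, j] = W[j, i] = w
L = np.diag(W.sum(axis=1)) - W          # graph 2-Laplacian

labeled = np.array([0, 5])
fixed = np.array([[1.0, 0.0], [0.0, 1.0]])   # one-hot memberships
free = np.setdiff1d(np.arange(n), labeled)

# Minimize sum_ij w_ij ||u_i - u_j||^2 with labeled rows fixed:
# L[free,free] @ U_free = -L[free,labeled] @ U_labeled
U = np.zeros((n, 2))
U[labeled] = fixed
U[free] = np.linalg.solve(L[np.ix_(free, free)],
                          -L[np.ix_(free, labeled)] @ fixed)
pred = U.argmax(axis=1)
```

Note that the rows of U sum to one without ever enforcing the simplex constraint, consistent with the paper's observation that for p=2 the constraint is fulfilled automatically.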

    Homogeneous Penalizers and Constraints in Convex Image Restoration

    Recently, convex optimization models were successfully applied to various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form argmin{Φ(x) subject to Ψ(x) ≤ τ} and their penalized counterparts argmin{Φ(x) + λΨ(x)}. We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting Ψ := ‖L·‖ with L ∈ ℝ^{m,n} and Φ := φ(H·), where H ∈ ℝ^{n,n} and φ: ℝ^n → ℝ ∪ {+∞} meet certain requirements that are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if τ and λ lie in the graph of this function. We illustrate the relation between τ and λ for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks for images corrupted by Poisson noise, with the I-divergence as data-fitting term φ, and in inpainting models with a constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
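The constrained/penalized correspondence can be seen in the simplest instance Φ(x) = ½‖x − b‖², Ψ = ‖·‖₁, where the penalized problem is solved in closed form by soft-thresholding. The sketch below (illustrative data and a bisection search, not the paper's method) solves the penalized problem for one λ, reads off τ = Ψ(x) at the solution, and recovers the same minimizer from the constrained problem with that budget.

```python
import numpy as np

b = np.array([3.0, -1.5, 0.5, 2.0])

def solve_penalized(lam):
    # argmin 0.5*||x - b||^2 + lam*||x||_1  ->  soft-thresholding of b
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

lam = 1.0
x_pen = solve_penalized(lam)
tau = np.abs(x_pen).sum()            # Psi at the penalized solution

# Constrained problem argmin 0.5*||x - b||^2 s.t. ||x||_1 <= tau:
# bisect on the multiplier so the penalized solution meets the budget tau
lo, hi = 0.0, np.abs(b).max()
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if np.abs(solve_penalized(mid)).sum() > tau:
        lo = mid
    else:
        hi = mid
x_con = solve_penalized(hi)          # coincides with x_pen
```

This is exactly the bijective τ ↔ λ map from the abstract, traced numerically: sweeping λ sweeps τ monotonically through an interval, and matching the two parameters along that graph makes the two solution sets coincide.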


    Homogeneous Penalizers and Constraints in Convex Image Restoration

    Recently, convex optimization models were successfully applied to various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form min{Φ(x) subject to Ψ(x) ≤ τ} and their unconstrained, penalized counterparts min{Φ(x) + λΨ(x)}. We start with general considerations of the topic and provide a novel proof which ensures that a solution of the constrained problem with given τ is also a solution of the penalized problem for a certain λ. Then we deal with the special setting that Ψ is a semi-norm and Φ = φ(Hx), where H is a linear, not necessarily invertible operator and φ is essentially smooth and strictly convex. In this case we can prove via the dual problems that there exists a bijective function which maps τ from a certain interval to λ such that the solutions of the constrained problem coincide with those of the penalized problem if and only if τ and λ lie in the graph of this function. We illustrate the relation between τ and λ by various problems arising in image processing. In particular, we demonstrate the performance of the constrained model in restoration tasks for images corrupted by Poisson noise and in inpainting models with a constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
