Robust mixtures in the presence of measurement errors
We develop a mixture-based approach to robust density modeling and outlier
detection for experimental multivariate data that includes measurement error
information. Our model is designed to infer atypical measurements that are not
due to errors, aiming to retrieve potentially interesting peculiar objects.
Since exact inference is not possible in this model, we develop a
tree-structured variational EM solution. This compares favorably against a
fully factorial approximation scheme, approaching the accuracy of a
Markov-Chain-EM, while maintaining computational simplicity. We demonstrate the
benefits of including measurement errors in the model, in terms of improved
outlier detection rates in varying measurement uncertainty conditions. We then
use this approach in detecting peculiar quasars from an astrophysical survey,
given photometric measurements with errors.

Comment: (Refereed) Proceedings of the 24th Annual International Conference on Machine Learning 2007 (ICML07), (Ed.) Z. Ghahramani. June 20-24, 2007, Oregon State University, Corvallis, OR, USA, pp. 847-854; Omnipress. ISBN 978-1-59593-793-3; 8 pages, 6 figures.
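The key modeling idea above — folding per-datum measurement-error information into a mixture model — can be illustrated with a minimal E-step sketch. This is a hypothetical 1-D example, not the paper's tree-structured variational EM: each observation carries its own error variance, which is added to the component variance so that noisy points receive softer assignments.

```python
import numpy as np

def responsibilities(x, s2, means, vars_, weights):
    """E-step for a 1-D Gaussian mixture where observation x[i] carries
    its own measurement-error variance s2[i].  The error variance is
    added to each component's variance, so highly uncertain points are
    assigned more softly.  (Illustrative sketch only; names and the
    1-D setting are assumptions, not the paper's model.)"""
    x = np.asarray(x)[:, None]                 # shape (N, 1)
    s2 = np.asarray(s2)[:, None]               # shape (N, 1)
    var = np.asarray(vars_)[None, :] + s2      # (N, K) effective variance
    log_p = (np.log(np.asarray(weights))[None, :]
             - 0.5 * np.log(2 * np.pi * var)
             - 0.5 * (x - np.asarray(means)[None, :]) ** 2 / var)
    log_p -= log_p.max(axis=1, keepdims=True)  # stabilize before exp
    p = np.exp(log_p)
    return p / p.sum(axis=1, keepdims=True)

# A precise point near component 0 and a noisy point near component 1:
r = responsibilities([0.1, 4.9], [0.01, 4.0],
                     means=[0.0, 5.0], vars_=[1.0, 1.0], weights=[0.5, 0.5])
```

The noisy second point is still assigned to the nearer component, but less confidently than the precise first point, which is exactly the behavior the error terms are meant to induce.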
Electro-spinning/netting: A strategy for the fabrication of three-dimensional polymer nano-fiber/nets.
Since 2006, rapid development has been achieved in a subject area called electro-spinning/netting (ESN), which comprises the conventional electrospinning process and a unique electro-netting process. Electro-netting overcomes the bottleneck problem of the electrospinning technique and provides a versatile method for generating spider-web-like nano-nets with ultrafine fiber diameters of less than 20 nm. Nano-nets, supported by the conventional electrospun nanofibers in the nano-fiber/nets (NFN) membranes, exhibit numerous attractive characteristics such as extremely small diameter, high porosity, and Steiner tree network geometry, which make NFN membranes optimal candidates for many significant applications. The progress made during the last few years in the field of ESN is highlighted in this review, with particular emphasis on results obtained in the author's research units. After a brief description of the development of the electrospinning and ESN techniques, several fundamental properties of NFN nanomaterials are addressed. Subsequently, the polymers used and the state-of-the-art strategies for the controllable fabrication of NFN membranes are highlighted in terms of the ESN process. Additionally, we highlight some potential applications associated with the remarkable features of the NFN nanostructure. Our discussion is concluded with some personal perspectives on the future directions in which this wonderful technique could be pursued.
MOEA/D with Adaptive Weight Adjustment
Recently, MOEA/D (multi-objective evolutionary algorithm based on decomposition) has achieved great success in the field of evolutionary multi-objective optimization and has attracted a lot of attention. It decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems using uniformly distributed aggregation weight vectors and provides an excellent general algorithmic framework for evolutionary multi-objective optimization. Generally, the uniformity of weight vectors in MOEA/D can ensure the diversity of the Pareto optimal solutions; however, it does not work as well when the target MOP has a complex Pareto front (PF; i.e., a discontinuous PF or a PF with a sharp peak or low tail). To remedy this, we propose an improved MOEA/D with adaptive weight vector adjustment (MOEA/D-AWA). Based on an analysis of the geometric relationship between the weight vectors and the optimal solutions under the Chebyshev decomposition scheme, a new weight vector initialization method and an adaptive weight vector adjustment strategy are introduced in MOEA/D-AWA. The weights are adjusted periodically so that the weights of subproblems can be redistributed adaptively to obtain better uniformity of solutions. Meanwhile, computing effort devoted to subproblems with duplicate optimal solutions can be saved. Moreover, an external elite population is introduced to help add new subproblems into real sparse regions rather than pseudo sparse regions of the complex PF, that is, discontinuous regions of the PF. MOEA/D-AWA has been compared with four state-of-the-art MOEAs, namely the original MOEA/D, Adaptive-MOEA/D, [Formula: see text]-MOEA/D, and NSGA-II on 10 widely used test problems, two newly constructed complex problems, and two many-objective problems. Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.
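The Chebyshev (Tchebycheff) decomposition scheme mentioned in the abstract scalarizes a multi-objective point against a weight vector and an ideal point; minimizing the scalarized value over candidate solutions solves one subproblem. A minimal sketch of the standard scheme (not MOEA/D-AWA itself):

```python
def tchebycheff(f, weight, z_star):
    """Chebyshev scalarization used by decomposition-based MOEAs:
    objective vector f is scored against weight vector `weight` and
    ideal point z_star as max_i weight[i] * |f[i] - z_star[i]|.
    (Minimal sketch of the standard scheme, not the paper's method.)"""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, z_star))

# Two candidate objective vectors scored on the same subproblem:
g1 = tchebycheff([1.0, 3.0], weight=[0.5, 0.5], z_star=[0.0, 0.0])  # 1.5
g2 = tchebycheff([2.0, 2.0], weight=[0.5, 0.5], z_star=[0.0, 0.0])  # 1.0
```

Under equal weights, the balanced vector `[2, 2]` scores better than the lopsided `[1, 3]`, which is why the geometry of the weight vectors controls where solutions land on the Pareto front.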
Continuous Ultrasound Speckle Tracking with Gaussian Mixtures
Speckle tracking echocardiography (STE) is now widely used for measuring strain, deformations, and motion in cardiology. STE involves three successive steps: acquisition of individual frames, speckle detection, and image registration using speckles as landmarks. This work proposes to avoid explicit detection and registration by representing dynamic ultrasound images as sparse collections of moving Gaussian elements in the continuous joint space-time domain. Individual speckles or local clusters of speckles are approximated by a single multivariate Gaussian kernel with an associated linear trajectory over a short time span. A hierarchical tree-structured model is fitted to sampled input data such that predicted image estimates can be retrieved by regression after reconstruction, allowing a (bias-variance) trade-off between model complexity and image resolution. The inverse image reconstruction problem is solved with an online Bayesian statistical estimation algorithm. Experiments on clinical data could estimate subtle sub-pixel motion that is difficult to capture with frame-to-frame elastic image registration techniques.
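The core representation described above — Gaussian elements whose centers follow linear trajectories, with image intensities recovered by regression — can be sketched in one dimension. This is an illustrative reduction (names, the 1-D setting, and the isotropic kernels are assumptions, not the paper's multivariate model):

```python
import numpy as np

def image_estimate(x, t, mu0, vel, sigma2, amp):
    """Regress an image intensity at spatial position x and time t from
    K Gaussian elements whose centers move linearly over the short time
    span: mu_k(t) = mu0[k] + vel[k] * t.  A 1-D sketch of the
    moving-Gaussian representation; not the paper's estimator."""
    mu_t = np.asarray(mu0) + np.asarray(vel) * t   # centers at time t
    return float(np.sum(np.asarray(amp) *
                        np.exp(-0.5 * (x - mu_t) ** 2 / np.asarray(sigma2))))

# A single speckle of amplitude 1 at position 2.0 drifting at 0.5 per frame:
v0 = image_estimate(x=2.0, t=0.0, mu0=[2.0], vel=[0.5], sigma2=[0.25], amp=[1.0])
v1 = image_estimate(x=2.5, t=1.0, mu0=[2.0], vel=[0.5], sigma2=[0.25], amp=[1.0])
```

Because the trajectory parameters live in continuous space-time, motion estimates fall out of the fitted velocities directly, with no per-frame detection or registration step.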
Meta-heuristic combining prior online and offline information for the quadratic assignment problem
The construction of promising solutions for NP-hard combinatorial optimization problems (COPs) in meta-heuristics is usually based on three types of information, namely a priori information, a posteriori information learned from visited solutions during the search procedure, and online information collected in the solution construction process. Prior information reflects our domain knowledge about the COPs. Extensive domain knowledge can surely make the search effective, yet it is not always available. Posterior information can guide the meta-heuristics to globally explore promising search areas, but it lacks local guidance capability. In contrast, online information can capture local structures, and its application can help exploit the search space. In this paper, we studied the effects of using this information on the algorithmic performance of meta-heuristics for the COPs. The study was illustrated by a set of heuristic algorithms developed for the quadratic assignment problem. We first proposed an improved scheme to extract online local information, then developed a unified framework under which all types of information can be combined readily. Finally, we studied the benefits of the three types of information to meta-heuristics. Conclusions were drawn from the comprehensive study, which can be used as principles to guide the design of effective meta-heuristics in the future.
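For readers unfamiliar with the benchmark problem, the quadratic assignment objective that the heuristics above construct solutions for can be stated compactly: facility i is placed at location perm[i], and the cost couples every pair of facilities through their flow and the distance between their assigned locations. A brute-force baseline on a toy instance (illustrative only, not the paper's meta-heuristic):

```python
import itertools

def qap_cost(flow, dist, perm):
    """Quadratic assignment objective: facility i goes to location
    perm[i]; cost = sum over i, j of flow[i][j] * dist[perm[i]][perm[j]].
    (Brute-force baseline for illustration; the paper's algorithms
    construct permutations heuristically instead of enumerating.)"""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

# Toy 3-facility instance: heavy flow between facilities 0 and 1.
flow = [[0, 2, 0], [2, 0, 1], [0, 1, 0]]
dist = [[0, 1, 3], [1, 0, 2], [3, 2, 0]]
best = min(itertools.permutations(range(3)),
           key=lambda p: qap_cost(flow, dist, p))
```

Enumeration is only feasible for tiny n; the pairwise coupling in the objective is precisely what makes QAP NP-hard and motivates the online local information (e.g. incremental cost changes under swaps) that meta-heuristics exploit.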
Estimates on compressed neural networks regression
When the neural element number n of neural networks is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). In order to overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A which does not need to satisfy the condition of the Restricted Isometry Property (RIP). By applying probability inequalities and approximation properties of feedforward neural networks (FNNs), we prove that solving the FNNs regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error, where covering number theory is used to estimate the excess error, and an upper bound of the excess error is given.
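The compressed-domain idea above — project the design with a matrix A and fit in the lower-dimensional space, trading a controlled approximation error for fewer parameters — can be sketched with plain least squares in place of an FNN. A random Gaussian A is an illustrative choice here (the abstract's point is that A need not satisfy RIP); all names and dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: m samples of a linear target in d features.
m, d, k = 50, 20, 5
X = rng.standard_normal((m, d))
w_true = rng.standard_normal(d)
y = X @ w_true

# Compressed projection A (k x d): fit on X A^T instead of X, so the
# model has k parameters rather than d.  (Linear least squares stands
# in for the FNN regression; the compression step is the point.)
A = rng.standard_normal((k, d)) / np.sqrt(k)
Z = X @ A.T                                  # m x k compressed design
w_c, *_ = np.linalg.lstsq(Z, y, rcond=None)  # solve in compressed domain
pred = Z @ w_c
```

The compressed fit cannot match the full d-parameter model exactly — that gap is the (controlled) approximation error the abstract refers to — but with k < m the overparameterization problem disappears.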
