Latent class analysis for segmenting preferences of investment bonds
Market segmentation is a key component of conjoint analysis, which addresses consumer
preference heterogeneity. Members of a segment are assumed to be homogeneous in their
views and preferences when valuing an item, but distinctly heterogeneous from members of other
segments. Latent class methodology is one of several conjoint segmentation procedures
that overcome the limitations of aggregate analysis and a-priori segmentation. The main
benefit of latent class models is that market segment membership and the regression parameters
of each derived segment are estimated simultaneously. The latent class model presented in
this paper uses mixtures of multivariate conditional normal distributions to analyze rating
data, where the likelihood is maximized using the EM algorithm. The application focuses on
customer preferences for investment bonds described by four attributes: currency, coupon
rate, redemption term and price. A number of demographic variables are used to generate
segments that are accessible and actionable.
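The core estimation step, fitting mixture components and segment memberships simultaneously with EM, can be sketched for a toy univariate Gaussian mixture. This is a deliberate simplification of the paper's multivariate conditional normal mixtures; the data and all parameter choices below are hypothetical.

```python
import math
import random

def em_gmm(data, n_iter=50):
    """Fit a two-component univariate Gaussian mixture with EM.

    Returns (weights, means, variances): a toy stand-in for the latent
    class mixture model described in the abstract.
    """
    # Crude initialization: split the sorted data in half.
    xs = sorted(data)
    mid = len(xs) // 2
    mu = [sum(xs[:mid]) / mid, sum(xs[mid:]) / (len(xs) - mid)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E-step: posterior membership probabilities (segment responsibilities).
        resp = []
        for x in data:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and variances per segment.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
w, mu, var = em_gmm(data)
```

Each E-step row plays the role of a respondent's probabilistic segment membership; the M-step re-fits each segment's parameters, so membership and parameters are indeed estimated simultaneously.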
On the use of biased-randomized algorithms for solving non-smooth optimization problems
Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, it is frequent that some deadlines cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., they can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost will be nonlinear and even discontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
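The biased-randomization pattern can be sketched on a toy knapsack instance: a greedy constructive heuristic sorts candidates by value/weight ratio, but the next pick is drawn from a geometric distribution over that sorted list, so near-greedy alternatives are explored across restarts. The instance, the beta parameter, and all function names below are illustrative, not taken from the paper.

```python
import math
import random

def biased_greedy_knapsack(items, capacity, beta=0.3, rng=random):
    """One biased-randomized construction: candidates are sorted greedily
    (by value/weight), but the next pick follows a geometric distribution,
    so good-but-not-best candidates are sometimes chosen."""
    cand = sorted(items, key=lambda it: it[0] / it[1], reverse=True)
    chosen, load, value = [], 0, 0
    while cand:
        # Geometric position: 0 with prob beta, 1 with prob beta*(1-beta), ...
        pos = min(int(math.log(rng.random()) / math.log(1 - beta)), len(cand) - 1)
        item = cand.pop(pos)
        if load + item[1] <= capacity:
            chosen.append(item)
            load += item[1]
            value += item[0]
    return value, chosen

def multistart(items, capacity, n_runs=200, seed=1):
    """Run many fast biased constructions and keep the best solution;
    each run is independent, which is what makes the scheme parallelizable."""
    rng = random.Random(seed)
    best = (0, [])
    for _ in range(n_runs):
        sol = biased_greedy_knapsack(items, capacity, rng=rng)
        if sol[0] > best[0]:
            best = sol
    return best

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight)
best_value, best_items = multistart(items, capacity=50)
```

On this instance the deterministic greedy stops at value 160, while the biased restarts can also reach the combinations it skips; note that no gradient of the objective is ever needed, which is why the approach tolerates non-smooth penalty costs.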
Possibilistic and fuzzy clustering methods for robust analysis of non-precise data
This work focuses on robust clustering of data affected by imprecision. The imprecision is managed in terms of fuzzy sets. The clustering process is based on the fuzzy and possibilistic approaches. In both approaches the observations are assigned to the clusters by means of membership degrees. In fuzzy clustering the membership degrees express the degrees of sharing of the observations among the clusters. In contrast, in possibilistic clustering the membership degrees are degrees of typicality. These two sources of information are complementary: the former helps to discover the best fuzzy partition of the observations, while the latter reflects how well the observations are described by the centroids and is therefore helpful for identifying outliers. First, a fully possibilistic k-means clustering procedure is suggested. Then, in order to exploit the benefits of both approaches, a joint possibilistic and fuzzy clustering method for fuzzy data is proposed. A selection procedure for choosing the parameters of the new clustering method is introduced. The effectiveness of the proposal is investigated by means of simulated and real-life data.
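The fuzzy membership degrees the abstract refers to can be illustrated with a plain fuzzy c-means sketch on crisp 1-D data. The paper's methods additionally handle fuzzy data and possibilistic typicality degrees, which are not reproduced here; the data, fuzzifier m, and initialization below are hypothetical.

```python
def fuzzy_c_means(data, c=2, m=2.0, n_iter=50):
    """Plain fuzzy c-means on 1-D data. Each row of u sums to 1 and
    expresses the degree of sharing of an observation among the clusters
    (the 'fuzzy' side of the abstract)."""
    # Deterministic toy initialization: spread centroids over the data range.
    lo, hi = min(data), max(data)
    centers = [lo + (k + 0.5) * (hi - lo) / c for k in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(n_iter):
        # Update memberships from distances to the current centroids.
        for i, x in enumerate(data):
            d = [abs(x - v) + 1e-12 for v in centers]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
        # Update centroids as membership-weighted means.
        for k in range(c):
            den = sum(u[i][k] ** m for i in range(len(data)))
            centers[k] = sum((u[i][k] ** m) * x
                             for i, x in enumerate(data)) / den
    return centers, u

data = [0.1, 0.2, 0.3, 5.0, 5.1, 5.2]
centers, u = fuzzy_c_means(data)
```

Because the memberships are constrained to sum to 1 across clusters, even an outlier far from every centroid would receive large memberships; possibilistic typicalities drop that constraint, which is exactly why the abstract uses them to flag outliers.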
Techniques for clustering gene expression data
Many clustering techniques have been proposed for the analysis of gene expression data obtained from microarray experiments. However, choosing a suitable method for a given experimental dataset is not straightforward: common approaches do not translate well and fail to take account of the data profile. This review surveys state-of-the-art applications that recognise these limitations and implement procedures to overcome them, and it provides a framework for the evaluation of clustering in gene expression analyses. The nature of microarray data is discussed briefly, and selected examples are presented for the clustering methods considered.
When the optimal is not the best: parameter estimation in complex biological models
Background: The vast computational resources that became available during the
past decade enabled the development and simulation of increasingly complex
mathematical models of cancer growth. These models typically involve many free
parameters whose determination is a substantial obstacle to model development.
Direct measurement of biochemical parameters in vivo is often difficult and
sometimes impracticable, while fitting them under data-poor conditions may
result in biologically implausible values.
Results: We discuss different methodological approaches to estimate
parameters in complex biological models. We make use of the high computational
power of the Blue Gene technology to perform an extensive study of the
parameter space in a model of avascular tumor growth. We explicitly show that
the landscape of the cost function used to optimize the model to the data has a
very rugged surface in parameter space. This cost function has many local
minima with unrealistic solutions, including the global minimum corresponding
to the best fit.
Conclusions: The case studied in this paper shows one example in which model
parameters that optimally fit the data are not necessarily the best ones from a
biological point of view. To avoid force-fitting a model to a dataset, we
propose that the best model parameters should be found by choosing, among
suboptimal parameters, those that match criteria other than the ones used to
fit the model. We also conclude that the model, data and optimization approach
form a new complex system, and point to the need for a theory that addresses
this problem more generally.
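The ruggedness argument can be reproduced on a toy cost function: a multi-start local search launched from different initial guesses lands in different local minima, and selecting the global minimum alone says nothing about whether its parameters are biologically plausible. The function, step size, and grid of starting points below are illustrative, not the paper's tumor-growth model.

```python
import math

def cost(x):
    """A toy rugged cost landscape: a smooth bowl plus oscillations,
    giving many local minima (a stand-in for the paper's fitting surface)."""
    return (x - 2.0) ** 2 + math.sin(10.0 * x)

def local_minimize(x, lr=0.01, n_iter=2000):
    """Plain gradient descent with a central-difference derivative;
    it converges to whichever local minimum's basin contains the start."""
    h = 1e-6
    for _ in range(n_iter):
        grad = (cost(x + h) - cost(x - h)) / (2 * h)
        x -= lr * grad
    return x

# Multi-start search: different starting points land in different minima,
# mimicking the rugged parameter-space landscape described in the Results.
starts = [i * 0.5 for i in range(-4, 13)]  # grid of starts in [-2.0, 6.0]
minima = sorted({round(local_minimize(s), 3) for s in starts})
best = min(minima, key=cost)
```

In the spirit of the Conclusions, a modeler would inspect the whole set `minima`, not just `best`: several suboptimal minima may fit almost as well while satisfying external criteria the cost function does not encode.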