On Purchase Timing Models in Marketing
In this paper we consider stochastic purchase timing models used in marketing for low-involvement products and show that important characteristics of those models are easy to compute. The calculations rest on an elementary probabilistic argument and cover not only the well-known condensed negative binomial model but also stochastic purchase timing models with other interarrival and mixing distributions.
Keywords: marketing; purchase timing model
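The negative binomial model mentioned above arises as a Gamma mixture of Poisson purchase counts. A minimal simulation sketch, with hypothetical parameter values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: each household's purchase rate is Gamma
# distributed, and its purchase count is Poisson given that rate.
# The resulting mixture is the negative binomial distribution (NBD).
shape, scale, n_households = 2.0, 0.75, 100_000

rates = rng.gamma(shape, scale, size=n_households)
counts = rng.poisson(rates)

# The mixture mean is E[N] = shape * scale = 1.5.
mean_count = counts.mean()
```

This is not the paper's condensed NBD variant, only the standard Gamma-Poisson construction it builds on.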
Faint counts as a function of morphological type in a hierarchical merger model
The unprecedented resolution of the refurbished Wide Field and Planetary
Camera 2 (WFPC2) on the Hubble Space Telescope (HST) has led to major advances
in our understanding of galaxy formation. The high image quality in the Medium
Deep Survey and Hubble Deep Field has made it possible, for the first time, to
classify faint distant galaxies according to morphological type. These
observations have revealed a large population of galaxies classed as irregulars
or which show signs of recent merger activity. Their abundance rises steeply
with apparent magnitude, providing a likely explanation for the large number of
blue galaxies seen at faint magnitudes. We demonstrate that such a population
arises naturally in a model in which structure forms hierarchically and which
is dynamically dominated by cold dark matter. The number counts of irregular,
spiral and elliptical galaxies as a function of magnitude seen in the HST data
are well reproduced in this model. We present detailed predictions for the
outcome of spectroscopic follow-up observations of the HST surveys. By
measuring the redshift distributions of faint galaxies of different
morphological types, these programmes will provide a test of the hierarchical
galaxy formation paradigm and might distinguish between models with different
cosmological parameters.
Comment: 5 pages, 3 postscript figures included. To be published as a Letter in Monthly Notices of the RAS. Postscript version available at http://star-www.dur.ac.uk/~cmb/counts.htm
Constraining Omega using weak gravitational lensing by clusters
The morphology of galaxy clusters reflects the epoch at which they formed and
hence depends on the value of the mean cosmological density, Omega. Recent
studies have shown that the distribution of dark matter in clusters can be
mapped from analysis of the small distortions in the shapes of background
galaxies induced by weak gravitational lensing in the cluster potential. We
construct new statistics to quantify the morphology of clusters which are
insensitive to limitations in the mass reconstruction procedure. By simulating
weak gravitational lensing in artificial clusters grown in numerical
simulations of the formation of clusters in three different cosmologies, we
obtain distributions of a quadrupole statistic which measures global deviations
from spherical symmetry in a cluster. These distributions are very sensitive to
the value of Omega_0 and, as a result, lensing observations of a small number
of clusters should be sufficient to place broad constraints on Omega_{0} and
certainly to distinguish between the extreme values of 0.2 and 1.
Comment: Submitted to MNRAS. Compressed postscript also available at ftp://star-ftp.dur.ac.uk/pub/preprints/wcf2.ps.g
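One simple way to quantify a global deviation from spherical symmetry, in the spirit of the quadrupole statistic described above, is the normalized quadrupole moment of the projected mass map. This is a hypothetical form for illustration, not the paper's exact definition:

```python
import numpy as np

def quadrupole_statistic(kappa, x, y):
    """Normalized quadrupole magnitude of a surface-density map kappa.

    A hypothetical statistic (not the paper's exact one): moments are
    taken about the density centroid, and the result is 0 for a
    circularly symmetric distribution, positive for an elongated one.
    """
    w = kappa / kappa.sum()
    xc, yc = (w * x).sum(), (w * y).sum()      # density-weighted centroid
    dx, dy = x - xc, y - yc
    q11 = (w * (dx**2 - dy**2)).sum()          # quadrupole moments
    q12 = (w * 2.0 * dx * dy).sum()
    norm = (w * (dx**2 + dy**2)).sum()         # trace normalization
    return np.hypot(q11, q12) / norm
```

A round density map scores near zero; stretching it along one axis raises the statistic toward one.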
A deep cut ellipsoid algorithm for convex programming
This paper proposes a deep cut version of the ellipsoid algorithm for solving a general class of continuous convex programming problems. Constructing these deep cuts requires no more computational effort per step than the corresponding central cut version. Rules that prevent some of the numerical instabilities and theoretical drawbacks usually associated with the algorithm are also provided. Moreover, for a large class of convex programs a simple proof of its rate of convergence is given, and the relation with previously known results is discussed. Finally, some computational results of the deep and central cut versions of the algorithm applied to a min-max stochastic queue location problem are reported.
Keywords: location theory; convex programming; deep cut ellipsoid algorithm; min-max programming; rate of convergence
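A minimal sketch of the deep cut idea, under simplifying assumptions (unconstrained minimization, exact gradients, an initial ellipsoid known to contain a minimizer); this illustrates the generic update, not the paper's specific rules:

```python
import numpy as np

def deep_cut_ellipsoid(f, grad, x0, P0, iters=300):
    """Ellipsoid method with deep objective cuts for a convex f on R^n.

    The ellipsoid is {y : (y - x)' P^{-1} (y - x) <= 1}. A point y can
    only improve on the incumbent best_f if f(x) + g'(y - x) <= best_f,
    which gives a cut of depth alpha > 0 whenever f(x) > best_f;
    alpha = 0 recovers the central cut update.
    """
    n = len(x0)
    x, P = np.asarray(x0, float).copy(), np.asarray(P0, float).copy()
    best_x, best_f = x.copy(), f(x)
    for _ in range(iters):
        g = grad(x)
        gnorm = np.sqrt(g @ P @ g)
        if gnorm < 1e-12:
            break
        alpha = min((f(x) - best_f) / gnorm, 0.95)  # cut depth, clamped
        gt = P @ (g / gnorm)
        x = x - (1 + n * alpha) / (n + 1) * gt
        P = (n * n * (1 - alpha * alpha) / (n * n - 1)) * (
            P - (2 * (1 + n * alpha)) / ((n + 1) * (1 + alpha)) * np.outer(gt, gt)
        )
        P = 0.5 * (P + P.T)  # keep P symmetric against round-off
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Usage: minimize ||x - c||^2 over R^2 from a large starting ellipsoid.
c = np.array([1.0, 2.0])
x_star, f_star = deep_cut_ellipsoid(
    lambda x: float((x - c) @ (x - c)),
    lambda x: 2.0 * (x - c),
    np.zeros(2),
    25.0 * np.eye(2),
)
```

Note that the deep cut costs only the extra scalar alpha per step, which is the abstract's point about computational effort.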
General models in min-max planar location
This paper studies the problem of deciding whether the present iteration point of some algorithm applied to a planar single-facility min-max location problem, with distances measured by either an l_p-norm or a polyhedral gauge, is optimal or not. It turns out that this problem is equivalent to the decision problem of whether 0 belongs to the convex hull of either a finite number of points in the plane or a finite number of different l_q-circles. Although both membership problems are theoretically solvable in polynomial time, the second is harder to solve in practice than the first, and is solvable only in the weak sense, i.e., up to a predetermined accuracy. Unfortunately, these polynomial-time algorithms are not practical. Although this is a negative result, it is possible to construct an efficient and extremely simple linear-time algorithm to solve the first problem. Moreover, this paper describes an implementable procedure to reduce the second decision problem to the first with any desired precision. Finally, in the last section, some computational results for these algorithms are reported.
Keywords: optimality conditions; continuous location theory; computational geometry; convex hull; Newton-Raphson method
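The first decision problem, whether 0 lies in the convex hull of finitely many planar points, can be sketched simply. This is not the paper's linear-time algorithm: the version below sorts point directions (O(n log n)) and uses the fact that the origin is outside the hull exactly when all points fit in an open half-plane, i.e. when some angular gap exceeds pi:

```python
import math

def origin_in_hull(points, tol=1e-12):
    """Decide whether the origin lies in the convex hull of 2-D points.

    Sketch via angular gaps: the origin is outside the (closed) hull
    iff the largest gap between consecutive point directions exceeds pi.
    """
    if any(abs(x) <= tol and abs(y) <= tol for x, y in points):
        return True  # the origin itself is among the points
    angles = sorted(math.atan2(y, x) for x, y in points)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(angles[0] + 2 * math.pi - angles[-1])  # wrap-around gap
    return max(gaps) <= math.pi + tol
```

For example, the triangle (1, 0), (-1, 1), (-1, -1) contains the origin, while three points clustered in one quadrant do not.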
A new algorithm for generalized fractional programs
A new dual problem for convex generalized fractional programs with no duality gap is presented, and it is shown how this dual problem can be efficiently solved using a parametric approach. The resulting algorithm can be seen as "dual" to the Dinkelbach-type algorithm for generalized fractional programs, since it approximates the optimal objective value of the dual (primal) problem from below. Convergence results for this algorithm are derived, and an easy condition to achieve superlinear convergence is also established. Moreover, under some additional assumptions the algorithm also recovers an optimal solution of the primal problem at the same time. We also consider a variant of this new algorithm, based on scaling the "dual" parametric function. The numerical results, for quadratic-linear ratios and linear constraints, show that the performance of the new algorithm and its scaled version is superior to that of the Dinkelbach-type algorithms. The computational results also suggest that, contrary to the primal approach, the "dual" approach is less influenced by scaling.
Keywords: fractional programming; generalized fractional programming; Dinkelbach-type algorithms; quasiconvexity; Karush-Kuhn-Tucker conditions; duality
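For context, the classical Dinkelbach scheme that the new algorithm is "dual" to can be sketched for a single ratio; the paper's dual parametric algorithm itself is not reproduced here:

```python
def dinkelbach(f, g, argmin_parametric, lam=0.0, tol=1e-10, max_iter=100):
    """Classical Dinkelbach scheme for min f(x)/g(x) with g > 0.

    argmin_parametric(lam) must return a minimizer of the parametric
    problem min f(x) - lam*g(x), whose optimal value F(lam) is zero
    exactly when lam equals the optimal ratio.
    """
    x = argmin_parametric(lam)
    for _ in range(max_iter):
        F = f(x) - lam * g(x)   # optimal parametric value F(lam)
        if abs(F) < tol:        # F(lam) = 0  <=>  lam = min f/g
            break
        lam = f(x) / g(x)       # standard Dinkelbach update
        x = argmin_parametric(lam)
    return x, lam

# Usage: minimize (x^2 + 1)/x over [0.5, 4]; the parametric subproblem
# min x^2 + 1 - lam*x has the clamped closed-form minimizer x = lam/2.
sol, ratio = dinkelbach(
    lambda x: x * x + 1.0,
    lambda x: x,
    lambda lam: min(max(lam / 2.0, 0.5), 4.0),
)
```

The example ratio attains its minimum value 2 at x = 1; this scheme approximates the optimum from above, whereas the abstract's "dual" approach works from below.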
A note on a stochastic location problem
In this note we give a short and easy proof of the equivalence of Hakimi's one-median problem and the k-server-facility-loss median problem as discussed by Chiu and Larson in Computers and Operations Research. The proof uses only a stochastic monotonicity result for birth and death processes and the insensitivity of the M/G/k/k loss model.
Keywords: Hakimi median; stochastic location; stochastic monotonicity
Recursive Approximation of the High Dimensional max Function
An alternative smoothing method for the high dimensional max function has been studied. The proposed method is a recursive extension of the two dimensional smoothing functions. In order to analyze the proposed method, a theoretical framework related to smoothing methods is discussed. Moreover, we support our discussion by considering some application areas. This is followed by a comparison with an alternative well-known smoothing method.
Keywords: n dimensional max function; recursive approximation; smoothing methods; vertical linear complementarity (VLCP)
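The recursive construction can be sketched by folding a two-dimensional smoothing function over the argument list. The CHKS-type smoother below is one common choice of two-dimensional smoothing function, assumed here for illustration rather than taken from the paper:

```python
import math
from functools import reduce

def smooth_max2(x, y, mu=1e-6):
    """CHKS-type smoothing of max(x, y); exact in the limit mu -> 0."""
    return 0.5 * (x + y + math.sqrt((x - y) ** 2 + mu * mu))

def smooth_max(values, mu=1e-6):
    """Recursive n-dimensional extension: smooth_max(x1, ..., xn) is
    smooth_max2 applied left to right over the list, so the error grows
    by at most mu/2 per application."""
    return reduce(lambda acc, v: smooth_max2(acc, v, mu), values)
```

Unlike max itself, the composition is differentiable everywhere, which is what smoothing methods need.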