
    On the Generalized Poisson Distribution

    The Generalized Poisson Distribution (GPD) was introduced by Consul and Jain (1973). However, as remarked by Consul (1989), "It is very difficult to prove by direct summation that the sum of all the probabilities is unity". We give a shorter and more elegant proof based upon an application of Euler's classic difference lemma. Comment: 3 pages.
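
    As a purely numerical aside (not part of the paper), the GPD pmf in the parametrization usually attributed to Consul and Jain is P(X = x) = θ(θ + xλ)^(x−1) e^(−(θ+xλ))/x! for x = 0, 1, 2, …, with θ > 0 and 0 ≤ λ < 1; the short Python sketch below simply sums this pmf, in log space to avoid overflow, for a couple of arbitrary parameter choices and confirms numerically that the total is one.

    import math

    def gpd_log_pmf(x, theta, lam):
        """Log pmf of the Generalized Poisson Distribution (assumed Consul-Jain form):
        P(X = x) = theta * (theta + x*lam)**(x - 1) * exp(-(theta + x*lam)) / x!"""
        return (math.log(theta) + (x - 1) * math.log(theta + x * lam)
                - (theta + x * lam) - math.lgamma(x + 1))

    for theta, lam in [(2.0, 0.1), (1.5, 0.3)]:   # arbitrary illustrative parameters
        total = sum(math.exp(gpd_log_pmf(x, theta, lam)) for x in range(500))
        print(f"theta={theta}, lambda={lam}: sum over x < 500 = {total:.12f}")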

    Minimum L1-distance projection onto the boundary of a convex set: Simple characterization

    We show that the minimum distance projection in the L1-norm from an interior point onto the boundary of a convex set is achieved by a single, unidimensional projection. Application of this characterization when the convex set is a polyhedron leads to either an elementary minmax problem or a set of easily solved linear programs, depending upon whether the polyhedron is given as the intersection of a set of half spaces or as the convex hull of a set of extreme points. The outcome is an easier and more straightforward derivation of the special case results given in a recent paper by Briec. Comment: 5 pages.
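
    As an illustrative aside (my sketch, not the paper's code), one way to read the "elementary minmax problem" for a polyhedron given as {x : Ax ≤ b}: from an interior point x0, moving along a single coordinate direction e_j reaches the hyperplane of facet i after an L1 distance of (b_i − a_i·x0)/|a_ij|, so the boundary projection distance becomes min over i of (b_i − a_i·x0)/max_j |a_ij|. A minimal NumPy sketch on made-up data:

    import numpy as np

    def l1_boundary_projection(A, b, x0):
        """L1 distance from interior point x0 to the boundary of {x : A @ x <= b},
        using the single-coordinate characterization: for each facet i, the cheapest
        axis-parallel move has L1 length slack_i / max_j |A[i, j]|."""
        slack = b - A @ x0                       # strictly positive for an interior point
        assert np.all(slack > 0), "x0 must be strictly inside the polyhedron"
        per_facet = slack / np.max(np.abs(A), axis=1)
        i = int(np.argmin(per_facet))            # nearest facet
        j = int(np.argmax(np.abs(A[i])))         # cheapest coordinate for that facet
        x_proj = x0.copy()
        x_proj[j] += per_facet[i] * np.sign(A[i, j])   # step along e_j towards facet i
        return per_facet[i], x_proj

    # Arbitrary example: the box [0, 1]^2 written as A x <= b, projected from (0.3, 0.4).
    A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
    b = np.array([1.0, 0.0, 1.0, 0.0])
    print(l1_boundary_projection(A, b, np.array([0.3, 0.4])))   # distance 0.3, point (0.0, 0.4)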

    Pattern Reduction in Paper Cutting

    A large part of the paper industry involves supplying customers with reels of specified width in specified quantities. These 'customer reels' must be cut from a set of wider 'jumbo reels' in as economical a way as possible. The first priority is to minimize the waste, i.e. to satisfy the customer demands using as few jumbo reels as possible. This is an example of the one-dimensional cutting stock problem, which has an extensive literature. Greycon have developed cutting stock algorithms which they include in their software packages. Greycon's initial presentation to the Study Group posed several questions, which are listed below, along with (partial) answers arising from the work described in this report (a small illustrative code sketch follows the list).
    (1) Given a minimum-waste solution, what is the minimum number of patterns required? It is shown in Section 2 that even when all the patterns appearing in minimum-waste solutions are known, determining the minimum number of patterns may be hard. It seems unlikely that one can guarantee to find the minimum number of patterns for large classes of realistic problems with only a few seconds on a PC available.
    (2) Given an n → n-1 algorithm, will it find an optimal solution to the minimum-pattern problem? There are problems for which n → n-1 reductions are not possible although a more dramatic reduction is.
    (3) Is there an efficient n → n-1 algorithm? In light of Question 2, Question 3 should perhaps be rephrased as 'Is there an efficient algorithm to reduce n patterns?' However, if an algorithm were guaranteed to find some reduction whenever one existed, then it could be applied iteratively to minimize the number of patterns, and we have seen that this cannot be done easily.
    (4) Are there efficient 5 → 4 and 4 → 3 algorithms?
    (5) Is it worthwhile seeking alternatives to greedy heuristics? In response to Questions 4 and 5, we point to the algorithm described in the report, or variants of it. Such approaches seem capable of catching many higher reductions.
    (6) Is there a way to find solutions with the smallest possible number of singleton patterns? The Study Group did not investigate methods tailored specifically to this task, but the algorithm proposed here seems to do reasonably well. It will not increase the number of singleton patterns under any circumstances, and when the number of singletons is high there will be many possible moves that tend to eliminate them.
    (7) Can a solution be found which reduces the number of knife changes? The algorithm will help to reduce the number of necessary knife changes because it works by bringing patterns closer together, even if this does not proceed fully to a pattern reduction. If two patterns are equal across some of the customer widths, the knives for these reels need not be changed when moving from one pattern to the other.
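
    To make the terminology concrete, here is a small sketch on an invented instance (this is neither Greycon's algorithm nor the Study Group's proposal): it enumerates the feasible maximal cutting patterns for one jumbo width, then brute-forces, among solutions that meet demand with the fewest jumbo reels, one using the fewest distinct patterns. All widths, demands and limits are made up, and the brute force is only sensible at toy sizes.

    from itertools import combinations_with_replacement

    # Invented toy instance: one jumbo width, customer widths and demands.
    JUMBO  = 100
    WIDTHS = [45, 36, 31]   # customer reel widths
    DEMAND = [ 4,  3,  2]   # reels required of each width

    def enumerate_patterns(jumbo, widths):
        """All maximal cutting patterns: tuples giving how many reels of each width
        are cut from one jumbo, such that no further reel would still fit."""
        def rec(i, remaining, current):
            if i == len(widths):
                yield tuple(current)
                return
            for c in range(remaining // widths[i], -1, -1):
                yield from rec(i + 1, remaining - c * widths[i], current + [c])
        def is_maximal(p):
            used = sum(c * w for c, w in zip(p, widths))
            return all(used + w > jumbo for w in widths)
        return sorted(p for p in set(rec(0, jumbo, [])) if is_maximal(p))

    def fewest_patterns_among_min_waste(patterns, demand, max_jumbos=10):
        """Among solutions meeting demand with the fewest jumbos, return one
        using the fewest distinct patterns (pure brute force, toy sizes only)."""
        for n in range(1, max_jumbos + 1):           # n = number of jumbos used
            best = None
            for combo in combinations_with_replacement(range(len(patterns)), n):
                supply = [sum(patterns[i][k] for i in combo) for k in range(len(demand))]
                if all(s >= d for s, d in zip(supply, demand)):
                    distinct = len(set(combo))
                    if best is None or distinct < best[0]:
                        best = (distinct, [patterns[i] for i in combo])
            if best is not None:                     # minimum-waste level n reached
                return n, best
        return None

    pats = enumerate_patterns(JUMBO, WIDTHS)
    print("feasible maximal patterns:", pats)
    print(fewest_patterns_among_min_waste(pats, DEMAND))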

    High shear stress relates to intraplaque haemorrhage in asymptomatic carotid plaques

    Background and aims: Carotid artery plaques with vulnerable plaque components are related to a higher risk of cerebrovascular accidents. It is unknown which factors drive vulnerable plaque development. Shear stress, the frictional force of blood at the vessel wall, is known to influence plaque formation. We evaluated the association between shear stress and plaque components (intraplaque haemorrhage (IPH), lipid-rich necrotic core (LRNC) and/or calcifications) in relatively small carotid artery plaques in asymptomatic persons.
    Methods: Participants (n = 74) from the population-based Rotterdam Study, all with carotid atherosclerosis assessed on ultrasound, underwent carotid MRI. Multiple MRI sequences were used to evaluate the presence of IPH, LRNC and/or calcifications in plaques in the carotid arteries. Images were automatically segmented for lumen and outer wall to obtain a 3D reconstruction of the carotid bifurcation. These reconstructions were used to calculate minimum, mean and maximum shear stresses by applying computational fluid dynamics with subject-specific inflow conditions. Associations between shear stress measures and plaque composition were studied using generalized estimating equations analysis, adjusting for age, sex and carotid wall thickness.
    Results: The study group consisted of 93 atherosclerotic carotid arteries of 74 participants. In plaques with higher maximum shear stresses, IPH was more often present (OR per unit increase in log-transformed maximum shear stress = 12.14; p = 0.001). Higher maximum shear stress was also significantly associated with the presence of calcifications (OR = 4.28; p = 0.015).
    Conclusions: Higher maximum shear stress is associated with intraplaque haemorrhage and calcifications.
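
    For readers unfamiliar with this kind of analysis, the fragment below sketches, in statsmodels, a logistic GEE of the sort described: a binary plaque-component outcome, clustering by participant (both carotid arteries of a participant can contribute), and adjustment for age, sex and wall thickness. It is not the authors' code; the file name and all column names are hypothetical.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical data: one row per carotid artery, clustered by participant.
    df = pd.read_csv("carotid_plaques.csv")

    model = smf.gee(
        "iph ~ max_shear_log + age + sex + wall_thickness",   # hypothetical column names
        groups="participant_id",
        data=df,
        family=sm.families.Binomial(),               # binary outcome: IPH present or not
        cov_struct=sm.cov_struct.Exchangeable(),     # arteries within a participant are correlated
    )
    result = model.fit()
    print(result.summary())
    # Odds ratio per unit increase in log-transformed maximum shear stress:
    print(np.exp(result.params["max_shear_log"]))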

    Worst-case bounds for bin-packing heuristics with applications to the duality gap of the one-dimensional cutting stock problem

    The thesis considers the one-dimensional cutting stock problem, the bin-packing problem, and their relationship. The duality gap of the former is investigated and a characterisation of a class of cutting stock problems with the next round-up property is given. It is shown that worst-case bounds for bin-packing heuristics can be, and are best, expressed in terms of the linear programming relaxation of the corresponding cutting stock problem. The concept of recurrency is introduced for a bin-packing heuristic, which allows a more natural derivation of a measure for the worst-case behaviour. The ideas are tested on some well known bin-packing heuristics and (slightly) tighter bounds for these are derived. These new bounds (in terms of the linear programming relaxation) are then used to make inferences about the duality gap of the cutting stock problem. In particular, these bounds allow a priori, problem-specific bounds. The thesis ends with conclusions and a number of suggestions to extend the analysis to higher dimensional problems.
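
    The thesis itself is theoretical, but a small sketch may clarify what such bounds compare: a standard first-fit-decreasing heuristic (one of the well-known bin-packing heuristics of this kind) set against the trivial volume bound ⌈Σ sizes / capacity⌉, used here as a weaker stand-in for the linear programming relaxation bound discussed above (the LP value is itself at least Σ sizes / capacity). Item sizes and capacity are arbitrary.

    def first_fit_decreasing(sizes, capacity):
        """First-fit decreasing: place each item (largest first) into the first bin
        with enough remaining room, opening a new bin when none fits."""
        rooms = []       # remaining capacity of each open bin
        packing = []     # items assigned to each bin
        for s in sorted(sizes, reverse=True):
            for i, room in enumerate(rooms):
                if s <= room:
                    rooms[i] -= s
                    packing[i].append(s)
                    break
            else:
                rooms.append(capacity - s)
                packing.append([s])
        return packing

    # Arbitrary illustrative instance.
    sizes, capacity = [7, 6, 5, 5, 4, 3, 3, 2, 2, 1], 10
    packing = first_fit_decreasing(sizes, capacity)
    volume_lb = -(-sum(sizes) // capacity)        # ceil(total size / capacity)
    print("FFD bins used:", len(packing), "| volume lower bound:", volume_lb)
    print(packing)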