
    A heuristic approach for big bucket multi-level production planning problems

    Multi-level production planning problems in which multiple items compete for the same resources frequently occur in practice, yet remain dauntingly difficult to solve. In this paper, we propose a heuristic framework that can quickly generate high quality feasible solutions for various kinds of lot-sizing problems. In addition, unlike many other heuristics, it generates high quality lower bounds using strong formulations, and its simple scheme allows it to be easily implemented in the Xpress-Mosel modeling language. Extensive computational results from widely used test sets covering a variety of problems demonstrate the efficiency of the heuristic, particularly for challenging instances.
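    The abstract does not spell out the heuristic's mechanics, so the sketch below is only a minimal, hypothetical illustration of a generic relax-and-fix pass on a small single-level capacitated lot-sizing instance, written with the open-source PuLP/CBC stack rather than Xpress-Mosel; the data, window size, and model are invented and are not the paper's framework.

```python
# Hypothetical relax-and-fix sketch for a single-level capacitated lot-sizing
# problem. All data are made up; this is NOT the paper's heuristic, only an
# illustration of the general relax-and-fix idea for big-bucket models.
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, LpContinuous, PULP_CBC_CMD, value)

T = 8                                   # planning periods (big buckets)
demand   = [40, 60, 30, 80, 50, 70, 20, 60]
capacity = 120                          # per-period production capacity
setup_cost, hold_cost = 100.0, 1.0
window = 2                              # periods whose setups stay binary per pass

fixed_setups = {}                       # period -> 0/1 decided in earlier passes

def build_and_solve(binary_upto):
    """Setups are binary up to `binary_upto`, relaxed to [0,1] afterwards,
    and frozen to earlier decisions where already fixed."""
    prob = LpProblem("clsp", LpMinimize)
    x = [LpVariable(f"x{t}", lowBound=0) for t in range(T)]   # production
    s = [LpVariable(f"s{t}", lowBound=0) for t in range(T)]   # inventory
    y = [LpVariable(f"y{t}", lowBound=0, upBound=1,
                    cat=LpBinary if t < binary_upto else LpContinuous)
         for t in range(T)]                                   # setups
    for t, v in fixed_setups.items():                         # freeze past windows
        y[t].lowBound = y[t].upBound = v
    prob += lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
    for t in range(T):
        prev = s[t - 1] if t > 0 else 0
        prob += prev + x[t] - s[t] == demand[t]               # inventory balance
        prob += x[t] <= capacity * y[t]                       # setup forcing + capacity
    prob.solve(PULP_CBC_CMD(msg=0))
    return prob, y

# Roll the binary window forward, fixing each window's setups after solving.
for start in range(0, T, window):
    prob, y = build_and_solve(binary_upto=start + window)
    for t in range(start, min(start + window, T)):
        fixed_setups[t] = int(round(y[t].value()))

print("heuristic cost:", value(prob.objective))
print("setup pattern :", [fixed_setups[t] for t in range(T)])
```

    Each pass keeps the setup variables binary only inside a rolling window, relaxes the remaining ones, and freezes the window's decisions before moving on; this is how heuristics of this general family trade optimality for speed on otherwise intractable lot-sizing models.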

    Eco-hydrology of dynamic wetlands in an Australian agricultural landscape: a whole of system approach for understanding climate change impacts

    Increasing rates of water extraction and regulation of hydrologic processes, coupled with the destruction of natural vegetation, pollution and climate change, are jeopardizing the future persistence of wetlands and the ecological and socio-economic functions they support. Globally, it is estimated that 50% of wetlands have been lost since the 1900s, with agricultural change being the main cause. In some agricultural areas of Australia, losses as high as 98% have occurred. Wetlands remaining in agricultural landscapes suffer degradation, and their resilience and ability to continue functioning under the hydrologic and land use changes resulting from climate change may be significantly inhibited. However, information on floodplain wetlands is sparse, and knowledge of how ecological functioning and resilience may change under future land use intensification and climate change is lacking in many landscapes. These knowledge gaps pose significant problems for the future sustainable management of biodiversity and of the agricultural activities that rely on the important services supplied by wetland ecosystems. This research evaluates the impact that hydrology and land use have on the perennial vegetation associated with wetlands in an agricultural landscape, the Condamine Catchment of southeast Queensland, Australia. A geographical information system (GIS) was used to measure hydrological and land use variables, and a Bayesian model averaging approach was used to generate generalised linear models for vegetation response variables. Connectivity with the river and hydrological variability had consistently significant positive relationships with vegetation cover and abundance, while land use practices such as irrigated agriculture and grazing had consistently significant negative impacts. Consequently, to understand how climate change will impact the ecohydrological functioning of wetlands, both hydrological and land use changes need to be considered. Results from this research will now be used to investigate how resilient these systems will be under different potential climate change scenarios.
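    As a rough illustration of the modelling step described above, the sketch below runs a BIC-approximated Bayesian model averaging exercise over Gaussian GLMs using statsmodels; the predictor names, synthetic data, and Gaussian family are assumptions made for illustration and do not reproduce the study's variables, response definitions, or fitted models.

```python
# Hypothetical BIC-weighted Bayesian model averaging over candidate GLMs.
# Predictor names and data are invented stand-ins, not the study's covariates.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
predictors = {
    "river_connectivity": rng.normal(size=n),
    "flow_variability":   rng.normal(size=n),
    "irrigated_area":     rng.normal(size=n),
    "grazing_intensity":  rng.normal(size=n),
}
names = list(predictors)
Xfull = np.column_stack([predictors[k] for k in names])
# Synthetic vegetation-cover response (Gaussian GLM kept for simplicity).
y = (2.0 + 1.5 * predictors["river_connectivity"]
         - 1.0 * predictors["irrigated_area"]
         + rng.normal(scale=1.0, size=n))

results = []
# Fit every non-empty subset of predictors as a candidate GLM.
for k in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        X = sm.add_constant(Xfull[:, list(subset)])
        fit = sm.GLM(y, X, family=sm.families.Gaussian()).fit()
        results.append((subset, fit))

# BIC-approximated posterior model weights: w_m proportional to exp(-0.5 * delta_BIC_m).
bics = np.array([fit.bic for _, fit in results])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

# Posterior inclusion probability of each predictor = total weight of the
# candidate models that contain it.
for j, name in enumerate(names):
    pip = sum(wi for (subset, _), wi in zip(results, w) if j in subset)
    print(f"{name:20s} inclusion probability ~ {pip:.2f}")
```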

    Bounding the Probability of Error for High Precision Recognition

    We consider models for which it is important, early in processing, to estimate some variables with high precision, but perhaps at relatively low rates of recall. If some variables can be identified with near certainty, then they can be conditioned upon, allowing further inference to be done efficiently. Specifically, we consider optical character recognition (OCR) systems that can be bootstrapped by identifying a subset of correctly translated document words with very high precision. This "clean set" is subsequently used as document-specific training data. While many current OCR systems produce measures of confidence for the identity of each letter or word, thresholding these confidence values, even at very high values, still produces some errors. We introduce a novel technique for identifying a set of correct words with very high precision. Rather than estimating posterior probabilities, we bound the probability that any given word is incorrect under very general assumptions, using an approximate worst-case analysis. As a result, the parameters of the model are nearly irrelevant, and we are able to identify a subset of words, even in noisy documents, of which we are highly confident. On our set of 10 documents, we are able to identify about 6% of the words on average without making a single error. This ability to produce word lists with very high precision allows us to use a family of models which depends upon such clean word lists.
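    The paper's worst-case analysis is not reproduced here; the sketch below only illustrates the surrounding clean-set selection idea under the assumption that a per-word upper bound on the probability of error is already available. By linearity of expectation, the expected number of errors in a chosen set is at most the sum of those bounds, so words can be added greedily while that sum stays below a budget. All numbers are made up.

```python
# Hypothetical clean-set selection given per-word error-probability bounds.
# The bounds eps_i are invented; deriving them is the paper's contribution
# and is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
words = [f"word_{i}" for i in range(1000)]
eps = rng.beta(0.5, 50.0, size=len(words))   # per-word error-probability bounds

budget = 0.5                                  # tolerate < 0.5 expected errors in total
order = np.argsort(eps)                       # safest words first
cum = np.cumsum(eps[order])
keep = order[cum <= budget]                   # largest prefix within the budget

clean_set = [words[i] for i in keep]
print(f"selected {len(clean_set)} of {len(words)} words "
      f"with expected errors <= {eps[keep].sum():.3f}")
```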