
    On the probabilistic min spanning tree problem

    We study a probabilistic optimization model for min spanning tree, where every vertex v_i of the input graph G(V,E) has a presence probability p_i in the final instance G′ ⊆ G that will actually be optimized. Suppose that when this "real" instance G′ becomes known, a spanning tree T of G, called the anticipatory or a priori spanning tree, has already been computed, and that one can run a quick algorithm (quicker than recomputing from scratch), called a modification strategy, that adapts the anticipatory tree T to fit G′. The goal is to compute an anticipatory spanning tree of G such that its modification for any G′ ⊆ G is optimal for G′. This is what we call the probabilistic min spanning tree problem. In this paper we study the complexity and approximation of probabilistic min spanning tree in complete graphs under two distinct modification strategies, which lead to different complexity results for the problem. For the first of these strategies, we also study two natural subproblems of probabilistic min spanning tree, namely the probabilistic metric min spanning tree and the probabilistic min spanning tree 1,2, which deal with metric complete graphs and with complete graphs whose edge weights are either 1 or 2, respectively.
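    To make the a priori setting concrete, the sketch below shows one possible modification strategy, not necessarily either of the strategies analysed in the paper: keep the anticipatory tree's edges among the vertices that turn out to be present, then reconnect the surviving components with the cheapest available edges. It assumes networkx graphs with "weight" edge attributes; the names (modify_tree, present) are illustrative.

```python
import itertools
import networkx as nx

def modify_tree(G, T, present):
    """Hypothetical modification strategy (illustration only): keep the
    anticipatory tree's edges among present vertices, then reconnect the
    surviving components by repeatedly adding the cheapest cross-component
    edge of the (complete, weighted) graph G."""
    # Forest induced by the anticipatory tree T on the present vertices
    F = nx.Graph()
    F.add_nodes_from(present)
    F.add_weighted_edges_from(
        (u, v, G[u][v]["weight"])
        for u, v in T.edges() if u in present and v in present
    )
    comps = [set(c) for c in nx.connected_components(F)]
    while len(comps) > 1:
        # Globally cheapest edge joining two different components
        u, v = min(
            ((u, v) for a, b in itertools.combinations(comps, 2) for u in a for v in b),
            key=lambda e: G[e[0]][e[1]]["weight"],
        )
        F.add_edge(u, v, weight=G[u][v]["weight"])
        comps = [set(c) for c in nx.connected_components(F)]
    return F
```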

    The Stochastic Container Relocation Problem

    The Container Relocation Problem (CRP) is concerned with finding a sequence of moves of containers that minimizes the number of relocations needed to retrieve all containers, while respecting a given order of retrieval. However, the assumption of knowing the full retrieval order of containers is particularly unrealistic in real operations. This paper studies the stochastic CRP (SCRP), which relaxes this assumption. A new multi-stage stochastic model, called the batch model, is introduced, motivated, and compared with an existing model (the online model). The two main contributions are an optimal algorithm called Pruning-Best-First-Search (PBFS) and a randomized approximate algorithm called PBFS-Approximate with a bounded average error. Both algorithms, applicable in the batch and online models, are based on a new family of lower bounds for which we show some theoretical properties. Moreover, we introduce two new heuristics outperforming the best existing heuristics. Algorithms, bounds and heuristics are tested in an extensive computational section. Finally, based on strong computational evidence, we conjecture the optimality of the "Leveling" heuristic in a special "no information" case, where at any retrieval stage, any of the remaining containers is equally likely to be retrieved next.
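    As a point of reference for the objective being minimized, the toy sketch below counts relocations for a fixed retrieval order under a naive "move blockers to the currently shortest stack" rule. It illustrates the deterministic CRP cost only; it is not the PBFS algorithms or the stochastic models studied in the paper, and the function and parameter names are illustrative.

```python
def count_relocations(stacks, retrieval_order, max_height):
    """Toy CRP cost evaluation: retrieve containers in the given order,
    relocating any blocking container to the shortest other feasible stack,
    and count the relocations performed."""
    stacks = [list(s) for s in stacks]          # each stack listed bottom -> top
    relocations = 0
    for target in retrieval_order:
        src = next(i for i, s in enumerate(stacks) if target in s)
        # Relocate every container stacked above the target
        while stacks[src][-1] != target:
            blocker = stacks[src].pop()
            dest = min(
                (i for i in range(len(stacks)) if i != src and len(stacks[i]) < max_height),
                key=lambda i: len(stacks[i]),
            )
            stacks[dest].append(blocker)
            relocations += 1
        stacks[src].pop()                        # retrieve the target container
    return relocations

# Example: three stacks, container 3 sits on top of container 1 -> prints 1
print(count_relocations([[1, 3], [2], []], [1, 2, 3], max_height=3))
```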

    Scheduling over Scenarios on Two Machines

    We consider scheduling problems over scenarios where the goal is to find a single assignment of the jobs to the machines which performs well over all possible scenarios. Each scenario is a subset of jobs that must be executed in that scenario and all scenarios are given explicitly. The two objectives that we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we give several approximation algorithms and lower bounds on their approximability. With this research into optimization problems over scenarios, we have opened a new and rich field of interesting problems.
    Comment: To appear in COCOON 2014. The final publication is available at link.springer.co
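    For intuition about the min-max objective, the brute-force sketch below enumerates all assignments of jobs to two machines and keeps the one whose worst-case makespan over the given scenarios is smallest. It is illustrative only (exponential in the number of jobs) and is not one of the approximation algorithms from the paper.

```python
from itertools import product

def best_assignment(jobs, scenarios, num_machines=2):
    """Brute-force min-max scheduling over scenarios: `jobs` maps job -> processing
    time; each scenario is the set of jobs that must run in that scenario."""
    names = list(jobs)
    best, best_val = None, float("inf")
    for assign in product(range(num_machines), repeat=len(names)):
        worst = 0
        for scenario in scenarios:
            loads = [0] * num_machines
            for name, machine in zip(names, assign):
                if name in scenario:
                    loads[machine] += jobs[name]
            worst = max(worst, max(loads))
        if worst < best_val:
            best, best_val = dict(zip(names, assign)), worst
    return best, best_val

# Tiny example: three jobs, two scenarios; the optimal worst-case makespan is 3
jobs = {"a": 3, "b": 2, "c": 2}
scenarios = [{"a", "b"}, {"b", "c"}]
print(best_assignment(jobs, scenarios))
```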

    An optimally concentrated Gabor transform for localized time-frequency components

    Gabor analysis is one of the most common instances of time-frequency signal analysis. Choosing a suitable window for the Gabor transform of a signal is often a challenge for practical applications, in particular in audio signal processing. Many time-frequency (TF) patterns of different shapes may be present in a signal, and they cannot all be sparsely represented in the same spectrogram. We propose several algorithms which provide optimal windows for a user-selected TF pattern with respect to different concentration criteria. We base our optimization algorithm on ℓ^p-norms as a measure of TF spreading. For a given number of sampling points in the TF plane we also propose optimal lattices to be used with the obtained windows. We illustrate the potential of the method on selected numerical examples.
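    The sketch below illustrates the general idea of scoring TF concentration with an ℓ^p measure: it compares a few STFT window lengths and keeps the one whose energy-normalized spectrogram has the smallest ℓ^p quasi-norm (p < 1, smaller means more concentrated). It is a simplified stand-in, not the window-optimization algorithm proposed in the paper, and all parameter values are arbitrary.

```python
import numpy as np
from scipy.signal import stft

def pick_window(x, fs, nperseg_candidates, p=0.5):
    """Among a few Hann-window lengths, pick the STFT window whose
    energy-normalized spectrogram has the smallest l^p quasi-norm."""
    best_len, best_score = None, np.inf
    for nperseg in nperseg_candidates:
        _, _, Z = stft(x, fs=fs, window="hann", nperseg=nperseg)
        S = np.abs(Z) ** 2
        S = S / S.sum()                       # normalize total energy to 1
        score = (S ** p).sum() ** (1.0 / p)   # small value = concentrated spectrogram
        if score < best_score:
            best_len, best_score = nperseg, score
    return best_len, best_score

# Example: a pure tone is most concentrated with a long analysis window
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)
print(pick_window(x, fs, [64, 256, 1024]))
```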

    Shaping Biological Knowledge: Applications in Proteomics

    The central dogma of molecular biology has provided a meaningful principle for data integration in the field of genomics. In this context, integration reflects the known transitions from a chromosome to a protein sequence: transcription, intron splicing, exon assembly and translation. There is no such clear principle for integrating proteomics data, since the laws governing protein folding and interactivity are not yet well understood. In our effort to bring together independent pieces of information relative to proteins in a biologically meaningful way, we assess the bias of bioinformatics resources and the consequent approximations in the framework of small-scale studies. We analyse proteomics data following both a data-driven (focus on proteins smaller than 10 kDa) and a hypothesis-driven (focus on whole bacterial proteomes) approach. These applications are potentially the source of specialized complements to classical biological ontologies.

    Neo: an object model for handling electrophysiology data in multiple formats

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named "Neo," suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. We intend that Neo should become the standard basis for Python tools in neurophysiology.
    Funding: EC/FP7/269921/EU/Brain-inspired multiscale computation in neuromorphic hybrid systems/BrainScaleS; DFG, 103586207, GRK 1589: Verarbeitung sensorischer Informationen in neuronalen Systemen; BMBF, 01GQ1302, Nationaler Neuroinformatik Knote
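    A minimal usage sketch is given below, assuming a Spike2 file as input; the appropriate IO class, the file name, and the available attributes depend on the recording format and on the installed Neo version, so treat the specifics as placeholders rather than a definitive recipe.

```python
# Minimal sketch of reading electrophysiology data with Neo.
import neo

reader = neo.io.Spike2IO(filename="recording.smr")   # hypothetical input file
block = reader.read_block()                          # one Block per recording

for segment in block.segments:                       # trials / recording epochs
    for signal in segment.analogsignals:             # continuous signals
        print(signal.shape, signal.sampling_rate, signal.units)
    for train in segment.spiketrains:                # spike times per unit
        print(len(train), "spikes between", train.t_start, "and", train.t_stop)
```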

    Partitioning of Mg, Sr, Ba and U into a subaqueous calcite speleothem

    The trace-element geochemistry of speleothems is increasingly being used for reconstructing palaeoclimate, with a particular emphasis on elements whose concentrations vary according to hydrological conditions at the cave site (e.g. Mg, Sr, Ba and U). An important step in interpreting trace-element abundances is understanding the underlying processes of their incorporation. This includes quantifying the fractionation between the solution and speleothem carbonate via partition coefficients (where the partition coefficient of element X, D_X, is the molar ratio [X/Ca] in the calcite divided by the molar ratio [X/Ca] in the parent water) and evaluating the degree of spatial variability across time-constant speleothem layers. Previous studies of how these elements are incorporated into speleothems have focused primarily on stalagmites and their source waters in natural cave settings, or have used synthetic solutions under cave-analogue laboratory conditions to produce similar dripstones. However, dripstones are not the only speleothem types capable of yielding useful palaeoclimate information. In this study, we investigate the incorporation of Mg, Sr, Ba and U into a subaqueous calcite speleothem (CD3) growing in a natural cave pool in Italy. Pool-water measurements extending back 15 years reveal a remarkably stable geochemical environment owing to the deep cave setting, enabling the calculation of precise solution [X/Ca]. We determine the trace element variability of 'modern' subaqueous calcite from a drill core taken through CD3 to derive D_Mg, D_Sr, D_Ba and D_U, then compare these with published cave, cave-analogue and seawater-analogue studies. The D_Mg for CD3 is anomalously high (0.042 ± 0.002) compared to previous estimates at similar temperatures (∼8 °C). The D_Sr (0.100 ± 0.007) is similar to previously reported values, but data from this study as well as those from Tremaine and Froelich (2013) and Day and Henderson (2013) suggest that [Na/Sr] might play an important role in Sr incorporation through the potential for Na to outcompete Sr for calcite non-lattice sites. D_Ba in CD3 (0.086 ± 0.008) is similar to values derived by Day and Henderson (2013) under cave-analogue conditions, whilst D_U (0.013 ± 0.002) is almost an order of magnitude lower, possibly due to the unusually slow speleothem growth rates (<1 μm a⁻¹), which could expose the crystal surfaces to leaching of uranyl carbonate. Finally, laser-ablation ICP-MS analysis of the upper 7 μm of CD3, regarded as 'modern' for the purposes of this study, reveals considerable heterogeneity, particularly for Sr, Ba and U, which is potentially indicative of compositional zoning. This reinforces the need to conduct 2D mapping and/or multiple laser passes to capture the range of time-equivalent elemental variations prior to palaeoclimate interpretation.
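    As a worked illustration of the partition-coefficient definition quoted above (the numbers below are invented for illustration and are not measurements from this study):

```python
def partition_coefficient(x_ca_calcite, x_ca_water):
    """Partition coefficient as defined in the abstract:
    D_X = [X/Ca] (molar) in the calcite / [X/Ca] (molar) in the parent water."""
    return x_ca_calcite / x_ca_water

# Illustrative values only: a calcite Mg/Ca of 8.4 mmol/mol grown from a
# pool water with Mg/Ca of 0.20 mol/mol gives D_Mg = 0.042
print(partition_coefficient(8.4e-3, 0.20))
```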

    Large scale stochastic inventory routing problems with split delivery and service level constraints

    A stochastic inventory routing problem (SIRP) is typically the combination of a stochastic inventory control problem and an NP-hard vehicle routing problem; it determines the delivery volumes to the customers that the depot serves in each period, and the vehicle routes used to deliver those volumes. This paper aims to solve a large-scale multi-period SIRP with split delivery (SIRPSD), where a customer's delivery in each period can be split and satisfied by multiple vehicle routes if necessary. We consider the SIRPSD under multiple criteria: the total inventory and transportation cost, and the service levels of the customers. The total inventory and transportation cost is taken as the objective to minimize, while the service levels of the warehouses and of the customers are enforced by constraints and can be adjusted according to practical requirements. To tackle the SIRPSD, which is computationally very challenging, we first propose an approximate model that significantly reduces the number of decision variables compared to its corresponding exact model. We then develop a hybrid approach that combines the linearization of nonlinear constraints, the decomposition of the model into sub-models with Lagrangian relaxation, and a partial linearization approach for one sub-model. A near-optimal solution of the approximate model found by this approach is used to construct a near-optimal solution of the SIRPSD. Randomly generated instances with up to 200 customers, 5 periods, and about 400 thousand decision variables, half of which are integer, are examined in numerical experiments. Our approach obtains high-quality near-optimal solutions within a reasonable amount of computation time on an ordinary PC.
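    The decomposition step relies on Lagrangian relaxation; the sketch below shows a generic, textbook subgradient loop for updating the multipliers. It stands in for the idea only and is not the paper's specific decomposition; solve_relaxed and constraint_violation are hypothetical problem-specific callbacks.

```python
import numpy as np

def subgradient_lagrangian(solve_relaxed, constraint_violation, num_multipliers,
                           iterations=50, step0=1.0):
    """Generic subgradient loop for Lagrangian relaxation of a minimization
    problem with relaxed constraints g(x) <= 0. `solve_relaxed(lmbda)` returns
    a solution of the relaxed problem and its Lagrangian (lower) bound for the
    multipliers `lmbda`; `constraint_violation(sol)` returns the vector g(sol)."""
    lmbda = np.zeros(num_multipliers)
    best_bound = -np.inf
    for k in range(iterations):
        sol, bound = solve_relaxed(lmbda)
        best_bound = max(best_bound, bound)            # best lower bound so far
        g = constraint_violation(sol)
        step = step0 / (k + 1)                         # diminishing step size
        lmbda = np.maximum(0.0, lmbda + step * g)      # project onto lambda >= 0
    return best_bound, lmbda
```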

    Using Simulation to Assess the Opportunities of Dynamic Waste Collection

    In this paper, we illustrate the use of discrete event simulation to evaluate how dynamic planning methodologies can best be applied to the collection of waste from underground containers. We present a case study that took place at the waste collection company Twente Milieu, located in The Netherlands. Even though the underground containers are already equipped with motion sensors, the planning of container emptyings is still based on static cyclic schedules. It is expected that a dynamic planning methodology that employs sensor information will result in a more efficient collection process with respect to customer satisfaction, profits, and CO2 emissions. In this research we use simulation to (i) evaluate the current planning methodology, (ii) evaluate various dynamic planning possibilities, (iii) quantify the benefits of switching to a dynamic collection process, and (iv) quantify the benefits of investing in fill-level sensors. After simulating all scenarios, we conclude that major improvements can be achieved, both with respect to logistical costs and customer satisfaction.
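    To give a flavour of the kind of comparison such a simulation enables, the toy day-by-day model below contrasts a static cyclic emptying schedule with a sensor-driven threshold policy. It is a deliberately simplified stand-in with made-up fill rates, not the Twente Milieu simulation model, and all parameter values are illustrative.

```python
import random

def simulate(days=365, num_containers=50, policy="dynamic",
             cycle_days=7, threshold=0.8, seed=0):
    """Toy waste-collection simulation: containers fill at random daily rates;
    the static policy empties every container every `cycle_days`, the dynamic
    policy empties a container once its fill level exceeds `threshold`.
    Returns (number of emptyings, number of overflow events)."""
    rng = random.Random(seed)
    fill = [0.0] * num_containers
    rates = [rng.uniform(0.05, 0.25) for _ in range(num_containers)]  # fraction per day
    emptyings = overflows = 0
    for day in range(days):
        for i in range(num_containers):
            fill[i] += rng.uniform(0.5, 1.5) * rates[i]
            if fill[i] > 1.0:
                overflows += 1
            empty_now = (day % cycle_days == 0) if policy == "static" else (fill[i] >= threshold)
            if empty_now:
                fill[i] = 0.0
                emptyings += 1
    return emptyings, overflows

print("static :", simulate(policy="static"))
print("dynamic:", simulate(policy="dynamic"))
```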