    Maximizing Revenues for Online-Dial-a-Ride

    In the classic Dial-a-Ride Problem, a server travels in some metric space to serve requests for rides. Each request has a source, destination, and release time. We study a variation of this problem where each request also has a revenue that is earned if the request is satisfied. The goal is to serve requests within a time limit such that the total revenue is maximized. We first prove that the version of this problem where edges in the input graph have varying weights is NP-complete. We also prove that no algorithm can be competitive for this problem. We therefore consider the version where edges in the graph have unit weight and develop a 2-competitive algorithm for this problem.
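As a toy illustration of the objective this abstract defines (not the paper's algorithm), the sketch below scores a serving order on a simplified instance: a complete graph with unit-weight edges, so every move between distinct nodes takes one time unit. Requests completed after the time limit earn nothing.

```python
from dataclasses import dataclass

@dataclass
class Request:
    source: int
    dest: int
    release: int     # earliest time the ride may begin
    revenue: float   # earned only if the ride finishes within the time limit

def total_revenue(schedule, time_limit):
    """Revenue collected by serving `schedule` (a list of Requests, in order).
    Simplifying assumption: the graph is complete with unit-weight edges, so
    every hop between distinct nodes costs exactly 1 time unit."""
    t, pos, revenue = 0, 0, 0.0  # server starts at node 0 at time 0
    for r in schedule:
        t += (pos != r.source)    # travel to the request's source
        t = max(t, r.release)     # wait for the request to be released
        t += (r.source != r.dest) # carry the rider to the destination
        if t <= time_limit:
            revenue += r.revenue
        pos = r.dest
    return revenue
```

Under this model, tightening the time limit simply drops the revenue of rides that no longer finish in time.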

    On the probabilistic min spanning tree Problem

    We study a probabilistic optimization model for min spanning tree, where any vertex v_i of the input graph G(V,E) has some presence probability p_i in the final instance G′ ⊆ G that will effectively be optimized. Suppose that when this “real” instance G′ becomes known, a spanning tree T, called the anticipatory or a priori spanning tree, has already been computed in G, and one can run a quick algorithm (quicker than one that recomputes from scratch), called a modification strategy, that modifies the anticipatory tree T in order to fit G′. The goal is to compute an anticipatory spanning tree of G such that its modification for any G′ ⊆ G is optimal for G′. This is what we call the probabilistic min spanning tree problem. In this paper we study the complexity and approximation of probabilistic min spanning tree in complete graphs under two distinct modification strategies leading to different complexity results for the problem. For the first of the strategies developed, we also study two natural subproblems of probabilistic min spanning tree, namely, the probabilistic metric min spanning tree and the probabilistic min spanning tree 1,2, which deal with metric complete graphs and complete graphs with edge weights either 1 or 2, respectively.
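To make the anticipatory-tree model concrete, here is one plausible modification strategy (an illustration, not necessarily either strategy from the paper): keep the anticipatory tree's edges whose endpoints are both present, then greedily reconnect the resulting forest with the cheapest remaining edges of the complete graph.

```python
import itertools

def modify_tree(tree_edges, present, weight):
    """Adapt an anticipatory spanning tree to the realized vertex set.
    `tree_edges`: edges of the a priori tree T; `present`: vertices of G';
    `weight(u, v)`: edge weight in the complete graph G."""
    present = set(present)
    kept = [(u, v) for u, v in tree_edges if u in present and v in present]
    # Union-find over present vertices to track connected components.
    parent = {v: v for v in present}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in kept:
        parent[find(u)] = find(v)
    result = list(kept)
    # Reconnect the surviving components Kruskal-style, cheapest edges first.
    for u, v in sorted(itertools.combinations(sorted(present), 2),
                       key=lambda e: weight(*e)):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            result.append((u, v))
    return result
```

The point of the model is that this repair step runs much faster than recomputing a minimum spanning tree of G′ from scratch, at the cost of a possibly suboptimal result.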

    Scheduling over Scenarios on Two Machines

    We consider scheduling problems over scenarios where the goal is to find a single assignment of the jobs to the machines which performs well over all possible scenarios. Each scenario is a subset of jobs that must be executed in that scenario, and all scenarios are given explicitly. The two objectives that we consider are minimizing the maximum makespan over all scenarios and minimizing the sum of the makespans of all scenarios. For both versions, we give several approximation algorithms and lower bounds on their approximability. With this research into optimization problems over scenarios, we have opened a new and rich field of interesting problems. Comment: To appear in COCOON 2014. The final publication is available at link.springer.co
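The two objectives named in the abstract are easy to state in code. The sketch below evaluates a fixed job-to-machine assignment against an explicit list of scenarios (names and data layout are illustrative assumptions, not from the paper):

```python
def makespan(assignment, scenario, proc_time, machines=2):
    """Makespan of `assignment` restricted to the jobs in `scenario`:
    the load of the most heavily loaded machine."""
    loads = [0] * machines
    for job in scenario:
        loads[assignment[job]] += proc_time[job]
    return max(loads)

def objectives(assignment, scenarios, proc_time):
    """Return the two objective values for one fixed assignment:
    (max makespan over all scenarios, sum of makespans over all scenarios)."""
    spans = [makespan(assignment, s, proc_time) for s in scenarios]
    return max(spans), sum(spans)
```

The difficulty of the problem lies in choosing the single assignment; evaluating it, as above, is straightforward.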

    Relaxing the Irrevocability Requirement for Online Graph Algorithms

    Online graph problems are considered in models where the irrevocability requirement is relaxed. Motivated by practical examples where, for instance, there is a cost associated with building a facility and no extra cost associated with doing it later, we consider the Late Accept model, where a request can be accepted at a later point, but any acceptance is irrevocable. Similarly, we also consider a Late Reject model, where an accepted request can later be rejected, but any rejection is irrevocable (this is sometimes called preemption). Finally, we consider the Late Accept/Reject model, where late accepts and rejects are both allowed, but any late reject is irrevocable. For Independent Set, the Late Accept/Reject model is necessary to obtain a constant competitive ratio, but for Vertex Cover the Late Accept model is sufficient, and for Minimum Spanning Forest the Late Reject model is sufficient. The Matching problem has a competitive ratio of 2, but in the Late Accept/Reject model its competitive ratio is 3/2.
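For context on the matching bound mentioned above: in the standard fully irrevocable model, the competitive ratio of 2 is achieved by the classic greedy strategy, sketched here as a baseline (the improved 3/2 Late Accept/Reject algorithm is more involved and not reproduced).

```python
def online_greedy_matching(edges):
    """Greedy online matching under edge arrivals: accept each arriving edge
    iff both endpoints are still unmatched; decisions are irrevocable.
    The result is a maximal matching, hence at least half the size of an
    optimal matching (competitive ratio 2)."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched |= {u, v}
    return matching
```

Allowing late rejects lets an algorithm swap out an accepted edge for two better ones, which is what the relaxed models exploit.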

    An optimally concentrated Gabor transform for localized time-frequency components

    Gabor analysis is one of the most common instances of time-frequency signal analysis. Choosing a suitable window for the Gabor transform of a signal is often a challenge for practical applications, in particular in audio signal processing. Many time-frequency (TF) patterns of different shapes may be present in a signal, and they cannot all be sparsely represented in the same spectrogram. We propose several algorithms, which provide optimal windows for a user-selected TF pattern with respect to different concentration criteria. We base our optimization algorithm on ℓ^p-norms as a measure of TF spreading. For a given number of sampling points in the TF plane we also propose optimal lattices to be used with the obtained windows. We illustrate the potential of the method on selected numerical examples.
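As a minimal illustration of the kind of criterion the abstract describes (not the paper's exact objective), an ℓ^p "norm" with p < 2 of the spectrogram magnitudes penalizes spreading: for a fixed signal energy, a sparser, better-concentrated representation yields a smaller value.

```python
def lp_concentration(spectrogram, p=0.5):
    """ℓ^p spreading measure of a magnitude spectrogram (list of rows).
    With p < 2, spreading the same energy over more TF coefficients
    increases the value, so smaller means better concentrated."""
    return sum(abs(c) ** p for row in spectrogram for c in row) ** (1.0 / p)
```

Window optimization then amounts to searching over window parameters for the window minimizing such a measure on the selected TF pattern.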

    Shaping Biological Knowledge: Applications in Proteomics

    The central dogma of molecular biology has provided a meaningful principle for data integration in the field of genomics. In this context, integration reflects the known transitions from a chromosome to a protein sequence: transcription, intron splicing, exon assembly and translation. There is no such clear principle for integrating proteomics data, since the laws governing protein folding and interactivity are not yet well understood. In our effort to bring together independent pieces of information relating to proteins in a biologically meaningful way, we assess the bias of bioinformatics resources and the consequent approximations in the framework of small-scale studies. We analyse proteomics data following both a data-driven approach (focusing on proteins smaller than 10 kDa) and a hypothesis-driven approach (focusing on whole bacterial proteomes). These applications are potentially the source of specialized complements to classical biological ontologies.

    Neo: an object model for handling electrophysiology data in multiple formats

    Neuroscientists use many different software tools to acquire, analyze and visualize electrophysiological signals. However, incompatible data models and file formats make it difficult to exchange data between these tools. This reduces scientific productivity, renders potentially useful analysis methods inaccessible and impedes collaboration between labs. A common representation of the core data would improve interoperability and facilitate data-sharing. To that end, we propose here a language-independent object model, named “Neo,” suitable for representing data acquired from electroencephalographic, intracellular, or extracellular recordings, or generated from simulations. As a concrete instantiation of this object model we have developed an open source implementation in the Python programming language. In addition to representing electrophysiology data in memory for the purposes of analysis and visualization, the Python implementation provides a set of input/output (IO) modules for reading/writing the data from/to a variety of commonly used file formats. Support is included for formats produced by most of the major manufacturers of electrophysiology recording equipment and also for more generic formats such as MATLAB. Data representation and data analysis are conceptually separate: it is easier to write robust analysis code if it is focused on analysis and relies on an underlying package to handle data representation. For that reason, and also to be as lightweight as possible, the Neo object model and the associated Python package are deliberately limited to representation of data, with no functions for data analysis or visualization. Software for neurophysiology data analysis and visualization built on top of Neo automatically gains the benefits of interoperability, easier data sharing and automatic format conversion; there is already a burgeoning ecosystem of such tools. 
    We intend that Neo should become the standard basis for Python tools in neurophysiology. Funding: EC/FP7/269921/EU, Brain-inspired multiscale computation in neuromorphic hybrid systems (BrainScaleS); DFG, 103586207, GRK 1589: Verarbeitung sensorischer Informationen in neuronalen Systemen (Processing of sensory information in neuronal systems); BMBF, 01GQ1302, Nationaler Neuroinformatik Knoten (National Neuroinformatics Node).

    Partitioning of Mg, Sr, Ba and U into a subaqueous calcite speleothem

    The trace-element geochemistry of speleothems is increasingly being used for reconstructing palaeoclimate, with a particular emphasis on elements whose concentrations vary according to hydrological conditions at the cave site (e.g. Mg, Sr, Ba and U). An important step in interpreting trace-element abundances is understanding the underlying processes of their incorporation. This includes quantifying the fractionation between the solution and speleothem carbonate via partition coefficients (where the partitioning (D) of element X (D_X) is the molar ratio [X/Ca] in the calcite divided by the molar ratio [X/Ca] in the parent water) and evaluating the degree of spatial variability across time-constant speleothem layers. Previous studies of how these elements are incorporated into speleothems have focused primarily on stalagmites and their source waters in natural cave settings, or have used synthetic solutions under cave-analogue laboratory conditions to produce similar dripstones. However, dripstones are not the only speleothem types capable of yielding useful palaeoclimate information. In this study, we investigate the incorporation of Mg, Sr, Ba and U into a subaqueous calcite speleothem (CD3) growing in a natural cave pool in Italy. Pool-water measurements extending back 15 years reveal a remarkably stable geochemical environment owing to the deep cave setting, enabling the calculation of precise solution [X/Ca]. We determine the trace-element variability of ‘modern’ subaqueous calcite from a drill core taken through CD3 to derive D_Mg, D_Sr, D_Ba and D_U, then compare these with published cave, cave-analogue and seawater-analogue studies. The D_Mg for CD3 is anomalously high (0.042 ± 0.002) compared to previous estimates at similar temperatures (∼8 °C).
    The D_Sr (0.100 ± 0.007) is similar to previously reported values, but data from this study, as well as those from Tremaine and Froelich (2013) and Day and Henderson (2013), suggest that [Na/Sr] might play an important role in Sr incorporation through the potential for Na to outcompete Sr for calcite non-lattice sites. D_Ba in CD3 (0.086 ± 0.008) is similar to values derived by Day and Henderson (2013) under cave-analogue conditions, whilst D_U (0.013 ± 0.002) is almost an order of magnitude lower, possibly due to the unusually slow speleothem growth rates (<1 μm a⁻¹), which could expose the crystal surfaces to leaching of uranyl carbonate. Finally, laser-ablation ICP-MS analysis of the upper 7 μm of CD3, regarded as ‘modern’ for the purposes of this study, reveals considerable heterogeneity, particularly for Sr, Ba and U, which is potentially indicative of compositional zoning. This reinforces the need to conduct 2D mapping and/or multiple laser passes to capture the range of time-equivalent elemental variations prior to palaeoclimate interpretation.
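The partition-coefficient definition given in the abstract translates directly into a one-line computation (the function name and argument layout are illustrative):

```python
def partition_coefficient(x_calcite, ca_calcite, x_water, ca_water):
    """D_X = [X/Ca]_calcite / [X/Ca]_water, as defined in the abstract.
    All four inputs are molar amounts (or molar concentrations) of
    element X and of Ca in the calcite and in the parent water."""
    return (x_calcite / ca_calcite) / (x_water / ca_water)
```

A D_X below 1, as for all four elements reported here, means the element is discriminated against during incorporation relative to Ca.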