
    Parameterized analysis of complexity


    Improved Approximation Algorithms for Segment Minimization in Intensity Modulated Radiation Therapy

    The segment minimization problem consists of finding the smallest set of integer matrices that sum to a given intensity matrix, such that each summand has only one non-zero value and the non-zeroes in each row are consecutive. This has direct applications in intensity-modulated radiation therapy, an effective form of cancer treatment. We develop three approximation algorithms for matrices with arbitrarily many rows. Our first two algorithms improve the approximation factor from the previous best of $1+\log_2 h$ to (roughly) $3/2 \cdot (1+\log_3 h)$ and $11/6 \cdot (1+\log_4 h)$, respectively, where $h$ is the largest entry in the intensity matrix. We illustrate the limitations of the specific approach used to obtain these two algorithms by proving a lower bound of $\frac{2b-2}{b} \cdot \log_b h + \frac{1}{b}$ on the approximation guarantee. Our third algorithm improves the approximation factor from $2 \cdot (\log D + 1)$ to $24/13 \cdot (\log D + 1)$, where $D$ is (roughly) the largest difference between consecutive elements of a row of the intensity matrix. Finally, experimentation with these algorithms shows that they perform well with respect to the optimum and outperform other approximation algorithms on 77% of the 122 test cases we consider, which include both real-world and synthetic data. Comment: 18 pages.
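    The decomposition being minimized is easy to state in code. The sketch below (our own naming, not one of the paper's approximation algorithms) peels segments off a single row: each segment is one value over a consecutive interval, and the result is a valid, though not necessarily minimum-size, decomposition.

    ```python
    def decompose_row(row):
        """Decompose one row of an intensity matrix into segments
        (value, start, end): a single non-zero value over a consecutive
        interval, with all segments summing back to the row."""
        row = list(row)
        segments = []

        def sweep(lo, hi):
            i = lo
            while i < hi:
                if row[i] == 0:              # skip zero entries
                    i += 1
                    continue
                j = i                        # maximal non-zero block [i, j)
                while j < hi and row[j] > 0:
                    j += 1
                m = min(row[i:j])            # peel off the block's minimum
                segments.append((m, i, j - 1))
                for k in range(i, j):
                    row[k] -= m
                sweep(i, j)                  # recurse on what remains
                i = j

        sweep(0, len(row))
        return segments

    # Example: [0, 2, 3, 1, 0, 4] -> [(1, 1, 3), (1, 1, 2), (1, 2, 2), (4, 5, 5)]
    print(decompose_row([0, 2, 3, 1, 0, 4]))
    ```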

    Ranking with Submodular Valuations

    We study the problem of ranking with submodular valuations. An instance of this problem consists of a ground set $[m]$ and a collection of $n$ monotone submodular set functions $f^1, \ldots, f^n$, where each $f^i: 2^{[m]} \to R_+$. An additional ingredient of the input is a weight vector $w \in R_+^n$. The objective is to find a linear ordering of the ground set elements that minimizes the weighted cover time of the functions. The cover time of a function is the minimal number of elements in the prefix of the linear ordering that form a set whose corresponding function value is greater than a unit threshold value. Our main contribution is an $O(\ln(1/\epsilon))$-approximation algorithm for the problem, where $\epsilon$ is the smallest non-zero marginal value that any function may gain from some element. Our algorithm orders the elements using an adaptive residual updates scheme, which may be of independent interest. We also prove that the problem is $\Omega(\ln(1/\epsilon))$-hard to approximate, unless P = NP. This implies that the outcome of our algorithm is optimal up to constant factors. Comment: 16 pages, 3 figures.
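    To make the objective concrete, here is a small sketch of the weighted cover-time objective defined above. The representation (set functions as callables on frozensets) and the name `weighted_cover_time` are our own assumptions; the paper's adaptive residual-updates algorithm is not reproduced.

    ```python
    def weighted_cover_time(order, functions, weights, threshold=1.0):
        """Cover time of f^i: length of the shortest prefix of `order`
        whose element set reaches the unit threshold under f^i.
        Returns the weighted sum of cover times over all functions."""
        total = 0.0
        for f, w in zip(functions, weights):
            prefix = set()
            for t, e in enumerate(order, start=1):
                prefix.add(e)
                if f(frozenset(prefix)) >= threshold:
                    total += w * t
                    break
            else:
                raise ValueError("a function never reaches the threshold")
        return total

    # Example: two coverage functions over the ground set {0, 1, 2}
    f1 = lambda S: len(S & {0, 1}) / 2      # covered once both 0 and 1 appear
    f2 = lambda S: 1.0 if 2 in S else 0.0   # covered as soon as 2 appears
    print(weighted_cover_time([2, 0, 1], [f1, f2], [1.0, 3.0]))  # 1*3 + 3*1 = 6.0
    ```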

    An Improved Fixed-Parameter Algorithm for One-Page Crossing Minimization

    Book embedding is one of the most well-known graph drawing models and is extensively studied in the literature. The special case where the number of pages is one is of particular interest: an embedding in this case has a natural circular representation useful for visualization, and graphs that can be embedded in one page without crossings form an important graph class, namely that of outerplanar graphs. In this paper, we consider the problem of minimizing the number of crossings in a one-page book embedding, which we call one-page crossing minimization. Here, we are given a graph G with n vertices together with a non-negative integer k and are asked whether G can be embedded into a single page with at most k crossings. Bannister and Eppstein (GD 2014) showed that this problem is fixed-parameter tractable. Their algorithm is derived through the application of Courcelle's theorem (on graph properties definable in the monadic second-order logic of graphs) and runs in f(L)n time, where L = 2^{O(k^2)} is the length of the formula defining the property that the one-page crossing number is at most k, and f is a computable function without any known upper bound expressible as an elementary function. We give an explicit dynamic programming algorithm with a drastically improved running time of 2^{O(k log k)}n.
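    For reference, evaluating the quantity being minimized is straightforward: two chords of the circle cross exactly when their endpoints interleave along the spine. The quadratic-time sketch below (our own naming) counts the crossings of a given one-page embedding; it is not the fixed-parameter algorithm itself.

    ```python
    def one_page_crossings(order, edges):
        """Count crossings of a one-page book embedding, where `order`
        is the vertex sequence along the spine and chords (a, b), (c, d)
        cross iff their endpoints interleave around the circle."""
        pos = {v: i for i, v in enumerate(order)}

        def crosses(e, f):
            a, b = sorted((pos[e[0]], pos[e[1]]))
            c, d = sorted((pos[f[0]], pos[f[1]]))
            return a < c < b < d or c < a < d < b

        return sum(crosses(edges[i], edges[j])
                   for i in range(len(edges))
                   for j in range(i + 1, len(edges)))

    # K4 drawn on the spine 0, 1, 2, 3 has exactly one crossing: (0,2) x (1,3)
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(one_page_crossings([0, 1, 2, 3], edges))  # 1
    ```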

    VIVA: An Online Algorithm for Piecewise Curve Estimation Using ℓ⁰ Norm Regularization

    Many processes deal with piecewise input functions, which occur naturally as a result of digital commands, user interfaces requiring a confirmation action, or discrete-time sampling. Examples include the assembly of protein polymers and hourly adjustments to the infusion rate of IV fluids during treatment of burn victims. Estimation of the input is straightforward regression when the observer has access to the timing information. More work is needed if the input can change at unknown times. Successful recovery of the change timing is largely dependent on the choice of cost function minimized during parameter estimation. Optimal estimation of a piecewise input will often proceed by minimization of a cost function which includes an estimation error term (most commonly mean square error) and the number (cardinality) of input changes (number of commands). Because the cardinality (ℓ⁰ norm) is not convex, the ℓ² norm (quadratic smoothing) and ℓ¹ norm (total variation minimization) are often substituted because they permit the use of convex optimization algorithms. However, these penalize the magnitude of input changes and therefore bias the piecewise estimates. Another disadvantage is that global optimization methods must be run after the end of data collection. One approach to unbiasing the piecewise parameter fits would include application of total variation minimization to recover timing, followed by piecewise parameter fitting. Another method is presented herein: a dynamic programming approach which iteratively develops populations of candidate estimates of increasing length, pruning those proven to be dominated. Because the usage of input data is entirely causal, the algorithm recovers timing and parameter values online. A functional definition of the algorithm, which is an extension of Viterbi decoding and integrates the pruning concept from branch-and-bound, is presented. Modifications are introduced to improve handling of non-uniform sampling, non-uniform confidence, and burst errors. Performance tests using synthesized data sets as well as volume data from a research system recording fluid infusions show five-fold (piecewise-constant data) and 20-fold (piecewise-linear data) reduction in error compared to total variation minimization, along with improved sparsity and reduced sensitivity to the regularization parameter. Algorithmic complexity and delay are also considered.
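    As a point of reference for the cost function discussed above, here is a small offline dynamic program for the ℓ⁰-penalized piecewise-constant case: squared error plus λ times the number of change points. It is our own illustrative baseline (O(n²) time), not the online VIVA algorithm.

    ```python
    import numpy as np

    def l0_piecewise_constant(y, lam):
        """Minimize squared error + lam * (number of change points),
        fitting one constant per segment; returns segment boundaries
        as half-open (start, end) index pairs."""
        y = np.asarray(y, dtype=float)
        n = len(y)
        c1 = np.concatenate(([0.0], np.cumsum(y)))       # prefix sums
        c2 = np.concatenate(([0.0], np.cumsum(y * y)))   # prefix sums of squares

        def sse(j, t):
            # squared error of the best constant fit on y[j:t]
            s, m = c1[t] - c1[j], t - j
            return (c2[t] - c2[j]) - s * s / m

        best = np.full(n + 1, np.inf)
        best[0] = -lam        # the first segment incurs no change-point cost
        prev = np.zeros(n + 1, dtype=int)
        for t in range(1, n + 1):
            for j in range(t):
                cost = best[j] + lam + sse(j, t)
                if cost < best[t]:
                    best[t], prev[t] = cost, j
        bounds, t = [], n     # backtrack the optimal boundaries
        while t > 0:
            bounds.append((int(prev[t]), t))
            t = int(prev[t])
        return bounds[::-1]

    # Example: both change points are recovered exactly
    print(l0_piecewise_constant([0, 0, 0, 5, 5, 5, 5, 2, 2], lam=1.0))
    # [(0, 3), (3, 7), (7, 9)]
    ```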

    State-space solutions to the dynamic magnetoencephalography inverse problem using high performance computing

    Determining the magnitude and location of neural sources within the brain that are responsible for generating magnetoencephalography (MEG) signals measured on the surface of the head is a challenging problem in functional neuroimaging. The number of potential sources within the brain exceeds by an order of magnitude the number of recording sites. As a consequence, the estimates for the magnitude and location of the neural sources will be ill-conditioned because of the underdetermined nature of the problem. One well-known technique designed to address this imbalance is the minimum norm estimator (MNE). This approach imposes an $L^2$ regularization constraint that serves to stabilize and condition the source parameter estimates. However, this class of regularizers is static in time and does not consider the temporal constraints inherent to the biophysics of the MEG experiment. In this paper we propose a dynamic state-space model that accounts for both spatial and temporal correlations within and across candidate intracortical sources. In our model, the observation model is derived from the steady-state solution to Maxwell's equations, while the latent model representing neural dynamics is given by a random walk process. Comment: Published at http://dx.doi.org/10.1214/11-AOAS483 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
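    A minimal sketch of this class of models, under our own simplifying assumptions (known lead-field matrix G and noise covariances, plain Kalman filtering rather than the paper's high-performance solvers):

    ```python
    import numpy as np

    def filter_sources(Y, G, Q, R, x0, P0):
        """Kalman filter for the state-space model sketched above:
        sources follow a random walk x_t = x_{t-1} + w_t, w_t ~ N(0, Q),
        and sensors observe y_t = G x_t + v_t, v_t ~ N(0, R), where G is
        the lead-field matrix from the quasi-static Maxwell solution.
        Returns the filtered source estimate at each time step."""
        x, P = np.asarray(x0, float), np.asarray(P0, float)
        estimates = []
        for y in Y:
            P = P + Q                        # predict: transition is the identity
            S = G @ P @ G.T + R              # innovation covariance
            K = np.linalg.solve(S, G @ P).T  # Kalman gain P G^T S^{-1}
            x = x + K @ (y - G @ x)          # measurement update of the state
            P = P - K @ G @ P                # measurement update of the covariance
            estimates.append(x)
        return np.array(estimates)
    ```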

    Distance Oracles for Time-Dependent Networks

    We present the first approximate distance oracle for sparse directed networks with time-dependent arc-travel-times determined by continuous, piecewise linear, positive functions possessing the FIFO property. Our approach precomputes $(1+\epsilon)$-approximate distance summaries from selected landmark vertices to all other vertices in the network. Our oracle uses subquadratic preprocessing space and time, and provides two sublinear-time query algorithms that deliver constant-factor and $(1+\sigma)$-approximate shortest travel times, respectively, for arbitrary origin-destination pairs in the network, for any constant $\sigma > \epsilon$. Our oracle is based only on the sparsity of the network, along with two quite natural assumptions about travel-time functions which allow a smooth transition towards asymmetric and time-dependent distance metrics. Comment: A preliminary version appeared as Technical Report ECOMPASS-TR-025 of the EU-funded research project eCOMPASS (http://www.ecompass-project.eu/). An extended abstract also appeared in the 41st International Colloquium on Automata, Languages, and Programming (ICALP 2014, track A).
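    The exact baseline that such an oracle approximates is time-dependent Dijkstra, which is label-setting (and hence correct) under the same FIFO assumption. A minimal sketch, with arcs represented as (head, travel-time function) pairs of our own choosing:

    ```python
    import heapq

    def td_dijkstra(adj, source, t0):
        """Earliest-arrival times from `source` when departing at t0.
        `adj[u]` lists (v, D) pairs, where D(t) is the arc travel time
        for departure time t; FIFO guarantees that departing later
        never lets you arrive earlier, so labels can be settled."""
        arrival = {source: t0}
        heap = [(t0, source)]
        while heap:
            t, u = heapq.heappop(heap)
            if t > arrival[u]:          # stale queue entry
                continue
            for v, D in adj[u]:
                ta = t + D(t)           # arrive at v after traversing (u, v)
                if ta < arrival.get(v, float("inf")):
                    arrival[v] = ta
                    heapq.heappush(heap, (ta, v))
        return arrival

    # Example: the arc 0 -> 1 is slow before time 10, fast afterwards
    adj = {0: [(1, lambda t: 5 if t < 10 else 2)], 1: []}
    print(td_dijkstra(adj, 0, t0=0))    # {0: 0, 1: 5}
    ```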