161 research outputs found

    Structured Sparsity: Discrete and Convex approaches

    Full text link
    Compressive sensing (CS) exploits sparsity to recover sparse or compressible signals from dimensionality-reducing, non-adaptive sensing mechanisms. Sparsity is also used to enhance interpretability in machine learning and statistics applications: while the ambient dimension is vast in modern data analysis problems, the relevant information therein typically resides in a much lower-dimensional space. However, many solutions proposed nowadays do not leverage the true underlying structure. Recent results in CS extend the simple sparsity idea to more sophisticated structured sparsity models, which describe the interdependency between the nonzero components of a signal; these models increase the interpretability of the results and lead to better recovery performance. In order to better understand the impact of structured sparsity, in this chapter we analyze the connections between the discrete models and their convex relaxations, highlighting their relative advantages. We start with the general group-sparse model and then elaborate on two important special cases: the dispersive and the hierarchical models. For each, we present the models in their discrete nature, discuss how to solve the ensuing discrete problems, and then describe convex relaxations. We also consider more general structures as defined by set functions and present their convex proxies. Further, we discuss efficient optimization solutions for structured sparsity problems and illustrate structured sparsity in action via three applications. Comment: 30 pages, 18 figures
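    The convex relaxation of the group-sparse model mentioned in this abstract leads to a block-wise shrinkage step. A minimal sketch of that operation (the proximal operator of the group-lasso penalty; function and variable names are ours, not the chapter's):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2.

    Each group of coefficients is shrunk toward zero as a block: a group
    whose norm is below lam is zeroed out entirely, otherwise its norm is
    reduced by lam. This is how group structure removes or keeps whole
    blocks of nonzeros together.
    """
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]
    return out

# Two groups: the first has small energy and is eliminated entirely,
# the second survives with its norm reduced by lam.
x = np.array([0.1, -0.2, 3.0, 4.0])
z = group_soft_threshold(x, [[0, 1], [2, 3]], lam=1.0)
```

    This block-wise behaviour is the convex analogue of the discrete constraint that nonzeros appear only in complete groups.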

    Foot orthoses: how much customisation is necessary?

    Get PDF
    The relative merit of customised versus prefabricated foot orthoses continues to be the subject of passionate debate among foot health professionals. Although there is currently insufficient evidence to reach definitive conclusions, a growing body of research literature suggests that prefabricated foot orthoses may produce equivalent clinical outcomes to customised foot orthoses for some conditions. Consensus guidelines for the prescription of customised foot orthoses need to be developed so that the hypothesised benefits of these devices can be thoroughly evaluated.

    Reliability of the TekScan MatScan® system for the measurement of plantar forces and pressures during barefoot level walking in healthy adults

    Get PDF
    Background: Plantar pressure systems are increasingly being used to evaluate foot function in both research settings and in clinical practice. The purpose of this study was to investigate the reliability of the TekScan MatScan® system in assessing plantar forces and pressures during barefoot level walking. Methods: Thirty participants were assessed for the reliability of measurements taken one week apart for the variables maximum force, peak pressure and average pressure. The following seven regions of the foot were investigated: heel, midfoot, 3rd-5th metatarsophalangeal joints, 2nd metatarsophalangeal joint, 1st metatarsophalangeal joint, hallux and the lesser toes. Results: Reliability was assessed using both the mean and the median values of three repeated trials. The system displayed moderate to good reliability of mean and median calculations for the three analysed variables across all seven regions, as indicated by intra-class correlation coefficients ranging from 0.44 to 0.95 for the mean and 0.54 to 0.97 for the median, and coefficients of variation ranging from 5 to 20% for the mean and 3 to 23% for the median. Selecting the median value of three repeated trials yielded slightly more reliable results than the mean. Conclusions: These findings indicate that the TekScan MatScan® system demonstrates generally moderate to good reliability.
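    The reliability measures this study reports can be illustrated with a short sketch (the trial values and function names below are hypothetical, purely to show the mean-vs-median comparison and the coefficient of variation):

```python
import statistics

def coefficient_of_variation(trials):
    """Between-trial CV (%): standard deviation / mean * 100,
    one of the reliability measures reported in the study."""
    return statistics.stdev(trials) / statistics.mean(trials) * 100.0

def summary_value(trials, use_median=True):
    """Summarise repeated trials by their median (found slightly more
    reliable in the study) or by their mean."""
    return statistics.median(trials) if use_median else statistics.mean(trials)

# Three hypothetical peak-pressure trials (kPa) for one foot region:
trials = [412.0, 430.0, 451.0]
cv = coefficient_of_variation(trials)
med = summary_value(trials)
```

    Lower CV across repeated sessions indicates more repeatable measurements for that region and variable.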

    Greedy D-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost

    Full text link
    This paper describes a simple greedy D-approximation algorithm for any covering problem whose objective function is submodular and non-decreasing, and whose feasible region can be expressed as the intersection of arbitrary (closed upwards) covering constraints, each of which constrains at most D variables of the problem. (A simple example is Vertex Cover, with D = 2.) The algorithm generalizes previous approximation algorithms for fundamental covering problems and online paging and caching problems.
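    For the D = 2 case the abstract names, the classic greedy scheme is easy to sketch (this is the standard Vertex Cover 2-approximation, not the paper's general algorithm):

```python
def greedy_cover(edges):
    """2-approximation for Vertex Cover (the D = 2 case): whenever an
    edge is not yet covered, pay for BOTH of its endpoints.

    Any optimal cover must contain at least one endpoint of each such
    edge, and the chosen edges share no endpoints, so the output is at
    most D = 2 times the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A path on 5 vertices: the greedy pass picks two disjoint edges.
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
cover = greedy_cover(edges)
```

    The paper's contribution generalizes this "pay for everything the constraint touches" idea to submodular costs and arbitrary closed-upwards constraints on at most D variables.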

    Testing the proficiency to distinguish locations with elevated plantar pressure within and between professional groups of foot therapists

    Get PDF
    BACKGROUND: Identification of locations with elevated plantar pressures is important in daily foot care for patients with rheumatoid arthritis, metatarsalgia and diabetes. The purpose of the present study was to evaluate the proficiency of podiatrists, pedorthists and orthotists to distinguish locations with elevated plantar pressure in patients with metatarsalgia. METHODS: Ten podiatrists, ten pedorthists and ten orthotists working in The Netherlands were asked to identify locations with excessively high plantar pressure in three patients with forefoot complaints. Therapists were instructed to examine the patients according to the methods used in their everyday clinical practice. Regions could be marked by hatching an illustration of the plantar aspect. A pressure-sensitive platform was used to quantify the dynamic barefoot plantar pressures and was considered the 'Gold Standard' (GS). A pressure higher than 700 kPa was used as the cut-off criterion for categorizing peak pressure as elevated or non-elevated. This was done for both of each patient's feet and six separate forefoot regions: the big toe and metatarsals one to five. Data were analysed by a mixed-model ANOVA and Generalizability Theory. RESULTS: The proportions of elevated/non-elevated pressure regions based on the clinical ratings of the therapists show important discrepancies with the criterion values obtained through quantitative plantar pressure measurement. In general, plantar pressures in the big toe region were underrated and those in the metatarsal regions were overrated. The estimated method agreement of clinical judgement of plantar pressures with the GS was below an acceptable level: all intraclass correlation coefficients were equal to or smaller than 0.60. The inter-observer agreement for each discipline demonstrated worrisome results: all below 0.18. The estimated mutual agreements showed that there was virtually no mutual agreement between the professional groups studied. CONCLUSION: Identification of elevated plantar pressure through clinical evaluation is difficult, insufficient and potentially harmful. The process of clinical plantar pressure screening has to be re-evaluated. The results of this study point towards the merit of quantitative plantar pressure measurement for clinical practice.
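    The study's categorization step and a simple rater-vs-gold-standard agreement can be sketched as follows (the region names, pressures and ratings are hypothetical; the study itself used ICCs and Generalizability Theory rather than raw proportions):

```python
CUTOFF_KPA = 700.0  # the study's cut-off for "elevated" peak pressure

def categorize(peak_pressures):
    """Map region -> peak pressure (kPa) onto region -> elevated? flag."""
    return {region: p > CUTOFF_KPA for region, p in peak_pressures.items()}

def proportion_agreement(rating, gold):
    """Fraction of regions on which a clinical rating matches the
    measurement-based gold standard."""
    return sum(rating[r] == gold[r] for r in gold) / len(gold)

# Hypothetical data: measured peaks vs. one therapist's yes/no markings.
measured = {"hallux": 820.0, "MTP1": 640.0, "MTP2": 710.0}
rating = {"hallux": False, "MTP1": False, "MTP2": True}
agreement = proportion_agreement(rating, categorize(measured))
```

    Here the therapist misses the elevated hallux region, echoing the study's finding that big toe pressures tend to be underrated.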

    A large genome-wide association study of age-related macular degeneration highlights contributions of rare and common variants.

    Get PDF
    This is the author accepted manuscript. The final version is available from Nature Publishing Group via http://dx.doi.org/10.1038/ng.3448. Advanced age-related macular degeneration (AMD) is the leading cause of blindness in the elderly, with limited therapeutic options. Here we report on a study of >12 million variants, including 163,714 directly genotyped, mostly rare, protein-altering variants. Analyzing 16,144 patients and 17,832 controls, we identify 52 independently associated common and rare variants (P < 5 × 10⁻⁸) distributed across 34 loci. Although wet and dry AMD subtypes exhibit predominantly shared genetics, we identify the first genetic association signal specific to wet AMD, near MMP9 (difference P value = 4.1 × 10⁻¹⁰). Very rare coding variants (frequency <0.1%) in CFH, CFI and TIMP3 suggest causal roles for these genes, as does a splice variant in SLC16A8. Our results support the hypothesis that rare coding variants can pinpoint causal genes within known genetic loci and illustrate that applying the approach systematically to detect new loci requires extremely large sample sizes. We thank all participants of all the studies included for enabling this research by their participation in these studies. Computer resources for this project have been provided by the high-performance computing centers of the University of Michigan and the University of Regensburg. Group-specific acknowledgments can be found in the Supplementary Note. The Center for Inherited Diseases Research (CIDR) Program contract number is HHSN268201200008I. This and the main consortium work were predominantly funded by 1X01HG006934-01 to G.R.A. and R01 EY022310 to J.L.H.

    Some Very Easy Knapsack/Partition Problems

    No full text
    Consider the problem of partitioning a group of b indistinguishable objects into subgroups each of size at least X and at most u. The objective is to minimize the additively separable cost of the partition, where the cost associated with a subgroup of size j is c(j). In the case that c(.) is convex, we show how to solve the problem in O(log u) steps. In the case that c(.) is concave, we solve the problem in O(min(X, b/u, (b/X) − (b/u), u − X)) steps. This problem generalizes a lot-sizing result of Chand and has potential applications in clustering.
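    The problem statement is easy to make concrete with a brute-force dynamic program (a baseline sketch only; the paper's point is that convexity or concavity of c allows far faster solutions than this O(b·u) recursion):

```python
def min_partition_cost(b, lo, hi, cost):
    """Minimum total cost of splitting b identical objects into subgroups
    whose sizes all lie in [lo, hi]; returns None if infeasible.

    best[n] = min over feasible last-group sizes j of best[n - j] + cost(j).
    """
    INF = float("inf")
    best = [INF] * (b + 1)
    best[0] = 0.0
    for n in range(1, b + 1):
        for j in range(lo, min(hi, n) + 1):
            if best[n - j] + cost(j) < best[n]:
                best[n] = best[n - j] + cost(j)
    return best[b] if best[b] < INF else None

# Convex cost j^2 favours many small groups: 10 objects in five pairs.
answer = min_partition_cost(10, 2, 5, lambda j: j * j)
```

    With the convex cost j², the optimum splits 10 objects into five groups of size 2 for a total cost of 20; the paper shows how to reach such answers in O(log u) steps rather than by full enumeration.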

    Single transferable vote resists strategic voting

    No full text
    "November, 1990.

    Optimal Rounding of Fractional Dynamic Flows when Transit Times are Zero

    No full text
    A transshipment problem with demands that exceed network capacity can be solved by sending flow in several waves. How can this be done in the minimum number, T, of waves, and at minimum cost, if costs are piecewise linear convex functions of the flow? In this paper, we show that this problem can be solved using at most log T maximum flow computations and one minimum (convex) cost flow computation. When there is only one sink, this problem can be solved in the same asymptotic time as one minimum (convex) cost flow computation. This improves upon the recent algorithm in [5], which solves the quickest transshipment problem (the above-mentioned problem without costs) on k terminals using k log T maximum flow computations and k minimum cost flow computations. Our solutions start with a stationary fractional flow, as described in [5], and use rounding to transform this into an integral flow. The rounding procedure takes O(n) time.
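    The "stationary flow repeated in waves" idea behind this line of work can be illustrated with a much-simplified single-source, single-sink sketch: with zero transit times, a maximum flow of value f repeated for T waves moves T·f units, so the minimum number of waves is ⌈demand / f⌉. (This is an illustration only, not the paper's algorithm, which handles multiple terminals and convex costs.)

```python
import math
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on an adjacency-dict network cap[u][v]."""
    # Build a residual graph that also contains the reverse zero-capacity edges.
    res = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(res):
        for v in list(res[u]):
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        # Recover the path and push the bottleneck amount along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

def min_waves(cap, s, t, demand):
    """Smallest T with T * maxflow >= demand (zero transit times assumed)."""
    f = max_flow(cap, s, t)
    return None if f == 0 else math.ceil(demand / f)

# Small example: capacity 3 out of s, demand 10 => 4 waves suffice.
cap = {"s": {"a": 3}, "a": {"b": 2, "t": 1}, "b": {"t": 2}}
waves = min_waves(cap, "s", "t", 10)
```

    The paper's log T maximum flow computations arise from searching for the optimal T rather than computing it directly as above.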

    Parametric linear programming and anti-cycling pivoting rules

    No full text
    The traditional perturbation (or lexicographic) methods for resolving degeneracy in linear programming impose decision rules that eliminate ties in the simplex ratio rule and, therefore, restrict the choice of exiting basic variables. Bland's combinatorial pivoting rule also restricts the choice of exiting variables. Using ideas from parametric linear programming, we develop anticycling pivoting rules that do not limit the choice of exiting variables beyond the simplex ratio rule. That is, any variable that ties for the ratio rule can leave the basis. A similar approach gives pivoting rules for the dual simplex method that do not restrict the choice of entering variables.
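    The ratio-rule ties the abstract refers to are easy to exhibit in code (a minimal sketch of the classic primal ratio test; variable names are ours):

```python
def ratio_test(b, column, tol=1e-12):
    """Classic primal simplex ratio test: among rows with a positive
    pivot-column entry, return the minimum ratio b[i]/column[i] and ALL
    row indices achieving it.

    More than one returned index is exactly the degenerate tie that
    perturbation and Bland's rule break by fiat; the parametric rules in
    the paper allow any of the tied rows to leave the basis.
    """
    best, ties = None, []
    for i, (bi, ai) in enumerate(zip(b, column)):
        if ai > tol:
            r = bi / ai
            if best is None or r < best - tol:
                best, ties = r, [i]
            elif abs(r - best) <= tol:
                ties.append(i)
    return best, ties

# All three rows tie at ratio 2.0: a degenerate pivot choice.
best, ties = ratio_test([2.0, 4.0, 2.0], [1.0, 2.0, 1.0])
```

    With distinct ratios the test returns a single row; the contribution of the paper is that correctness (no cycling) survives even when the tie is broken arbitrarily.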