
    Bilateral triad of persistent median artery, a bifid median nerve and high origin of its palmar cutaneous branch. A case report and clinical implications

    We report the association of a persistent median artery and a bifid median nerve with a rare, very high origin of the palmar cutaneous branch, presenting bilaterally in the upper limbs of a 75-year-old female cadaver. A persistent median artery with a bifid median nerve has been reported in patients presenting with carpal tunnel syndrome; reports of this neurovascular anomaly occurring in association with a high-origin palmar cutaneous branch, however, are few. This subset of patients is at risk of inadvertent nerve transection during forearm and wrist surgery. Pre-operative magnetic resonance imaging (MRI) and high-resolution sonography (HRS) can be used to screen for this triad: MRI can reveal whether the patient's disability is associated with a persistent median artery, a bifid median nerve, or both, while HRS can help identify a palmar cutaneous branch of the median nerve that arises unexpectedly high in the forearm. Such knowledge will help surgeons select the most appropriate surgical procedure and avoid inadvertent injury to cutaneous nerves arising in unexpected locations. In patients presenting with bilateral carpal tunnel syndrome, hand surgeons should place a persistent median artery with a concomitant bifid median nerve high on the list of differential diagnoses, with a high suspicion of a bilaterally high-arising palmar cutaneous branch of the median nerve. © 2016, Universidad de la Frontera. All rights reserved.

    On smoothed analysis of quicksort and Hoare's find

    We provide a smoothed analysis of Hoare's find algorithm, and we revisit the smoothed analysis of quicksort. Hoare's find algorithm - often called quickselect or one-sided quicksort - is an easy-to-implement algorithm for finding the k-th smallest element of a sequence. While the worst-case number of comparisons that Hoare's find needs is Theta(n^2), the average-case number is Theta(n). We analyze what happens between these two extremes by providing a smoothed analysis. In the first perturbation model, an adversary specifies a sequence of n numbers from [0,1], and then, to each number of the sequence, we add a random number drawn independently from the interval [0,d]. We prove that Hoare's find needs Theta(n/(d+1) sqrt(n/d) + n) comparisons in expectation if the adversary may also specify the target element (even after seeing the perturbed sequence), and slightly fewer comparisons for finding the median. In the second perturbation model, each element is marked with probability p, and then a random permutation is applied to the marked elements. We prove that the expected number of comparisons to find the median is Omega((1-p)(n/p) log n). Finally, we provide lower bounds for the smoothed number of comparisons of quicksort and Hoare's find under the median-of-three pivot rule, which usually yields faster algorithms than always selecting the first element: the pivot is the median of the first, middle, and last element of the sequence. We show that median-of-three does not yield a significant improvement over the classic rule.
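    As a concrete reference point for the analysis above, Hoare's find with the classic first-element pivot rule can be sketched in a few lines of Python. This is an illustrative implementation, not the paper's experimental setup, and the function name is ours:

```python
def hoares_find(seq, k):
    """Return the k-th smallest element (0-indexed) of seq using
    Hoare's find (quickselect) with the classic first-element pivot."""
    seq = list(seq)              # work on a copy
    lo, hi = 0, len(seq) - 1
    while True:
        if lo == hi:
            return seq[lo]
        pivot = seq[lo]
        i, j = lo, hi
        # Hoare partition: afterwards seq[lo..j] <= pivot <= seq[i..hi]
        while i <= j:
            while seq[i] < pivot:
                i += 1
            while seq[j] > pivot:
                j -= 1
            if i <= j:
                seq[i], seq[j] = seq[j], seq[i]
                i += 1
                j -= 1
        # recurse (iteratively) into the side that contains index k
        if k <= j:
            hi = j
        elif k >= i:
            lo = i
        else:
            # everything strictly between j and i equals the pivot
            return seq[k]
```

    The worst-case quadratic behavior the abstract mentions arises when the first-element pivot repeatedly lands near an extreme, e.g. on an already sorted input.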

    Feature Selection in k-Median Clustering

    An effective method for selecting features in clustering unlabeled data is proposed, based on changing the objective function of the standard k-median clustering algorithm. The change consists of perturbing the objective function by a term that drives the medians of each of the k clusters toward the (shifted) global median of zero for the entire dataset. As the perturbation parameter is increased, more and more features are driven automatically toward the global zero median and are eliminated from the problem, until one last feature remains. An error curve for unlabeled data clustering, as a function of the number of features used, gives the reduced-feature clustering error relative to the "gold standard" of the full-feature clustering. This clustering error curve parallels a classification error curve based on real data labels. This justifies the utility of the former error curve for unlabeled data as a means of choosing an appropriate number of reduced features in order to achieve a correctness comparable to that obtained by the full set of original features. For example, on the 3-class Wine dataset, clustering with 4 selected input-space features is comparable, to within 4%, to clustering using the original 13 features of the problem.
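    The perturbed objective described above can be sketched as follows. This is a simplified illustration, not the paper's exact algorithm: initializing the medians from the first k rows and pulling each cluster median toward the zero global median with a soft-threshold update are our own assumptions, and `lam` stands in for the perturbation parameter:

```python
import numpy as np

def perturbed_kmedian(X, k, lam, iters=50):
    """k-median (1-norm) clustering with a perturbation term that
    drives cluster medians toward the zero global median, eliminating
    features whose medians all reach zero.  Simplified sketch:
    first-k-rows initialization and the soft-threshold update are
    assumptions, not the paper's method."""
    X = X - np.median(X, axis=0)            # shift global median to zero
    C = X[:k].astype(float).copy()          # assumed initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # assign each point to its nearest cluster median in the 1-norm
        d = np.abs(X[:, None, :] - C[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                m = np.median(pts, axis=0)
                # soft-threshold: small medians are driven exactly to 0
                C[j] = np.sign(m) * np.maximum(np.abs(m) - lam, 0.0)
    # features still used by at least one cluster median
    surviving = np.flatnonzero(np.any(C != 0, axis=0))
    return C, labels, surviving
```

    Sweeping `lam` upward zeroes out features one by one, which is the mechanism behind the error curve the abstract describes.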

    Sorting and Selecting in Rounds

    We present upper bounds for sorting and selecting the median in a fixed number of rounds. These bounds match the known lower bounds to within logarithmic factors. They also have the merit of being “explicit modulo expansion”; that is, probabilistic arguments are used only to obtain expanding graphs, and when explicit constructions for such graphs are found, explicit algorithms for sorting and selecting will follow. Using the best currently available explicit constructions for expanding graphs, we present the best currently known explicit algorithms for sorting and selecting in rounds

    Worst-Case Efficient Sorting with QuickMergesort

    The two most prominent solutions for the sorting problem are Quicksort and Mergesort. While Quicksort is very fast on average, Mergesort additionally gives worst-case guarantees, but needs extra space for a linear number of elements. Worst-case efficient in-place sorting, however, remains a challenge: the standard solution, Heapsort, suffers from bad cache behavior and is also not overly fast for in-cache instances. In this work we present median-of-medians QuickMergesort (MoMQuickMergesort), a new variant of QuickMergesort, which combines Quicksort with Mergesort, allowing the latter to be implemented in place. Our new variant applies the median-of-medians algorithm for selecting pivots in order to circumvent the quadratic worst case. Indeed, we show that it uses at most n log n + 1.6n comparisons for n large enough. We experimentally confirm the theoretical estimates and show that the new algorithm outperforms Heapsort by far and is only around 10% slower than Introsort (the std::sort implementation of libstdc++), which has a rather poor guarantee for the worst case. We also simulate the worst case, which is only around 10% slower than the average case. In particular, the new algorithm is a natural candidate to replace Heapsort as a worst-case stopper in Introsort.
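    The median-of-medians pivot selection that MoMQuickMergesort relies on can be sketched recursively. This is a plain out-of-place illustration of the classic rule; the paper's contribution is applying it within an in-place QuickMergesort, which this sketch does not attempt:

```python
def median_of_medians(seq):
    """Deterministic pivot selection: split seq into groups of five,
    take each group's median, and recursively take the median of
    those medians.  The returned pivot is guaranteed to avoid the
    quadratic worst case when used inside a selection/sorting loop."""
    if len(seq) <= 5:
        return sorted(seq)[len(seq) // 2]
    # median of each group of (at most) five elements
    medians = [sorted(seq[i:i + 5])[len(seq[i:i + 5]) // 2]
               for i in range(0, len(seq), 5)]
    return median_of_medians(medians)
```

    A pivot chosen this way has linearly many elements on each side, which is what bounds the recursion depth and yields the worst-case comparison guarantee stated above.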