
    ESTSS at 20 years: "a phoenix gently rising from a lava flow of European trauma"

    Roderick J. Ørner, who was President between 1997 and 1999, traces the phoenix-like origins of the European Society for Traumatic Stress Studies (ESTSS) from an informal business meeting called during the 1st European Conference on Traumatic Stress (ECOTS) in 1987 to its emergence as a formally constituted society. He dwells on the challenges of tending a trauma society within a continent where trauma has been and remains endemic. ESTSS successes are noted, along with a number of personal reflections on activities that give rise to concern for the society's present as well as its future prospects. Denial of survivors' experiences and turning away from survivors' narratives by reframing their experiences to accommodate helpers' theory-driven imperatives are viewed with alarm. Arguments are presented for making human rights, memory, and ethics core elements of a distinctive European psychotraumatology, which will secure current ESTSS viability and future integrity.

    Spectral Sparsification and Regret Minimization Beyond Matrix Multiplicative Updates

    In this paper, we provide a novel construction of the linear-sized spectral sparsifiers of Batson, Spielman and Srivastava [BSS14]. While previous constructions required $\Omega(n^4)$ running time [BSS14, Zou12], our sparsification routine can be implemented in almost-quadratic running time $O(n^{2+\varepsilon})$. The fundamental conceptual novelty of our work is the leveraging of a strong connection between sparsification and a regret minimization problem over density matrices. This connection was known to provide an interpretation of the randomized sparsifiers of Spielman and Srivastava [SS11] via the application of matrix multiplicative weight updates (MWU) [CHS11, Vis14]. In this paper, we explain how matrix MWU naturally arises as an instance of the Follow-the-Regularized-Leader framework and generalize this approach to yield a larger class of updates. This new class allows us to accelerate the construction of linear-sized spectral sparsifiers, and gives novel insights on the motivation behind Batson, Spielman and Srivastava [BSS14].
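    For readers unfamiliar with the matrix multiplicative weights update mentioned here, the following is a minimal sketch of the basic MWU step over density matrices, viewed as Follow-the-Regularized-Leader with an entropic regularizer. The loss matrices, step size and function names are illustrative assumptions; this is not the paper's accelerated sparsification routine.

```python
import numpy as np
from scipy.linalg import expm

def matrix_mwu(loss_matrices, eta=0.1):
    """Matrix multiplicative weights update (FTRL with entropic regularizer).

    At each step the density matrix is X_t proportional to exp(-eta * sum of
    past loss matrices).  This is only a didactic sketch of the update rule
    discussed in the abstract, not the linear-sized sparsification routine.
    """
    n = loss_matrices[0].shape[0]
    cumulative = np.zeros((n, n))
    densities = []
    for L in loss_matrices:
        X = expm(-eta * cumulative)   # matrix exponential of accumulated losses
        X /= np.trace(X)              # normalise to unit trace (density matrix)
        densities.append(X)
        cumulative += L               # accumulate the newly observed loss
    return densities

# Toy usage with random symmetric loss matrices (placeholders).
rng = np.random.default_rng(0)
losses = [(A + A.T) / 2 for A in rng.standard_normal((5, 4, 4))]
out = matrix_mwu(losses)
```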

    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects in order to improve detection capacity, as well as analyze various properties of the contextual object detection problem. To precisely calculate context-based probabilities of objects, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically utilize approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability based on their raw detector responses. This scheme is shown to improve detection results and provides unique insights about the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases where the context and the detector strongly disagree, this learning becomes virtually impossible for the purposes of improving the results of an object detector. Finally, we demonstrate improved detection results by applying our approach to the PASCAL VOC and COCO datasets.
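    The "exact probabilistic setting" restricted to a few relevant locations can be illustrated by brute-force marginalisation over a handful of binary presence variables. The likelihood model, context potentials and names below are made up for illustration and are not the authors' model.

```python
import itertools
import numpy as np

def contextual_marginal(scores, prior, pairwise, target=0):
    """Exact marginal P(object_target present | raw detector scores) by enumeration.

    scores   : raw detector scores s_i, modelled here with a made-up logistic
               likelihood P(s_i | presence) -- an assumption for illustration.
    prior    : assumed prior presence probabilities for the K locations.
    pairwise : assumed K x K matrix of co-occurrence (context) weights.
    Enumeration is exact and cheap precisely because only a few relevant
    neighbouring locations (small K) are considered, as in the abstract.
    """
    K = len(scores)
    sigm = 1 / (1 + np.exp(-scores))
    num = den = 0.0
    for config in itertools.product([0, 1], repeat=K):
        x = np.array(config)
        lik = np.prod(np.where(x, sigm, 1 - sigm))      # detector likelihood
        p = np.prod(np.where(x, prior, 1 - prior))      # prior on presences
        context = np.exp(x @ pairwise @ x / 2)          # co-occurrence potential
        w = lik * p * context
        den += w
        if x[target] == 1:
            num += w
    return num / den

# Toy usage: 3 locations with symmetric made-up context weights.
scores = np.array([2.0, -1.0, 0.5])
prior = np.array([0.3, 0.3, 0.3])
pairwise = np.array([[0.0, 1.0, 0.2], [1.0, 0.0, 0.0], [0.2, 0.0, 0.0]])
print(contextual_marginal(scores, prior, pairwise))
```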

    Etching of random solids: hardening dynamics and self-organized fractality

    When a finite volume of an etching solution comes in contact with a disordered solid, a complex dynamics of the solid-solution interface develops. Since only the weak parts are corroded, the solid surface hardens progressively. If the etchant is consumed in the chemical reaction, the corrosion dynamics slows down and stops spontaneously, leaving a fractal solid surface which reveals the latent percolation criticality hidden in any random system. Here we introduce and study, both analytically and numerically, a simple model for this phenomenon. In this way we obtain a detailed description of the process in terms of percolation theory. In particular, we explain the mechanism of hardening of the surface and connect it to Gradient Percolation. (Comment: LaTeX, aipproc, 6 pages, 3 figures; Proceedings of the 6th Granada Seminar on Computational Physics.)
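    A minimal lattice simulation in the spirit of the model just described (random site strengths, a finite amount of etchant consumed each time a weak surface site dissolves) might look like the sketch below. Lattice size, boundary handling and parameter values are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def etch(L=64, volume=2000, n0=1500, seed=0):
    """Toy etching of a random solid on an L x L lattice.

    Each site gets a random "resistance" in [0, 1].  The solution starts with
    n0 etchant particles in volume `volume`; a surface site dissolves if its
    resistance is below the current concentration n/volume, consuming one
    particle.  Corrosion therefore slows down and stops by itself, as in the
    abstract.  All parameters are made-up illustrative values, and the
    periodic rolls used to find surface sites are a simplification.
    """
    rng = np.random.default_rng(seed)
    resistance = rng.random((L, L))
    solid = np.ones((L, L), dtype=bool)
    solid[0, :] = False                      # top row starts as solution
    n = n0
    changed = True
    while changed and n > 0:
        changed = False
        concentration = n / volume
        dissolved = ~solid
        neighbour = (np.roll(dissolved, 1, 0) | np.roll(dissolved, -1, 0) |
                     np.roll(dissolved, 1, 1) | np.roll(dissolved, -1, 1))
        surface = solid & neighbour          # solid sites touching the solution
        weak = surface & (resistance < concentration)
        k = int(weak.sum())
        if k > 0:
            solid[weak] = False              # corrode the weak surface sites
            n -= k                           # etchant is consumed
            changed = True
    return solid

final = etch()
print("remaining solid fraction:", final.mean())
```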

    Optimization in High Dimensions via Accelerated, Parallel, and Proximal Coordinate Descent

    We propose a new randomized coordinate descent method for minimizing the sum of convex functions each of which depends on a small number of coordinates only. Our method (APPROX) is simultaneously Accelerated, Parallel and PROXimal; this is the first time such a method has been proposed. In the special case when the number of processors is equal to the number of coordinates, the method converges at the rate $2\bar{\omega}\bar{L}R^2/(k+1)^2$, where $k$ is the iteration counter, $\bar{\omega}$ is a data-weighted average degree of separability of the loss function, $\bar{L}$ is the average of the Lipschitz constants associated with the coordinates and individual functions in the sum, and $R$ is the distance of the initial point from the minimizer. We show that the method can be implemented without the need to perform full-dimensional vector operations, which are the major bottleneck of accelerated coordinate descent and otherwise render it impractical. The fact that the method depends on the average degree of separability, and not on the maximum degree, can be attributed to the use of new safe large stepsizes, leading to an improved expected separable overapproximation (ESO). These are of independent interest and can be utilized in all existing parallel randomized coordinate descent algorithms based on the concept of ESO. In special cases, our method recovers several classical and recent algorithms such as simple and accelerated proximal gradient descent, as well as serial, parallel and distributed versions of randomized block coordinate descent. Due to this flexibility, APPROX has been used successfully by the authors in a graduate class setting as a modern introduction to deterministic and randomized proximal gradient methods. Our bounds match or improve on the best known bounds for each of the methods APPROX specializes to. Our method has applications in a number of areas, including machine learning, submodular optimization, and linear and semidefinite programming.
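    For orientation, here is a minimal sketch of one of the classical special cases the abstract says APPROX recovers, namely serial randomized proximal coordinate descent on a smooth-plus-ℓ1 toy objective. The objective, step sizes and data are placeholders, and this is not the accelerated parallel APPROX method itself.

```python
import numpy as np

def randomized_prox_coordinate_descent(A, b, lam=0.1, iters=1000, seed=0):
    """Serial randomized proximal coordinate descent for
        min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    A plain (non-accelerated, non-parallel) special case, shown only to fix
    ideas; the coordinate-wise Lipschitz constants are the squared column
    norms of A, and the ℓ1 prox is soft-thresholding.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    lips = (A ** 2).sum(axis=0)              # coordinate-wise Lipschitz constants
    x = np.zeros(n)
    residual = A @ x - b                     # kept up to date incrementally
    for _ in range(iters):
        i = rng.integers(n)                  # sample a coordinate uniformly
        grad_i = A[:, i] @ residual          # partial derivative at x
        z = x[i] - grad_i / lips[i]          # gradient step on coordinate i
        t = lam / lips[i]
        new_xi = np.sign(z) * max(abs(z) - t, 0.0)   # soft-thresholding prox
        residual += A[:, i] * (new_xi - x[i])
        x[i] = new_xi
    return x

# Toy usage with a random least-squares problem (placeholder data).
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_hat = randomized_prox_coordinate_descent(A, b)
```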

    Robustness and Generalization

    We derive generalization bounds for learning algorithms based on their robustness: the property that if a testing sample is "similar" to a training sample, then the testing error is close to the training error. This provides a novel approach, different from the complexity or stability arguments, to study generalization of learning algorithms. We further show that a weak notion of robustness is both sufficient and necessary for generalizability, which implies that robustness is a fundamental property for learning algorithms to work.
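    A rough empirical illustration of the robustness notion defined above might compare training and testing errors of samples falling in the same cell of a partition of the input space. The uniform-grid partition and per-sample losses below are assumptions made for illustration, not the paper's formal definitions or bounds.

```python
import numpy as np

def empirical_robustness_gap(train_X, train_err, test_X, test_err, n_bins=4):
    """Largest gap between average train and test error within a shared cell.

    The input space is partitioned into a uniform grid of n_bins cells per
    dimension (an assumption); for each cell containing both training and
    testing samples, the average errors are compared.
    """
    lo, hi = train_X.min(0), train_X.max(0)

    def cells(X):
        idx = ((X - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
        idx = np.clip(idx, 0, n_bins - 1)
        return [tuple(row) for row in idx]

    train_cells, test_cells = cells(train_X), cells(test_X)
    gaps = []
    for c in set(train_cells) & set(test_cells):
        tr = np.mean([e for cc, e in zip(train_cells, train_err) if cc == c])
        te = np.mean([e for cc, e in zip(test_cells, test_err) if cc == c])
        gaps.append(abs(te - tr))
    return max(gaps) if gaps else 0.0

# Toy usage with random features and random 0/1 losses (placeholders).
rng = np.random.default_rng(0)
Xtr, Xte = rng.random((100, 2)), rng.random((40, 2))
print(empirical_robustness_gap(Xtr, rng.integers(0, 2, 100),
                               Xte, rng.integers(0, 2, 40)))
```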

    Radiographic Image Enhancement by Wiener Decorrelation

    The primary focus of the application of image processing to radiography is the problem of segmentation. The general segmentation problem has been attacked on a broad front [1, 2], and thresholding, in particular, is a popular method [1, 3-6]. Unfortunately, geometric unsharpness destroys the crisp edges needed for unambiguous decisions, and this difficulty can be treated as a filtering problem in which the objective is to devise a high-pass (sharpening) filter. This approach has been studied for more than 20 years [7-13].
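    As a rough illustration of the high-pass (sharpening) filtering idea described above, the sketch below applies simple unsharp masking; the blur width and gain are illustrative values, and this is a generic sharpening filter rather than the Wiener-decorrelation method of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """High-pass sharpening by unsharp masking.

    sharpened = image + amount * (image - blurred): the low-pass (blurred)
    component is subtracted to leave the high-pass detail, which is then
    added back with a gain.  sigma and amount are illustrative values.
    """
    image = image.astype(float)
    blurred = gaussian_filter(image, sigma=sigma)
    highpass = image - blurred
    return image + amount * highpass

# Toy usage on a synthetic image with a soft (geometrically unsharp) edge.
x = np.linspace(-1, 1, 256)
img = gaussian_filter((x[None, :] > 0).astype(float) * 100.0, sigma=5)
sharp = unsharp_mask(img)
```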

    A Comparison of Machine Learning Methods for Cross-Domain Few-Shot Learning

    We present an empirical evaluation of machine learning algorithms in cross-domain few-shot learning based on a fixed pre-trained feature extractor. Experiments were performed in five target domains (CropDisease, EuroSAT, Food101, ISIC and ChestX) and using two feature extractors: a ResNet10 model trained on a subset of ImageNet known as miniImageNet and a ResNet152 model trained on the ILSVRC 2012 subset of ImageNet. Commonly used machine learning algorithms including logistic regression, support vector machines, random forests, nearest neighbour classification, naïve Bayes, and linear and quadratic discriminant analysis were evaluated on the extracted feature vectors. We also evaluated classification accuracy when subjecting the feature vectors to normalisation using p-norms. Algorithms originally developed for the classification of gene expression data—the nearest shrunken centroid algorithm and LDA ensembles obtained with random projections—were also included in the experiments, in addition to a cosine similarity classifier that has recently proved popular in few-shot learning. The results enable us to identify algorithms, normalisation methods and pre-trained feature extractors that perform well in cross-domain few-shot learning. We show that the cosine similarity classifier and ℓ²-regularised 1-vs-rest logistic regression are generally the best-performing algorithms. We also show that algorithms such as LDA yield consistently higher accuracy when applied to ℓ²-normalised feature vectors. In addition, all classifiers generally perform better when extracting feature vectors using the ResNet152 model instead of the ResNet10 model.
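    A rough sketch of the pipeline described above (frozen pre-trained ResNet152 features, ℓ² normalisation, then ℓ²-regularised logistic regression) could look as follows. The episode construction, preprocessing and hyper-parameters are placeholders rather than the paper's exact protocol, and a recent torchvision weights API is assumed.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import normalize

# Frozen ResNet152 feature extractor with ImageNet weights; the classification
# head is replaced by the identity so the backbone outputs feature vectors.
backbone = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images):
    """images: float tensor of shape (N, 3, 224, 224), already preprocessed."""
    return backbone(images).cpu().numpy()

def few_shot_classifier(support_feats, support_labels):
    """ℓ²-normalise features, then fit ℓ²-regularised logistic regression
    (the paper evaluates a 1-vs-rest variant); C=1.0 is a placeholder,
    not a tuned value."""
    X = normalize(support_feats)             # row-wise ℓ² normalisation
    clf = LogisticRegression(max_iter=1000, C=1.0)
    return clf.fit(X, support_labels)

# Toy usage with random tensors standing in for a 5-way 5-shot episode.
images = torch.randn(25, 3, 224, 224)
labels = np.repeat(np.arange(5), 5)
clf = few_shot_classifier(extract_features(images), labels)
```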