
    Therapists’ experiences of ‘internet exposure’ in the therapeutic relationship: an interpretative phenomenological analysis

    The aim of this research was to investigate the effect on the therapist and the therapeutic relationship when clients obtained personal information about their therapist online and went on to disclose that information in a session. As social media has grown in popularity, many people have become accustomed to publishing information about themselves and others on the web. In this context, it is harder than ever for therapists to keep their personal and professional lives separate. Through understanding this phenomenon further, the research aimed to provide practitioners with recommendations that would inform their practice. Interpretative Phenomenological Analysis (IPA) was the chosen methodology because it offers a framework for exploring individuals’ lived experiences and therefore provides an in-depth and rich understanding of the phenomenon being studied. Semi-structured, one-to-one interviews were conducted with six participants. Each participant was interviewed twice, with the second interview taking place eight weeks after the first; this second interview provided an opportunity to capture further reflections that may have emerged after the first. Participants were qualified counsellors, psychotherapists and one psychologist who had had the experience of a client disclosing information about them that was obtained online – information that the therapist would not have willingly revealed to the client. Four superordinate themes emerged during analysis: (1) Tension in peacetime; (2) Breach of defences; (3) Weapons; (4) The aftermath: renegotiation with client and self. The war metaphor represents the struggle experienced by the participants and follows the journey from pre- to post-client disclosure. The analysis uncovered feelings of exposure, vulnerability and shame for the participants. These feelings made it difficult to navigate the therapeutic relationship, which was profoundly changed in both positive and negative ways. The main “weapon” therapists used to defend themselves and the relationship was avoidance of the issue. This study therefore calls for more research and training on the phenomenon, in order to supply practitioners with the necessary tools for navigating this complex terrain.

    On Model-Based RIP-1 Matrices

    The Restricted Isometry Property (RIP) is a fundamental property of a matrix enabling sparse recovery. Informally, an m x n matrix satisfies RIP of order k in the l_p norm if ||Ax||_p \approx ||x||_p for any vector x that is k-sparse, i.e., that has at most k non-zeros. The minimal number of rows m necessary for the property to hold has been extensively investigated, and tight bounds are known. Motivated by signal processing models, a recent work of Baraniuk et al. has generalized this notion to the case where the support of x must belong to a given model, i.e., a given family of supports. This more general notion is much less understood, especially for norms other than l_2. In this paper we present tight bounds for the model-based RIP property in the l_1 norm. Our bounds hold for the two most frequently investigated models: tree-sparsity and block-sparsity. We also show implications of our results for sparse recovery problems.
    Comment: Version 3 corrects a few errors present in the earlier version. In particular, it states and proves correct upper and lower bounds for the number of rows in RIP-1 matrices for the block-sparse model. The bounds are of the form k log_b n, not k log_k n as stated in the earlier version.
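    The l_1 flavour of RIP can be checked empirically. A minimal sketch, assuming a standard random construction (a sparse binary matrix with d ones per column, scaled by 1/d — the classic expander-adjacency recipe for RIP-1, not the paper's model-based construction; all sizes below are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m, d, k = 200, 60, 8, 5  # illustrative sizes, not the paper's tight bounds

    # Each column of A has exactly d ones, scaled by 1/d, so every column
    # has unit l_1 norm. With high probability such a matrix is the (scaled)
    # adjacency matrix of an expander, the standard RIP-1 construction.
    A = np.zeros((m, n))
    for j in range(n):
        A[rng.choice(m, size=d, replace=False), j] = 1.0 / d

    # Empirically measure ||Ax||_1 / ||x||_1 over random k-sparse vectors x.
    # RIP-1 predicts ratios in [1 - 2*eps, 1] for some small eps.
    ratios = []
    for _ in range(100):
        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        ratios.append(np.abs(A @ x).sum() / np.abs(x).sum())

    print(min(ratios), max(ratios))
    ```

    The upper bound is exact by the triangle inequality (each column sums to one); only the lower bound depends on the expansion of the underlying bipartite graph.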

    Sparsity and Incoherence in Compressive Sampling

    We consider the problem of reconstructing a sparse signal x^0 \in R^n from a limited number of linear measurements. Given m randomly selected samples of U x^0, where U is an orthonormal matrix, we show that \ell_1 minimization recovers x^0 exactly when the number of measurements exceeds m \geq Const \cdot \mu^2(U) \cdot S \cdot \log n, where S is the number of nonzero components in x^0, and \mu is the largest entry in U properly normalized: \mu(U) = \sqrt{n} \cdot \max_{k,j} |U_{k,j}|. The smaller \mu, the fewer samples needed. The result holds for "most" sparse signals x^0 supported on a fixed (but arbitrary) set T. Given T, if the sign of x^0 for each nonzero entry on T and the observed values of U x^0 are drawn at random, the signal is recovered with overwhelming probability. Moreover, there is a sense in which this is nearly optimal, since any method succeeding with the same probability would require just about this many samples.
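    The recovery procedure above can be sketched numerically. A minimal basis-pursuit example, assuming an orthonormal DCT matrix as U (the DCT is incoherent with the standard basis, so \mu(U) is small) and solving min ||x||_1 s.t. Ax = b as a linear program via the usual split x = u - v with u, v >= 0; the sizes are illustrative, not the paper's constants:

    ```python
    import numpy as np
    from scipy.fft import dct
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, S = 64, 48, 3  # illustrative sizes

    # Orthonormal DCT matrix U: mu(U) = sqrt(n) * max|U_{k,j}| is small.
    U = dct(np.eye(n), norm='ortho', axis=0)

    # S-sparse signal x0 with random support and random signs.
    x0 = np.zeros(n)
    support = rng.choice(n, size=S, replace=False)
    x0[support] = rng.choice([-1.0, 1.0], size=S)

    # m randomly selected samples of U x0.
    rows = rng.choice(n, size=m, replace=False)
    A, b = U[rows], U[rows] @ x0

    # Basis pursuit as an LP: minimize sum(u + v) s.t. A(u - v) = b, u, v >= 0,
    # which is equivalent to minimizing ||x||_1 subject to Ax = b.
    c = np.ones(2 * n)
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
    x_hat = res.x[:n] - res.x[n:]

    print(np.max(np.abs(x_hat - x0)))  # small when recovery succeeds
    ```

    With m well above \mu^2(U) \cdot S \cdot \log n, the LP typically returns x^0 exactly (up to solver tolerance); shrinking m below the threshold makes recovery fail with noticeable probability.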