345 research outputs found

    The elusive nature of the blocking effect: 15 failures to replicate

    With the discovery of the blocking effect, learning theory took a huge leap forward, because blocking provided a crucial clue that surprise is what drives learning. This in turn stimulated the development of novel association-formation theories of learning. Eventually, the ability to explain blocking became nothing short of a touchstone for the validity of any theory of learning, including propositional and other nonassociative theories. The abundance of publications reporting a blocking effect and the importance attributed to it suggest that it is a robust phenomenon. Yet, in the current article we report 15 failures to observe a blocking effect despite the use of procedures that are highly similar or identical to those used in published studies. These failures raise doubts regarding the canonical nature of the blocking effect and call for a reevaluation of its central status in theories of learning. They may also illustrate how publication bias shapes our perspective on the robustness and reliability of seemingly established effects in the psychological literature.

    Deep Spatiotemporal Clutter Filtering of Transthoracic Echocardiographic Images Using a 3D Convolutional Auto-Encoder

    This study presents a deep convolutional auto-encoder network for filtering reverberation artifacts from transthoracic echocardiographic (TTE) image sequences. Given the spatiotemporal nature of these artifacts, the filtering network was built using 3D convolutional layers to suppress the clutter patterns throughout the cardiac cycle. The network was designed to take advantage of: i) an attention mechanism to focus primarily on cluttered regions and ii) residual learning to preserve the fine structures of the image frames. To train the deep network, a diverse set of artifact patterns was simulated, and the simulated patterns were superimposed onto artifact-free ultra-realistic synthetic TTE sequences from six ultrasound vendors to generate the input of the filtering network. The artifact-free sequences served as ground truth. Performance of the filtering network was evaluated using unseen synthetic as well as in-vivo artifactual sequences. Satisfactory results obtained on the latter dataset confirmed the good generalization performance of the proposed network, which was trained using the synthetic sequences and simulated artifact patterns. Suitability of the clutter-filtered sequences for further processing was assessed by computing segmental strain curves from them. The results showed that the large discrepancy between the strain profiles computed from the cluttered segments and their corresponding segments in the clutter-free images was significantly reduced after filtering the sequences with the proposed network. The trained deep network can process an artifactual TTE sequence in a fraction of a second and can be used for real-time clutter filtering. Moreover, it can improve the precision of the clinical indexes computed from the TTE sequences. The source code of the proposed method is available at: https://github.com/MahdiTabassian/Deep-Clutter-Filtering/tree/main. Comment: 18 pages, 14 figures.
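The training-data strategy above (simulated clutter superimposed onto artifact-free sequences, with the clean sequences as ground truth) can be sketched as follows. The Gaussian-blob clutter model, the shapes and the amplitude range are illustrative assumptions, not the paper's simulation pipeline:

```python
import numpy as np

def make_training_pair(clean_seq, rng, max_amp=0.5):
    """Superimpose a simulated, temporally stationary clutter patch onto a
    clean TTE sequence of shape (T, H, W), returning (input, ground truth).
    The Gaussian-blob clutter model and all parameters are illustrative."""
    T, H, W = clean_seq.shape
    yy, xx = np.mgrid[0:H, 0:W]
    cy, cx = rng.integers(0, H), rng.integers(0, W)
    sigma = max(H, W) / 8.0
    # a stationary blob mimics a near-field reverberation haze that
    # persists across the cardiac cycle (hence the shared (H, W) mask)
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
    amp = rng.uniform(0.2, max_amp)
    cluttered = np.clip(clean_seq + amp * blob[None, :, :], 0.0, 1.0)
    return cluttered, clean_seq

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(8, 32, 32))  # stand-in clean sequence
noisy, target = make_training_pair(clean, rng)
```

A filtering network trained on such pairs can then either predict the clean frames directly or, in the residual-learning formulation the abstract mentions, predict the clutter component and subtract it from the input.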

    DEEPBEAS3D: Deep Learning and B-Spline Explicit Active Surfaces

    Deep learning-based automatic segmentation methods have become the state of the art. However, they are often not robust enough for direct clinical application, as domain shifts between training and testing data affect their performance. Failure in automatic segmentation can cause sub-optimal results that require correction. To address these problems, we propose a novel 3D extension of an interactive segmentation framework that represents a segmentation from a convolutional neural network (CNN) as a B-spline explicit active surface (BEAS). BEAS ensures that segmentations are smooth in 3D space, increasing anatomical plausibility, while allowing the user to precisely edit the 3D surface. We apply this framework to the task of 3D segmentation of the anal sphincter complex (AS) from transperineal ultrasound (TPUS) images, and compare it to the clinical tool used in the pelvic floor disorder clinic (4D View VOCAL, GE Healthcare; Zipf, Austria). Experimental results show that: 1) the proposed framework gives the user explicit control of the surface contour; 2) the perceived workload, measured via the NASA-TLX index, was reduced by 30% compared to VOCAL; and 3) it required 70% (170 seconds) less user time than VOCAL (p < 0.00001). Comment: 4 pages, 3 figures, 1 table, conference.
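The key idea behind an explicit active surface is that the boundary is written as a function of its parameters (e.g., a radius given by a B-spline over angular coordinates), so smoothness is built in and edits to a few control values deform the shape smoothly. A minimal 2D-contour analogue, with an assumed uniform periodic cubic B-spline and hypothetical function names (the paper's BEAS operates on 3D surfaces):

```python
import numpy as np

def cubic_bspline_basis(u):
    """Uniform cubic B-spline basis B3(u); support is |u| < 2."""
    u = np.abs(u)
    out = np.zeros_like(u)
    near, far = u < 1.0, (u >= 1.0) & (u < 2.0)
    out[near] = (4.0 - 6.0 * u[near] ** 2 + 3.0 * u[near] ** 3) / 6.0
    out[far] = (2.0 - u[far]) ** 3 / 6.0
    return out

def bspline_radius(ctrl, thetas):
    """Closed-contour radius r(theta) as a periodic uniform cubic B-spline
    over n control radii -- an explicit, inherently smooth representation."""
    n = len(ctrl)
    t = thetas / (2.0 * np.pi) * n              # spline parameter in [0, n)
    r = np.zeros_like(thetas)
    for k in range(n):
        u = (t - k + n / 2.0) % n - n / 2.0     # periodic (circular) offset
        r += ctrl[k] * cubic_bspline_basis(u)
    return r
```

Because the basis functions are non-negative and sum to one, the contour always stays within the range of the control radii, and moving one control radius bends only a local stretch of the boundary, which is what makes precise user editing practical.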

    Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalising neural network

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics that are important for pelvic floor disorder assessment. In this work, we present a fully automatic method that uses a convolutional neural network (CNN) to outline the levator hiatus in a 2D image extracted from a 3D ultrasound volume. In particular, our method uses the recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalising activation function, applied here for the first time in medical imaging with a CNN. SELU has important advantages, such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset of 91 images from 35 patients during Valsalva, contraction and rest, all labelled by three operators, is used for training and evaluation in a leave-one-patient-out cross-validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, performance equivalent to that of the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalisation. We conclude that the proposed fully automatic method achieves accuracy equivalent to a previous semi-automatic approach in segmenting the pelvic floor levator hiatus.
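SELU itself is a fixed, parameter-free activation, which is why it needs neither learnable parameters nor batch statistics. A minimal sketch of the function and its self-normalising fixed point (the two constants are the standard ones from the SELU paper; the demonstration setup is our own):

```python
import numpy as np

# Fixed SELU constants (Klambauer et al., 2017); the activation has no
# learnable parameters and uses no mini-batch statistics.
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805

def selu(x):
    """scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    return SCALE * np.where(x > 0.0, x, ALPHA * np.expm1(x))

# Self-normalising fixed point: roughly zero-mean, unit-variance inputs map
# to roughly zero-mean, unit-variance outputs, which is what removes the
# need for batch normalisation layers.
rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
y = selu(x)
```

Since the normalisation comes from the activation itself rather than from batch statistics, memory-hungry large batches are not required, matching the memory-constraint advantage noted in the abstract.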

    Three-dimensional myocardial strain estimation from volumetric ultrasound: experimental validation in an animal model

    Although real-time three-dimensional echocardiography has the potential to allow more accurate assessment of global and regional ventricular dynamics than traditional two-dimensional ultrasound examinations, it still requires rigorous testing and validation against other accepted techniques if it is to break through as a standard examination in routine clinical practice. Very few studies have looked at validation of regional functional indices in an in-vivo context. The aim of the present study was therefore to validate, at a segmental level in an animal model, a previously proposed 3D strain estimation method based on elastic registration of subsequent volumes. Volumetric images were acquired with a GE Vivid7 ultrasound system in five open-chest sheep instrumented with ultrasonic microcrystals. Radial (epsilon(RR)), longitudinal (epsilon(LL)) and circumferential (epsilon(CC)) strain were estimated during four stages: at rest, during esmolol and dobutamine infusion, and during acute ischemia. Moderate correlations were obtained for epsilon(LL) (r = 0.63; p < 0.01) and epsilon(CC) (r = 0.60; p = 0.01), whereas no significant radial correlation was found. These findings are comparable to the performance of current state-of-the-art commercial 3D speckle tracking methods.
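Directional strains such as epsilon(RR), epsilon(LL) and epsilon(CC) are obtained by projecting a strain tensor, derived from the estimated deformation, onto local cardiac axes. A generic numpy sketch using the Green-Lagrange strain (the example deformation and direction vectors are illustrative; this is not the paper's registration-based pipeline):

```python
import numpy as np

def green_lagrange_strain(F):
    """Green-Lagrange strain tensor E = 0.5 * (F^T F - I) from a
    deformation gradient F (here 3x3)."""
    return 0.5 * (F.T @ F - np.eye(F.shape[0]))

def directional_strain(E, d):
    """Normal strain along unit direction d: d^T E d (e.g. d = the local
    radial, longitudinal or circumferential axis of the myocardium)."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    return float(d @ E @ d)

# Illustrative deformation: a uniform 10% stretch along the first axis,
# giving a normal strain of 0.5 * (1.1**2 - 1) = 0.105 in that direction.
F = np.diag([1.1, 1.0, 1.0])
E = green_lagrange_strain(F)
```

In a registration-based method, F would come from the spatial gradient of the displacement field estimated between subsequent volumes, evaluated per myocardial segment.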

    The role of the image phase in cardiac strain imaging

    This paper reviews our most recent contributions in the field of cardiac deformation imaging, which include a motion estimation framework based on the conservation of the image phase over time and an open pipeline to benchmark algorithms for cardiac strain imaging in 2D and 3D ultrasound. The paper also presents an original evaluation of the proposed motion estimation technique based on the new benchmarking pipeline.
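The phase-conservation idea — the image phase moves with the tissue, so displacement can be read off the phase difference between frames — can be illustrated in 1D with an analytic-signal phase estimate. The monochromatic test signal and function names are assumptions for this toy example; the paper's framework operates on 2D/3D image phase:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (numpy-only Hilbert transform);
    assumes an even-length input."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = h[len(x) // 2] = 1.0
    h[1:len(x) // 2] = 2.0   # double positive frequencies, drop negative
    return np.fft.ifft(X * h)

def phase_shift_estimate(f0, f1, carrier_freq):
    """Estimate a constant sub-sample shift d between two frames from
    phase conservation: phi1(x) = phi0(x - d)  =>  d = (phi0 - phi1)/omega."""
    dphi = np.angle(analytic_signal(f0) * np.conj(analytic_signal(f1)))
    return np.median(dphi) / (2.0 * np.pi * carrier_freq)

# Monochromatic 1D "frames": 16 cycles over 256 samples, shifted by 1.7
# samples; the phase difference recovers the sub-sample shift.
n, freq, d_true = 256, 0.0625, 1.7
x = np.arange(n)
f0 = np.cos(2.0 * np.pi * freq * x)
f1 = np.cos(2.0 * np.pi * freq * (x - d_true))
d_est = phase_shift_estimate(f0, f1, freq)
```

The appeal of phase over intensity is its robustness to amplitude changes between frames; real methods use spatially local phase (e.g. from band-pass quadrature filters) rather than a single global carrier as assumed here.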