    An Effective Ultrasound Video Communication System Using Despeckle Filtering and HEVC

    The recent emergence of the high-efficiency video coding (HEVC) standard promises to deliver significant bitrate savings over current and prior video compression standards, while also supporting higher resolutions that can meet clinical acquisition spatiotemporal settings. The effective application of HEVC to medical ultrasound necessitates a careful evaluation against strict clinical criteria that guarantee clinical quality will not be sacrificed in the compression process. Furthermore, the potential use of despeckle filtering prior to compression opens the possibility of significant additional bitrate savings that have not been previously considered. This paper provides a thorough comparison of the use of MPEG-2, H.263, MPEG-4, H.264/AVC, and HEVC for compressing atherosclerotic plaque ultrasound videos. For the comparisons, we use both subjective and objective criteria based on plaque structure and motion. For comparable clinical video quality, experimental evaluation on ten videos demonstrates that HEVC reduces bitrate requirements by as much as 33.2% compared to H.264/AVC and by up to 71% compared to MPEG-2. The use of despeckle filtering prior to compression is also investigated as a method that can reduce bitrate requirements through the removal of higher-frequency components without sacrificing clinical quality. Based on the use of three despeckle filtering methods with both H.264/AVC and HEVC, we find that prior filtering can yield significant additional bitrate savings. The best-performing despeckle filter (DsFlsmv) achieves bitrate savings of 43.6% and 39.2% compared to standard nonfiltered HEVC and H.264/AVC encoding, respectively.
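    The despeckle-then-encode pipeline the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' DsFlsmv implementation: it substitutes a generic Lee-style local-statistics despeckle filter and pipes the filtered grayscale frames into ffmpeg's libx265 HEVC encoder; the window size and CRF value are arbitrary assumptions.

```python
import subprocess

import cv2  # OpenCV, for video I/O and box filtering
import numpy as np


def lee_despeckle(frame: np.ndarray, win: int = 5) -> np.ndarray:
    """Local-statistics (Lee-style) despeckle filter.

    Stand-in for the paper's DsFlsmv filter: each pixel is pulled toward
    the local mean in proportion to how much of the local variance
    exceeds a crude global speckle-noise estimate.
    """
    img = frame.astype(np.float32)
    mean = cv2.blur(img, (win, win))
    sq_mean = cv2.blur(img * img, (win, win))
    var = np.maximum(sq_mean - mean * mean, 0.0)
    noise_var = float(np.mean(var))  # crude global noise estimate
    gain = var / (var + noise_var + 1e-6)
    out = mean + gain * (img - mean)
    return np.clip(out, 0, 255).astype(np.uint8)


def despeckle_then_encode(src: str, dst: str) -> None:
    """Filter every frame of `src`, then HEVC-encode the result to `dst`."""
    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    # Pipe raw filtered frames into ffmpeg's HEVC (libx265) encoder.
    ff = subprocess.Popen(
        ["ffmpeg", "-y", "-f", "rawvideo", "-pix_fmt", "gray",
         "-s", f"{w}x{h}", "-r", str(fps), "-i", "-",
         "-c:v", "libx265", "-crf", "28", dst],
        stdin=subprocess.PIPE)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ff.stdin.write(lee_despeckle(gray).tobytes())
    ff.stdin.close()
    ff.wait()
    cap.release()
```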

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect the recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM which uses mixtures of t-distributions as observation probabilities and show through experiments on two well-known datasets (Weizmann, MuHAVi) a remarkable improvement in classification accuracy. © 2011 IEEE.
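    The robustness argument can be made concrete in a few lines. The sketch below is not the paper's code: the features are synthetic, and the choice of df=3 degrees of freedom is an arbitrary assumption. It compares the observation log-likelihood an HMM would receive from a Gaussian emission versus a Student's t emission when one frame's features are corrupted.

```python
import numpy as np
from scipy.stats import multivariate_normal, multivariate_t

# Synthetic per-frame features with a single corrupted frame (outlier).
rng = np.random.default_rng(0)
features = rng.normal(0.0, 1.0, size=(100, 2))
features[50] = [25.0, -30.0]  # e.g. a failed feature extraction

mu, cov = features.mean(axis=0), np.cov(features.T)

# Observation log-likelihoods under the two candidate emission models.
ll_gauss = multivariate_normal(mu, cov).logpdf(features)
ll_t = multivariate_t(mu, cov, df=3).logpdf(features)  # heavy tails

# The Gaussian assigns the outlier a vanishingly small likelihood, which
# can dominate the HMM's forward scores; the t-distribution penalises it
# far less, so a single bad frame does not derail the whole sequence.
print(f"outlier frame: gauss={ll_gauss[50]:.1f}  t={ll_t[50]:.1f}")
print(f"typical frame: gauss={ll_gauss[0]:.1f}   t={ll_t[0]:.1f}")
```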

    Towards a data-driven treatment of epilepsy: computational methods to overcome low-data regimes in clinical settings

    Epilepsy is the most common neurological disorder, affecting around 1% of the population. One third of patients with epilepsy are drug-resistant. If the epileptogenic zone (EZ) can be localized precisely, curative resective surgery may be performed. However, only 40 to 70% of patients remain seizure-free after surgery. Presurgical evaluation, which in part aims to localize the EZ, is a complex multimodal process that requires subjective clinical decisions, often relying on a multidisciplinary team's experience. Thus, the clinical pathway could benefit from data-driven methods for clinical decision support. In the last decade, deep learning has seen great advancements due to the improvement of graphics processing units (GPUs), the development of new algorithms, and the large amounts of data that have become available for training. However, using deep learning in clinical settings is challenging, as large datasets are rare due to privacy concerns and expensive annotation processes. Methods to overcome the lack of data are especially important in the context of presurgical evaluation of epilepsy, as only a small proportion of patients with epilepsy end up undergoing surgery, which limits the availability of data to learn from. This thesis introduces computational methods that pave the way towards integrating data-driven methods into the clinical pathway for the treatment of epilepsy, overcoming the challenge presented by the relatively small datasets available. We used transfer learning from general-domain human action recognition to characterize epileptic seizures from video-telemetry data. We developed a software framework to predict the location of the EZ given seizure semiologies, based on retrospective information from the literature. We trained deep learning models using self-supervised and semi-supervised learning to perform quantitative analysis of resective surgery by segmenting resection cavities on brain magnetic resonance images (MRIs). Throughout our work, we shared datasets and software tools that will accelerate research in medical image computing, particularly in the field of epilepsy.
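    As a concrete illustration of the transfer-learning component, the sketch below (an assumption-laden illustration, not the thesis code) takes a torchvision video backbone pretrained on Kinetics-400 action recognition, freezes it, and trains only a new classification head; the number of semiology classes and the tensor shapes are invented for the example.

```python
import torch
import torch.nn as nn
from torchvision.models.video import R3D_18_Weights, r3d_18

NUM_SEMIOLOGY_CLASSES = 4  # assumption; not fixed by the abstract

# Backbone pretrained on general-domain human action recognition.
model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained spatiotemporal features
model.fc = nn.Linear(model.fc.in_features, NUM_SEMIOLOGY_CLASSES)

# One training step on a dummy batch of clips: (N, C, T, H, W).
clips = torch.randn(2, 3, 16, 112, 112)
labels = torch.tensor([0, 2])
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(model(clips), labels)
loss.backward()
optimizer.step()
```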

    Recent Advances in Forensic Anthropological Methods and Research

    Forensic anthropology, while still in its relative infancy compared to other forensic science disciplines, adopts a wide array of methods from many disciplines for human skeletal identification in medico-legal and humanitarian contexts. The human skeleton is a dynamic tissue that can withstand the ravages of time given the right environment, and may be the only remaining evidence in a forensic case, whether a week or decades old. An improved understanding of the intrinsic and extrinsic factors that modulate skeletal tissues allows researchers and practitioners to improve the accuracy and precision of identification methods: establishing a biological profile (e.g., estimating age-at-death and population affinity), estimating time-since-death, using isotopes for geolocation of unidentified decedents, using radiology for personal identification, using histology to assess a live birth, assessing traumatic injuries, and much more.

    Entropy in Image Analysis III

    Image analysis can be applied to rich and assorted scenarios; the aim of this recent research field is therefore not only to mimic the human vision system. Image analysis is among the main methods that computers use today, and there is a body of knowledge that they will be able to manage in a totally unsupervised manner in the future, thanks to artificial intelligence. The articles published in this book clearly point towards such a future.
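    For concreteness, the entropy measures the title refers to are typically variants of Shannon entropy computed over an image's intensity histogram; a minimal version (an illustration, not taken from any chapter of the book) is:

```python
import numpy as np


def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit grayscale image, in bits per pixel."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # empty histogram bins contribute nothing
    return float(-(p * np.log2(p)).sum())


rng = np.random.default_rng(0)
noise = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
flat = np.full((64, 64), 128, dtype=np.uint8)
print(image_entropy(noise))  # close to the 8 bits/pixel maximum
print(image_entropy(flat))   # exactly 0: no information content
```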

    Gaze-Based Human-Robot Interaction by the Brunswick Model

    We present a new paradigm for human-robot interaction based on social signal processing, and in particular on the Brunswick model. Originally, the Brunswick model deals with face-to-face dyadic interaction, assuming that the interactants communicate through a continuous exchange of nonverbal social signals in addition to the spoken messages. Social signals have to be interpreted through a proper recognition phase that considers visual and audio information. The Brunswick model allows the quality of the interaction to be evaluated quantitatively, using statistical tools that measure how effective the recognition phase is. In this paper we cast this theory in the setting where one of the interactants is a robot; in this case, the recognition phases performed by the robot and by the human have to be revised with respect to the original model. The model is applied to Berrick, a recent open-source, low-cost robotic head platform, where gaze is the social signal considered.
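    The statistical evaluation the Brunswick (lens) model provides can be sketched as three correlations: how well the distal state is externalized into a cue (ecological validity), how well the observer picks up the cue (cue utilization), and how well the observer's judgement recovers the distal state end to end (achievement). The data and variable names below are synthetic illustrations, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Distal attentional state of the human (hypothetical, continuous).
state = rng.normal(size=200)
# Externalized gaze cue: the state plus expression noise.
cue = state + rng.normal(scale=0.5, size=200)
# The robot's recognition of the cue: the cue plus perception noise.
judged = cue + rng.normal(scale=0.5, size=200)

ecological_validity = np.corrcoef(state, cue)[0, 1]  # state -> cue
cue_utilization = np.corrcoef(cue, judged)[0, 1]     # cue -> recognition
achievement = np.corrcoef(state, judged)[0, 1]       # end-to-end quality
print(ecological_validity, cue_utilization, achievement)
```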

    XXIII Congreso Argentino de Ciencias de la Computación - CACIC 2017: Proceedings

    Papers presented at the XXIII Congreso Argentino de Ciencias de la Computación (CACIC), held in the city of La Plata from 9 to 13 October 2017, organized by the Red de Universidades con Carreras en Informática (RedUNCI) and the Facultad de Informática of the Universidad Nacional de La Plata (UNLP).