Example-based video color grading
In most professional cinema productions, the color palette of the movie is painstakingly adjusted by a team of skilled colorists -- through a process referred to as color grading -- to achieve a certain visual look. The time and expertise required to grade a video make it difficult for amateurs to manipulate the colors of their own video clips. In this work, we present a method that allows a user to transfer the color palette of a model video clip to their own video sequence. We estimate a per-frame color transform that maps the color distributions in the input video sequence to those of the model video clip. Applying this transformation naively leads to artifacts such as bleeding and flickering. Instead, we propose a novel differential-geometry-based scheme that interpolates these transformations in a manner that minimizes their curvature, similarly to curvature flows. In addition, we automatically determine a set of keyframes that best represent this interpolated transformation curve and can be used subsequently to manually refine the color grade. We show that our method can successfully transfer color palettes between videos for a range of visual styles and a number of input video clips.
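The per-frame color transform described above can be illustrated with a minimal sketch of channel-wise CDF (histogram) matching -- the kind of naive per-frame mapping the abstract says causes flicker without the curvature-minimizing interpolation. This is an assumed, simplified stand-in, not the paper's actual method:

```python
import numpy as np

def match_channel(src, ref, bins=256):
    """Map the value distribution of one channel of `src` onto `ref`
    via CDF matching (a common, simplified form of color transfer)."""
    s_hist, edges = np.histogram(src, bins=bins, range=(0, 255))
    r_hist, _ = np.histogram(ref, bins=bins, range=(0, 255))
    s_cdf = np.cumsum(s_hist) / src.size
    r_cdf = np.cumsum(r_hist) / ref.size
    # For each source CDF level, look up the reference value at that level.
    mapping = np.interp(s_cdf, r_cdf, edges[:-1])
    idx = np.clip((src / 255 * (bins - 1)).astype(int), 0, bins - 1)
    return mapping[idx]

def transfer_palette(frame, model_frame):
    """Apply channel-wise CDF matching to an H x W x 3 uint8 frame."""
    out = np.stack([match_channel(frame[..., c], model_frame[..., c])
                    for c in range(3)], axis=-1)
    return out.astype(np.uint8)
```

Applied independently per frame, this transform jitters as scene content changes; the abstract's contribution is interpolating such per-frame transforms smoothly over time.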
Automatic facial analysis for objective assessment of facial paralysis
Facial paralysis is a condition causing decreased movement on one side of the face. A quantitative, objective and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents an approach based on the automatic analysis of patient video data. Facial feature localization and facial movement detection methods are discussed. An algorithm is presented to process the optical flow data to obtain the motion features in the relevant facial regions. Three classification methods are applied to provide quantitative evaluations of regional facial nerve function and the overall facial nerve function based on the House-Brackmann Scale. Experiments show the Radial Basis Function (RBF) Neural Network to have superior performance.
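An RBF network of the kind the abstract reports as best-performing can be sketched as a layer of Gaussian hidden units followed by linear output weights fit by least squares. The centers, gamma, and training scheme here are illustrative assumptions; the paper's exact configuration is not given in the abstract:

```python
import numpy as np

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF activations: one hidden unit per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rbf(X, y, centers, gamma=1.0):
    """Fit linear output weights by least squares (one common way
    to train an RBF network once centers are fixed)."""
    H = rbf_features(X, centers, gamma)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    return w

def predict(X, centers, w, gamma=1.0):
    return rbf_features(X, centers, gamma) @ w
```

In the paper's setting, each row of `X` would be a motion-feature vector extracted from the optical flow in a facial region, and the output would be mapped to a House-Brackmann grade.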
Qualitative grading of aortic regurgitation: a pilot study comparing CMR 4D flow and echocardiography.
Over the past 10 years there has been intense research into the volumetric visualization of intracardiac flow by cardiac magnetic resonance (CMR). This volumetric, time-resolved technique, called CMR 4D flow imaging, has several advantages over standard CMR. It offers anatomical, functional and flow information in a single free-breathing, ten-minute acquisition. However, the data obtained are large and their processing requires dedicated software. We evaluated a cloud-based application package that combines volumetric data correction and visualization of CMR 4D flow data, and assessed its accuracy for the detection and grading of aortic valve regurgitation using transthoracic echocardiography as reference. Between June 2014 and January 2015, patients planned for clinical CMR were consecutively approached to undergo the supplementary CMR 4D flow acquisition. Fifty-four patients (median age 39 years, 32 males) were included. Detection and grading of aortic valve regurgitation using CMR 4D flow imaging were evaluated against transthoracic echocardiography. The agreement between CMR 4D flow and transthoracic echocardiography for grading of aortic valve regurgitation was good (κ = 0.73). To identify relevant, more-than-mild aortic valve regurgitation, CMR 4D flow imaging had a sensitivity of 100% and a specificity of 98%. Aortic regurgitation can be well visualized with CMR 4D flow imaging, in a similar manner as with transthoracic echocardiography.
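The agreement statistic reported above (κ = 0.73) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal computation, with illustrative grades rather than the study's data:

```python
import numpy as np

def cohens_kappa(a, b, labels):
    """Chance-corrected agreement between two graders
    (e.g. CMR 4D flow vs. echo regurgitation grades).
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)  # observed agreement
    # expected agreement: product of each grader's marginal rates per label
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (po - pe) / (1 - pe)
```

A kappa of 1 means perfect agreement; 0 means no better than chance; values above about 0.6 are conventionally read as good agreement, consistent with the abstract's interpretation of 0.73.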
Asynchronous video-otoscopy with a telehealth facilitator
Objective: The study investigated whether video-otoscopic images taken by a telehealth clinic facilitator are sufficient for accurate asynchronous diagnosis by an otolaryngologist within a heterogeneous population.
Subjects and Methods: A within-subject comparative design was used with 61 adults recruited from patients of a primary healthcare clinic. The telehealth clinic facilitator had no formal healthcare training. On-site otoscopic examination performed by the otolaryngologist was considered the gold standard diagnosis. A single video-otoscopic image was recorded by the otolaryngologist and facilitator from each ear, and the images were uploaded to a secure server. Images were assigned random numbers by another investigator, and 6 weeks later the otolaryngologist accessed the server, rated each image, and made a diagnosis without participant demographic or medical history.
Results: A greater percentage of images acquired by the otolaryngologist (83.6%) were graded as acceptable and excellent, compared with images recorded by the facilitator (75.4%). Diagnosis could not be made from 10.0% of the video-otoscopic images recorded by the facilitator compared with 4.2% taken by the otolaryngologist. A moderate concordance was measured between asynchronous diagnosis made from video-otoscopic images acquired by the otolaryngologist and facilitator (kappa = 0.596). The sensitivity for video-otoscopic images acquired by the otolaryngologist and the facilitator was 0.80 and 0.91, respectively. Specificity for images acquired by the otolaryngologist and the facilitator was 0.85 and 0.89, respectively, with a diagnostic odds ratio of 41.0 using images acquired by the otolaryngologist and 46.0 using images acquired by the facilitator.
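The sensitivity, specificity, and diagnostic odds ratio reported in the results all derive from a 2x2 confusion table of diagnoses against the gold standard. A minimal sketch with illustrative counts (not the study's actual table, which is not given in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio (DOR)
    from 2x2 confusion counts: DOR = (TP * TN) / (FP * FN)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn)
    return sens, spec, dor
```

The DOR summarizes how much more often a positive test result occurs in truly affected ears than in unaffected ones; the study's values of 41.0 and 46.0 indicate strong discrimination for both image sources.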
Conclusions: A trained telehealth facilitator can provide a platform for asynchronous diagnosis of otological status using video-otoscopy in underserved primary healthcare settings.