Nerve-targeted probes for fluorescence-guided intraoperative imaging.
A fundamental goal of many surgeries is nerve preservation, as inadvertent injury can lead to patient morbidity including numbness, pain, localized paralysis and incontinence. Nerve identification during surgery relies on multiple cues, including anatomy, texture, color and relationship to surrounding structures, under white-light illumination. We propose that fluorescent labeling of nerves can enhance the contrast between nerves and adjacent tissue during surgery, which may lead to improved outcomes. Methods: Nerve-binding peptide sequences, including HNP401, were identified by phage display using selective binding to dissected nerve tissue. Peptide-dye conjugates, including FAM-HNP401 and structural variants, were synthesized and screened for nerve binding after topical application on fresh rodent and human tissue, and in vivo after systemic IV administration in both mice and rats. Nerve-to-muscle contrast was quantified by measuring fluorescence intensity after topical or systemic administration of the peptide-dye conjugate. Results: The peptide-dye conjugate FAM-HNP401 showed selective binding to human sural nerve, with 10.9× the fluorescence signal intensity (1374.44 ± 425.96) of the previously identified peptide FAM-NP41 (126.17 ± 61.03). FAM-HNP401 showed a nerve-to-muscle contrast of 3.03 ± 0.57. FAM-HNP401 binds to and highlights multiple human peripheral nerves, including the lower-leg sural and upper-arm medial antebrachial nerves, as well as autonomic nerves isolated from human prostate. Conclusion: Phage display has identified a novel peptide that selectively binds to human nerves ex vivo and to rodent nerves in vivo. FAM-HNP401, or an optimized variant, could be translated for clinical use in intraoperative identification of human nerves to improve visualization and potentially decrease the incidence of intra-surgical nerve injury.
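The nerve-to-muscle contrast reported above is, in essence, a ratio of mean fluorescence intensities between a nerve region and adjacent muscle. A minimal sketch of that computation follows; the ROI arrays and intensity values are illustrative stand-ins, not the study's data:

```python
import numpy as np

def contrast_ratio(nerve_roi: np.ndarray, muscle_roi: np.ndarray) -> float:
    """Nerve-to-muscle contrast: mean fluorescence intensity of the
    nerve ROI divided by that of the muscle ROI."""
    return float(nerve_roi.mean() / muscle_roi.mean())

# Toy example with synthetic intensity patches (values chosen to land
# near the reported 3.03 contrast, purely for illustration).
nerve = np.full((10, 10), 1374.4)   # bright, peptide-bound nerve region
muscle = np.full((10, 10), 453.0)   # dimmer adjacent muscle region
print(round(contrast_ratio(nerve, muscle), 2))  # → 3.03
```

In practice the ROIs would be segmented from the fluorescence image rather than synthesized, and background subtraction would typically precede the ratio.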
A Novel Framework for Highlight Reflectance Transformation Imaging
We propose a novel pipeline and related software tools for processing the multi-light image collections (MLICs) acquired in different application contexts, to obtain shape and appearance information of captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools that simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
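One widely used reflectance-model-fitting option in RTI pipelines is the biquadratic Polynomial Texture Map (PTM), fit per pixel by least squares over the captured light directions. The sketch below illustrates that standard technique under simple assumptions; it is not the paper's specific model or code:

```python
import numpy as np

def ptm_basis(lu, lv):
    # Standard biquadratic PTM basis: [lu^2, lv^2, lu*lv, lu, lv, 1]
    return np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=-1)

def fit_ptm(intensities, light_dirs):
    """Fit 6 PTM coefficients for one pixel.
    intensities: (N,) samples of the pixel under N lights;
    light_dirs: (N, 2) projected (lu, lv) light directions."""
    A = ptm_basis(light_dirs[:, 0], light_dirs[:, 1])   # (N, 6) design matrix
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight(coeffs, lu, lv):
    """Evaluate the fitted model under a new light direction."""
    return ptm_basis(np.asarray(lu), np.asarray(lv)) @ coeffs

# Synthetic check: samples generated from known coefficients are recovered.
rng = np.random.default_rng(0)
L = rng.uniform(-0.7, 0.7, size=(20, 2))          # 20 light directions
true = np.array([0.1, -0.2, 0.05, 0.4, 0.3, 0.8])  # ground-truth coefficients
I = ptm_basis(L[:, 0], L[:, 1]) @ true             # noiseless observations
est = fit_ptm(I, L)
```

A full pipeline would run this fit independently per pixel (or per chroma-separated luminance channel), with the per-pixel interpolated light directions described in the abstract replacing the single shared direction set used here.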
A deep learning framework for quality assessment and restoration in video endoscopy
Endoscopy is a routine imaging technique used for both diagnosis and
minimally invasive surgical treatment. Artifacts such as motion blur, bubbles,
specular reflections, floating objects and pixel saturation impede the visual
interpretation and the automated analysis of endoscopy videos. Given the
widespread use of endoscopy in different clinical applications, we contend that
the robust and reliable identification of such artifacts and the automated
restoration of corrupted video frames is a fundamental medical imaging problem.
Existing state-of-the-art methods deal only with the detection and restoration
of selected artifacts. However, endoscopy videos typically contain numerous
artifacts, which motivates a comprehensive solution.
We propose a fully automatic framework that can: 1) detect and classify six
different primary artifacts, 2) provide a quality score for each frame and 3)
restore mildly corrupted frames. To detect the different artifacts, our framework
exploits a fast multi-scale, single-stage convolutional neural network detector.
We introduce a quality metric to assess frame quality and predict image
restoration success. Generative adversarial networks with carefully chosen
regularization are finally used to restore corrupted frames.
Our detector yields the highest mean average precision (mAP at 5% threshold)
of 49.0 and the lowest computational time of 88 ms allowing for accurate
real-time processing. Our restoration models for blind deblurring, saturation
correction and inpainting demonstrate significant improvements over previous
methods. On a set of 10 test videos, we show that our approach preserves an
average of 68.7% of frames, 25% more than are retained from the raw videos.
Comment: 14 pages
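The frame-retention figure suggests a simple gating scheme: score every frame with the quality metric, keep frames that clear a threshold (restoring the mildly corrupted ones), and drop the rest. A toy sketch of the retention computation, with made-up scores and a made-up threshold rather than the paper's metric:

```python
import numpy as np

def retained_fraction(scores, keep_thresh=0.5):
    """Fraction of frames whose per-frame quality score clears the
    threshold; only these frames survive into the cleaned video."""
    scores = np.asarray(scores, dtype=float)
    return float((scores >= keep_thresh).mean())

# Hypothetical per-frame quality scores for a 10-frame clip.
scores = [0.9, 0.8, 0.2, 0.7, 0.55, 0.4, 0.95, 0.6, 0.3, 0.85]
print(retained_fraction(scores))  # → 0.7 (7 of 10 frames kept)
```

In the framework described above, frames just below the threshold would additionally be routed to the restoration models (deblurring, saturation correction, inpainting), raising the retained fraction further.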
Stereo Computation for a Single Mixture Image
This paper proposes an original problem of \emph{stereo computation from a
single mixture image} -- a challenging problem that has not been researched
before. The goal is to separate (\ie, unmix) a single mixture image into two
constituent image layers, such that the two layers form a left-right stereo
image pair from which a valid disparity map can be recovered. This is a
severely ill-posed problem: from one input image, one effectively aims to
recover three outputs (\ie, a left image, a right image and a disparity map).
In this work we give a novel deep-learning-based solution that jointly solves
the two subtasks of image layer separation and stereo matching. Training our
deep network is simple, as it does not require ground-truth disparity maps.
Extensive experiments demonstrate the efficacy of our method.
Comment: Accepted by European Conference on Computer Vision (ECCV) 201
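The mixture formation implied by the abstract can be sketched as an additive blend of a left view and a right view, where the right view is related to the left by disparity. The 50/50 blend weight and the integer row-shift warp below are illustrative assumptions for a toy forward model, not the paper's formulation:

```python
import numpy as np

def mix(left, right, alpha=0.5):
    """Additive blend producing the single observed mixture image."""
    return alpha * left + (1 - alpha) * right

def warp_1d(img, disparity):
    """Toy horizontal warp: shift every row by one integer disparity.
    Real scenes have per-pixel disparity and occlusions."""
    return np.roll(img, shift=int(disparity), axis=1)

left = np.tile(np.arange(8.0), (4, 1))   # toy 4x8 "left view"
right = warp_1d(left, 2)                 # right view at disparity 2
mixture = mix(left, right)               # the single input image
```

The learning problem is the inverse of this forward model: given only `mixture`, recover `left`, `right`, and the disparity jointly, which is why the unmixing and stereo-matching subtasks are solved together.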