
    Spectral Graphormer: Spectral Graph-based Transformer for Egocentric Two-Hand Reconstruction using Multi-View Color Images

    Full text link
    We propose a novel transformer-based framework that reconstructs two high-fidelity hands from multi-view RGB images. Unlike existing hand pose estimation methods, which typically train a deep network to regress hand model parameters from a single RGB image, we consider a more challenging problem setting in which we directly regress the absolute root poses of two hands with extended forearms at high resolution from an egocentric view. As existing datasets are either infeasible for egocentric viewpoints or lack background variations, we create a large-scale synthetic dataset with diverse scenarios and collect a real dataset from a multi-calibrated camera setup to verify our proposed multi-view image feature fusion strategy. To make the reconstruction physically plausible, we propose two strategies: (i) a coarse-to-fine spectral graph convolution decoder to smooth the meshes during upsampling and (ii) an optimisation-based refinement stage at inference to prevent self-penetrations. Through extensive quantitative and qualitative evaluations, we show that our framework is able to produce realistic two-hand reconstructions and demonstrate the generalisation of synthetic-trained models to real data, as well as real-time AR/VR applications. Comment: Accepted to ICCV 2023
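    As a rough illustration of the building block named in the abstract, the minimal PyTorch sketch below shows one way a coarse-to-fine spectral graph convolution decoder could smooth and upsample per-vertex mesh features. It is a hypothetical sketch only: the Chebyshev order K, the layer sizes, and the upsampling matrix U are illustrative assumptions, not the authors' implementation.

        import torch
        import torch.nn as nn

        class ChebSpectralConv(nn.Module):
            """Chebyshev approximation of a spectral graph convolution (illustrative; assumes K >= 2)."""
            def __init__(self, in_dim, out_dim, K=3):
                super().__init__()
                self.K = K
                self.weight = nn.Parameter(torch.randn(K, in_dim, out_dim) * 0.01)

            def forward(self, x, lap):
                # x: (V, in_dim) per-vertex features; lap: (V, V) scaled graph Laplacian of the mesh
                tx_prev, tx = x, lap @ x                       # T_0(L)x and T_1(L)x
                out = tx_prev @ self.weight[0] + tx @ self.weight[1]
                for k in range(2, self.K):                     # Chebyshev recurrence T_k = 2 L T_{k-1} - T_{k-2}
                    tx_next = 2 * lap @ tx - tx_prev
                    out = out + tx_next @ self.weight[k]
                    tx_prev, tx = tx, tx_next
                return torch.relu(out)

        def decode_coarse_to_fine(x_coarse, lap_coarse, lap_fine, U, conv_coarse, conv_fine):
            # Convolve on the coarse mesh, lift with a precomputed upsampling
            # matrix U of shape (V_fine, V_coarse), then refine on the fine mesh.
            x = conv_coarse(x_coarse, lap_coarse)
            x = U @ x
            return conv_fine(x, lap_fine)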

    Surgical treatment of elderly patients with primary osteolytic atypical meningioma: a case report and review of the literature

    No full text
    Background: Elderly patients with primary intracranial osteolytic, externally growing atypical meningiomas are rare and easily misdiagnosed. Recently, a patient with an atypical meningioma was treated in our department, and we analyzed the case with a review of the historical literature. Case presentation: We describe a 63-year-old female with a primary intracranial osteolytic atypical meningioma treated at our neurosurgery department, and we retrospectively reviewed the previous literature on its diagnosis, surgical treatment, pathological results, and clinical outcome. Simpson grade I resection was accomplished through a pterional approach. First-stage skull reconstruction was performed, and the patient had an uneventful recovery. Conclusions: The final diagnosis of a primary osteolytic atypical meningioma depends on pathological examination. First-stage skull reconstruction could avoid a secondary lesion and improve the patient's quality of life.

    A Case Report of Hemifacial Spasm Caused by Vestibular Schwannoma and Literature Review

    No full text
    Background: Most cases of hemifacial spasm (HFS) result from mechanical compression at the root exit zone of the facial nerve by vascular loops; only a few cases are caused by vestibular schwannoma. Case presentation: We report a case of symptomatic HFS induced by a small vestibular schwannoma that was totally resected. A 64-year-old man was admitted to our department with a 14-month history of symptomatic right-sided hemifacial spasm. During microvascular decompression, no definite vessel was found to compress the facial nerve. Further exploration of regions beyond the root exit zone revealed a small vestibular schwannoma compressing the internal auditory canal portion of the facial nerve from the ventral side. The tumor was then resected, and the symptoms of hemifacial spasm disappeared immediately after surgery. Conclusions: We should be aware that magnetic resonance imaging is not always precise and may miss some small lesions owing to the limitations of current imaging techniques. A small vestibular schwannoma can be the cause of HFS even though preoperative magnetic resonance tomographic angiography showed possible vascular compression at the facial nerve root. More importantly, a full-length exploration of the facial nerve is urgently needed to find potential compression while performing microvascular decompression for HFS patients.

    The Seventh Visual Object Tracking VOT2019 Challenge Results

    No full text
    The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) the VOT-LT2019 challenge focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website. Funding agencies: Slovenian Research Agency, Slovenia [J2-8175, P2-0214, P2-0094]; Czech Science Foundation Project GACR [P103/12/G084]; MURI project (MoD/Dstl); Engineering & Physical Sciences Research Council (EPSRC) [EP/N019415/1]; WASP; VR (ELLIIT, LAST, and NCNN); SSF (SymbiCloud); AIT Strategic Research Programme; Faculty of Computer Science, University of Ljubljana, Slovenia.
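    To make the short-term evaluation protocol referenced above more concrete, the simplified Python sketch below computes the two quantities the VOT methodology is built around: per-frame overlap with the ground truth (accuracy) and the number of tracking failures (related to robustness). The zero-overlap failure criterion and the omission of re-initialisation handling are simplifying assumptions, not the exact VOT toolkit protocol.

        def iou(a, b):
            # Intersection-over-union of two axis-aligned boxes given as (x, y, w, h).
            ax, ay, aw, ah = a
            bx, by, bw, bh = b
            iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
            ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
            inter = iw * ih
            union = aw * ah + bw * bh - inter
            return inter / union if union > 0 else 0.0

        def evaluate_sequence(pred_boxes, gt_boxes):
            # Accuracy: mean overlap over frames where the target is still tracked.
            # Failure count: frames where overlap drops to zero (robustness-related).
            overlaps, failures = [], 0
            for p, g in zip(pred_boxes, gt_boxes):
                o = iou(p, g)
                if o == 0.0:
                    failures += 1      # the real protocol also re-initialises the tracker here
                else:
                    overlaps.append(o)
            accuracy = sum(overlaps) / len(overlaps) if overlaps else 0.0
            return accuracy, failures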