
    Real-time virtual sonography in gynecology & obstetrics: literature analysis and case series

    Fusion Imaging is a latest-generation diagnostic technique designed to combine ultrasonography with a second-tier technique such as magnetic resonance imaging or computed tomography. To date it has been used mainly in urology and hepatology. In gynecology and obstetrics, studies have mostly focused on the diagnosis of prenatal disease, benign pathology, and cervical cancer. We provide a systematic review of the latest publications on the role of Fusion technology in gynecology and obstetrics, and we also describe a case series of six emblematic patients enrolled from the Gynecology Department of Sant'Andrea Hospital, "La Sapienza", Rome, evaluated with the Esaote Virtual Navigator equipment. We consider that Fusion Imaging could add value to the diagnosis of various gynecological and obstetric conditions, but further studies are needed to better define and improve the role of this fascinating diagnostic tool.

    Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images

    Recovering the 3D representation of an object from single-view or multi-view RGB images with deep neural networks has attracted increasing attention in the past few years. Several mainstream works (e.g., 3D-R2N2) use recurrent neural networks (RNNs) to sequentially fuse multiple feature maps extracted from the input images. However, given the same set of input images in different orders, RNN-based approaches are unable to produce consistent reconstruction results. Moreover, due to long-term memory loss, RNNs cannot fully exploit the input images to refine reconstruction results. To solve these problems, we propose a novel framework for single-view and multi-view 3D reconstruction, named Pix2Vox. Using a well-designed encoder-decoder, it generates a coarse 3D volume from each input image. A context-aware fusion module is then introduced to adaptively select high-quality reconstructions for each part (e.g., table legs) from the different coarse 3D volumes and obtain a fused 3D volume. Finally, a refiner further refines the fused 3D volume to generate the final output. Experimental results on the ShapeNet and Pix3D benchmarks indicate that the proposed Pix2Vox outperforms state-of-the-art methods by a large margin. Furthermore, the proposed method is 24 times faster than 3D-R2N2 in terms of backward inference time. Experiments on unseen 3D categories of ShapeNet show the superior generalization ability of our method. (Comment: ICCV 2019)
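
    To make the per-part selection idea concrete, here is a minimal NumPy sketch of fusing per-view coarse volumes with per-voxel weights. In the paper the scores come from a learned scoring branch; here they are plain inputs, so everything below is illustrative rather than the paper's implementation.

```python
import numpy as np

def context_aware_fusion(coarse_volumes, scores):
    """Fuse per-view coarse volumes using per-voxel quality scores.

    coarse_volumes, scores: (n_views, D, D, D) arrays. The learned
    scoring branch of Pix2Vox is replaced by precomputed scores, so
    this only illustrates the selection/weighting step.
    """
    # Softmax over the view axis: weights for each voxel sum to 1.
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # Per-voxel weighted combination of the candidate reconstructions.
    return (w * coarse_volumes).sum(axis=0)

# Toy usage: three coarse 32^3 occupancy volumes from three views.
vols = np.random.rand(3, 32, 32, 32)
scores = np.random.randn(3, 32, 32, 32)
fused = context_aware_fusion(vols, scores)
print(fused.shape)  # (32, 32, 32)
```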

    A novel object tracking algorithm based on compressed sensing and entropy of information

    Acknowledgments: This research is supported by (1) the Ph.D. Programs Foundation of the Ministry of Education of China under Grant no. 20120061110045, (2) the Science and Technology Development Projects of Jilin Province of China under Grant no. 20150204007GX, and (3) the Key Laboratory for Symbol Computation and Knowledge Engineering of the National Education Ministry of China. Peer reviewed. Publisher PDF.

    Data fusion strategy for precise vehicle location for intelligent self-aware maintenance systems

    Nowadays, careful measurement applications are increasingly handed over to wired and wireless sensor networks. Taking train location as an example, sensors with long acquisition times, such as balises, RFID tags, and transponders along the track, increase the uncertainty about position. We consider the data without any synchronization protocols, aiming to increase accuracy and reduce uncertainty after applying data fusion algorithms. The case studies we analysed derive from the needs of the project partners: localization of a train, of the head of an auger in the drilling sector, and of containers of radioactive waste in a nuclear reprocessing plant. The partners need to plan the maintenance of their infrastructure through an architecture that takes as input the sensor data (localization and diagnosis), maps, and costs, in order to optimize cost effectiveness and reduce operation time.
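
    As a toy illustration of the fusion step described above, the sketch below applies generic inverse-variance weighting to position fixes that have already been propagated to a common timestamp. This is a textbook fusion rule, not the paper's architecture, and the sensor values are invented.

```python
import numpy as np

def fuse_position_estimates(estimates):
    """Inverse-variance weighted fusion of independent 1-D position fixes.

    estimates: list of (position_m, variance_m2) pairs from different
    sensors (e.g., odometry and a balise fix), each already propagated
    to a common timestamp. Assumes independent measurement errors.
    """
    weights = np.array([1.0 / var for _, var in estimates])
    positions = np.array([pos for pos, _ in estimates])
    fused_var = 1.0 / weights.sum()
    fused_pos = fused_var * (weights * positions).sum()
    return fused_pos, fused_var

# Drifted odometry estimate combined with a precise balise reading.
pos, var = fuse_position_estimates([(1520.0, 25.0), (1512.0, 1.0)])
print(pos, var)  # close to the balise fix, with reduced variance
```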

    A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks

    Multisensor fusion and consensus filtering are two fascinating subjects in sensor network research. In this survey, we cover both classic results and recent advances in these two topics. First, we recall some important results in the development of multisensor fusion technology, paying particular attention to fusion with unknown correlations, which arise ubiquitously in distributed filtering problems. Next, we give a systematic review of several widely used consensus filtering approaches. Furthermore, some of the latest progress on multisensor fusion and consensus filtering is presented. Finally, conclusions are drawn and several potential future research directions are outlined. This work was supported by the Royal Society of the UK, the National Natural Science Foundation of China under Grants 61329301, 61374039, 61304010, 11301118, and 61573246, the Hujiang Foundation of China under Grants C14002 and D15009, the Alexander von Humboldt Foundation of Germany, and the Innovation Fund Project for Graduate Students of Shanghai under Grant JWCXSL140
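
    As one concrete example of "fusion with unknown correlations", covariance intersection is a standard construction in this literature: it fuses two estimates whose cross-correlation is unknown while guaranteeing a consistent covariance. The sketch below is a minimal NumPy version with illustrative inputs, not code from the survey.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Covariance intersection of two estimates (x1, P1) and (x2, P2).

    omega in [0, 1] trades off the two sources; in practice it is
    often chosen to minimise trace(P) or det(P) of the fused result.
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    # Convex combination of information matrices.
    P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# Two 2-D position estimates with unknown mutual correlation.
x1, P1 = np.array([1.0, 2.0]), np.diag([4.0, 1.0])
x2, P2 = np.array([1.5, 1.8]), np.diag([1.0, 4.0])
x, P = covariance_intersection(x1, P1, x2, P2, omega=0.5)
print(x, np.trace(P))
```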

    Fusion of facial regions using color information in a forensic scenario

    Communication presented at: 18th Iberoamerican Congress on Pattern Recognition, CIARP 2013; Havana, Cuba; 20-23 November 2013. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-41827-3_50. This paper reports an analysis of the benefits of using color information in a region-based face recognition system. Three different color spaces are analysed (RGB, YCbCr, lαβ) in a very challenging scenario: matching good-quality mugshot images against video surveillance images. This scenario is of special interest in forensics, where examiners compare two face images using the global information of the faces, but pay special attention to each individual facial region (eyes, nose, mouth, etc.). This work analyses the discriminative power of 15 facial regions, comparing both grayscale and color information. Results show a significant improvement in performance when fusing several regions of the face compared with using the whole face image alone. A further improvement is achieved when color information is considered. This work has been partially supported by a contract with the Spanish Guardia Civil and by projects BBfor2 (FP7-ITN-238803), bio-Challenge (TEC2009-11186), Bio Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
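
    To illustrate how region-level fusion of match scores can work in general, here is a hedged sketch of weighted-sum score fusion over normalised per-region scores. The regions, weights, and values are illustrative and are not the 15 regions or the fusion rule used in the paper.

```python
import numpy as np

def fuse_region_scores(region_scores, weights=None):
    """Weighted-sum fusion of per-region face match scores.

    region_scores maps region name -> similarity score, assumed to be
    already normalised to a common range (e.g., min-max on a dev set).
    """
    names = sorted(region_scores)
    s = np.array([region_scores[n] for n in names])
    if weights is None:
        w = np.ones(len(names)) / len(names)  # equal-weight sum fusion
    else:
        w = np.array([weights[n] for n in names])
    return float((w * s).sum())

# Fusing three hypothetical region scores into one decision score.
print(fuse_region_scores({"eyes": 0.82, "nose": 0.55, "mouth": 0.61}))
```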

    Atypical audiovisual speech integration in infants at risk for autism

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound presented simultaneously and the other incongruent. This method successfully showed that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ - audio /ba/ and the congruent visual /ba/ - audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ - audio /ga/ display than in the congruent visual /ga/ - audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays x fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays x conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays x conditions x low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.
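
    For readers who want to reproduce this style of analysis, the sketch below runs a repeated-measures ANOVA on synthetic looking-time data with statsmodels. The 2x2 within-subject design mirrors the displays x conditions factors quoted in the abstract, but the data and any resulting effects are invented.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic looking times (s) for a 2x2 within-subject design:
# display (fusible vs mismatched) x condition (congruent vs incongruent).
rng = np.random.default_rng(0)
rows = []
for subj in range(17):
    for display in ("fusible", "mismatched"):
        for condition in ("congruent", "incongruent"):
            rows.append({"subject": subj,
                         "display": display,
                         "condition": condition,
                         "looking_time": rng.normal(5.0, 1.0)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA with a display x condition interaction term,
# analogous to the F-tests reported in the abstract (invented data here).
res = AnovaRM(df, depvar="looking_time", subject="subject",
              within=["display", "condition"]).fit()
print(res)
```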