
    A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks

    Multisensor fusion and consensus filtering are two fascinating subjects in the research of sensor networks. In this survey, we cover both classic results and recent advances in these two topics. First, we recall some important results in the development of multisensor fusion technology. In particular, we pay close attention to fusion with unknown correlations, which exist ubiquitously in most distributed filtering problems. Next, we give a systematic review of several widely used consensus filtering approaches. Furthermore, some of the latest progress on multisensor fusion and consensus filtering is also presented. Finally, conclusions are drawn and several potential future research directions are outlined.
    This work was supported in part by the Royal Society of the UK, the National Natural Science Foundation of China under Grants 61329301, 61374039, 61304010, 11301118, and 61573246, the Hujiang Foundation of China under Grants C14002 and D15009, the Alexander von Humboldt Foundation of Germany, and the Innovation Fund Project for Graduate Student of Shanghai under Grant JWCXSL140
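    Two ideas from the abstract above can be illustrated concretely: covariance intersection, a classic rule for fusing two estimates when their cross-correlation is unknown, and a synchronous consensus iteration in which each node averages with its neighbors. This is a minimal sketch, not any specific algorithm from the survey; the function names, the `omega` weight, and the toy network are illustrative assumptions.

    ```python
    import numpy as np

    def covariance_intersection(x1, P1, x2, P2, omega=0.5):
        # Fuse two estimates (x1, P1) and (x2, P2) whose cross-correlation
        # is unknown. omega in [0, 1] weights the two information matrices;
        # the fused covariance remains consistent for any true correlation.
        I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
        P = np.linalg.inv(omega * I1 + (1 - omega) * I2)
        x = P @ (omega * I1 @ x1 + (1 - omega) * I2 @ x2)
        return x, P

    def consensus_step(states, neighbors, eps=0.2):
        # One synchronous consensus iteration: each node moves toward
        # the average of its neighbors' current estimates.
        return [x + eps * sum(states[j] - x for j in neighbors[i])
                for i, x in enumerate(states)]

    # Fuse two 2-D estimates with identical unit covariances.
    x, P = covariance_intersection(np.array([1.0, 0.0]), np.eye(2),
                                   np.array([3.0, 0.0]), np.eye(2))
    print(x)  # with omega = 0.5 and equal covariances, the fused mean is midway

    # Consensus on a 3-node path graph: all states converge to the average.
    states = [0.0, 0.0, 3.0]
    nbrs = {0: [1], 1: [0, 2], 2: [1]}
    for _ in range(300):
        states = consensus_step(states, nbrs)
    print(states)  # all nodes approach the initial mean, 1.0
    ```

    Note that with equal covariances the fused covariance does not shrink: covariance intersection is deliberately conservative, which is exactly why it remains safe under unknown correlations.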

    Determining the neurotransmitter concentration profile at active synapses

    Establishing the temporal and concentration profiles of neurotransmitters during synaptic release is an essential step towards understanding the basic properties of inter-neuronal communication in the central nervous system. A variety of ingenious attempts have been made to gain insight into this process, but the general inaccessibility of central synapses, intrinsic limitations of the techniques used, and the natural variety of different synaptic environments have hindered a comprehensive description of this fundamental phenomenon. Here, we describe a number of experimental and theoretical findings that have been instrumental in advancing our knowledge of various features of neurotransmitter release, as well as newly developed tools that could overcome some limits of traditional pharmacological approaches and bring new impetus to the description of the complex mechanisms of synaptic transmission.

    Appearance-and-Relation Networks for Video Classification

    Spatiotemporal feature learning in videos is a fundamental problem in computer vision. This paper presents a new architecture, termed the Appearance-and-Relation Network (ARTNet), to learn video representations in an end-to-end manner. ARTNets are constructed by stacking multiple generic building blocks, called SMART blocks, whose goal is to simultaneously model appearance and relation from RGB input in a separate and explicit manner. Specifically, SMART blocks decouple the spatiotemporal learning module into an appearance branch for spatial modeling and a relation branch for temporal modeling. The appearance branch is implemented as a linear combination of pixels or filter responses in each frame, while the relation branch is based on multiplicative interactions between pixels or filter responses across multiple frames. We perform experiments on three action recognition benchmarks: Kinetics, UCF101, and HMDB51, demonstrating that SMART blocks yield an evident improvement over 3D convolutions for spatiotemporal feature learning. Under the same training setting, ARTNets achieve superior performance on these three datasets compared with existing state-of-the-art methods.
    Comment: CVPR18 camera-ready version. Code & models available at https://github.com/wanglimin/ARTNe
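    The two-branch idea can be sketched in a deliberately simplified form: a per-frame linear map standing in for the appearance branch, and an elementwise product of adjacent frames standing in for the relation branch's multiplicative interactions. This is a toy illustration of the decomposition only; the actual SMART block uses convolutions, and the array shapes and weight names here are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, D = 8, 16                           # frames, flattened feature dimension
    frames = rng.standard_normal((T, D))   # stand-in for per-frame filter responses

    W_app = rng.standard_normal((D, D))    # illustrative appearance weights
    W_rel = rng.standard_normal((D, D))    # illustrative relation weights

    # Appearance branch: linear combination of responses within each frame
    # (spatial modeling, computed independently per frame).
    appearance = frames @ W_app.T          # shape (T, D)

    # Relation branch: multiplicative interaction between adjacent frames
    # (temporal modeling, so one fewer output step than input frames).
    relation = (frames[:-1] * frames[1:]) @ W_rel.T   # shape (T-1, D)

    print(appearance.shape, relation.shape)
    ```

    The key point the sketch preserves is structural: the appearance output is linear in each single frame, while the relation output is a bilinear function of frame pairs, which is what lets the block capture temporal relationships that a purely linear spatial branch cannot.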

    Atypical audiovisual speech integration in infants at risk for autism

    The language difficulties often seen in individuals with autism might stem from an inability to integrate audiovisual information, a skill important for language development. We investigated whether 9-month-old siblings of older children with autism, who are at an increased risk of developing autism, are able to integrate audiovisual speech cues. We used an eye-tracker to record where infants looked when shown a screen displaying two faces of the same model, one articulating /ba/ and the other /ga/, with one face congruent with the syllable sound presented simultaneously and the other incongruent. This method successfully showed that infants at low risk can integrate audiovisual speech: they looked for the same amount of time at the mouths in both the fusible visual /ga/ – audio /ba/ and the congruent visual /ba/ – audio /ba/ displays, indicating that the auditory and visual streams fuse into a McGurk-type syllabic percept in the incongruent condition. It also showed that low-risk infants could perceive a mismatch between auditory and visual cues: they looked longer at the mouth in the mismatched, non-fusible visual /ba/ – audio /ga/ display than in the congruent visual /ga/ – audio /ga/ display, demonstrating that they perceive an uncommon, and therefore interesting, speech-like percept when looking at the incongruent mouth (repeated-measures ANOVA, displays × fusion/mismatch conditions interaction: F(1,16) = 17.153, p = 0.001). The looking behaviour of high-risk infants did not differ according to the type of display, suggesting difficulties in matching auditory and visual information (repeated-measures ANOVA, displays × conditions interaction: F(1,25) = 0.09, p = 0.767), in contrast to low-risk infants (repeated-measures ANOVA, displays × conditions × low/high-risk groups interaction: F(1,41) = 4.466, p = 0.041). In some cases this reduced ability might lead to the poor communication skills characteristic of autism.

    Real-time virtual sonography in gynecology & obstetrics: literature analysis and case series

    Fusion Imaging is a latest-generation diagnostic technique designed to combine ultrasonography with a second-tier technique such as magnetic resonance imaging or computed tomography. Until now it has mainly been used in urology and hepatology. In gynecology and obstetrics, studies have mostly focused on the diagnosis of prenatal disease, benign pathology, and cervical cancer. We provide a systematic review of the literature covering the latest publications on the role of Fusion technology in the gynecological and obstetric fields, and we also describe a case series of six emblematic patients enrolled from the Gynecology Department of Sant'Andrea Hospital, "La Sapienza", Rome, evaluated with Esaote Virtual Navigator equipment. We consider that Fusion Imaging could add value to the diagnosis of various gynecological and obstetric conditions, but further studies are needed to better define and improve the role of this fascinating diagnostic tool.