
    Three-Dimensional Myoarchitecture of the Lower Esophageal Sphincter and Esophageal Hiatus Using Optical Sectioning Microscopy.

    Studies to date have failed to reveal the anatomical counterpart of the lower esophageal sphincter (LES). We assessed the morphology of the LES and esophageal hiatus using a tissue block containing the human LES and crural diaphragm, serially sectioned at 50 μm intervals and imaged at 8.2 μm/pixel resolution. A 3D reconstruction of the tissue block was generated, in which each of the 652 cross-sectional images was also segmented to identify the boundaries of the longitudinal (LM) and circular muscle (CM) layers. The CM fascicles on the ventral surface of the LES are arranged in a helical/spiral fashion, whereas the CM fascicles from the two sides cross the midline on the dorsal surface and continue as the sling/oblique muscle of the stomach. Some of the LM fascicles of the esophagus leave it to enter the crural diaphragm; the remainder terminate in the sling fibers of the stomach. The muscle fascicles of the right crus of the diaphragm, which form the esophageal hiatus, are arranged like a "noose" around the esophagus. We propose that the circumferential squeeze of the LES and crural diaphragm is generated by a unique myoarchitectural design, in which each structure forms a "noose" around the esophagus.

    CoMaL Tracking: Tracking Points at the Object Boundaries

    Traditional point-tracking algorithms such as KLT use local 2D information aggregation for feature detection and tracking, which causes their performance to degrade at object boundaries that separate multiple objects. CoMaL Features were recently proposed to handle this case. However, the original work used a simple tracking framework in which points are re-detected in each frame and then matched; this is inefficient and may also lose many points that are not re-detected in the next frame. We propose a novel tracking algorithm to accurately and efficiently track CoMaL points. The level line segment associated with each CoMaL point is matched to MSER segments in the next frame using shape-based matching, and the matches are further filtered using texture-based matching. Experiments show improvements in speed and accuracy over both the simple re-detect-and-match framework and KLT on different real-world applications, especially at object boundaries.
    Comment: 10 pages, 10 figures, to appear in the 1st Joint BMTT-PETS Workshop on Tracking and Surveillance, CVPR 2017
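The two-stage matching idea in the abstract (shape-based matching of a point's level line segment against candidate MSER segments, then a texture-based filter on the surviving match) can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's implementation: the shape distance, texture score, and thresholds here are hypothetical simplifications.

```python
import math

def shape_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two point sets,
    a crude stand-in for shape-based matching of level line segments."""
    def one_way(p, q):
        return sum(min(math.dist(x, y) for y in q) for x in p) / len(p)
    return 0.5 * (one_way(a, b) + one_way(b, a))

def texture_score(patch_a, patch_b):
    """Normalized correlation of two intensity patches, a stand-in for
    the texture-based filtering step."""
    ma = sum(patch_a) / len(patch_a)
    mb = sum(patch_b) / len(patch_b)
    da = [x - ma for x in patch_a]
    db = [x - mb for x in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def track_point(segment, patch, candidates, shape_thresh=2.0, tex_thresh=0.8):
    """Match a tracked point's level line segment against candidate
    segments in the next frame: take the best shape match, then accept
    it only if its texture also agrees."""
    best = min(candidates, key=lambda c: shape_distance(segment, c["segment"]))
    if shape_distance(segment, best["segment"]) > shape_thresh:
        return None  # no sufficiently similar segment in the next frame
    if texture_score(patch, best["patch"]) < tex_thresh:
        return None  # shape matched, but texture disagrees
    return best
```

The point of the second stage is that shape alone can be ambiguous near repeated structure; requiring texture agreement rejects those spurious matches.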

    An Empirical Evaluation of Visual Question Answering for Novel Objects

    We study the problem of answering questions about images in a harder setting, where the test questions and corresponding images contain novel objects that were not queried about in the training data. Such a setting is inevitable in the real world: owing to the heavy-tailed distribution of visual categories, some objects will not be annotated in the training set. We show that the performance of two popular existing methods drops significantly (by up to 28%) when evaluated on novel objects compared to known objects. We propose methods that use large existing external corpora of (i) unlabeled text, i.e. books, and (ii) images tagged with classes, to achieve novel-object visual question answering. We conduct systematic empirical studies, both for an oracle case where the novel objects are known textually and for a fully automatic case without any explicit knowledge of the novel objects, under the minimal assumption that the novel objects are semantically related to the existing objects in training. The proposed methods for novel-object visual question answering are modular and can potentially be used with many visual question answering architectures. We show consistent improvements with the two popular architectures and give a qualitative analysis of the cases where the model does well and of those where it fails to bring improvements.
    Comment: 11 pages, 4 figures, accepted in CVPR 2017 (poster)
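The minimal assumption above (novel objects are semantically related to training objects) suggests mapping a novel object's word embedding to the closest known object. A toy sketch of that idea, with hypothetical helper names and 2D embeddings standing in for real word vectors, is:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def nearest_known_object(novel_vec, known_objects):
    """Map a novel object's embedding to the semantically closest object
    seen in training, so existing VQA machinery can be reused for it."""
    return max(known_objects, key=lambda name: cosine(novel_vec, known_objects[name]))
```

In practice the embeddings would come from a corpus of unlabeled text (the "(i) books" resource in the abstract); the toy vectors here only illustrate the nearest-neighbour lookup.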