
    Multiphase Geometric Couplings for the Segmentation of Neural Processes

    The ability to constrain the geometry of deformable models for image segmentation can be useful when information about the expected shape or positioning of the objects in a scene is known a priori. An example of this occurs when segmenting neural cross sections in electron microscopy. Such images often contain multiple nested boundaries separating regions of homogeneous intensities. For these applications, multiphase level sets provide a partitioning framework that allows for the segmentation of multiple deformable objects by combining several level set functions. Although there has been much effort in the study of statistical shape priors that can be used to constrain the geometry of each partition, none of these methods allow for the direct modeling of geometric arrangements of partitions. In this paper, we show how to define elastic couplings between multiple level set functions to model ribbon-like partitions. We build such couplings using dynamic force fields that can depend on the image content and relative location and shape of the level set functions. To the best of our knowledge, this is the first work that shows a direct way of geometrically constraining multiphase level sets for image segmentation. We demonstrate the robustness of our method by comparing it with previous level set segmentation methods.
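The elastic coupling idea can be sketched in a highly simplified form. The following toy Python example is my own illustration, not the authors' code: it couples two signed-distance level set functions with a spring-like force that drives the gap between their zero level sets toward a target ribbon width, omitting the image-dependent force fields of the full model.

```python
import numpy as np

def ribbon_coupling_force(phi_outer, phi_inner, width, k=1.0):
    # Spring-like coupling: for signed distance functions of nested
    # contours, phi_inner - phi_outer approximates the local thickness
    # of the ribbon between the two zero level sets.
    gap = phi_inner - phi_outer
    return -k * (gap - width)

def evolve(phi_outer, phi_inner, width, dt=0.1, steps=200):
    # Evolve both level sets under the mutual elastic coupling alone
    # (the image and smoothness terms of a full model are omitted).
    for _ in range(steps):
        f = ribbon_coupling_force(phi_outer, phi_inner, width)
        phi_outer = phi_outer - dt * f   # outer contour widens/narrows
        phi_inner = phi_inner + dt * f   # ...and the inner one reacts symmetrically
    return phi_outer, phi_inner
```

Under this update the ribbon thickness relaxes exponentially toward the target width, which is the qualitative behaviour an elastic coupling is meant to enforce.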

    Preservice Elementary Teachers Increase Descriptive Science Vocabulary by Making Descriptive Adjective Object Boxes

    Descriptive vocabulary is needed for communication and mental processing of science observations. Elementary preservice teachers in a science methods class at a mid-sized public college in central New York State increased their descriptive vocabularies through a course assignment of making a descriptive adjective object box. This teaching material consists of a set of theme-related objects with corresponding cards housed in a box. The front of each card lists four descriptive adjectives that describe physical observations of one of the objects, with an image of the object on the reverse for self-checking. The student reads these descriptive words and attempts to locate the one object to which they all refer. Preservice teachers (N = 67; 8M, 59F; 3H, 2B, 1A, 61W) took identical pretests/posttests in which they wrote descriptive adjectives for four objects. During the intervention, they explored example boxes with activities and worked in pairs to create their own sets of materials. Participants increased the words generated from 17.8 to 25.7 for the four objects. The grade level of the words produced also increased, from 2.9 to 3.8. Both increases were statistically significant, with a very large effect size (1.84) for words generated and a medium effect size (0.35) for the increase in grade level of vocabulary.

    Calibration of quasi-static aberrations in exoplanet direct-imaging instruments with a Zernike phase-mask sensor. II. Concept validation with ZELDA on VLT/SPHERE

    Warm or massive gas giant planets, brown dwarfs, and debris disks around nearby stars are now routinely observed by dedicated high-contrast imaging instruments on large, ground-based observatories. These facilities include extreme adaptive optics (ExAO) and state-of-the-art coronagraphy to achieve unprecedented sensitivities for exoplanet detection and spectral characterization. However, differential aberrations between the ExAO sensing path and the science path represent a critical limitation for the detection of giant planets with a contrast lower than a few 10^{-6} at very small separations (<0.3 arcsec) from their host star. In our previous work, we proposed a wavefront sensor based on Zernike phase-contrast methods to circumvent this issue and measure these quasi-static aberrations at a nanometric level. We present the design, manufacturing, and testing of ZELDA, a prototype that was installed on VLT/SPHERE during its reintegration in Chile. Using the internal light source of the instrument, we performed measurements in the presence of Zernike or Fourier modes introduced with the deformable mirror. Our experimental and simulation results are consistent, confirming the ability of our sensor to measure small aberrations (<50 nm rms) with nanometric accuracy. We then corrected the long-lived non-common path aberrations in SPHERE based on ZELDA measurements. We estimated a contrast gain of 10 in the coronagraphic image at 0.2 arcsec, reaching the raw contrast limit set by the coronagraph in the instrument. The simplicity of the design and of its phase reconstruction algorithm makes ZELDA an excellent candidate for the on-line measurement of quasi-static aberrations during observations. The implementation of a ZELDA-based sensing path on current and future facilities (ELTs, future space missions) could ease the observation of cold gaseous or massive rocky planets around nearby stars. Comment: 13 pages, 12 figures, A&A accepted on June 3rd, 2016; v2 after language editing.
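As a rough illustration of Zernike phase-contrast sensing (a toy linearization, not the actual ZELDA reconstruction algorithm), note that for an idealized focal-plane dot of phase shift theta that intercepts the full reference beam, a small wavefront error phi produces a pupil intensity of approximately I ~ I0 * (1 + 2*sin(theta)*phi), so phi can be read off linearly. The `reference` image, the pi/2 depth, and the sensing wavelength below are assumptions of this sketch.

```python
import numpy as np

def zernike_phase_estimate(intensity, reference, depth=np.pi / 2):
    # Invert I ~ I0 * (1 + 2*sin(depth)*phi) for the wavefront phase phi
    # (radians). Valid only for small aberrations and an idealized mask.
    return (intensity / reference - 1.0) / (2.0 * np.sin(depth))

def phase_to_opd_nm(phi, wavelength_nm=1642.0):
    # Convert phase (radians) to optical path difference in nanometres;
    # the near-infrared wavelength here is an assumed value for illustration.
    return phi * wavelength_nm / (2.0 * np.pi)
```

The linearity of this inversion is what makes the reconstruction algorithm simple: no iterative phase retrieval is needed, only a normalized intensity map and a division.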

    Knowledge and Reasoning for Image Understanding

    Image Understanding is a long-established discipline in computer vision, which encompasses a body of advanced image processing techniques used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty; thus, the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question asking beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning. Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images; this survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods to solve several applications and show that these approaches benefit in terms of accuracy and interpretability from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods. Doctoral Dissertation, Computer Science, 201

    Towards Supporting Visual Question and Answering Applications

    Visual Question Answering (VQA) is a new research area involving technologies ranging from computer vision and natural language processing to other sub-fields of artificial intelligence such as knowledge representation. The fundamental task is to take as input one image and one question (in text) related to the given image, and to generate a textual answer to the input question. There are two key research problems in VQA: image understanding and question answering. My research mainly focuses on developing solutions to support solving these two problems. In image understanding, one important research area is semantic segmentation, which takes images as input and outputs the label of each pixel. Because much manual work is needed to label a useful training set, typical training sets for such supervised approaches are small. There are also approaches with relaxed labeling requirements, called weakly supervised semantic segmentation, where only image-level labels are needed. With the development of social media, more and more user-uploaded images are available online. Such user-generated content often comes with labels like tags and may be coarsely labelled by various tools. To use this information for computer vision tasks, I propose a new graphical model that considers the neighborhood information and its interactions to obtain pixel-level labels of images from only incomplete image-level labels. The method was evaluated on both synthetic and real images. In question answering, my research centers on best-answer prediction, which addresses two main research topics: feature design and model construction. In the feature design part, most existing work discussed how to design effective features for answer quality / best answer prediction; however, little work mentioned how to design features by considering the relationship between the answers to one given question. To fill this research gap, I designed new features that help improve prediction performance. In the modeling part, to exploit the structure of the feature space, I proposed an innovative learning-to-rank model based on the hierarchical lasso. Experiments comparing against the state of the art in the best-answer prediction literature confirm that the proposed methods are effective and suitable for the research task. Doctoral Dissertation, Computer Science, 201
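As a loose sketch of the modeling idea, a pairwise learning-to-rank objective for best-answer prediction can be trained with a lasso penalty via proximal soft-thresholding. This is my own illustration with hypothetical feature data; a flat L1 penalty stands in for the hierarchical lasso structure described in the thesis.

```python
import numpy as np

def train_best_answer_ranker(X, best, lam=0.01, lr=0.05, epochs=500):
    # X: (n_answers, n_features) feature matrix for one question's answers.
    # best: index of the known best answer. We minimize a pairwise hinge
    # loss that prefers the best answer over every other answer, plus an
    # L1 (lasso) penalty applied as a proximal soft-thresholding step.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        scores = X @ w
        # hinge margin: want scores[best] >= scores[j] + 1 for all j != best
        margins = 1.0 - (scores[best] - scores)
        margins[best] = 0.0
        active = margins > 0
        grad = (X[active] - X[best]).sum(axis=0)  # subgradient of hinge loss
        w = w - lr * grad
        # proximal step for the lasso term: shrink weights toward zero
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w
```

The soft-thresholding step is what produces sparse weights, i.e., an automatic selection among the designed features.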

    Medical Image Segmentation Combining Level Set Method and Deep Belief Networks

    Medical image segmentation is an important step in medical image analysis, where the main goal is the precise delineation of organs and tumours from medical images. For instance, there is evidence in the field of a positive correlation between the precision of these segmentations and the accuracy observed in classification systems that use them as inputs. Over the last decades, a vast number of medical image segmentation models have been introduced; they can be divided into five main groups: 1) image-based approaches, 2) active contour methods, 3) machine learning techniques, 4) atlas-guided segmentation and registration, and 5) hybrid models. Image-based approaches use only intensity values or texture for segmenting (e.g., thresholding techniques) and usually do not produce precise segmentations. Active contour methods can use an explicit representation (e.g., snakes) with the goal of minimizing an energy function that forces the contour to move towards strong edges while maintaining contour smoothness. The use of an implicit representation in active contour methods (e.g., the level set method) embeds the contour as the zero level set of a higher-dimensional surface (i.e., the curve representing the contour does not need to be parameterized as in the snakes model). Although successful, the main issue with active contour methods is that the energy function must contain terms describing all possible shape and appearance variations, which is a complicated task given that it is hard to design all these terms by hand. Also, such active contour methods may get stuck at image regions that do not belong to the object of interest. Machine learning techniques address this issue by automatically learning shape and appearance models from annotated training images.
    Nevertheless, in order to meet the high accuracy requirements of medical image analysis applications, machine learning methods usually need large and rich training sets and also face the complexity of the inference process. Atlas-guided segmentation and registration use an atlas image, constructed from manually segmented images; a new image is segmented by registering it with the atlas. These techniques have been applied successfully in many applications, but they still face some issues, such as their ability to represent the variability of anatomical structure and scale in medical images, and the complexity of the registration algorithms. In this work, we propose a new hybrid segmentation approach that combines a level set method with a machine learning approach (a deep belief network). Our main objective is to achieve segmentation accuracy comparable to or better than that of machine learning methods, but using relatively smaller training sets. These weaker requirements on the size of the training sets are compensated by the hand-designed segmentation terms present in typical level set methods, which serve as prior information on the anatomy to be segmented (e.g., smooth contours, strong edges, etc.). In addition, we choose a machine learning methodology that typically requires smaller annotated training sets than other methods proposed in the field. Specifically, we use deep belief networks, with training sets consisting to a large extent of un-annotated images. In general, our hybrid approach uses the result produced by the deep belief network as a prior in the level set evolution. We validate this method on the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2009 left ventricle segmentation challenge database and on the Japanese Society of Radiological Technology (JSRT) lung segmentation dataset. The experiments show that our approach produces competitive results in the field in terms of segmentation accuracy. More specifically, the use of our proposed methodology in a semi-automated segmentation system (i.e., using a manual initialization) produces the best result in the field on both databases above, and in the fully automated setting our method shows results competitive with the current state of the art. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 201
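A crude sketch of the hybrid idea (my own illustration, not the thesis implementation): use a learned per-pixel object-probability map, such as a trained model might produce, as a balloon force in an explicit level set update, with a Laplacian as a rough surrogate for the curvature smoothing term.

```python
import numpy as np

def grad_norm(phi):
    # |grad phi| by central differences (periodic boundaries for brevity).
    gx = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / 2.0
    gy = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / 2.0
    return np.sqrt(gx ** 2 + gy ** 2)

def level_set_step(phi, prior_prob, dt=0.1, alpha=1.0, smooth=0.1):
    # One explicit update of a level set (convention: phi > 0 inside).
    # prior_prob is a per-pixel probability that the pixel belongs to the
    # object, playing the role of the learned prior: the contour expands
    # where the prior is confident (p > 0.5) and shrinks elsewhere.
    force = alpha * (prior_prob - 0.5)
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    return phi + dt * (force * grad_norm(phi) + smooth * lap)
```

Because the prior enters only as a speed term, the same update accommodates either a manual initialization (the semi-automated setting) or an initialization derived from the probability map itself.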

    Decorative composition tasks for painting as a means to develop students' creative personality (direction "Design")

    The essay is dedicated to the problems of teaching painting at the design faculties of higher education institutions. The main task in teaching painting is to help students make a conscious choice of visual tools: the selection of pictorial means should be determined by a figurative idea and should contribute to the most expressive solution of the set problem. The approach to painting thus becomes more analytical, based on the laws of formal composition and colour science. Special attention is paid to the search for figurative solutions. In addition to their educational activities, students are challenged to perform their works at a high professional and artistic level and to develop their creative personality. In painting, students create decorative compositions in free form, provided that the depicted objects remain recognizable. Understanding the principles of form generation and image creation contributes to a more precise choice of figurative instruments and allows students’ creative abilities to develop as required by the specifics of their professional education. Purpose of the article: to study the experience of teaching painting with a view to improving its efficiency and quality, and to evaluate the disciplines that provide fundamental artistic training in the development of students’ design skills and abilities. The method is based on the experience of the General Painting Department of the St. Petersburg Academy of Applied Arts named after Stieglitz, as well as on an analysis of the teaching methods of the Russian classic academic school, in particular those of N. Volkov and P. Chistyakov. Traditional methods of pedagogical study have been used, such as observation, analysis of experience, and pedagogical experiments.