
    A detection method of intersections for determining overlapping using active vision

    Sometimes the presence of one object makes it difficult to observe neighbouring objects: part of the surface of one object partially occludes the surface of another, which increases the complexity of the recognition process. The information acquired from the scene to describe the objects is therefore often incomplete and depends heavily on the viewpoint of the observation. Thus, when a real scene is observed, the regions and boundaries that delimit objects and separate them from one another are not easily perceived. This paper presents a method for discerning objects from one another by delimiting where the surface of each object begins and ends. In essence, we aim to detect the overlapping and occlusion zones of two or more objects that interact in the same scene. This is useful, on the one hand, for distinguishing objects when features such as texture, colour, and geometric form are not sufficient to separate them through segmentation. On the other hand, it is also important for identifying occluded zones without prior knowledge of the type of objects to be recognised. The proposed approach detects occluded zones by means of structured light patterns projected onto the object surfaces in a scene. These light patterns exhibit discontinuities where the projected beams become deformed as they strike the surfaces, and such discontinuities are taken as boundary zones of candidate occlusion regions.
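    A minimal sketch of the underlying idea, not the authors' implementation: with a single light stripe projected across the scene, abrupt jumps in the stripe's position from one image column to the next mark places where the beam is deformed by a nearer surface, and those columns can be flagged as candidate occlusion boundaries. The image layout, function names, and threshold below are assumptions made for illustration (Python/NumPy).

        # Minimal sketch (not the paper's implementation): locate a projected light
        # stripe in each image column and flag abrupt jumps in its vertical position
        # as candidate occlusion boundaries. Image and threshold names are assumptions.
        import numpy as np

        def stripe_profile(gray: np.ndarray) -> np.ndarray:
            """Return, for each column, the row index of the brightest pixel
            (a crude estimate of where the projected stripe falls)."""
            return np.argmax(gray, axis=0).astype(float)

        def occlusion_candidates(gray: np.ndarray, jump_thresh: float = 8.0) -> np.ndarray:
            """Columns where the stripe position jumps sharply between neighbours
            are treated as candidate occlusion/overlap boundaries."""
            profile = stripe_profile(gray)
            jumps = np.abs(np.diff(profile))
            return np.flatnonzero(jumps > jump_thresh)

        if __name__ == "__main__":
            # Synthetic example: a stripe displaced where a nearer surface occludes a farther one.
            img = np.zeros((100, 200), dtype=np.uint8)
            img[40, :120] = 255      # stripe on the far surface
            img[65, 120:] = 255      # stripe shifted by the occluding surface
            print(occlusion_candidates(img))   # -> [119], the boundary column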

    Computation of protein geometry and its applications: Packing and function prediction

    This chapter discusses geometric models of biomolecules and geometric constructs, including the union-of-balls model, the weighted Voronoi diagram, the weighted Delaunay triangulation, and alpha shapes. These geometric constructs enable fast, analytical computation of the shapes of biomolecules (including features such as voids and pockets) and of metric properties (such as area and volume). The algorithms for Delaunay triangulation, for the computation of voids and pockets, and for volume/area computation are also described. In addition, applications to the packing analysis of protein structures and to protein function prediction are discussed. Comment: 32 pages, 9 figures
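    As a rough illustration of these constructs, the sketch below builds an unweighted alpha complex: a Delaunay triangulation of point (atom-centre) coordinates in which only tetrahedra with circumradius at most alpha are retained. The chapter's weighted Voronoi/Delaunay and alpha-shape machinery accounts for atom radii and is more involved; this uniform-radius special case, with assumed function names and parameters, is only meant to convey the flavour.

        # Minimal sketch of an (unweighted) alpha complex: build the Delaunay
        # triangulation of atom centres and keep only the tetrahedra whose
        # circumradius is at most alpha. This is the uniform-radius special case,
        # not the chapter's weighted (regular triangulation) construction.
        import numpy as np
        from scipy.spatial import Delaunay

        def circumradius(pts: np.ndarray) -> float:
            """Circumradius of a tetrahedron given its 4 vertices in 3D."""
            a = pts[1:] - pts[0]                                   # 3x3 system for the circumcentre
            b = np.sum(pts[1:] ** 2, axis=1) - np.sum(pts[0] ** 2)
            centre = np.linalg.solve(2 * a, b)
            return float(np.linalg.norm(centre - pts[0]))

        def alpha_complex(points: np.ndarray, alpha: float) -> np.ndarray:
            """Return the Delaunay tetrahedra whose circumradius <= alpha."""
            tri = Delaunay(points)
            keep = [s for s in tri.simplices if circumradius(points[s]) <= alpha]
            return np.array(keep)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            atoms = rng.uniform(0.0, 10.0, size=(200, 3))   # fake "atom" centres
            print(len(alpha_complex(atoms, alpha=1.5)), "tetrahedra retained")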

    Neural Dynamics of Motion Grouping: From Aperture Ambiguity to Object Speed and Direction

    A neural network model of visual motion perception and speed discrimination is developed to simulate data concerning the conditions under which components of moving stimuli cohere or not into a global direction of motion, as in barberpole and plaid patterns (both Type 1 and Type 2). The model also simulates how the perceived speed of lines moving in a prescribed direction depends upon their orientation, length, duration, and contrast. Motion direction and speed both emerge as part of an interactive motion grouping or segmentation process. The model proposes a solution to the global aperture problem by showing how information from feature tracking points, namely locations from which unambiguous motion directions can be computed, can propagate to ambiguous motion direction points and capture the motion signals there. The model does this without computing intersections of constraints or parallel Fourier and non-Fourier pathways. Instead, the model uses orientationally-unselective cell responses to activate directionally-tuned transient cells. These transient cells, in turn, activate spatially short-range filters and competitive mechanisms over multiple spatial scales to generate speed-tuned and directionally-tuned cells. Spatially long-range filters and top-down feedback from grouping cells are then used to track motion of featural points and to select and propagate correct motion directions to ambiguous motion points. Top-down grouping can also prime the system to attend to a particular motion direction. The model hereby links low-level automatic motion processing with attention-based motion processing. Homologs of model mechanisms have been used in models of other brain systems to simulate data about visual grouping, figure-ground separation, and speech perception. Earlier versions of the model have simulated data about short-range and long-range apparent motion, second-order motion, and the effects of parvocellular and magnocellular LGN lesions on motion perception.
    Office of Naval Research (N00014-920J-4015, N00014-91-J-4100, N00014-95-1-0657, N00014-95-1-0409, N00014-91-J-0597); Air Force Office of Scientific Research (F4620-92-J-0225, F49620-92-J-0499); National Science Foundation (IRI-90-00530)
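    The motion-capture step described above, in which confident feature-tracking signals propagate to and override ambiguous interior signals, can be caricatured in one dimension. The sketch below is not the model itself (it has no transient cells, competition, or grouping feedback); it is only a confidence-weighted long-range smoothing, with names and kernel width assumed for illustration.

        # A heavily simplified 1D caricature of "motion capture": unambiguous
        # feature-tracking points carry confident direction estimates, and a
        # long-range filter lets those estimates propagate to ambiguous points.
        import numpy as np

        def capture_directions(directions: np.ndarray,
                               confidence: np.ndarray,
                               sigma: float = 5.0) -> np.ndarray:
            """Confidence-weighted Gaussian propagation of direction estimates
            (in degrees) along a 1D array of positions."""
            idx = np.arange(len(directions))
            kernel = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
            weights = kernel * confidence[None, :]
            return (weights @ directions) / weights.sum(axis=1)

        if __name__ == "__main__":
            # A "line" of 21 points: endpoints signal the true direction (0 deg)
            # confidently; interior points default to the ambiguous normal (45 deg).
            directions = np.full(21, 45.0)
            directions[[0, 20]] = 0.0
            confidence = np.full(21, 0.05)
            confidence[[0, 20]] = 1.0
            print(np.round(capture_directions(directions, confidence), 1))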

    Building Movie Map -- A Tool for Exploring Areas in a City -- and its Evaluation

    We propose a new Movie Map system with an interface for exploring cities. The system consists of four stages: acquisition, analysis, management, and interaction. In the acquisition stage, omnidirectional videos are taken along streets in the target areas. Frames of the video are localized on the map, intersections are detected, and the videos are segmented; turning views at intersections are then generated. By connecting the video segments according to a specified movement through an area, users can view the streets along a chosen route. The interface allows for easy exploration of a target area and can show virtual billboards of stores in the view. We conducted user studies comparing our system with Google Street View (GSV) in a scenario where users could move and explore freely to find a landmark. The experiments showed that our system provided a better user experience than GSV.
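    One plausible way to realize the management stage, sketched here as an assumption rather than the authors' implementation: intersections become graph nodes, segmented street videos become edges, and a route through the area maps onto a playlist of video segments with a turning view inserted at each intersection. All class, file, and node names are invented for illustration.

        # Hypothetical sketch of the management stage: a graph whose nodes are
        # intersections and whose edges are segmented street videos; a route
        # becomes a playlist of segments plus turning views at intersections.
        from collections import defaultdict

        class MovieMap:
            def __init__(self):
                self.segments = {}                 # (from_node, to_node) -> video file
                self.adjacency = defaultdict(list)

            def add_segment(self, a: str, b: str, video: str):
                self.segments[(a, b)] = video
                self.adjacency[a].append(b)

            def playlist(self, route: list[str]) -> list[str]:
                """Turn a route (sequence of intersections) into the videos to play,
                inserting a synthesized turning view between consecutive segments."""
                clips = []
                for a, b in zip(route, route[1:]):
                    clips.append(self.segments[(a, b)])
                    clips.append(f"turning_view_at_{b}.mp4")  # generated in the analysis stage
                return clips[:-1]  # no turning view after the final segment

        if __name__ == "__main__":
            m = MovieMap()
            m.add_segment("A", "B", "street_A_B.mp4")
            m.add_segment("B", "C", "street_B_C.mp4")
            print(m.playlist(["A", "B", "C"]))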

    Dublin City University video track experiments for TREC 2001

    Dublin City University participated in the interactive search task and the Shot Boundary Detection task of the TREC Video Track. In the interactive search task experiment, thirty people used three different digital video browsers to find video segments matching the given topics. Each user was under a time constraint of six minutes for each topic assigned to them. The purpose of this experiment was to compare the video browsers, so a method was developed for combining independent users' results for a topic into one set of results. Collated results based on the thirty users are available herein, though individual users' and browsers' results are currently unavailable for comparison. Our purpose in participating in this TREC track was to create the ground truth within the TREC framework, which will allow us to make direct browser performance comparisons.
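    The abstract does not spell out the combination rule, so the following is only one plausible sketch of collating independent users' results for a topic: pool the video segments (shots) each user found and rank them by how many users agreed. Function and shot names are invented for illustration.

        # Plausible sketch (the exact combination rule is not given above):
        # merge per-user shot sets for one topic and rank shots by user agreement.
        from collections import Counter

        def collate(user_results: dict[str, set[str]]) -> list[tuple[str, int]]:
            """Pool the shots found by all users; most widely agreed shots first."""
            counts = Counter()
            for shots in user_results.values():
                counts.update(shots)
            return counts.most_common()

        if __name__ == "__main__":
            per_user = {
                "user01": {"shot_12", "shot_47"},
                "user02": {"shot_47"},
                "user03": {"shot_47", "shot_90"},
            }
            print(collate(per_user))   # shot_47, found by all three users, ranks first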

    Ear Biometrics Based on Geometrical Feature Extraction

    Biometric identification methods have proved to be very efficient and more natural and easy for users than traditional methods of human identification. In fact, only biometric methods truly identify humans, rather than the keys and cards they possess or the passwords they are supposed to remember. The future of biometrics will surely lead to systems based on image analysis, as data acquisition is very simple and requires only cameras, scanners, or sensors. More importantly, such methods can be passive, meaning that the user does not have to take an active part in the process or, in fact, need not even know that identification is taking place. There are many possible data sources for human identification systems, but physiological biometrics seem to have many advantages over methods based on human behaviour. The most interesting anatomical parts for such passive, physiological biometric systems based on images acquired from cameras are the face and the ear. Both contain a large volume of unique features that allow many users to be distinctively identified, and they will surely be implemented in efficient biometric systems for many applications. This article introduces ear biometrics and presents its advantages over face biometrics in passive human identification systems. The geometrical method of extracting features from human ear images in order to perform identification is then presented.
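    As an illustrative sketch of geometrical feature extraction in general, not the article's specific method: extract the outer ear contour with edge detection and describe it by normalised distances from the contour centroid, giving a simple feature vector that can be compared between images. File names, thresholds, and the sampling scheme below are assumptions.

        # Illustrative sketch only (not the article's exact method): describe the
        # outer ear contour by normalised radial distances from its centroid.
        import cv2
        import numpy as np

        def ear_feature_vector(path: str, n_samples: int = 36) -> np.ndarray:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            edges = cv2.Canny(gray, 50, 150)
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            outline = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)

            centroid = outline.mean(axis=0)
            radii = np.linalg.norm(outline - centroid, axis=1)
            # Sample the radial profile at evenly spaced contour points and
            # normalise by the maximum radius for some scale invariance.
            idx = np.linspace(0, len(radii) - 1, n_samples).astype(int)
            return radii[idx] / radii.max()

        if __name__ == "__main__":
            probe = ear_feature_vector("ear_probe.png")      # hypothetical file names
            gallery = ear_feature_vector("ear_gallery.png")
            print("distance:", np.linalg.norm(probe - gallery))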