26 research outputs found

    Autonomous visual navigation of an indoor environment using a parsimonious, insect inspired familiarity algorithm

    The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects’ brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path’s end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. 
Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery. The authors received funding from a Research Council Faculty Investment Grant from the University of Oklahoma.
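The pixel-by-pixel comparison at the heart of the NSFH reduces to a summed absolute difference (SAD) between the current scene and each stored training scene; the most familiar scene is the one with the lowest SAD. The study's analysis was done in MATLAB, so the following is only a minimal Python/NumPy sketch with hypothetical function names, assuming scenes are equally sized grayscale arrays.

```python
import numpy as np

def sad(scene_a, scene_b):
    """Summed absolute difference (SAD) between two equally sized
    grayscale scenes; lower values mean greater familiarity."""
    return int(np.abs(scene_a.astype(int) - scene_b.astype(int)).sum())

def most_familiar(current, memory):
    """Return the index of the stored training scene most similar
    to the currently viewed scene (illustrative name)."""
    diffs = [sad(current, m) for m in memory]
    return int(np.argmin(diffs))
```

In this sketch the agent would call `most_familiar` for each candidate view and move toward whichever view best matches memory, without any stored sequence of scenes.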

    To the Editor: Reply to Sarmati

The letter by Sarmati et al. [1] presents data indicating that they were unable to find a significant correlation between human herpesvirus (HHV)–8 seropositivity and a history of spontaneous abortion in a group of 245 human immunodeficiency virus (HIV) type 1–seronegative women but that they did observe a correlation between high HHV-8 antibody titers (≥1:1280) and spontaneous abortion. Although it is possible that an increased risk of spontaneous abortion may be associated with active infection with HHV-8, at this point there is not enough evidence to support such an association. Given that several human herpesviruses are well-known agents of fetal and/or perinatal infection, that primary maternal herpesvirus infection prior to 20 weeks of gestation has in some women been associated with spontaneous abortion [2], and given the dearth of information about the clinical manifestations of HHV-8 infection, Sarmati et al. are correct in suggesting that this question should be investigated further.

    Exploring the chemo-textural familiarity hypothesis for scorpion navigation

    No full text
Volume: 45, Start Page: 265, End Page: 27

    Tracking GUI and sample performance on a simple training path.

<p>The GUI contains a control panel (left) for user-set parameters; a map of the room (bottom center) with training path (blue line), surveyed scenes (black dots) and recapitulated path (red line); a familiarity monitor (top center) showing the IDF of the currently inspected scene to all scenes in the training path; and circularized, pixelated views (right) of the current scene and the most familiar training path scene.</p>

    Performance of algorithm relative to “real” training path.

    No full text
<p>The Bloggie camera with panoramic lens was mounted on a lab cart at the same height as the snapshot landscape. We maneuvered the cart in a U-shaped training path starting near the fume hood (near transect 6, point 42), passing around the laboratory island, and ending at a bookshelf (near transect 2, point 12). A contour and quiver plot (<b>A</b>) is shown with the approximate location of the training path indicated by blue dashed line; a surface plot (<b>B</b>) is also shown for the same training path. We used a sensor resolution of 40x40, 10 gray levels pixel depth, and 360 angles of inspection to generate these plots. <b>C</b> The scene familiarity algorithm was used to recapitulate the path by identifying the best-matched scene (rotated to the best-matched angle) from the snapshot landscape to the scenes acquired during the training path video (white dots = surveyed scenes; red line = recapitulated path).</p>

    Auto-tracking from lab to adjacent hallway.

    No full text
<p><b>A</b> The visual landscape was extended through the doorway and into the adjacent hallway using the same train track system described in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0153706#pone.0153706.g001" target="_blank">Fig 1</a>. A complex training path (blue line) started near transect 30, point 5 in the laboratory and made a looping path through the room, doorway, and hallway before ending near transect 55, point 15. The red line is the recapitulated path (in progress) and the blue dots are the previously inspected points. The familiarity monitor at top right shows absolute scene differences of the currently inspected scene to all scenes in the training path memory and the images below show the current scene and the most familiar scene in memory, both pixelated at 80x80. <b>B</b> Surface plots and (<b>C</b>) contour plots of the absolute scene differences in the room and hall relative to the training path show distinct catchment valleys of image similarity along the entire training route. Red arrows (quiver plots) below each plot indicate the best angle of rotation (arrow direction) and strength of image match (arrow length) at each point.</p>

    Schematic representation of the scene familiarity tracking algorithm.

    No full text
<p><b>A</b> Flowchart showing an overview of image acquisition and the workflow of the scene familiarity algorithm. <b>B</b> A simplified version of how the algorithm selects the direction it will move in the landscape. Each gray circle in the 5 x 5 array represents a captured image. The training path consists of three points, labeled U-C-E and linked with a gray line. A thick black ring represents the autonomous agent. Black lines indicating its considered views (1–5) are shown emanating outward. Gray lines around the ring indicate the agent’s rotational scanning capabilities. Steps 1–4 highlight a series of steps taken by the autonomous agent in a hypothetical example of path recapitulation within the landscape of surveyed images. In step 1, the agent is placed within the landscape two images to the right of training path image U. Step 2 shows the agent, after analyzing its considered views, choosing view 2 as having the lowest representative summed absolute image difference (SAD) and moving from its position in step 1 in the direction of image C. This decision is highlighted with a series of representative images: the designated training path image C and the 5 considered views, along with the calculated SAD value for each. Steps 3 and 4 show the agent finding and then progressing to the end of the training path. Once at the end of the path, the agent cannot progress further, as no direction toward a next image is recognized; it therefore begins a series of short movements back and forth around image E.</p>
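The direction-selection step in panel B (pick the considered view with the lowest SAD to the current training image) can be sketched as follows. This is a Python/NumPy illustration with hypothetical names, not the paper's MATLAB implementation.

```python
import numpy as np

def choose_direction(considered_views, training_scene):
    """Given candidate views (one grayscale array per movement
    direction), return the index of the view with the lowest
    summed absolute difference (SAD) to the training scene,
    along with all SAD values for inspection."""
    sads = [int(np.abs(v.astype(int) - training_scene.astype(int)).sum())
            for v in considered_views]
    return int(np.argmin(sads)), sads
```

In the caption's example, the view facing training image C would yield the smallest SAD, so the agent would step in that direction; at the end of the path no view improves the match, producing the back-and-forth behavior around image E.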

    Average IDF and RIDF plots.

    No full text
<p><b>A</b> We produced volcano plots comparing normalized scene differences (IDF) of each scene to all 1800 scenes in the room. Slices were taken through each plot along the north-south and east-west axes. Each slice was then divided at its focal point, with the falling phase of each slice inverted and aligned with the rising phase. This produced four rising curves for each of the 1800 volcanoes, or 7200 separate vectors for the entire room. The family of curves shows average values of these 7200 vectors with distance from the focal scene for each of the eight sensor resolutions (all at 10 levels of gray). <b>B</b> Plot of p50 values in meters (= points × 0.127 m) from the focal scene for the eight sampled resolutions. Dotted line is a 6<sup>th</sup>-order polynomial fitted to the points (R<sup>2</sup> = 0.99996). <b>C</b> RIDF plots were produced for all 1800 scenes at 360 degrees for the eight sampled sensor resolutions (all at 10 levels of gray). Only the first 40 degrees are shown because the curves remain flat to 180 degrees. The plots were produced as described in A, except that the total number of vectors was 3600. <b>D</b> Plot of p50 values in degrees for the eight sampled resolutions. Dotted line is a 6<sup>th</sup>-order polynomial fitted to the points (R<sup>2</sup> = 0.99932).</p>
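An RIDF curve records the SAD between a panoramic scene and a reference as the scene is rotated through each heading. For a panorama stored as a 2-D array, rotation amounts to a circular shift of pixel columns. A minimal sketch, assuming Python/NumPy and hypothetical names (the paper's analysis was done in MATLAB):

```python
import numpy as np

def ridf(scene, reference, n_angles=360):
    """Rotational IDF: circularly shift the panoramic scene through
    n_angles headings and record the SAD to the reference at each
    rotation. A deep minimum marks the best-matching heading."""
    w = scene.shape[1]  # panorama width in pixels
    curve = []
    for a in range(n_angles):
        shift = int(round(a * w / n_angles))
        rotated = np.roll(scene, shift, axis=1)
        curve.append(int(np.abs(rotated.astype(int)
                                - reference.astype(int)).sum()))
    return np.array(curve)
```

Comparing a scene to itself yields zero difference at zero rotation, which is the catchment minimum that the averaged RIDF curves in panel C quantify.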

    Comparison of hall survey images to all corridor and outdoor scenes.

    No full text
<p><b>A</b> All scenes were extracted from the corridor and outdoor videos (30 frames per second, converted to 100 levels of gray, enhanced via histeq, circularized, and pixelated to 100x100). The scenes were stacked in a MATLAB structured matrix and bracketed at the beginning with the 900 hall and 1800 laboratory survey scenes. Each corridor and external video (other than RH F1) consisted of two videos traversing the length in each direction. In all, the matrix consisted of 25,703 scenes. The top trace shows SAD comparisons of all scenes to scene 500 (which is within the hall survey set). The middle trace is a randomized plot of the same result (scene 500 is indicated in red). The lower trace is a rank-ordered plot of the same result. The values within the dashed red box are expanded in <b>B</b> and relevant scene numbers from the unsorted matrix are indicated on the X-axis. The markers are color coded, with darker gray levels indicating greater image similarity. These shades are carried over to <b>C,</b> where gray-shaded boxes represent the 8 most similar scenes around scene 500 (left-most red box). The other sets of boxes in C represent 9 additional, randomly selected focal scenes (red) taken from the 900 hall survey scenes and subjected to the same analysis; their 8 most similar neighbors are indicated in gray.</p>

    Corridor scene analysis.

    No full text
<p>Panoramic videos along the length of 11 building corridors and one campus walkway were obtained with the Bloggie camera mounted at the same height (128 cm) as in the laboratory and hall surveys. For each corridor or walkway, a single video was made by pushing the cart with the Bloggie mounted in a straight line path down the center of the corridor or walkway at a constant speed. We used MATLAB to extract 50 evenly spaced scenes from each video, convert the scenes to 100 levels of gray, enhance the image contrast with the histeq function, and circularize and pixelate each scene to 100x100. The contour plots next to the still image of each corridor show cross-correlations of absolute scene difference values (with colors from blue to red indicating increasing scene difference values). The line graphs are plots of the mean SAD (± SD) for the nearest 25 scenes to each scene. We developed a rough index of the probability of aliasing by subtracting the SAD value of the nearest scene to the focal scene from the lowest local minimum within the set. These indices are shown in red in the upper right of each graph, with the highest number representing the lowest probability of aliasing. The corridors are arranged in descending order of this index. The outside walkway video produced no local minima and was deemed the most visually rich environment.</p>
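The rough aliasing index described above subtracts the SAD of the scene nearest the focal scene from the lowest competing local minimum in the SAD curve: the larger the gap, the less likely a distant scene will be mistaken for a nearby one. A hedged Python sketch, assuming the SAD curve is a 1-D sequence indexed by scene position and treating the focal scene's immediate neighbours as the "nearest scene" (names and details are illustrative, not the paper's MATLAB code):

```python
import numpy as np

def aliasing_index(sad_curve, focal_idx):
    """Rough aliasing index: SAD of the focal scene's nearest
    neighbour subtracted from the lowest *other* local minimum
    in the curve. Larger values suggest less chance of aliasing."""
    nearest = min(
        sad_curve[focal_idx - 1] if focal_idx > 0 else np.inf,
        sad_curve[focal_idx + 1] if focal_idx + 1 < len(sad_curve) else np.inf,
    )
    # Local minima outside the focal scene's immediate catchment.
    minima = [sad_curve[i] for i in range(1, len(sad_curve) - 1)
              if sad_curve[i] < sad_curve[i - 1]
              and sad_curve[i] < sad_curve[i + 1]
              and abs(i - focal_idx) > 1]
    if not minima:
        return None  # no competing minima, as for the outdoor walkway
    return min(minima) - nearest
```

A curve with no competing local minima, like the outdoor walkway in the paper, has no index in this sketch, which is consistent with it being ranked the most visually distinctive environment.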