8 research outputs found

    An Integrated Modelling, Debugging, and Visualization Environment for G12

    No full text
    We present G12IDE, a front-end for the G12 platform aimed at helping users create and work with constraint models in a manner independent from any underlying solver. G12IDE contains tools for writing and evaluating models using Zinc and provides a feature-rich debugger for monitoring a running search process. Debugging a search, as opposed to debugging sequential code, requires concepts such as breakpoints and queries to be applied at a higher level than in standard debuggers. Our solution is to let users define special events which, once reached in a search, cause the debugger to halt and report, possibly in a visual manner, useful information on the current state of the search. G12IDE also includes a number of visualisation tools for drawing graphs and trees, and additionally allows users to create arbitrary domain-specific visualisations, such as the drawing of a sequential plan when the constraint problem is in fact a planning problem. The inclusion of such a powerful and flexible visualisation toolkit, tightly integrated with the available debugging facilities, is, to the best of our knowledge, completely novel.
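
    The abstract's notion of a search-level breakpoint can be illustrated with a small sketch (Python here purely for illustration; G12IDE itself operates on Zinc models and the G12 search engine, and all names below are hypothetical). A user-defined predicate over search states plays the role of a "special event": the search halts and reports its state as soon as the predicate holds.

        # Illustrative sketch only (hypothetical names): a user-defined "event"
        # predicate halting a backtracking search, in the spirit of G12IDE's
        # search-level breakpoints.

        class SearchHalted(Exception):
            """Raised when a user-defined debugging event is reached."""

        def solve(domains, assignment, event, consistent):
            if event(assignment):                  # search-level "breakpoint"
                raise SearchHalted(dict(assignment))
            if len(assignment) == len(domains):
                return assignment                  # complete assignment found
            var = len(assignment)                  # next unassigned variable
            for value in domains[var]:
                candidate = {**assignment, var: value}
                if consistent(candidate):
                    result = solve(domains, candidate, event, consistent)
                    if result is not None:
                        return result
            return None                            # dead end: backtrack

        # Halt the search as soon as any three variables are assigned.
        try:
            solve([list(range(5))] * 5, {},
                  event=lambda a: len(a) == 3,
                  consistent=lambda a: len(set(a.values())) == len(a))
        except SearchHalted as halt:
            print("event reached, search state:", halt.args[0])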

    Low contrast trip hazard avoidance using simulated prosthetic vision

    No full text
    Purpose: Perceiving trip hazards is critical for safe mobility and an important ability for prosthetic vision. Ground surface obstructions (e.g., bumps, steps, curbs) not marked by contrasting intensity or abrupt changes of depth can be difficult to perceive using standard prosthetic vision representations. These include representations conveying only the luminance (Intensity) or the relative depth of the environment (Depth), particularly with a low dynamic range visual representation. We have developed a novel system and method that estimates traversable space using binocular images to highlight trip hazards in a depth-based visual representation (Augmented Depth). We evaluate the effectiveness of Augmented Depth for mobility in the presence of ground obstacles using simulated prosthetic vision with 98 phosphenes and 8 noticeably different levels of brightness/size. The first generation retinal implant from Bionic Vision Australia will have 98 electrodes. Methods: Two normally-sighted (20/20, Pelli-Robson >= 1.95) participants used a mobile artificial vision simulator set with 98 phosphenes centrally displayed identically to both eyes to navigate a mobility course. A shroud blocked their normal vision. Phosphenes representing traversable ground were uniformly rendered with low brightness; all other phosphenes were rendered as per the standard depth-based representation, scaled up to increase contrast. Augmented Depth was compared to Depth and Intensity representations. Participants traversed a 9 m x 3 m straight corridor (dark floor, white walls) containing black ground obstacles which varied in quantity, size, and placement. Trial obstacle variables and visual representation presentation order were randomised. Participants each completed five 2-hour sessions consisting of approximately equal numbers of traversals for each visual representation (Augmented Depth: n=79; Depth: n=80; Intensity: n=79). The primary outcome measure was the number of contacts/collisions with obstacles and walls. Institutional Ethics Board approval was obtained. Results: Augmented Depth (mean = 0.346) yielded significantly fewer contacts than Depth (mean = 0.735, p = 0.021) and Intensity (mean = 0.812, p = 0.007). Conclusions: Augmented Depth may be an effective representation for assisting mobility with current and future visual prostheses. Results show a significant advantage over intensity- and depth-based representations for avoiding trip hazards.
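
    A minimal sketch of the core rendering idea (hypothetical code, not the published implementation; the actual system estimates traversable space from binocular stereo): pixels classified as traversable ground are rendered uniformly at low brightness, while all remaining pixels keep a contrast-stretched depth encoding quantised to 8 levels.

        import numpy as np

        # Minimal sketch (not the authors' implementation): render an
        # "Augmented Depth" frame from a depth map and a traversable-ground
        # mask. Ground pixels get one uniform low brightness level; all other
        # pixels use a contrast-stretched depth encoding, nearer = brighter.

        def augmented_depth(depth, ground_mask, levels=8, ground_level=1):
            near, far = depth.min(), depth.max()
            stretched = 1.0 - (depth - near) / max(far - near, 1e-6)
            frame = np.round(stretched * (levels - 1)).astype(int)
            frame[ground_mask] = ground_level   # flat, low-brightness ground
            return frame                        # integer levels 0..levels-1

        depth = np.random.uniform(0.5, 4.0, (30, 40))  # toy depth map, metres
        ground = depth > 2.5                           # stand-in ground mask
        print(augmented_depth(depth, ground))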

    Mobility experiments using simulated prosthetic vision with 98 phosphenes of limited dynamic range

    No full text
    Purpose: Using simulated prosthetic vision (SPV) and a mobility course including overhanging obstacles, we evaluate the effectiveness of a 98-phosphene, low dynamic range retinal implant. (Bionic Vision Australia's first prototype retinal prosthesis will have 98 electrodes.) Effectiveness was defined as a percentage of preferred walking speed (PPWS) larger than 50%. We investigated the performance of two visual representations: Intensity (visual field luminance) and Depth (relative depth of the visual field). Depth provides an example of an alternative visual representation that is robust to low contrast environments. Methods: Four normally-sighted participants (20/20, Pelli-Robson >= 1.95) traversed a mobility course using mobile SPV showing 98 phosphenes on a hexagonal grid and centrally displayed identically to both eyes. Each phosphene was rendered over 8 levels of brightness. The study had a double-blind randomised factorial design. Results: In the test phase, mean PPWS for both Intensity (n=56) and Depth (n=110) was significantly larger than 50% (p<=0.028). For both Intensity and Depth, mean PPWS was significantly greater in the test phase compared to the training phase (p<=0.035), indicating that participants became more adept with both visual representations over time. Across all sessions, Intensity (n=104, mean PPWS=61%) and Depth (n=204, mean PPWS=50%) had significantly higher PPWS than no meaningful visual information (n=19, mean PPWS=27%, p<0.001). Intensity had significantly higher mean PPWS than Depth (p<0.001). Conclusions: Visual information representing scene contrast or depth cues using simulated prosthetic vision showing 98 phosphenes under low dynamic range conditions may be used for effective mobility.
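
    For reference, PPWS is the walking speed achieved on the course expressed as a percentage of the participant's preferred (unobstructed) walking speed; a hypothetical worked example:

        # Hypothetical illustration of the PPWS outcome measure: walking speed
        # on the course as a percentage of the preferred walking speed.

        def ppws(course_distance_m, course_time_s, preferred_speed_ms):
            course_speed = course_distance_m / course_time_s
            return 100.0 * course_speed / preferred_speed_ms

        # A 9 m corridor traversed in 15 s by a participant whose preferred
        # walking speed is 1.2 m/s gives PPWS = 50%.
        print(ppws(9.0, 15.0, 1.2))   # -> 50.0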

    Low contrast trip hazard avoidance with simulated prosthetic vision

    No full text
    Purpose: Perceiving trip hazards is critical for safe mobility and an important ability for prosthetic vision. Ground surface obstructions (e.g., bumps, steps, curbs) not marked by contrasting intensity or abrupt changes of depth can be difficult to perceive using standard prosthetic vision representations. These include representations conveying only the luminance (Intensity) or the relative depth of the environment (Depth), particularly with a low dynamic range visual representation. We have developed a novel system and method that estimates traversable space using binocular images to highlight trip hazards in a depth-based visual representation (Augmented Depth). We evaluate the effectiveness of Augmented Depth for mobility in the presence of ground obstacles using simulated prosthetic vision with 98 phosphenes and 8 noticeably different levels of brightness/size. The first generation retinal implant from Bionic Vision Australia will have 98 electrodes. Methods: Two normally-sighted (20/20, Pelli-Robson >= 1.95) participants used a mobile artificial vision simulator set with 98 phosphenes centrally displayed identically to both eyes to navigate a mobility course. A shroud blocked their normal vision. Phosphenes representing traversable ground were uniformly rendered with low brightness; all other phosphenes were rendered as per the standard depth-based representation, scaled up to increase contrast. Augmented Depth was compared to Depth and Intensity representations. Participants traversed a 9 m x 3 m straight corridor (dark floor, white walls) containing black ground obstacles which varied in quantity, size, and placement. Trial obstacle variables and visual representation presentation order were randomised. Participants each completed five 2-hour sessions consisting of approximately equal numbers of traversals for each visual representation (Augmented Depth: n=79; Depth: n=80; Intensity: n=79). The primary outcome measure was the number of contacts/collisions with obstacles and walls. Institutional Ethics Board approval was obtained. Results: Augmented Depth (mean = 0.346) yielded significantly fewer contacts than Depth (mean = 0.735, p = 0.021) and Intensity (mean = 0.812, p = 0.007). Conclusions: Augmented Depth may be an effective representation for assisting mobility with current and future visual prostheses. Results show a significant advantage over intensity- and depth-based representations for avoiding trip hazards.

    Substituting depth for intensity and real-time phosphene rendering: Visual navigation under low vision conditions

    No full text
    Navigation and wayfinding, including obstacle avoidance, is difficult when visual perception is limited to low resolution, such as is currently available on a bionic eye. Depth visualisation may be a suitable alternative. Such an approach can be evaluated using simulated phosphenes with a wearable mobile virtual reality kit.

    TIMECENTER Participants

    No full text
    Any software made available via TIMECENTER is provided “as is” and without any express or implied warranties, including, without limitation, the implied warranty of merchantability and fitness for a particular purpose. The TIMECENTER icon on the cover combines two “arrows.” These “arrows” are letters in the so-called Rune alphabet used one millennium ago by the Vikings, as well as by their predecessors and successors. The Rune

    Substituting depth for intensity and real-time phosphene rendering: visual navigation under low vision conditions

    No full text
    Navigation and wayfinding, including obstacle avoidance, is difficult when visual perception is limited to low resolution, such as is currently available on a bionic eye. Depth visualisation may be a suitable alternative. Such an approach can be evaluated using simulated phosphenes with a wearable mobile virtual reality kit. In this paper, we present two novel approaches: (i) an implementation of depth visualisation; and (ii) novel methods for rapid rendering of simulated phosphenes, with an empirical comparison between them. Our new software-based method for simulated phosphene rendering shows large speed improvements, facilitating the real-time display of a large number of phosphenes with size and brightness dependent on pixel intensity, and with customised output dynamic range. Further, we describe the protocol, navigation environment and system used for visual navigation experiments to evaluate the use of depth on low resolution simulations of a bionic eye perceptual experience. Results for these experiments show that a depth-based representation is effective for navigation and offers significant advantages over intensity-based approaches when overhanging obstacles are present. The results of the experiments were reported in [1], [2].
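
    A toy version of intensity-dependent phosphene rendering (hypothetical code, not the paper's renderer): sample the image at phosphene centres laid out on a rough hexagonal grid, quantise the samples to a small output dynamic range, and draw each phosphene as a Gaussian blob whose brightness and size both follow the sampled intensity.

        import numpy as np

        # Toy sketch (not the paper's implementation): software phosphene
        # simulation with brightness and size driven by local image intensity,
        # quantised to a small output dynamic range (e.g. 8 levels).

        def render_phosphenes(image, centres, levels=8, max_radius=6.0):
            out = np.zeros_like(image, dtype=float)
            ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
            for cy, cx in centres:
                level = int(round(float(image[cy, cx]) * (levels - 1)))
                if level == 0:
                    continue                          # phosphene stays off
                frac = level / (levels - 1)           # quantised intensity
                radius = max_radius * frac            # size tracks intensity
                blob = frac * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2)
                                     / (2 * (radius / 2.0) ** 2))
                out = np.maximum(out, blob)           # overlaps: keep brightest
            return out

        img = np.random.rand(48, 64)                  # toy input image in [0, 1]
        grid = [(y, x) for y in range(4, 48, 8)       # rows offset alternately,
                for x in range(4 + (y // 8 % 2) * 4, 64, 8)]  # ~hexagonal
        print(render_phosphenes(img, grid).shape)     # (48, 64)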