    Multi-rover testbed for teleconducted and autonomous surveillance, reconnaissance, and exploration

    At Caltech's Visual and Autonomous Exploration Systems Research Laboratory (http://autonomy.caltech.edu) an outdoor multi-rover testbed has been developed that allows for near real-time interactive or automatic control from anywhere in the world via the Internet. It enables the implementation, field-testing, and validation of algorithms/software and strategies for navigation, exploration, feature extraction, anomaly detection, and target prioritization with applications in planetary exploration, security surveillance, reconnaissance of disaster areas, military reconnaissance, and delivery of lethal force such as explosives for urban warfare. Several rover platforms have been developed, enabling testing of cooperative multi-rover scenarios (e.g., inter-rover communication/coordination) and distributed exploration of operational areas

    Multi-agent autonomous system and method

    A method of controlling a plurality of crafts in an operational area includes providing a command system, a first craft in the operational area coupled to the command system, and a second craft in the operational area coupled to the command system. The method further includes determining a first desired destination and a first trajectory to the first desired destination, sending a first command from the command system to the first craft to move a first distance along the first trajectory, and moving the first craft according to the first command. A second desired destination and a second trajectory to the second desired destination are determined and a second command is sent from the command system to the second craft to move a second distance along the second trajectory
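
    The abstract above describes an incremental command loop: compute a destination and a trajectory per craft, then command bounded moves along that trajectory. Below is a minimal sketch of such a loop in Python, assuming straight-line trajectories and a fixed step length; the class and function names are illustrative, not from the patent:

        import math

        class Craft:
            """A commandable craft at a 2-D position in the operational area."""
            def __init__(self, name, x, y):
                self.name, self.x, self.y = name, x, y

            def move(self, dx, dy):
                # Execute a simple relative movement command.
                self.x += dx
                self.y += dy

        def command_step(craft, destination, step_length):
            """Send one command: move step_length along the trajectory to destination."""
            dx, dy = destination[0] - craft.x, destination[1] - craft.y
            dist = math.hypot(dx, dy)
            if dist < 1e-9:
                return False  # craft has reached its desired destination
            scale = min(step_length, dist) / dist
            craft.move(dx * scale, dy * scale)
            return True

        crafts = [Craft("craft-1", 0.0, 0.0), Craft("craft-2", 5.0, 0.0)]
        destinations = [(10.0, 10.0), (0.0, 8.0)]
        # The list comprehension commands every craft each round (no short-circuit).
        while any([command_step(c, d, 1.0) for c, d in zip(crafts, destinations)]):
            pass
        print([(c.name, round(c.x, 2), round(c.y, 2)) for c in crafts])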

    Smart ophthalmics: the future in tele-ophthalmology has arrived

    Smart Ophthalmics© extends ophthalmic healthcare to people who operate/live in austere environments (e.g., military, third world, natural disaster), or are geographically dispersed (e.g., rural populations), where time, cost, and the possibility of travel/transportation make access to even adequate medical care difficult, if possible at all. Operators attach optical devices that act as ophthalmic examination extensions to smartphones and run custom apps to perform examinations of specific areas of the eye. The smartphone apps submit the collected examination data over wireless networks to a smart remote expert system, which provides in-depth medical analyses that are sent back in near real-time to the operators for subsequent triage
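
    The workflow above (app collects exam data, submits it wirelessly, receives analyses back) could look roughly like the client sketch below. Everything here is assumed for illustration: the endpoint URL, payload fields, and JSON transport are not specified by the paper:

        import json
        import urllib.request

        def submit_exam(exam_data, endpoint="https://example.invalid/analyze"):
            """POST one examination record; return the remote expert system's analysis."""
            request = urllib.request.Request(
                endpoint,
                data=json.dumps(exam_data).encode("utf-8"),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(request, timeout=30) as response:
                return json.load(response)

        exam = {"patient_id": "anon-001", "region": "retina", "images": ["<base64 frame>"]}
        # analysis = submit_exam(exam)  # requires a reachable expert-system endpoint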

    Multi-agent autonomous system

    A multi-agent autonomous system for exploration of hazardous or inaccessible locations. The multi-agent autonomous system includes simple surface-based agents or craft controlled by an airborne tracking and command system. The airborne tracking and command system includes an instrument suite used to image an operational area and any craft deployed within the operational area. The image data is used to identify the craft, targets for exploration, and obstacles in the operational area. The tracking and command system determines paths for the surface-based craft using the identified targets and obstacles and commands the craft using simple movement commands to move through the operational area to the targets while avoiding the obstacles. Each craft includes its own instrument suite to collect information about the operational area that is transmitted back to the tracking and command system. The tracking and command system may be further coupled to a satellite system to provide additional image information about the operational area and provide operational and location commands to the tracking and command system
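
    One way to picture the described pipeline (image the area, identify craft/targets/obstacles, plan obstacle-avoiding paths, emit simple movement commands) is the sketch below. It is illustrative only: the patent does not specify an occupancy grid or breadth-first search, and all names are made up:

        from collections import deque

        def plan_path(grid, start, goal):
            """BFS over free cells; returns a list of grid cells from start to goal."""
            rows, cols = len(grid), len(grid[0])
            parents = {start: None}
            queue = deque([start])
            while queue:
                cell = queue.popleft()
                if cell == goal:
                    path = []
                    while cell is not None:
                        path.append(cell)
                        cell = parents[cell]
                    return path[::-1]
                r, c = cell
                for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                    nr, nc = step
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                            and step not in parents:
                        parents[step] = cell
                        queue.append(step)
            return []  # no obstacle-free path found

        # 0 = free, 1 = obstacle, as identified from the overhead image data.
        grid = [[0, 0, 0, 0],
                [1, 1, 0, 1],
                [0, 0, 0, 0]]
        path = plan_path(grid, start=(0, 0), goal=(2, 0))
        # Each consecutive pair of cells becomes one simple movement command.
        commands = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(path, path[1:])]
        print(commands)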

    Tier-scalable reconnaissance: the challenge of sensor optimization, sensor deployment, sensor fusion, and sensor interoperability

    Robotic reconnaissance operations are called for in extreme environments, not only those such as space, including planetary atmospheres, surfaces, and subsurfaces, but also in potentially hazardous or inaccessible operational areas on Earth, such as minefields, battlefield environments, enemy-occupied territories, terrorist-infiltrated environments, or areas that have been exposed to biochemical agents or radiation. Real-time reconnaissance enables the identification and characterization of transient events. A fundamentally new mission concept for tier-scalable reconnaissance of operational areas, originated by Fink et al., is aimed at replacing the engineering- and safety-constrained mission designs of the past. The tier-scalable paradigm integrates multi-tier (orbit, atmosphere, surface/subsurface) and multi-agent (satellite, UAV/blimp, surface/subsurface sensing platforms) hierarchical mission architectures, introducing not only mission redundancy and safety, but also enabling and optimizing intelligent, less constrained, and distributed reconnaissance in real time. Given the mass, size, and power constraints faced by such a multi-platform approach, this is an ideal application scenario for a diverse set of MEMS sensors. To support such mission architectures, a high degree of operational autonomy is required. Essential elements of such operational autonomy are: (1) automatic mapping of an operational area from different vantage points (including vehicle health monitoring); (2) automatic feature extraction and target/region-of-interest identification within the mapped operational area; and (3) automatic target prioritization for close-up examination. These requirements imply the optimal deployment of MEMS sensors and sensor platforms, sensor fusion, and sensor interoperability
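
    As one concrete instance of the sensor fusion the closing sentence calls for, the sketch below combines redundant measurements of the same quantity from different tiers by inverse-variance weighting. This is a standard technique chosen for illustration, not one prescribed by the paper:

        def fuse(readings):
            """Fuse (value, variance) pairs into one (value, variance) estimate."""
            weights = [1.0 / variance for _, variance in readings]
            fused_value = sum(w * v for w, (v, _) in zip(weights, readings)) / sum(weights)
            fused_variance = 1.0 / sum(weights)
            return fused_value, fused_variance

        # e.g., an orbital, an airborne, and a ground reading of the same quantity,
        # each with its own measurement variance (values are made up).
        print(fuse([(10.2, 4.0), (9.8, 1.0), (10.0, 0.25)]))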

    Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general-purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (µAVS²) for real-time image processing. Truly standalone, µAVS² is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on µAVS² operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. µAVS² imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, µAVS² affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, µAVS² can easily be reconfigured for other prosthetic systems. Testing of µAVS² with actual retinal implant carriers is envisioned in the near future
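
    The "user-defined linear sequential-loop fashion" can be read as: each filter maps a frame to a frame, and the chosen filters are applied one at a time in the user's order, so only one intermediate frame is ever held. A minimal sketch under that reading (the filters and frame format are invented stand-ins, not the µAVS² code):

        def invert(frame):
            return [[255 - p for p in row] for row in frame]

        def threshold(frame, cutoff=128):
            return [[255 if p >= cutoff else 0 for p in row] for row in frame]

        def downsample(frame):
            # Keep every second pixel in each dimension (simple decimation).
            return [row[::2] for row in frame[::2]]

        def run_pipeline(frame, filters):
            """Apply the user-chosen filters in order, reusing a single buffer."""
            for f in filters:
                frame = f(frame)
            return frame

        frame = [[i * 16 for i in range(8)] for _ in range(8)]  # stand-in camera frame
        print(run_pipeline(frame, [downsample, invert, threshold]))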

    Automated Global Feature Analyzer - A Driver for Tier-Scalable Reconnaissance

    For the purposes of space flight reconnaissance, field geologists have trained to become astronauts. However, the initial forays to Mars and other planetary bodies have been done by purely robotic craft. Therefore, training and equipping a robotic craft with the sensory and cognitive capabilities of a field geologist to form a science craft is a necessary prerequisite. Numerous steps are necessary in order for a science craft to be able to map, analyze, and characterize a geologic field site, as well as effectively formulate working hypotheses. We report on the continued development of the integrated software system AGFA: automated global feature analyzer®, originated by Fink at Caltech and his collaborators in 2001. AGFA is an automatic and feature-driven target characterization system that operates in an imaged operational area, such as a geologic field site on a remote planetary surface. AGFA performs automated target identification and detection through segmentation, providing for feature extraction, classification, and prioritization within mapped or imaged operational areas at different length scales and resolutions, depending on the vantage point (e.g., spaceborne, airborne, or ground). AGFA extracts features such as target size, color, albedo, vesicularity, and angularity. Based on the extracted features, AGFA summarizes the mapped operational area numerically and flags targets of "interest", i.e., targets that exhibit sufficient anomaly within the feature space. AGFA enables automated science analysis aboard robotic spacecraft, and, embedded in tier-scalable reconnaissance mission architectures, is a driver of future intelligent and autonomous robotic planetary exploration
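
    The flagging step ("targets that exhibit sufficient anomaly within the feature space") can be illustrated with a simple per-feature z-score test, sketched below. AGFA's actual criteria are not given in this abstract; the feature values, threshold, and scoring rule are all assumptions:

        from statistics import mean, pstdev

        def flag_anomalies(targets, threshold=1.5):
            """targets: name -> feature vector; flag any target with |z| > threshold."""
            names = list(targets)
            dims = len(next(iter(targets.values())))
            flagged = set()
            for d in range(dims):
                column = [targets[n][d] for n in names]
                mu, sigma = mean(column), pstdev(column)
                if sigma == 0:
                    continue  # feature is constant across targets
                for n in names:
                    if abs(targets[n][d] - mu) / sigma > threshold:
                        flagged.add(n)
            return sorted(flagged)

        # Feature vectors: (size, albedo, angularity) -- values are made up.
        rocks = {"rock-1": (1.00, 0.30, 0.20), "rock-2": (1.10, 0.31, 0.22),
                 "rock-3": (0.90, 0.29, 0.21), "rock-4": (1.05, 0.30, 0.23),
                 "rock-5": (0.95, 0.90, 0.19)}
        print(flag_anomalies(rocks))  # rock-5 stands out in albedo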

    Systems design of a high resolution retinal prosthesis

    Simulations of artificial vision suggest that 1000 electrodes may be required to restore vision to individuals with diseases of the outer retina. In order to achieve such an implant, new technology is needed, since the state-of-the-art implantable neural stimulator has at most 22 contacts with neural tissue. Considerable progress has been made towards that goal with the development of image processing, microelectronics, and polymer-based MEMS. An image processing system has been realized that is capable of real-time implementation of image decimation and filtering (for example, edge detection). Application-specific integrated circuits (ASICs) have been designed and tested to demonstrate closed-loop power control and efficient microstimulation. A novel packaging process has been developed that is capable of simultaneously forming communication coils, interconnects, and stimulating electrodes
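
    The image-decimation step mentioned above amounts to reducing a camera frame to roughly the electrode count; for 1000 electrodes, that is a grid of about 32 x 32 stimulation sites. A block-averaging sketch under that assumption (the frame size and grid shape are illustrative):

        def decimate(frame, out_rows=32, out_cols=32):
            """Block-average a frame down to out_rows x out_cols stimulation levels."""
            in_rows, in_cols = len(frame), len(frame[0])
            rh, cw = in_rows // out_rows, in_cols // out_cols
            grid = []
            for r in range(out_rows):
                row = []
                for c in range(out_cols):
                    block = [frame[r * rh + i][c * cw + j]
                             for i in range(rh) for j in range(cw)]
                    row.append(sum(block) / len(block))
                grid.append(row)
            return grid

        frame = [[(r + c) % 256 for c in range(128)] for r in range(128)]  # stand-in frame
        grid = decimate(frame)
        print(len(grid), len(grid[0]))  # 32 32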