Panacea is a modular system that incorporates a steerable sensor into an existing neural network driving system, ALVINN. A fixed camera cannot see the road when the road bends sharply. For a vision system that builds a map of the road, it is straightforward to point the camera down the road; but ALVINN directly outputs a steering command without generating an intermediate road representation. Insight from the training scheme used in ALVINN, however, provides an interpretation of the steering command in terms of the road geometry and appropriate camera pointing strategies. Tests on the Carnegie Mellon Navlab II with a steerable camera have shown that the system significantly improves ALVINN's performance, particularly in situations requiring sharp turns and quick responses. The Panacea active camera control system illustrates a trend in the ALVINN project away from treating neural networks as simple black-box function approximators. Instead, the neural network's behavior is modeled symbolically.
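The abstract's central idea, reading the steering command as road curvature and deriving a camera pointing direction from it, can be sketched as follows. This is a minimal illustration under our own assumptions (a constant-curvature road model and a fixed `lookahead` distance), not the geometry actually used in Panacea:

```python
import math

def camera_pan_angle(curvature, lookahead=20.0):
    """Hypothetical sketch: aim the camera at the point `lookahead`
    metres along the constant-curvature arc implied by the steering
    command. `curvature` is in 1/m; returns the pan angle in radians.
    """
    if abs(curvature) < 1e-9:
        # Straight road: look dead ahead.
        return 0.0
    # Coordinates of the point at arc length `lookahead` on a circle
    # of the given curvature (vehicle at origin, heading along +y).
    y = math.sin(curvature * lookahead) / curvature          # forward offset
    x = (1.0 - math.cos(curvature * lookahead)) / curvature  # lateral offset
    return math.atan2(x, y)

print(camera_pan_angle(0.0))    # straight road: pan of 0.0
print(camera_pan_angle(0.05))   # gentle right curve: positive pan
```

Sharper steering curvature yields a larger pan angle, so the camera turns into the bend ahead of the vehicle, which is the behavior the abstract credits with improving performance on sharp turns.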