
    Dynamic relevance: vision-based focus of attention using artificial neural networks

    This paper presents a method for ascertaining the relevance of inputs in vision-based tasks by exploiting temporal coherence and predictability. In contrast to the tasks explored in many previous relevance experiments, the class of tasks examined in this study is one in which relevance is a time-varying function of the previous and current inputs. The method proposed in this paper dynamically allocates relevance to inputs by using expectations of their future values. As a model of the task is learned, the model is simultaneously extended to create task-specific predictions of the future values of inputs. Inputs that are not relevant, and therefore not accounted for in the model, will not be predicted accurately. These inputs can be de-emphasized, and, in turn, a new, improved model of the task created. The techniques presented in this paper have been successfully applied to the vision-based autonomous control of a land vehicle, vision-based hand tracking in cluttered scenes, and the detection of faults in the plasma-etch step of semiconductor wafers.
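
    The mechanism is compact enough to sketch. Below is a minimal, illustrative rendering of the idea, assuming the learned task model also emits a prediction of the next input vector; the exponential weighting and the temperature tau are assumptions for illustration, not the paper's exact formulation.

        import numpy as np

        def relevance_weights(predicted, actual, tau=1.0):
            # Inputs the task model predicts well are presumed task-relevant;
            # inputs it cannot predict get weights near zero and are de-emphasized.
            err = (predicted - actual) ** 2
            return np.exp(-err / tau)

        # Example: reweight the current frame before it reaches the task model.
        x_t = np.array([0.90, 0.10, 0.50])
        pred = np.array([0.88, 0.70, 0.52])   # the model's prediction of x_t
        x_weighted = relevance_weights(pred, x_t) * x_t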

    Non-intrusive gaze tracking using artificial neural networks

    We have developed an artificial neural network based gaze tracking system which can be customized to individual users. A three-layer feedforward network, trained with standard error backpropagation, is used to determine the position of a user’s gaze from the appearance of the user’s eye. Unlike other gaze trackers, which normally require the user to wear cumbersome headgear, or to use a chin rest to ensure head immobility, our system is entirely non-intrusive. Currently, the best intrusive gaze tracking systems are accurate to approximately 0.75 degrees. In our experiments, we have been able to achieve an accuracy of 1.5 degrees, while allowing head mobility. In its current implementation, our system works at 15 Hz. In this paper we present an empirical analysis of the performance of a large number of artificial neural network architectures for this task. Suggestions for further explorations for neurally based gaze trackers are presented, and are related to other similar artificial neural network applications such as autonomous road following.
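
    For concreteness, here is a minimal NumPy sketch of a three-layer feedforward network of this kind, assuming the eye image is flattened into a vector and the gaze coordinates are normalized to [-1, 1]; the layer sizes and learning rate are illustrative, not the specific architectures analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_out = 600, 16, 2   # e.g. a 15x40 eye image -> (x, y) gaze
        W1 = rng.normal(0.0, 0.1, (n_in, n_hid))
        W2 = rng.normal(0.0, 0.1, (n_hid, n_out))

        def forward(x):
            h = np.tanh(x @ W1)           # hidden-layer activations
            return h, np.tanh(h @ W2)     # predicted gaze position

        def backprop_step(x, target, lr=0.01):
            # One step of standard error backpropagation on squared error.
            global W1, W2
            h, y = forward(x)
            d_out = (y - target) * (1.0 - y ** 2)
            d_hid = (d_out @ W2.T) * (1.0 - h ** 2)
            W2 -= lr * np.outer(h, d_out)
            W1 -= lr * np.outer(x, d_hid)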

    Efficient training of artificial neural networks for autonomous navigation

    The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network designed to drive the CMU Navlab, a modified Chevy van. This paper describes the training techniques which allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching a human driver's reactions. Using these techniques ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 20 miles per hour.
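
    One of the key training techniques is easy to illustrate: each camera image from the human driver is transformed to simulate the vehicle being off-center, with a correspondingly corrected steering label, so the network learns recovery behaviour that a good driver rarely demonstrates. The sketch below is a crude stand-in for ALVINN's calibrated image transformation; np.roll, the shift amounts, and the linear correction gain are all illustrative assumptions.

        import numpy as np

        def augment(image, steering, shifts=(-4, -2, 2, 4), gain=0.05):
            # Shift the image laterally and correct the steering label.
            # The sign and magnitude of the correction depend on camera
            # geometry; a linear gain is used here purely for illustration.
            examples = [(image, steering)]
            for s in shifts:
                examples.append((np.roll(image, s, axis=1), steering - gain * s))
            return examples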

    Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving

    Many real-world problems require a degree of flexibility that is difficult to achieve using hand-programmed algorithms. One such domain is vision-based autonomous driving. In this task, the dual challenges of a constantly changing environment coupled with a real-time processing constraint make the flexibility and efficiency of a machine learning system essential. This paper describes just such a learning system, called ALVINN (Autonomous Land Vehicle In a Neural Network). It presents the neural network architecture and training techniques that allow ALVINN to drive in a variety of circumstances including single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road environments, at speeds of up to 55 miles per hour.
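
    The published architecture is small enough to sketch directly: a low-resolution input retina feeding a handful of hidden units and a row of output units whose activation pattern encodes the steering command. The layer sizes below follow the ALVINN papers; the initialization and the centre-of-mass readout are simplified assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        N_IN, N_HID, N_OUT = 30 * 32, 5, 30   # retina, hidden units, steering units
        W1 = rng.normal(0.0, 0.1, (N_IN, N_HID))
        W2 = rng.normal(0.0, 0.1, (N_HID, N_OUT))

        def steering_command(retina):
            h = np.tanh(retina.flatten() @ W1)
            out = np.tanh(h @ W2)
            # Read the steering command as the centre of mass of the output
            # activations, mapping units 0..29 onto sharp-left..sharp-right.
            act = out - out.min() + 1e-9      # make activations non-negative
            return (np.arange(N_OUT) * act).sum() / act.sum()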

    Multitask Learning (Machine Learning, 28, 41–75, 1997)

    Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need of supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
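
    The backprop-net version of MTL amounts to hard parameter sharing, which is compact enough to sketch; the layer sizes and the linear task heads below are illustrative assumptions, not the paper's experimental setup.

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_hid, n_tasks = 10, 8, 3
        W_shared = rng.normal(0.0, 0.1, (n_in, n_hid))   # shared representation
        W_task = rng.normal(0.0, 0.1, (n_tasks, n_hid))  # one output head per task

        def forward(x):
            h = np.tanh(x @ W_shared)   # hidden layer shared by every task
            return W_task @ h           # parallel predictions, one per task

        # Backpropagating every task's loss through W_shared lets each task's
        # training signal act as an inductive bias on the representation the
        # other tasks use.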