31 research outputs found

    Methods and Apparatus for Autonomous Robotic Control

    Sensory processing of visual, auditory, and other sensor information (e.g., visual imagery, LIDAR, RADAR) is conventionally based on "stovepiped," or isolated, processing, with little interaction between modules. Biological systems, on the other hand, fuse multi-sensory information to identify nearby objects of interest more quickly, more efficiently, and with higher signal-to-noise ratios. Similarly, examples of the OpenSense technology disclosed herein use neurally inspired processing to identify and locate objects in a robot's environment. This enables the robot to navigate its environment more quickly and with lower computational and power requirements.
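
    The signal-to-noise benefit of fusing several sensors can be illustrated with a minimal sketch. This is not the patent's method; it only assumes that each modality supplies an independent, unbiased, equally noisy estimate of the same quantity and that the fused estimate is a plain average (all names and values below are hypothetical).

        import numpy as np

        # Averaging N independent, unbiased, equally noisy estimates of the same
        # quantity reduces the noise standard deviation by a factor of sqrt(N).
        rng = np.random.default_rng(0)

        true_range = 10.0   # hypothetical distance to an object (meters)
        noise_std = 0.5     # assumed per-sensor measurement noise (meters)
        n_sensors = 4       # e.g., camera, LIDAR, RADAR, and an audio-derived estimate

        measurements = true_range + rng.normal(0.0, noise_std, size=(100_000, n_sensors))
        fused = measurements.mean(axis=1)   # naive equal-weight fusion

        print("single-sensor noise std:", measurements[:, 0].std())   # ~0.50
        print("fused noise std:        ", fused.std())                # ~0.50 / sqrt(4) = 0.25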

    ASSESSMENT OF GLIOMA RESPONSE TO 8 GY RADIOTHERAPY ON MULTIPLE MRI BIOMARKERS BY APPLYING IMAGE SEGMENTATION ALGORITHM

    Magnetic Resonance Imaging (MRI) plays a significant role in assessing treatment response for a variety of diseases. To investigate multi-parametric MRI biomarkers for assessing glioma response to radiotherapy, we compared different MRI parameters. In the tumor extraction step, we compared manual, automatic, and semi-automatic segmentation methods based on their region-of-interest (ROI) results. In our experiments, thirteen nude rats injected with U87 tumors were irradiated with an 8 Gy radiation dose. All MRI was performed on a 4.7 T animal scanner at pre-radiation and at 1, 4, and 8 days post-radiation. Multi-parametric MRI signals of the tumors were compared quantitatively. Two experts performed manual and semi-automatic tumor extraction on Amide Proton Transfer-weighted (APTw) maps. The results show that the average Apparent Diffusion Coefficient (ADC) intensity of the ROI increased markedly after radiation. The relative blood flow values (tumor vs. the normal contralateral side of the brain) decreased continuously after radiotherapy. Similarly, APTw signal intensity decreased at all time points after radiotherapy. The semi-automatic method gave more stable ROI extraction on APTw maps than manual segmentation, without rater dependence and with less time consumption. In conclusion, ADC, blood flow, and APTw are all helpful signals for assessing glioma response to radiotherapy, and the semi-automatic method of ROI extraction showed higher efficiency and stability than the manual method.
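
    A minimal sketch of the kind of ROI statistics compared across time points follows. The array names, mask construction, and normalization by the contralateral side are assumptions for illustration, not the authors' analysis code.

        import numpy as np

        def mean_roi_intensity(parameter_map: np.ndarray, roi_mask: np.ndarray) -> float:
            """Average parameter value (e.g., ADC or APTw) inside a binary tumor ROI."""
            return float(parameter_map[roi_mask].mean())

        def relative_blood_flow(flow_map: np.ndarray,
                                tumor_mask: np.ndarray,
                                contralateral_mask: np.ndarray) -> float:
            """Tumor blood flow normalized by the contralateral (normal) side of the brain."""
            return mean_roi_intensity(flow_map, tumor_mask) / mean_roi_intensity(flow_map, contralateral_mask)

        # Toy example with a synthetic 2-D "map" standing in for one MRI slice.
        adc_map = np.random.rand(64, 64)
        tumor_mask = np.zeros((64, 64), dtype=bool); tumor_mask[20:30, 20:30] = True
        contra_mask = np.zeros((64, 64), dtype=bool); contra_mask[20:30, 40:50] = True
        print(mean_roi_intensity(adc_map, tumor_mask))
        print(relative_blood_flow(adc_map, tumor_mask, contra_mask))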

    Proceedings of the Third International Workshop on Neural Networks and Fuzzy Logic, volume 2

    Papers presented at the Neural Networks and Fuzzy Logic Workshop, sponsored by the National Aeronautics and Space Administration and cosponsored by the University of Houston-Clear Lake, held 1-3 Jun. 1992 at the Lyndon B. Johnson Space Center in Houston, Texas, are included. During the three days, approximately 50 papers were presented. Technical topics addressed included adaptive systems; learning algorithms; network architectures; vision; robotics; neurobiological connections; speech recognition and synthesis; fuzzy set theory and application; control and dynamics processing; space applications; fuzzy logic and neural network computers; approximate reasoning; and multiobject decision making.

    Automatic facial recognition based on facial feature analysis


    Image compression techniques using vector quantization


    Automatic human face detection in color images

    Automatic human face detection in digital images has been an active area of research over the past decade. Among its numerous applications, face detection plays a key role in face recognition systems for biometric personal identification, face tracking for intelligent human-computer interfaces (HCI), and face segmentation for object-based video coding. Despite significant progress in the field in recent years, detecting human faces in unconstrained and complex images remains a challenging problem in computer vision. An automatic system that possesses a capability similar to the human vision system in detecting faces is still a far-reaching goal. This thesis focuses on the problem of detecting human faces in color images. Although many early face detection algorithms were designed to work on gray-scale images, strong evidence exists to suggest that face detection can be done more efficiently by taking into account the color characteristics of the human face. In this thesis, we present a complete and systematic face detection algorithm that combines the strengths of both analytic and holistic approaches to face detection. The algorithm is developed to detect quasi-frontal faces in complex color images. This face class, which represents typical detection scenarios in most practical applications of face detection, covers a wide range of face poses, including all in-plane rotations and some out-of-plane rotations. The algorithm is organized into a number of cascading stages, including skin region segmentation, face candidate selection, and face verification. In each of these stages, various visual cues are utilized to narrow the search space for faces. In this thesis, we present a comprehensive analysis of skin detection using color pixel classification, and of the effects of factors such as the color space and the color classification algorithm on segmentation performance. We also propose a novel and efficient face candidate selection technique based on color-based eye region detection and a geometric face model. This candidate selection technique eliminates the computation-intensive step of window scanning often employed in holistic face detection, and simplifies the task of detecting rotated faces. Besides various heuristic techniques for face candidate verification, we develop face/nonface classifiers based on the naive Bayesian model and investigate three feature extraction schemes, namely intensity, projection on a face subspace, and edge-based features. Techniques for improving face/nonface classification are also proposed, including bootstrapping, classifier combination, and the use of contextual information. On a test set of face and nonface patterns, the combination of three Bayesian classifiers has a correct detection rate of 98.6% at a false positive rate of 10%. Extensive testing has shown that the proposed face detector achieves good performance in terms of both detection rate and alignment between the detected faces and the true faces. On a test set of 200 images containing 231 faces taken from the ECU face detection database, the proposed face detector has a correct detection rate of 90.04% and makes 10 false detections. We have found that the proposed face detector is more robust in detecting in-plane rotated faces than existing face detectors.
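
    To make the skin-segmentation stage concrete, the sketch below classifies pixels as skin by thresholding chrominance in the YCbCr color space. The thesis compares several color spaces and trained classifiers; the fixed Cb/Cr bounds here are only a widely used rule of thumb, not the classifier developed in the thesis.

        import numpy as np

        def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
            """Convert an HxWx3 uint8 RGB image to YCbCr (ITU-R BT.601, full range)."""
            rgb = rgb.astype(np.float32)
            y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
            cb = 128.0 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
            cr = 128.0 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
            return np.stack([y, cb, cr], axis=-1)

        def skin_mask(rgb: np.ndarray) -> np.ndarray:
            """Binary skin map from fixed Cb/Cr bounds (illustrative thresholds only)."""
            ycbcr = rgb_to_ycbcr(rgb)
            cb, cr = ycbcr[..., 1], ycbcr[..., 2]
            return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

        # Usage: connected regions of skin_mask(image) become face candidates for later
        # eye-region detection and Bayesian face/nonface verification.
        image = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
        print(skin_mask(image).sum(), "pixels classified as skin")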

    Automatic caption generation for content-based image information retrieval.

    Ma, Ka Ho. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 82-87). Abstract and appendix in English and Chinese. Contents: Chapter 1, Introduction (Objective of This Research; Organization of This Thesis); Chapter 2, Background (Textual-Image Query Approach: Yahoo! Image Surfer, QBIC (Query By Image Content); Feature-based Approach: Texture Thesaurus for Aerial Photos; Caption-aided Approach: PICTION (Picture and capTION), MARIE; Summary); Chapter 3, Caption Generation (System Architecture; Domain Pool; Image Feature Extraction: Preprocessing, Image Segmentation; Classification: Self-Organizing Map (SOM), Learning Vector Quantization (LVQ), Output of the Classification; Caption Generation: Phase One (Logical Form Generation), Phase Two (Simplification), Phase Three (Captioning); Summary); Chapter 4, Query Examples (Query Types: Non-content-based Retrieval, Content-based Retrieval; Hierarchy Graph; Matching; Summary); Chapter 5, Evaluation (Experimental Set-up; Experimental Results: Segmentation, Classification, Captioning, Overall Performance; Observations; Summary); Chapter 6, Another Application (Police Force Crimes Investigation: Image Feature Extraction, Caption Generation, Query; An Illustrative Example; Summary); Chapter 7, Conclusions (Contribution; Future Work); Bibliography; Appendices (A, Segmentation Results Under Different Parameters; B, Segmentation Time of 10 Randomly Selected Images; C, Sample Captions).

    Intelligent sensing for robot mapping and simultaneous human localization and activity recognition

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2011. Thesis (Ph.D.)--Bilkent University, 2011. Includes bibliographical references (leaves 147-163). We consider three different problems in two different sensing domains, namely ultrasonic sensing and inertial sensing. Since the applications considered in each domain are inherently different, this thesis is composed of two main parts. The approach common to the two parts is that raw data acquired from simple sensors is processed intelligently to extract useful information about the environment. In the first part, we employ active snake contours and Kohonen's self-organizing feature maps (SOMs) for representing and evaluating discrete point maps of indoor environments efficiently and compactly. We develop a generic error criterion for comparing two different sets of points based on the Euclidean distance measure. The point sets can be chosen as (i) two different sets of map points acquired with different mapping techniques or different sensing modalities, (ii) two sets of curve points fitted to maps extracted by different mapping techniques or sensing modalities, or (iii) a set of extracted map points and a set of fitted curve points. The error criterion makes it possible to compare the accuracy of maps obtained with different techniques among themselves, as well as with an absolute reference. We optimize the parameters of the active snake contours and SOMs using uniform sampling of the parameter space and particle swarm optimization. A demonstrative example from ultrasonic mapping is given based on experimental data and compared with a very accurate laser map, considered an absolute reference. Both techniques can fill the erroneous gaps in discrete point maps. Snake curve fitting results in more accurate maps than SOMs because it is more robust to outliers. The two methods and the error criterion are sufficiently general that they can also be applied to discrete point maps acquired with other mapping techniques and other sensing modalities. In the second part, we use body-worn inertial/magnetic sensor units for the recognition of daily and sports activities, as well as for human localization in GPS-denied environments. Each sensor unit comprises a tri-axial gyroscope, a tri-axial accelerometer, and a tri-axial magnetometer. The error characteristics of the sensors are modeled using the Allan variance technique, and the parameters of the low- and high-frequency error components are estimated. Then, we provide a comparative study of different techniques for classifying human activities performed while wearing body-worn miniature inertial and magnetic sensors. Human activities are classified using five sensor units worn on the chest, the arms, and the legs. We compute a large number of features extracted from the sensor data and reduce them using both Principal Components Analysis (PCA) and sequential forward feature selection (SFFS). We consider eight different pattern recognition techniques and provide a comparison in terms of correct classification rates, computational costs, and training and storage requirements. Results with sensors mounted on various locations on the body are also provided. The results indicate that if the system is trained with the data of an individual person, it is possible to obtain over 99% correct classification rates with a simple quadratic classifier such as the Bayesian decision method. However, if the training data of that person are not available beforehand, one has to resort to more complex classifiers, with an expected correct classification rate of about 85%. We also consider the human localization problem using body-worn inertial/magnetic sensors. Inertial sensors are characterized by drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensor signals are reliable over only short periods of time. Therefore, position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user provides information about position. In particular, switches in the activity context correspond to discrete locations on the map. By performing activity recognition simultaneously with localization, one can detect the activity context switches and use the corresponding position information as position updates in the localization filter. The localization filter also involves a smoother, which combines the two estimates obtained by running the zero-velocity update (ZUPT) algorithm both forward and backward in time. We performed experiments with eight subjects in an indoor and an outdoor environment involving "walking," "turning," and "standing" activities. Using the error criterion from the first part of the thesis, we show that the position errors can be decreased by about 85% on average. We also present the results of a 3-D experiment performed in a realistic indoor environment and demonstrate that it is possible to achieve over 90% error reduction in position by performing activity recognition simultaneously with localization. Altun, Kerem. Ph.D.
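
    The sketch below illustrates one plausible form of a Euclidean-distance error criterion between two point sets: the symmetric mean nearest-neighbor distance. The thesis defines its own criterion, so the exact formula, the placeholder map data, and the brute-force pairwise computation here are assumptions for illustration only.

        import numpy as np

        def nn_distances(a: np.ndarray, b: np.ndarray) -> np.ndarray:
            """For each point in a (N x 2), the Euclidean distance to its nearest neighbor in b (M x 2)."""
            diff = a[:, None, :] - b[None, :, :]          # N x M x 2 pairwise differences
            return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

        def point_set_error(a: np.ndarray, b: np.ndarray) -> float:
            """Symmetric mean nearest-neighbor distance between two point sets."""
            return 0.5 * (nn_distances(a, b).mean() + nn_distances(b, a).mean())

        # Usage: compare an ultrasonic map against a laser map taken as the absolute reference.
        ultrasonic_map = np.random.rand(200, 2) * 10.0    # placeholder map points (meters)
        laser_map = np.random.rand(500, 2) * 10.0         # placeholder reference points (meters)
        print(point_set_error(ultrasonic_map, laser_map))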