
    A Knowledge based segmentation algorithm for enhanced recognition of handwritten courtesy amounts

    "March 1994."Includes bibliographical references (p. [23]-[24]).Supported by the Productivity From Information Technology (PROFIT) Research Initiative at MIT.Karim Hussein ... [et al.

    A unified method for augmented incremental recognition of online handwritten Japanese and English text

    We present a unified method for augmented incremental recognition of online handwritten Japanese and English text, which supports both busy (on-the-fly) recognition while writing and lazy (delayed) recognition after writing, without incurring long waiting times. It extends the local context for segmentation and recognition to ranges of recent strokes called the "segmentation scope" and the "recognition scope", respectively; the recognition scope lies inside the segmentation scope. The augmented incremental recognition triggers recognition once every several recent strokes, updates the segmentation and recognition candidate lattice, and searches the lattice incrementally for the best result. It also incorporates three techniques. The first is to reuse the segmentation and recognition candidate lattice from the previous recognition scope in the current one. The second is to fix undecided segmentation points once they are stable between character/word patterns. The third is to skip recognition of partial candidate character/word patterns. The augmented incremental method also covers the case of triggering recognition at every new stroke with the above techniques. Experiments conducted on TUAT-Kondate and the IAM online database show its superiority to batch recognition (recognizing the text all at once) and pure incremental recognition (recognizing the text at every input stroke) in processing time, waiting time, and recognition accuracy.
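
    A minimal, runnable sketch of the triggering and lattice-reuse ideas above. Strokes are modeled as characters, and the names TRIGGER_INTERVAL, SEG_SCOPE, REC_SCOPE, and the recognize() stub are illustrative assumptions for this sketch, not the authors' actual method.

```python
# Sketch of augmented incremental recognition, under the assumptions above.

TRIGGER_INTERVAL = 3   # trigger recognition every few strokes, not every stroke
SEG_SCOPE = 8          # segmentation scope: recent strokes whose grouping may change
REC_SCOPE = 5          # recognition scope: lies inside the segmentation scope

def recognize(pattern):
    """Stand-in for a real character/word classifier."""
    return max(set(pattern), key=pattern.count)

class IncrementalRecognizer:
    def __init__(self):
        self.strokes = []
        self.lattice = {}  # candidate pattern -> result, reused across triggers

    def add_stroke(self, stroke):
        self.strokes.append(stroke)
        if len(self.strokes) % TRIGGER_INTERVAL == 0:
            self._trigger()

    def _trigger(self):
        # Strokes older than the segmentation scope are treated as fixed:
        # their segmentation points are stable and never revisited.
        window = self.strokes[-SEG_SCOPE:]
        # Only candidate patterns ending at the newest stroke and lying inside
        # the recognition scope are (re)recognized; cached entries are reused,
        # which mirrors reusing the previous scope's candidate lattice.
        for start in range(max(0, len(window) - REC_SCOPE), len(window)):
            cand = "".join(window[start:])
            if cand not in self.lattice:
                self.lattice[cand] = recognize(cand)

reco = IncrementalRecognizer()
for s in "handwriting":  # each character stands in for one input stroke
    reco.add_stroke(s)
print(len(reco.lattice), "candidate patterns recognized incrementally")
```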

    Building Machines That Learn and Think Like People

    Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build causal models of the world that support explanation and understanding, rather than merely solving pattern recognition problems; (b) ground learning in intuitive theories of physics and psychology, to support and enrich the knowledge that is learned; and (c) harness compositionality and learning-to-learn to rapidly acquire and generalize knowledge to new tasks and situations. We suggest concrete challenges and promising routes towards these goals that can combine the strengths of recent neural network advances with more structured cognitive models.
    Comment: In press at Behavioral and Brain Sciences. Open call for commentary proposals (until Nov. 22, 2016). https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar

    English character recognition algorithm by improving the weights of MLP neural network with dragonfly algorithm

    Character recognition (CR) has been studied for many years, and neural networks play an important role in recognizing handwritten characters. Many character recognition studies have been published for English, but achieving both minimal training time and high accuracy for handwritten English symbols and characters with neural network methods remains an open problem. Building character recognition systems, whether manual or automatic, is therefore important. In this research, we attempt to develop an automatic recognition system for English symbols and characters with minimal training time, very high recognition accuracy, and fast classification. The dragonfly optimization algorithm is used to improve the weights of the MLP neural network during training for character recognition. The novelty of the proposed system is that, by combining the dragonfly optimization technique with MLP neural networks, the precision of the system is improved and the computation time is reduced. The approach used in this study to identify English characters achieves high accuracy with minimal training time.
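
    As a rough illustration of the idea, the sketch below searches the flattened weight vector of a tiny MLP with a simplified swarm-style metaheuristic in the spirit of the dragonfly algorithm. The random data, network sizes, and update rule (attraction to the best solution plus cohesion toward the swarm centroid) are assumptions for illustration, not the paper's implementation; the full dragonfly algorithm also includes separation, alignment, and enemy-avoidance terms.

```python
# Swarm-style weight optimization for a tiny MLP, under the assumptions above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))        # stand-in for character feature vectors
y = rng.integers(0, 4, size=100)      # stand-in for 4 character classes

N_IN, N_HID, N_OUT = 16, 8, 4
DIM = N_IN * N_HID + N_HID * N_OUT    # length of the flattened weight vector

def forward(w, X):
    W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
    W2 = w[N_IN * N_HID:].reshape(N_HID, N_OUT)
    return np.tanh(X @ W1) @ W2

def fitness(w):
    """Classification error: the objective the swarm minimizes."""
    return np.mean(forward(w, X).argmax(axis=1) != y)

POP, STEPS = 30, 200
swarm = rng.normal(size=(POP, DIM))   # each dragonfly is one full weight vector
vel = np.zeros_like(swarm)
best = min(swarm, key=fitness).copy()

for _ in range(STEPS):
    centroid = swarm.mean(axis=0)     # cohesion toward the swarm center
    for i in range(POP):
        attract = best - swarm[i]     # attraction to the best solution ("food")
        vel[i] = (0.9 * vel[i] + 0.1 * attract
                  + 0.05 * (centroid - swarm[i])
                  + 0.01 * rng.normal(size=DIM))   # random exploration
        swarm[i] += vel[i]
        if fitness(swarm[i]) < fitness(best):
            best = swarm[i].copy()

print("best training error:", fitness(best))
```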

    Neuromorphic Systems for Pattern Recognition and Uav Trajectory Planning

    Detection and control are two essential components of an intelligent system. This thesis investigates novel techniques in both areas, focusing on the applications of handwritten text recognition and UAV flight control. Recognizing handwritten text is challenging because of the many different writing styles and the lack of clear boundaries between adjacent characters. The difficulty is greatly increased if the detection algorithm is based solely on pattern matching, without information about the dynamics of the handwriting trajectories. Motivated by these challenges, this thesis first investigates the pattern recognition problem. We use offline handwritten text recognition as a case study to explore the performance of a recurrent belief propagation model. We first develop a probabilistic inference network to post-process the recognition results of a deep Convolutional Neural Network (CNN) (e.g., LeNet) and collect individual characters to form words. The output of the inference network is a set of words and their probabilities. A series of post-processing and improvement techniques are then introduced to further increase the recognition accuracy. We study the performance of the proposed model through various comparisons. The results show that it significantly improves the accuracy by correcting deletion, insertion, and replacement errors, which are the main sources of invalid candidate words.

    Deep Reinforcement Learning (DRL) has been widely applied to the control of autonomous systems because it provides solutions for complex decision-making tasks that previously could not be solved with deep learning alone. To enable autonomous Unmanned Aerial Vehicles (UAVs), this thesis presents a two-level trajectory planning framework for UAVs in indoor environments. A sequence of waypoints is selected at the higher level, leading the UAV from its current position to the destination. At the lower level, an optimal trajectory is generated analytically between each pair of adjacent waypoints. The goal of trajectory generation is to maintain the stability of the UAV, while the goal of waypoint planning is to select waypoints with the lowest control thrust over the entire trip while avoiding collisions with obstacles. The entire framework is implemented using DRL, which learns the highly complicated and nonlinear interaction between the two levels and the impact of the environment. Given the pre-planned trajectory, the thesis further presents an actor-critic reinforcement learning framework that realizes continuous trajectory control of the UAV through a set of desired waypoints. We construct a deep neural network and develop reinforcement learning for better trajectory tracking. In addition, Field Programmable Gate Array (FPGA) based hardware acceleration is designed for energy-efficient real-time control. To integrate the trajectory planning model onto a UAV system for real-time on-board planning, a key challenge is delivering the required performance under strict memory and computational constraints. Techniques that compress Deep Neural Network (DNN) models are attractive because they allow optimized neural network models to be deployed efficiently on platforms with limited energy and storage capacity. However, conventional model compression techniques prune the DNN after it is fully trained, which is very time-consuming, especially when the model is trained using DRL.

    To overcome this limitation, we present an early-phase integrated neural network weight compression system for DRL-based waypoint planning. By applying pruning at an early phase, the DRL model can be compressed without significant training overhead. By tightly integrating pruning and retraining at the early phase, we achieve a higher model compression rate, reduce memory use and computing complexity, and improve the success rate compared to the original work.
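
    The early-phase pruning idea can be sketched in a few lines: a magnitude-based sparsity mask is computed after only a few training steps and kept for the rest of training, so no separate prune-then-retrain pass is needed. The toy linear regression task, prune step, and sparsity level below are illustrative assumptions, not the DRL waypoint-planning setup from the thesis.

```python
# Early-phase magnitude pruning during training, under the assumptions above.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 32))
w_true = rng.normal(size=32)
y = X @ w_true

w = 0.1 * rng.normal(size=32)
mask = np.ones_like(w)                     # 1 = weight kept, 0 = pruned
PRUNE_STEP, SPARSITY, LR = 50, 0.5, 0.01   # prune early, at 50% sparsity

for step in range(500):
    grad = X.T @ (X @ (w * mask) - y) / len(X)
    w -= LR * grad * mask                  # pruned weights never update again
    if step == PRUNE_STEP:                 # early phase: prune by magnitude
        thresh = np.quantile(np.abs(w), SPARSITY)
        mask = (np.abs(w) >= thresh).astype(float)

print("kept weights:", int(mask.sum()), "/ 32")
print("final loss:", float(np.mean((X @ (w * mask) - y) ** 2)))
```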

    Concurrent evolution of feature extractors and modular artificial neural networks

    Artificial Neural Networks (ANNs) are commonly used in both academia and industry as a solution to challenges in the pattern recognition domain. However, two problems must be addressed before an ANN can be successfully applied to a given recognition task: ANN customization and data pre-processing. First, ANNs require customization for each specific application. Although the underlying mathematics of ANNs is well understood, customization based on theoretical analysis is impractical because of the complex interrelationship between ANN behavior and the problem domain. On the other hand, an empirical approach to customization can succeed given an appropriate test domain, but it is computationally intensive, especially because of the many variables that can be adjusted within the system, and it is subject to the limitations of the search algorithm used to find the optimal solution. Second, data pre-processing (feature extraction) is almost always necessary in order to organize and minimize the input data, thereby optimizing ANN performance. Not only is it difficult to know what and how many features to extract from the data, but it is also challenging to find the right balance between the computational requirements of the pre-processing algorithm and those of the ANN itself. Furthermore, developing an appropriate pre-processing algorithm usually requires expert knowledge of the problem domain, which may not always be available. This paper contends that the concurrent evolution of ANNs and data pre-processors allows the design of highly accurate recognition networks without the need for expert knowledge of the application domain. To this end, a novel method for evolving customized ANNs with correlated feature extractors was designed and tested. The method uses concurrent evolutionary processes (CEPs) as a mechanism to search the space of recognition networks. In a series of controlled experiments, the CEP was applied to the digit recognition domain, showing that the efficacy of this method is in line with results seen in other digit recognition research, but without requiring expert knowledge of image processing techniques for digit recognition.
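
    A runnable toy illustration of the concurrent-evolution idea: each genome pairs a feature-selection mask (standing in for the feature extractor) with a linear classifier's weights, and both parts are mutated and selected jointly. The task, genome layout, and GA settings are assumptions for illustration, not the paper's CEP design.

```python
# Jointly evolving a feature mask and classifier weights, per the assumptions above.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # only features 0 and 3 matter

def accuracy(genome):
    mask, w = genome
    pred = ((X * mask) @ w > 0).astype(int)
    return np.mean(pred == y)

def mutate(genome):
    mask, w = genome
    mask, w = mask.copy(), w.copy()
    if rng.random() < 0.3:                # mutate the feature extractor...
        i = rng.integers(20)
        mask[i] = 1.0 - mask[i]
    w += 0.1 * rng.normal(size=20)        # ...and the classifier, concurrently
    return (mask, w)

pop = [(rng.integers(0, 2, size=20).astype(float), rng.normal(size=20))
       for _ in range(30)]
for _ in range(100):
    pop.sort(key=accuracy, reverse=True)  # keep the fittest genomes as parents
    pop = pop[:10] + [mutate(pop[rng.integers(10)]) for _ in range(20)]

best = max(pop, key=accuracy)
print("accuracy:", accuracy(best), "features kept:", int(best[0].sum()))
```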