12 research outputs found

    The RPM3D project: 3D Kinematics for Remote Patient Monitoring

    Full text link
    This project explores the feasibility of remote patient monitoring based on the analysis of 3D movements captured with smartwatches. We base our analysis on the Kinematic Theory of Rapid Human Movement. We have validated our research in a real case scenario for stroke rehabilitation at the Guttmann Institute (a neurorehabilitation hospital), showing promising results. Our work could have a great impact on remote healthcare applications, improving medical efficiency and reducing healthcare costs. Future steps include further clinical validation, developing multi-modal analysis architectures (analysing data from sensors, images, audio, etc.), and exploring the application of our technology to monitoring other neurodegenerative diseases.
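
The Kinematic Theory referenced above models the speed profile of a rapid stroke as a lognormal, and a complex movement as a sum of overlapping lognormal strokes (the Sigma-Lognormal model). A minimal sketch of that forward velocity model, with illustrative parameter values and no connection to the project's actual smartwatch pipeline:

```python
import math

def lognormal_velocity(t, D, t0, mu, sigma):
    """Speed of one stroke under the Kinematic Theory's lognormal model:
    a stroke launched at time t0 with amplitude D follows a lognormal
    speed profile parameterized by (mu, sigma)."""
    if t <= t0:
        return 0.0
    x = t - t0
    return (D / (sigma * math.sqrt(2 * math.pi) * x)) * math.exp(
        -((math.log(x) - mu) ** 2) / (2 * sigma ** 2))

def sigma_lognormal_speed(t, strokes):
    """Speed of a complex movement as the sum of overlapping lognormal
    strokes (the Sigma-Lognormal model). Each stroke is (D, t0, mu, sigma)."""
    return sum(lognormal_velocity(t, *s) for s in strokes)
```

Fitting the stroke parameters (D, t0, mu, sigma) to an observed velocity signal is the hard part that kinematic-analysis systems actually solve; the functions above only evaluate the forward model.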

    Text Baseline Detection, a single page trained system

    Full text link
    Nowadays, many page images are available, and the scanning process is well resolved and can be done industrially. On the other hand, HTR systems can only deal with single text-line images. Segmenting pages into single text-line images is a very expensive process which has traditionally been done manually. This is a bottleneck that holds back any massive industrial document processing. A baseline detection method is presented here. The initial problem is reformulated as a clustering problem over a set of interest points. Its design aims to be fast and to resist the noise artifacts that usually appear in historical manuscripts: variable interline spacing, the overlapping and touching of words in adjacent lines, humidity spots, etc. Results show that this system can be used to massively detect where the text lines are in pages. Highlight: this system reached second place in the ICDAR 2017 Competition on Baseline Detection (see Table 1). (C) 2019 Elsevier Ltd. All rights reserved. This work was partially supported by the project Carabela (PR[17]_HUM_D4_0059), sponsored by the programme "Ayudas a Equipos de Investigación en Humanidades Digitales" of the BBVA Foundation. Pastor Gadea, M. (2019). Text Baseline Detection, a single page trained system. Pattern Recognition. 94:149-161. https://doi.org/10.1016/j.patcog.2019.05.031
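
The reformulation of baseline detection as clustering over interest points can be illustrated with a deliberately simple sketch: greedily group interest points by vertical proximity, then summarize each cluster as one baseline. The single-pass rule and the max_dy threshold are illustrative assumptions, not the paper's actual clustering algorithm:

```python
def cluster_baselines(points, max_dy=12.0):
    """Group interest points (x, y) into clusters whose vertical distance
    to the cluster's running mean y stays below max_dy, then report each
    cluster's horizontal extent and mean y as one baseline."""
    clusters = []
    for x, y in sorted(points, key=lambda p: p[1]):
        for c in clusters:
            mean_y = sum(py for _, py in c) / len(c)
            if abs(y - mean_y) <= max_dy:
                c.append((x, y))
                break
        else:
            clusters.append([(x, y)])
    baselines = []
    for c in clusters:
        xs = [px for px, _ in c]
        ys = [py for _, py in c]
        baselines.append((min(xs), max(xs), sum(ys) / len(ys)))
    # Return baselines top-to-bottom: (x_start, x_end, y).
    return sorted(baselines, key=lambda b: b[2])
```

A real system must additionally survive the noise sources the abstract lists (variable interline spacing, touching words), which is precisely where a fixed threshold like max_dy breaks down.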

    Synthesizing Robotic Handwriting Motion by Learning from Human Demonstrations

    Get PDF
    This paper contributes a novel framework that enables a robotic agent to efficiently learn and synthesize believable handwriting motion. We position the framework as a foundation with the goal of allowing children to observe, correct and engage with the robot in order to learn the handwriting skill themselves. The framework adapts the principle behind ensemble methods - where improved performance is obtained by combining the output of multiple simple algorithms - to an inverse optimal control problem. This integration addresses the challenges of rapid extraction and representation of multiple-mode motion trajectories, with cost forms that are transferable and interpretable in the development of robot compliance control. It also incorporates a feature inspired by human movement, which provides intuitive motion modulation to generalize the synthesis toward poorly written robotic samples for children to identify and correct. We present results on the successful synthesis of a variety of natural-looking motion samples based upon the learned cost functions. The framework is validated by a user study, in which the synthesized dynamic motion is shown to be hard to distinguish from real human handwriting.

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Get PDF
    Over the past several decades, technological advancements have introduced new modes of communication with computers, marking a shift from traditional mouse-and-keyboard interfaces. While touch-based interactions are in abundant use today, the latest developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communicating with computers using spatial input in the physical 3D space. These techniques are being integrated into several design-critical tasks, such as sketching and modeling, through sophisticated methodologies and the use of specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users. Drawing curves in mid-air with the fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication. Sketching in general is a crucial mode for effective idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures, in the presence of depth- and motion-sensing cameras. The user may use any of these modalities to express the intention to start or stop sketching. However, apart from suffering from issues such as a lack of robustness, the use of such gestures, specific postures, or instrumented controllers for design-specific tasks results in an additional cognitive load on the user. To address the problems associated with different mid-air curve input modalities, the presented research discusses the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks.
The research is motivated by a behavioral study that demonstrates the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach to determine when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). By recording the user's hand trajectory, the idea is to classify each recorded point as either hover or stroke; the resulting model thus labels every point on the user's spatial trajectory. Drawing inspiration from the way users sketch in mid-air, this research first establishes the necessity of an alternative approach for processing bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow for every recorded drawing point, using three different approaches. We begin by recording mid-air drawing data and developing a classification model based on the extracted geometric properties of the recorded data; the main goal of this model is to identify drawing intent from critical geometric and temporal features. In the second approach, we explore the variations in the model's prediction quality when the dimensionality of the mid-air curve input is increased. In the third approach, we seek to understand drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders. Finally, the broad-level implications of this research are discussed, with potential development areas in the design and research of mid-air interactions.
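
The hover-versus-stroke decision rests on per-point geometric and temporal features of the recorded trajectory. The sketch below, a stand-in for the thesis's learned models, extracts two such features (instantaneous speed and turning angle) and applies a hard-coded speed threshold; the threshold and the hover/stroke rule are illustrative assumptions only:

```python
import math

def point_features(traj):
    """Per-point geometric features for a recorded 3D hand trajectory
    traj = [(x, y, z, t), ...]: instantaneous speed and turning angle.
    A simplified stand-in for the thesis's richer feature set."""
    feats = []
    for i in range(1, len(traj) - 1):
        (x0, y0, z0, t0), (x1, y1, z1, t1), (x2, y2, z2, t2) = traj[i-1:i+2]
        v1 = (x1 - x0, y1 - y0, z1 - z0)
        v2 = (x2 - x1, y2 - y1, z2 - z1)
        n1 = math.sqrt(sum(a * a for a in v1)) or 1e-9
        n2 = math.sqrt(sum(a * a for a in v2)) or 1e-9
        speed = n2 / max(t2 - t1, 1e-9)
        cosang = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2)) / (n1 * n2)))
        feats.append((speed, math.acos(cosang)))
    return feats

def classify_hover(feats, speed_thresh=0.5):
    """Toy rule: slow, deliberate movement -> 'stroke'; fast travel ->
    'hover'. The thesis learns this boundary from data instead."""
    return ['stroke' if s < speed_thresh else 'hover' for s, _ in feats]
```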

    Combination of deep neural networks and logical rules for record segmentation in historical handwritten registers using few examples

    Get PDF
    This work focuses on the layout analysis of historical handwritten registers, in which local religious ceremonies were recorded. The aim is to delimit each record in these registers. To this end, two approaches are proposed. Firstly, object detection networks are explored, and three state-of-the-art architectures are compared. Further experiments are then conducted on Mask R-CNN, as it yields the best performance. Secondly, we introduce and investigate Deep Syntax, a hybrid system that takes advantage of recurrent patterns to delimit each record by combining U-shaped networks and logical rules. Finally, these two approaches are evaluated on 3708 French records (16th-18th centuries), as well as on the public Esposalles database, containing 253 Spanish records (17th century). While both systems perform well on homogeneous documents, we observe a significant drop in performance with Mask R-CNN on heterogeneous documents, especially when it is trained on a non-representative subset. By contrast, Deep Syntax relies on steady patterns and is therefore able to process a wider range of documents with less training data. Not only does Deep Syntax produce 15% more matched configurations and reduce the ZoneMap surface error metric by 30% when both systems are trained on 120 images, it also outperforms Mask R-CNN when trained on a database three times smaller. As Deep Syntax generalizes better, we believe it can be used in the context of massive document processing, where collecting and annotating a sufficiently large and representative set of training data is not always achievable.
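
The idea of combining network detections with logical rules can be shown in miniature. Below, detections are (label, y) tuples as a segmentation network might emit them, and a single hypothetical rule ("every record opens with a margin-name element") delimits records; the label vocabulary and the rule are assumptions for illustration, not the actual Deep Syntax grammar:

```python
def segment_records(lines):
    """Split a vertically ordered sequence of detected layout elements
    into records, starting a new record whenever a 'margin_name'
    element appears (the recurring register pattern)."""
    records, current = [], []
    for label, y in sorted(lines, key=lambda l: l[1]):
        if label == 'margin_name' and current:
            records.append(current)
            current = []
        current.append((label, y))
    if current:
        records.append(current)
    return records
```

Because the rule encodes a steady structural pattern rather than appearance, this style of post-processing degrades more gracefully on heterogeneous pages than a purely appearance-based detector.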

    Arbitrary Keyword Spotting in Handwritten Documents

    Get PDF
    Despite the existence of electronic media in today's world, a considerable amount of written communication remains in paper form, such as books, bank cheques, contracts, etc. There is an increasing demand for the automation of information extraction, classification, search, and retrieval of documents. The goal of this research is to develop a complete methodology for the spotting of arbitrary keywords in handwritten document images. We propose a top-down approach to the spotting of keywords in document images. Our approach is composed of two major steps: segmentation and decision. In the former, we generate the word hypotheses. In the latter, we decide whether a generated word hypothesis is a specific keyword or not. We carry out the decision step through a two-level classification, where we first assign an input image to the keyword or non-keyword class, and then transcribe the image if it passes as a keyword. By reducing the problem from the image domain to the text domain, we address not only the search problem in handwritten documents, but also classification and retrieval, without the need to transcribe the whole document image. The main contribution of this thesis is the development of a generalized minimum edit distance for handwritten words, and the proof that this distance is equivalent to an Ergodic Hidden Markov Model (EHMM). To the best of our knowledge, this work is the first to present an exact 2D model for the temporal information in handwriting while satisfying practical constraints. Other contributions of this research include: 1) removal of page margins based on corner detection in projection profiles; 2) removal of noise patterns in handwritten images using expectation maximization and fuzzy inference systems; 3) extraction of text lines based on fast Fourier-based steerable filtering; 4) segmentation of characters based on skeletal graphs; and 5) merging of broken characters based on graph partitioning.
Our experiments with a benchmark database of handwritten English documents and a real-world collection of handwritten French documents indicate that, even without any word- or document-level training, our results are comparable with those of two state-of-the-art word spotting systems for English and French documents.
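
The generalized minimum edit distance at the heart of the thesis extends the classical string recurrence with image-level operations and learned costs. The plain string version below shows only the underlying dynamic-programming recurrence that is being generalized, with pluggable cost functions as a hint at where learned, image-specific costs would enter:

```python
def edit_distance(a, b,
                  sub_cost=lambda x, y: 0 if x == y else 1,
                  ins_cost=lambda y: 1,
                  del_cost=lambda x: 1):
    """Classical minimum edit distance by dynamic programming.
    d[i][j] is the cheapest way to turn a[:i] into b[:j]."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost(a[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost(b[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + del_cost(a[i - 1]),      # deletion
                          d[i][j - 1] + ins_cost(b[j - 1]),      # insertion
                          d[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))  # substitution
    return d[m][n]
```

The thesis's EHMM-equivalence result concerns this recurrence applied to handwritten word hypotheses rather than strings; the string form is only the familiar starting point.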

    Adaptive systems for hidden Markov model-based pattern recognition systems

    Get PDF
    This thesis focuses on the design of adaptive systems (AS) for dealing with complex pattern recognition problems. Pattern recognition systems usually rely on static knowledge to define a configuration to be used during their entire lifespan. However, some systems need to adapt to knowledge that may not have been available in the design phase. For this reason, AS are designed to tailor a baseline pattern recognition system as required, and in an automated fashion, in both the learning and generalization phases. These AS are defined here using hidden Markov model (HMM)-based classifiers as a case study. We first evaluate incremental learning algorithms for the estimation of HMM parameters. The main goal is to find incremental learning algorithms that perform as well as traditional batch learning techniques but incorporate the advantages of incremental learning for designing complex pattern recognition systems. Experiments on handwritten characters have shown that a proposed variant of the Ensemble Training algorithm, which employs ensembles of HMMs, can lead to very promising results. Furthermore, the use of a validation dataset demonstrates that it is possible to achieve better performance than that of batch learning. We then propose a new approach for the dynamic selection of ensembles of classifiers. Building on the concept of "multistage organizations", the main objective of which is to define a multi-layer fusion function that adapts to individual recognition problems, we propose the dynamic multistage organization (DMO), which defines the best multistage structure for each test sample. By extending Dos Santos et al.'s approach, we propose two implementations of DMO, namely DSAm and DSAc. DSAm considers a set of dynamic selection functions to generalize a DMO structure, and DSAc uses contextual information, represented by output profiles computed from the validation dataset.
The experimental evaluation, considering both small and large datasets, demonstrates that DSAc outperforms DSAm on most problems, showing that the use of contextual information can yield better performance than other methods. The performance of DSAc can also be enhanced through incremental learning. The most important observation, however, supported by additional experiments, is that dynamic selection is generally preferable to static approaches when the recognition problem presents a high level of uncertainty. Finally, we propose the LoGID (Local and Global Incremental Learning for Dynamic Selection) framework, whose main goal is to adapt HMM-based pattern recognition systems in both the learning and generalization phases. Given that the baseline system is composed of a pool of base classifiers, adaptation during generalization is conducted by dynamically selecting the best members of this pool to recognize each test sample. Dynamic selection is performed by the proposed K-nearest output profiles algorithm, while adaptation during learning consists of gradually updating the knowledge embedded in the base classifiers by processing previously unobserved data. This phase employs two types of incremental learning: local and global. Local incremental learning updates the pool of base classifiers by adding new members, created with the Learn++ algorithm. Global incremental learning, in contrast, updates the set of output profiles used during generalization. The proposed framework has been evaluated on a diversified set of databases. The results indicate that LoGID is promising: on most databases, the recognition rates achieved by the proposed method are higher than those of other state-of-the-art approaches, such as batch learning.
Furthermore, the simulated incremental learning setting demonstrates that LoGID can effectively improve the performance of systems created with small training sets as more data are observed over time.
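
The K-nearest output profiles step used for dynamic selection can be sketched as follows: the test sample's output profile is matched against validation profiles, and the base classifiers that succeeded on the nearest neighbours are kept. The squared-Euclidean distance and the majority rule below are simplifying assumptions:

```python
def knop_select(test_profile, val_profiles, val_correct, k=3):
    """Sketch of K-nearest output profiles (KNOP) dynamic selection.
    val_profiles[i] is the vector of base-classifier outputs on
    validation sample i, and val_correct[i] is the set of classifier
    indices that classified that sample correctly. Returns the indices
    of the classifiers most often correct on the k nearest profiles."""
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    order = sorted(range(len(val_profiles)),
                   key=lambda i: dist(test_profile, val_profiles[i]))
    votes = {}
    for i in order[:k]:
        for c in val_correct[i]:
            votes[c] = votes.get(c, 0) + 1
    best = max(votes.values(), default=0)
    return sorted(c for c, v in votes.items() if v == best)
```

The selected subset would then vote on the test sample; the point of the output-profile space is that "similar behaviour of the pool" is a better neighbourhood notion than raw feature distance.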

    DOCUMENT AND NATURAL IMAGE APPLICATIONS OF DEEP LEARNING

    Get PDF
    A tremendous amount of digital visual data is collected every day, and we need efficient and effective algorithms to extract useful information from it. Considering the complexity of visual data and the expense of human labor, we expect algorithms to have enhanced generalization capability and to depend less on domain knowledge. While many topics in computer vision have benefited from machine learning, some document analysis and image quality assessment problems have still not found the best way to utilize it. In the context of document images, a compelling need exists for reliable methods to categorize and extract key information from captured images. In natural image content analysis, accurate quality assessment has become a critical component of many applications. Most current approaches, however, rely on heuristics designed from human observations of severely limited data. These approaches typically work only on specific types of images and are hard to generalize to complex data from real applications. This dissertation addresses the challenges of processing heterogeneous visual data by applying effective learning methods that directly model the data with minimal preprocessing and feature engineering. We focus on three important problems: text line detection, document image categorization, and image quality assessment. The data we work on typically contain unconstrained layouts, styles, or noise, resembling real data from applications. First, we present a graph-based method that learns line structure from training data for text line segmentation in handwritten document images, and a general framework to detect multi-oriented scene text lines using Higher-Order Correlation Clustering. Our method depends less on domain knowledge and is robust to variations in fonts and languages. Second, we introduce a general approach for document image genre classification using Convolutional Neural Networks (CNN).
The introduction of CNNs for document image genre classification largely reduces the need for hand-crafted features or domain knowledge. Third, we present our CNN-based methods for general-purpose No-Reference Image Quality Assessment (NR-IQA). Our methods bridge the gap between NR-IQA and CNNs and open the door to a broad range of deep learning methods. With excellent local quality estimation ability, our methods demonstrate state-of-the-art performance on both distortion identification and quality estimation.
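
The local-quality-estimation idea behind patch-based NR-IQA is to score small patches independently and average them into an image-level score. In the sketch below, patch_score is a plug-in stand-in for the trained CNN; the default negative-variance scorer is a crude, purely illustrative proxy and not the dissertation's model:

```python
def image_quality(img, patch=4, patch_score=None):
    """Cut a grayscale image (list of rows of floats) into patch x patch
    tiles, score each tile, and return the mean tile score."""
    if patch_score is None:
        def patch_score(p):
            # Illustrative default: negative intensity variance.
            flat = [v for row in p for v in row]
            mean = sum(flat) / len(flat)
            return -sum((v - mean) ** 2 for v in flat) / len(flat)
    h, w = len(img), len(img[0])
    scores = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            tile = [row[j:j + patch] for row in img[i:i + patch]]
            scores.append(patch_score(tile))
    return sum(scores) / len(scores)
```

Averaging per-patch predictions is what gives such models their local quality maps as a by-product of the global score.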

    New methods, techniques and applications for sketch recognition

    Get PDF
    The use of diagrams is common in various disciplines. Typical examples include maps, line graphs, bar charts, engineering blueprints, architects' sketches, hand-drawn schematics, etc. In general, diagrams can be created either by using pen and paper, or by using specific computer programs. These programs provide functions to facilitate the creation of the diagram, such as copy-and-paste, but the classic WIMP interfaces they use are unnatural compared to pen and paper. Indeed, it is not rare that a designer prefers to use pen and paper at the beginning of the design, and then transfers the diagram to the computer later. To avoid this double step, a solution is to allow users to sketch directly on the computer. This requires both specific hardware and sketch-recognition-based software. As regards hardware, many pen/touch-based devices such as tablets, smartphones, and interactive boards and tables are available today, also at reasonable cost. Sketch recognition is needed when the sketch must be processed and not considered as a simple image, and it is crucial to the success of this new modality of interaction. It is a difficult problem due to the inherent imprecision and ambiguity of a freehand drawing and to the many domains of application. The aim of this thesis is to propose new methods and applications for sketch recognition. The presentation of the results is divided into several contributions, addressing problems such as corner detection, sketched symbol recognition and autocompletion, graphical context detection, and sketched Euler diagram interpretation. The first contribution regards the problem of detecting the corners present in a stroke. Corner detection is often performed during preprocessing to segment a stroke into single simple geometric primitives such as lines or curves.
The corner recognizer proposed in this thesis, RankFrag, is inspired by the method proposed by Ouyang and Davis in 2011, and improves on the accuracy of other methods recently proposed in the literature. The second contribution is a new method to recognize multi-stroke hand-drawn symbols, which is invariant with respect to scaling and supports symbol recognition independently of the number and order of strokes. The method is an adaptation of the algorithm proposed by Belongie et al. in 2002 to the case of sketched images, achieved by using stroke-related information. The method has been evaluated on a set of more than 100 symbols from the Military Course of Action domain, and the results show that the new recognizer outperforms the original one. The third contribution is a new method for recognizing multi-stroke, partially hand-drawn symbols which is invariant with respect to scale and supports symbol recognition independently of the number and order of strokes. The recognition technique is based on subgraph isomorphism and exploits a novel spatial descriptor, based on polar histograms, to represent the relation between two stroke primitives. The tests show that the approach gives a satisfactory recognition rate with partially drawn symbols, even at a very low level of drawing completion, and outperforms the existing approaches proposed in the literature. Furthermore, as an application, a system presenting a user interface to draw symbols and implementing the proposed autocompletion approach has been developed. Moreover, a user study aimed at evaluating human performance in hand-drawn symbol autocompletion is presented. Using the set of symbols from the Military Course of Action domain, the study evaluates the conditions under which users are willing to exploit the autocompletion functionality and those under which they can use it efficiently.
The results show that the autocompletion functionality can be used in a profitable way, with a drawing-time saving of about 18%. The fourth contribution regards the detection of the graphical context of hand-drawn symbols and, in particular, the development of an approach for identifying attachment areas on sketched symbols. In the field of syntactic recognition of hand-drawn visual languages, recognizing the relations among graphical symbols is one of the first important tasks to be accomplished, and it is usually reduced to recognizing the attachment areas of each symbol and the relations among them. The approach is independent of the method used to recognize symbols and assumes that the symbol has already been recognized. The approach is evaluated through a user study aimed at comparing the attachment areas detected by the system to those devised by the users. The results show that the system can identify attachment areas with reasonable accuracy. The last contribution is EulerSketch, an interactive system for the sketching and interpretation of Euler diagrams (EDs). The interpretation of a hand-drawn ED produces two types of text encodings of the ED topology, called static code and ordered Gauss paragraph (OGP) code, and a further encoding of its regions. Given the topology of an ED expressed through static or OGP code, EulerSketch automatically generates a new topologically equivalent ED in its graphical representation.
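
The polar-histogram spatial descriptor used for stroke-pair relations (in the spirit of Belongie et al.'s shape contexts) bins the points of one primitive by angle and log-radius around a reference point of the other. A minimal sketch, where the bin counts and radius handling are illustrative assumptions:

```python
import math

def polar_histogram(ref, points, n_angle=8, n_radius=3, r_max=100.0):
    """Bin each point by its angle and log-radius relative to ref,
    returning an n_radius x n_angle histogram normalized to sum to 1
    so descriptors of differently sampled strokes stay comparable."""
    hist = [[0.0] * n_angle for _ in range(n_radius)]
    for x, y in points:
        dx, dy = x - ref[0], y - ref[1]
        r = math.hypot(dx, dy)
        if r == 0 or r > r_max:
            continue  # ignore coincident or out-of-range points
        a_bin = int(((math.atan2(dy, dx) + math.pi) / (2 * math.pi)) * n_angle) % n_angle
        r_bin = min(int(math.log1p(r) / math.log1p(r_max) * n_radius), n_radius - 1)
        hist[r_bin][a_bin] += 1.0
    total = sum(sum(row) for row in hist) or 1.0
    return [[v / total for v in row] for row in hist]
```

Comparing two such histograms (e.g. by chi-squared distance) then gives a scale-tolerant measure of how one stroke sits relative to another.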

    A framework for ancient and machine-printed manuscripts categorization

    Get PDF
    Document image understanding (DIU) has attracted a lot of attention and has become an active field of research. Although the ultimate goal of DIU is extracting the textual information of a document image, many steps are involved in such a process, such as categorization, segmentation and layout analysis. All of these steps are needed in order to obtain an accurate result from character or word recognition of a document image. One of the important steps in DIU is document image categorization (DIC), which is needed in many situations, such as when document images are written or printed in more than one script, font or language. This step provides useful information for the recognition system and helps reduce its error by allowing a category-specific Optical Character Recognition (OCR) or word recognition (WR) system to be incorporated. This research focuses on the problem of DIC across different categories of scripts, styles and languages, and establishes a framework for flexible representation and feature extraction that can be adapted to many DIC problems. Current methods for DIC have many limitations and drawbacks that restrict their practical usage. We propose an efficient framework for the categorization of document images based on patch representation and Non-negative Matrix Factorization (NMF). This framework is flexible and can be adapted to different categorization problems. Many methods exist for script identification in document images, but few of them address the problem in handwritten manuscripts, and they have many limitations and drawbacks. Therefore, our first goal is to introduce a novel method for script identification in ancient manuscripts. The proposed method is based on a patch representation in which the patches are extracted using the skeleton map of a document image. This representation overcomes the limitation of current methods to a fixed level of layout.
The proposed feature extraction scheme, based on Projective Non-negative Matrix Factorization (PNMF), is robust against noise and handwriting variation and can be used for different scripts. The proposed method has higher performance than state-of-the-art methods and can be applied to different levels of layout. Current methods for font (style) identification are mostly designed for machine-printed document images, and many of them can only be used for a specific level of layout. Therefore, we propose a new method for font and style identification of printed and handwritten manuscripts based on patch representation and Non-negative Matrix Tri-Factorization (NMTF). The images are represented by overlapping patches obtained from the foreground pixels. The positions of these patches are set based on the skeleton map to reduce the number of patches. NMTF is used to learn bases for each font (style), and these bases are then used to classify a new image based on the minimum representation error. The proposed method can easily be extended to new fonts, as the bases for each font are learned separately from the other fonts. This method is tested on two datasets of machine-printed and ancient manuscripts, and the results confirm its performance compared to state-of-the-art methods. Finally, we propose a novel method for language identification of printed and handwritten manuscripts based on patch representation and NMTF. Current methods for language identification are based either on textual data obtained by an OCR engine or on image data encoded and compared with textual data. The OCR-based methods need a lot of processing, and the current image-based methods are not applicable to cursive scripts such as Arabic. In this work we introduce a new method for language identification of machine-printed and handwritten manuscripts based on patch representation and NMTF.
The patch representation provides the components of the Arabic script (letters) that cannot be extracted simply by segmentation methods. NMTF is then used for dictionary learning and for generating codebooks that are used to represent a document image with a histogram. The proposed method is tested on two datasets of machine-printed and handwritten manuscripts and compared to n-gram features (text-based), texture features and codebook features (image-based) to validate its performance. The proposed methods are robust against variation in handwriting, changes in font (handwriting style) and the presence of degradation, and are flexible enough to be used at various levels of layout (from a text line to a paragraph). The methods in this research have been tested on datasets of handwritten and machine-printed manuscripts and compared to state-of-the-art methods. All of the evaluations show the efficiency, robustness and flexibility of the proposed methods for the categorization of document images. As mentioned before, the proposed strategies provide a framework for efficient and flexible representation and feature extraction for document image categorization. This framework can be applied to different levels of layout, information from different levels of layout can be merged and mixed, and the framework can be extended to more complex situations and different tasks.
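
The codebook step of the framework can be sketched independently of the NMTF dictionary learning: each patch descriptor is assigned to its nearest codeword, and the document image is summarized as a normalized histogram of codeword counts. Nearest-neighbour assignment by squared Euclidean distance stands in here for the learned bases and the minimum-representation-error rule:

```python
def codebook_histogram(patches, codebook):
    """Bag-of-patches representation: assign each patch (a feature
    vector) to its nearest codeword and return the normalized histogram
    of codeword counts as the document descriptor."""
    dist = lambda p, c: sum((a - b) ** 2 for a, b in zip(p, c))
    hist = [0.0] * len(codebook)
    for p in patches:
        hist[min(range(len(codebook)), key=lambda k: dist(p, codebook[k]))] += 1
    total = sum(hist) or 1.0
    return [v / total for v in hist]
```

A categorizer would then compare such histograms (or feed them to a classifier) to decide script, font or language, which is why the quality of the learned codebook dominates overall accuracy.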