Drawing, Handwriting Processing Analysis: New Advances and Challenges
Drawing and handwriting are communication skills that have been fundamental to geopolitical, ideological, and technological evolution throughout history. Drawing and handwriting remain useful in defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as those related to the way drawing and handwriting can become an efficient means of commanding various connected objects, or to validating graphomotor skills as evident and objective sources of data for studying human beings, their capabilities, and their limits from birth to decline.
Design for novel enhanced weightless neural network and multi-classifier.
Weightless neural systems have often struggled with speed, performance, and memory issues. They also lack sufficient interfacing to other systems. Addressing these issues motivates and forms the aims and objectives of this thesis. To address them, algorithms are formulated; classifiers and multi-classifiers are designed; and a hardware design of a classifier is also reported. Specifically, the purpose of this thesis is to report on the algorithms and designs of weightless neural systems.
The background material for this research is a weightless neural network known as the Probabilistic Convergent Network (PCN). By introducing two new and different interfacing methods, the word "Enhanced" is added to PCN, giving it the name Enhanced Probabilistic Convergent Network (EPCN). To solve the problems of speed and performance when large-class databases are employed in data analysis, multi-classifiers are designed whose composition varies depending on problem complexity. This also leads to the introduction of a novel gating function, with EPCN applied as an intelligent combiner. For databases which are not very large, a single classifier suffices. Speed and ease of application in adverse conditions were considered as improvements, which led to the design of EPCN in hardware. A novel hashing function is implemented and tested on the hardware-based EPCN.
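The EPCN architecture and its hashing function are not detailed in this abstract, but the RAM-node idea common to weightless classifiers can be illustrated with a generic WiSARD-style n-tuple classifier. This is a minimal sketch, not the EPCN itself; the class name `NTupleClassifier`, the tuple size `n`, and the toy bit patterns are all illustrative assumptions:

```python
import random

class NTupleClassifier:
    """Generic WiSARD-style weightless classifier: each class owns a bank of
    RAM nodes, each addressed by a fixed random n-bit tuple of the input."""

    def __init__(self, input_bits, n=4, seed=0):
        rng = random.Random(seed)
        order = list(range(input_bits))
        rng.shuffle(order)  # fixed random mapping of input bits to tuples
        self.tuples = [order[i:i + n] for i in range(0, input_bits, n)]
        self.rams = {}  # class label -> list of sets of seen addresses

    def _addresses(self, bits):
        return [tuple(bits[i] for i in t) for t in self.tuples]

    def train(self, bits, label):
        # Training just marks the addressed RAM locations: no weights.
        rams = self.rams.setdefault(label, [set() for _ in self.tuples])
        for ram, addr in zip(rams, self._addresses(bits)):
            ram.add(addr)

    def classify(self, bits):
        addrs = self._addresses(bits)
        scores = {label: sum(a in ram for ram, a in zip(rams, addrs))
                  for label, rams in self.rams.items()}
        return max(scores, key=scores.get)

# Toy usage: two 8-bit prototype patterns, one per class.
clf = NTupleClassifier(input_bits=8, n=4)
clf.train([1] * 8, "A")
clf.train([0] * 8, "B")
```

Because training is a single marking pass with no weight arithmetic, lookups reduce to memory reads, which is what makes hardware implementations of such classifiers attractive.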
The results obtained indicate the utility of employing weightless neural systems, and also point to significant new possible areas of application for them.
Artificial intelligence based writer identification generates new evidence for the unknown scribes of the Dead Sea Scrolls exemplified by the Great Isaiah Scroll (1QIsaa)
The Dead Sea Scrolls are tangible evidence of the Bible's ancient scribal culture. This study takes an innovative approach to palaeography (the study of ancient handwriting) as a new entry point to access this scribal culture. One of the problems of palaeography is determining writer identity or difference when the writing style is nearly uniform, as exemplified by the Great Isaiah Scroll (1QIsaa). To this end, we use pattern recognition and artificial intelligence techniques to innovate the palaeography of the scrolls and to pioneer the microlevel of individual scribes, opening access to the Bible's ancient scribal culture. We report new evidence for a breaking point in the series of columns in this scroll. Without any prior assumption of writer identity, and based on point clouds of the reduced-dimensionality feature space, we found that columns from the first and second halves of the manuscript ended up in two distinct zones of the scatter plots, notably for a range of digital palaeography tools, each addressing very different featural aspects of the script samples. In a secondary, independent analysis, now assuming writer difference and using yet another independent feature method and several different types of statistical testing, a switching point was found in the column series. A clear phase transition is apparent in columns 27-29. We also demonstrated a difference in distance variances, such that the variance is higher in the second part of the manuscript. Given the statistically significant differences between the two halves, a tertiary, post-hoc analysis was performed using visual inspection of character heatmaps and of the most discriminative Fraglet sets in the script.
Demonstrating that two main scribes, each showing different writing patterns, were responsible for the Great Isaiah Scroll, this study sheds new light on the Bible's ancient scribal culture. It provides new, tangible evidence that ancient biblical texts were not copied by a single scribe only, but that multiple scribes, while carefully mirroring another scribe's writing style, could closely collaborate on one particular manuscript.
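As an illustration of the statistical-testing step behind the reported variance difference (with invented numbers, not the actual scroll measurements), a one-sided permutation test for higher distance variance in the second half of a column series might look like:

```python
import random
import statistics

def variance_diff(a, b):
    """Positive when the second sample is more spread out."""
    return statistics.pvariance(b) - statistics.pvariance(a)

def permutation_test(a, b, n_perm=2000, seed=0):
    """One-sided permutation p-value for var(b) > var(a): shuffle the pooled
    values and count how often a random split beats the observed difference."""
    rng = random.Random(seed)
    observed = variance_diff(a, b)
    pooled = list(a) + list(b)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if variance_diff(pooled[:len(a)], pooled[len(a):]) >= observed:
            exceed += 1
    # Add-one smoothing keeps the p-value away from an impossible zero.
    return observed, (exceed + 1) / (n_perm + 1)

# Invented per-column distances for the two halves of a manuscript:
# the second half is deliberately more variable.
first_half = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.05]
second_half = [0.2, 1.8, 0.5, 1.5, 0.1, 1.9, 0.4, 1.6, 1.0, 2.0]
obs, p_value = permutation_test(first_half, second_half)
```

A permutation test like this makes no normality assumption, which suits small samples of per-column feature distances.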
Modern Information Systems
The development of modern information systems is a demanding task. New technologies and tools are designed, implemented, and brought to market on a daily basis. User needs change dramatically fast, and the IT industry strives to reach the level of efficiency and adaptability its systems need in order to remain competitive and up-to-date. Thus, the realization of modern information systems with strong characteristics and functionalities, implemented for specific areas of interest, is a fact of our modern and demanding digital society, and this is the main scope of this book. This book therefore aims to present a number of innovative and recently developed information systems. It is titled "Modern Information Systems" and includes 8 chapters. It may assist researchers in studying the innovative functions of modern systems in various areas such as health, telematics, and knowledge management. It can also assist young students in grasping the new research tendencies in information systems development.
HOLMES: A Hybrid Ontology-Learning Materials Engineering System
Designing and discovering novel materials is a challenging problem in many domains, such as fuel additives, composites, and pharmaceuticals. At the core of all this are models that capture how the different domain-specific data, information, and knowledge regarding the structures and properties of the materials are related to one another. This dissertation explores the difficult task of developing an artificial-intelligence-based knowledge modeling environment, called the Hybrid Ontology-Learning Materials Engineering System (HOLMES), that can assist humans in populating a materials science and engineering ontology through automatic information extraction from journal article abstracts. While what we propose may be adapted for a generic materials engineering application, our focus in this thesis is on the needs of the pharmaceutical industry. We develop the Columbia Ontology for Pharmaceutical Engineering (COPE), a modification of the Purdue Ontology for Pharmaceutical Engineering. COPE serves as the basis for HOLMES.
The HOLMES framework starts with journal articles that are in the Portable Document Format (PDF) and ends with the assignment of the entries in the journal articles into ontologies. While this might seem to be a simple task of information extraction, to fully extract the information such that the ontology is filled as completely and correctly as possible is not easy when considering a fully developed ontology.
In developing the information extraction tasks, we note new problems that have not arisen in previous information extraction work in the literature. The first is the necessity of extracting auxiliary information in the form of concepts such as actions, ideas, problem specifications, and properties. The second is that a single token may carry multiple labels, owing to the existence of these concepts. These two problems are the focus of this dissertation.
In this work, the HOLMES framework is presented as a whole, describing our successful progress as well as unsolved problems that might help future research on this topic. The ontology is then presented to help identify the relevant information that needs to be retrieved. The annotations are next developed to create the data sets necessary for the machine learning algorithms. Then, the current level of information extraction for these concepts is explored and expanded through the introduction of entity feature sets based on entities previously extracted in the entity recognition task. Finally, the new task of handling multiple labels when tagging a single entity is explored using multiple-label algorithms employed primarily in image processing.
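The multiple-label problem described above (one token carrying several concept labels at once) is commonly handled by training one independent binary classifier per label, a scheme known as binary relevance. Below is a minimal sketch assuming a perceptron per label; the `MATERIAL`/`ACTION` labels and the two-bit features are invented for illustration and are not the HOLMES tag set:

```python
class BinaryRelevanceTagger:
    """One independent perceptron per label, so a token may receive any
    subset of the labels rather than exactly one."""

    def __init__(self, labels, dim, epochs=50):
        self.w = {lab: [0.0] * (dim + 1) for lab in labels}  # last slot: bias
        self.epochs = epochs

    def _score(self, w, x):
        return w[-1] + sum(wi * xi for wi, xi in zip(w, x))

    def fit(self, samples):
        # samples: list of (feature_vector, set_of_labels)
        for _ in range(self.epochs):
            for x, labs in samples:
                for lab, w in self.w.items():
                    target = 1 if lab in labs else -1
                    if target * self._score(w, x) <= 0:  # perceptron update
                        for i, xi in enumerate(x):
                            w[i] += target * xi
                        w[-1] += target

    def predict(self, x):
        return {lab for lab, w in self.w.items() if self._score(w, x) > 0}

# Toy features: [looks_like_material_name, looks_like_action_verb].
samples = [([1, 0], {"MATERIAL"}), ([0, 1], {"ACTION"}),
           ([1, 1], {"MATERIAL", "ACTION"}), ([0, 0], set())]
tagger = BinaryRelevanceTagger(["MATERIAL", "ACTION"], dim=2)
tagger.fit(samples)
```

Binary relevance ignores correlations between labels, which is exactly the limitation that motivates the richer multiple-label algorithms the dissertation borrows from image processing.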
End-Shape Analysis for Automatic Segmentation of Arabic Handwritten Texts
Word segmentation is an important task for many methods related to document understanding, especially word spotting and word recognition. Several approaches to word segmentation have been proposed for Latin-based languages, while only a few have been introduced for Arabic texts. The fact that Arabic writing is cursive by nature and unconstrained, with no clear boundaries between words, makes the processing of Arabic handwritten text a more challenging problem.
In this thesis, the design and implementation of an End-Shape Letter (ESL) based segmentation system for Arabic handwritten text is presented. This incorporates four novel aspects: (i) removal of secondary components, (ii) baseline estimation, (iii) ESL recognition, and (iv) the creation of a new off-line CENPARMI ESL database.
Arabic texts include small connected components, also called secondary components. Removing these components can improve the performance of several subsequent steps, such as baseline estimation. Thus, a robust method to remove secondary components that takes into consideration the challenges of Arabic handwriting is introduced. The method reconstructs the image based on several criteria. Its results were subsequently compared with those of two other methods that used the same database, and the comparison shows that the proposed method is effective.
Baseline estimation is a challenging task for Arabic texts since it includes ligature, overlapping, and secondary components. Therefore, we propose a learning-based approach that addresses these challenges. Our method analyzes the image and extracts baseline dependent features. Then, the baseline is estimated using a classifier.
Algorithms dealing with text segmentation usually analyze the gaps between connected components. These algorithms are based on metric calculation, threshold finding, and/or gap classification. We use two well-known metrics, the bounding box and the convex hull, to test the metric-based method on Arabic handwritten texts and to include this technique in our approach. To determine the threshold, an unsupervised learning approach, the Gaussian Mixture Model, is used. Our ESL-based segmentation approach extracts the final letter of a word using a rule-based technique and recognizes these letters using the implemented ESL classifier.
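The threshold-finding step can be sketched as a two-component one-dimensional Gaussian mixture fitted by EM over the gap widths, with the intra-word/inter-word boundary placed where the two component posteriors cross. This is a simplified stand-in for the thesis's Gaussian Mixture Model step, not its implementation; the pixel gap widths below are invented:

```python
import math

def fit_gmm_1d(xs, iters=200):
    """Fit a two-component 1D Gaussian mixture with plain EM."""
    xs = sorted(xs)
    half = len(xs) // 2
    # Initialise the means from the lower and upper halves of the data.
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    weight = [0.5, 0.5]

    def density(x, k):
        return (weight[k] / math.sqrt(2 * math.pi * var[k])
                * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])))

    for _ in range(iters):
        # E-step: responsibility of each component for each gap.
        resp = []
        for x in xs:
            p = [density(x, 0), density(x, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weight[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return weight, mu, var

def gap_threshold(gaps):
    """Scan between the two means for the point of equal weighted density."""
    weight, mu, var = fit_gmm_1d(gaps)
    lo, hi = min(mu), max(mu)
    best_t, best_diff = lo, float("inf")
    for i in range(1001):
        t = lo + (hi - lo) * i / 1000
        p = [weight[k] / math.sqrt(2 * math.pi * var[k])
             * math.exp(-(t - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
        if abs(p[0] - p[1]) < best_diff:
            best_t, best_diff = t, abs(p[0] - p[1])
    return best_t

# Invented gap widths (pixels): intra-word gaps near 2, inter-word near 12.
gaps = [1.5, 2.0, 2.2, 1.8, 2.5, 11.0, 12.5, 13.0, 2.1, 12.0, 1.9, 11.8]
threshold = gap_threshold(gaps)
```

Gaps wider than the crossing point would be treated as word boundaries; because the mixture is fitted per page, the threshold adapts to each writer's spacing without supervision.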
To demonstrate the benefit of text segmentation, a holistic word spotting system is implemented, along with a word recognition system. A series of experiments with different sets of features is conducted, and the system shows promising results.
An examination of quantitative methods for Forensic Signature Analysis and the admissibility of signature verification systems as legal evidence.
The experiments described in this thesis deal with the handwriting characteristics involved in the production of forged and genuine signatures, and with the complexity of signatures. The objectives of this study were (1) to provide sufficient details on which of the signature characteristics are easier to forge, and (2) to investigate the capabilities of the signature complexity formula given by Found et al. on a different signature database, provided by the University of Kent. This database includes the writing movements of 10 writers producing their genuine signatures and of 140 writers forging these sample signatures. Using the 150 genuine signatures of the Kent database, written without constraints, an evaluation of the complexity formula suggested by Found et al. took place, dividing the signatures into three categories: low, medium, and high graphical complexity. The results of the formula implementation were compared with the opinions of three leading professional forensic document examiners employed by Key Forensics in the UK.
The analysis of the data for Study I reveals that there is not ample evidence that high-quality forgeries are possible after training. In addition, a closer view of the kinematics of the forging writers supports our main conclusion that forged signatures differ widely from genuine ones, especially in the kinematic domain. Of the 15 parameters used in this study, 11 showed significant changes when the two groups (genuine versus forged signatures) were compared, giving a clear picture of which parameters can assist forensic document examiners in examining signature forgeries. The movements of the majority of forgers are significantly slower than those of authentic writers. It is also clearly recognizable that the majority of forgers apply higher levels of pressure when trying to forge the genuine signature. The results of Study II, although limited and not entirely consistent with the study by Found that proposed this model, indicate that the model can provide valuable objective evidence (regarding complex signatures) in the forensic environment and justify its further investigation; however, more work needs to be done before this type of model can be used in a court of law. The model was able to predict correctly only 53% of the FDEs' opinions regarding the complexity of the signatures.
Apart from the above investigations, this study also addresses the debate that has arisen in recent years challenging the validity of forensic handwriting experts' skills, and the effort begun by interested parties in this sector to validate and standardise the field of forensic handwriting examination. This effort reveals that the forensic document analysis field meets all the factors set by the Daubert ruling in terms of proven theory, education, training, certification, falsifiability, error rate, peer review and publication, and general acceptance. However, innovative methods are needed for the development of the forensic document analysis discipline. The most modern and effective solution to prevent observational and emotional bias would be the development of an automated handwriting or signature analysis system, which would have many advantages in real-case scenarios. In addition, the significant role of computer-assisted handwriting analysis in the daily work of forensic document examiners (FDEs) and the judicial system is in agreement with the assessment of the National Research Council of the United States that "the scientific basis for handwriting comparison needs to be strengthened"; however, further research is required before these systems can accomplish this objective and overcome the legal obstacles presented in this study.