
    Design for novel enhanced weightless neural network and multi-classifier.

    Weightless neural systems have often struggled in terms of speed, performance, and memory. There is also a lack of sufficient interfacing of weightless neural systems to other systems. Addressing these issues motivates and forms the aims and objectives of this thesis. In addressing them, algorithms are formulated, classifiers and multi-classifiers are designed, and a hardware design of a classifier is also reported. Specifically, the purpose of this thesis is to report on the algorithms and designs of weightless neural systems. The background material for the research is a weightless neural network known as the Probabilistic Convergent Network (PCN). By introducing two new and different interfacing methods, the word "Enhanced" is added to PCN, giving the Enhanced Probabilistic Convergent Network (EPCN). To solve the problems of speed and performance when large-class databases are employed in data analysis, multi-classifiers are designed whose composition varies with problem complexity. This also leads to the introduction of a novel gating function, with EPCN applied as an intelligent combiner. For databases which are not very large, a single classifier suffices. Speed and ease of application in adverse conditions were the improvements sought, which led to the design of EPCN in hardware. A novel hashing function is implemented and tested on the hardware-based EPCN. The results obtained indicate the utility of employing weightless neural systems, and point to significant new areas in which they may be applied.
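
The abstract does not detail PCN/EPCN's internals, but the weightless (RAM-based) paradigm they belong to can be illustrated with a minimal WiSARD-style sketch; all class and parameter names below are illustrative, and this is not the PCN architecture itself:

```python
import random

class RamDiscriminator:
    """One class discriminator: RAM nodes addressed by random tuples of input bits."""
    def __init__(self, n_bits, tuple_size, seed=0):
        rng = random.Random(seed)
        bits = list(range(n_bits))
        rng.shuffle(bits)
        # Partition the input bits into fixed random tuples, one per RAM node.
        self.tuples = [bits[i:i + tuple_size] for i in range(0, n_bits, tuple_size)]
        self.rams = [set() for _ in self.tuples]  # each RAM remembers seen addresses

    def _addresses(self, x):
        return [tuple(x[i] for i in t) for t in self.tuples]

    def train(self, x):
        for ram, addr in zip(self.rams, self._addresses(x)):
            ram.add(addr)

    def score(self, x):
        # Count RAM nodes that recognise their address -- no weights, no arithmetic training.
        return sum(addr in ram for ram, addr in zip(self.rams, self._addresses(x)))

class WeightlessClassifier:
    """One discriminator per class; prediction goes to the highest-scoring one."""
    def __init__(self, n_bits, tuple_size, classes):
        self.discs = {c: RamDiscriminator(n_bits, tuple_size, seed=i)
                      for i, c in enumerate(classes)}

    def train(self, x, label):
        self.discs[label].train(x)

    def predict(self, x):
        return max(self.discs, key=lambda c: self.discs[c].score(x))

# Train on two complementary bit patterns and recall them.
clf = WeightlessClassifier(n_bits=8, tuple_size=2, classes=["A", "B"])
clf.train([1, 1, 1, 1, 0, 0, 0, 0], "A")
clf.train([0, 0, 0, 0, 1, 1, 1, 1], "B")
```

Training here is a single memorisation pass, which is why such systems can be fast and hardware-friendly, as the thesis exploits.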

    Predictive modelling of Loss Of Consciousness under general anaesthesia

    Final-year degree project in Biomedical Engineering. Facultat de Medicina i Ciències de la Salut, Universitat de Barcelona. Academic year: 2021-2022. Supervisor: Pedro L. Gambús

    Drawing, Handwriting Processing Analysis: New Advances and Challenges

    Drawing and handwriting are communicational skills that have been fundamental to geopolitical, ideological, and technological evolutions throughout history. Drawing and handwriting are still useful in defining innovative applications in numerous fields. In this regard, researchers have to solve new problems, such as those related to the manner in which drawing and handwriting can become an efficient way to command various connected objects, or to validating graphomotor skills as evident and objective sources of data useful in the study of human beings, their capabilities, and their limits from birth to decline.

    Analysis of microarray and next generation sequencing data for classification and biomarker discovery in relation to complex diseases

    This thesis presents an investigation into gene expression profiling, using microarray and next generation sequencing (NGS) datasets, in relation to multi-category diseases such as cancer. It has been established that if the sequence of a gene is mutated, it can result in the unscheduled production of protein, leading to cancer. However, identifying the molecular signature of different cancers amongst thousands of genes is complex. This thesis investigates tools that can aid the study of gene expression to infer useful information towards personalised medicine. For microarray data analysis, this study proposes two new techniques to increase the accuracy of cancer classification. In the first method, a novel optimisation algorithm, COA-GA, was developed by synchronising the Cuckoo Optimisation Algorithm and the Genetic Algorithm for data clustering in a shuffle setup, to choose the most informative genes for classification purposes. Support Vector Machine (SVM) and Multilayer Perceptron (MLP) artificial neural networks are utilised for the classification step. Results suggest this method can significantly increase classification accuracy compared to other methods. An additional method, involving a two-stage gene selection process, was also developed. In this method, a subset of the most informative genes is first selected by the Minimum Redundancy Maximum Relevance (MRMR) method. In the second stage, optimisation algorithms are used in a wrapper setup with SVM to minimise the number of selected genes whilst maximising classification accuracy. A comparative performance assessment suggests that the proposed algorithm significantly outperforms other methods at selecting fewer genes that are highly relevant to the cancer type, while maintaining a high classification accuracy.
In the case of NGS, a state-of-the-art pipeline for the analysis of RNA-Seq data is investigated to discover differentially expressed genes and differential exon usage between normal and AIP-positive Drosophila datasets, produced in-house at Queen Mary, University of London. The functional genomics of the differentially expressed genes was examined and found to be relevant to the case study under investigation. Finally, after normalising the RNA-Seq data, machine learning approaches similar to those used for the microarray data were successfully implemented for these datasets.
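
The two-stage selection idea can be sketched with scikit-learn on synthetic data; note that mutual-information ranking stands in for the MRMR criterion, and recursive feature elimination stands in for the thesis' optimisation-algorithm wrapper, so this is an assumption-laden illustration rather than the actual method:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, RFE
from sklearn.svm import SVC

# Synthetic "expression matrix": 100 samples x 200 genes, 10 truly informative.
X, y = make_classification(n_samples=100, n_features=200, n_informative=10,
                           random_state=0)

# Stage 1: filter -- rank genes by relevance to the class label
# (mutual information stands in for the MRMR criterion here).
relevance = mutual_info_classif(X, y, random_state=0)
top = np.argsort(relevance)[::-1][:50]  # keep the 50 most relevant genes

# Stage 2: wrapper -- shrink the subset further with a linear SVM
# (RFE stands in for the optimisation-algorithm wrapper of the thesis).
selector = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X[:, top], y)
selected = top[selector.support_]
```

The filter stage keeps the wrapper tractable: the expensive SVM refits only ever see the pre-ranked subset, not all 200 genes.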

    Definition and evaluation of a family of shape factors for off-line signature verification

    In a real situation, the choice of the best representation for the implementation of a signature verification system able to cope with all types of handwriting is a very difficult task. This study is original in that the design of the integrated classifiers is based on a large number of individual classifiers (or signature representations) in an attempt to overcome in some way the need for feature selection. In fact, the cooperation of a large number of classifiers is justified only if the cost of the individual classifiers is low enough. This is why the extended shadow code (ESC), used as a class of shape factors tailor-made for the signature verification problem, seems a good choice for the design of integrated classifiers E(x). In this article we propose an avenue towards solving the complex problem of defining a shape factor suited to the automatic verification of handwritten signatures. The signature coding obtained from the local projection of the trace onto the segments of a pattern M(γ) is a compromise between global approaches, where the silhouette of the signature is considered as a whole, and local approaches, where measurements are made on specific portions of the trace. Inspired by these two families of approaches, the ESC is in fact a hybrid approach that allows local measurements to be made on the shape without segmenting it into elementary primitives, a very difficult task in practice. This work focuses mainly on the influence of the resolution of the patterns used for coding the signature (through local projection of the trace), and on the definition of a multi-classifier system intended to make the performance of signature verification systems more robust.
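
The local-projection idea behind the ESC can be illustrated with a toy sketch. This is a deliberate simplification, not the paper's exact coding: the real ESC projects the stroke onto horizontal, vertical, and diagonal bars of a superimposed grid, whereas here only horizontal and vertical shadows are kept, and the function name and grid size are assumptions:

```python
import numpy as np

def shadow_code(img, grid=2):
    """Toy ESC-style coding: for each grid cell, record the fraction of the
    cell's columns and rows that the stroke 'shadows' (diagonal bars omitted)."""
    h, w = img.shape
    ch, cw = h // grid, w // grid
    code = []
    for i in range(grid):
        for j in range(grid):
            cell = img[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            code.append(cell.any(axis=0).mean())  # shadow on the horizontal bar
            code.append(cell.any(axis=1).mean())  # shadow on the vertical bar
    return np.array(code)

# A diagonal stroke shadows every row and column of the two cells it crosses.
sig = np.eye(8)
code = shadow_code(sig, grid=2)
```

Because the code measures projections rather than segmented primitives, it captures local shape without the error-prone stroke segmentation step the abstract mentions.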

    A New Approach to Automatic Saliency Identification in Images Based on Irregularity of Regions

    This research introduces an image retrieval system which is, in different ways, inspired by the human vision system. The main problems with existing machine vision systems and image understanding are studied and identified, in order to design a system that relies on human image understanding. The main improvement of the developed system is that it uses human attention principles in the process of image content identification. Human attention is represented by saliency extraction algorithms, which extract the salient regions, in other words the regions of interest. This work presents a new approach to saliency identification which relies on the irregularity of a region. Irregularity is clearly defined and measuring tools are developed; these measures are derived from the form and variation of the region with respect to the surrounding regions. Both local and global saliency have been studied, and appropriate algorithms were developed based on the local and global irregularity defined in this work. The need for suitable automatic clustering techniques motivated us to study the available clustering techniques and to develop a technique suitable for clustering salient points. Based on the fact that humans usually look at the region surrounding the gaze point, an agglomerative clustering technique is developed utilising the principles of blob extraction and intersection. Automatic thresholding was needed at different stages of the system development; therefore, a fuzzy thresholding technique was developed. Evaluation methods for saliency region extraction have been studied and analysed; subsequently, we developed evaluation techniques based on the extracted regions (or points) and compared them with the ground truth data. The proposed algorithms were tested against standard datasets and compared with existing state-of-the-art algorithms.
Both quantitative and qualitative benchmarking are presented in this thesis, and a detailed discussion of the results is included. The benchmarking showed promising results for the different algorithms. The developed algorithms have been utilised in designing an integrated saliency-based image retrieval system which uses the salient regions to give a description of the scene. The system auto-labels the objects in the image by identifying the salient objects and assigns labels based on the contents of a knowledge database. In addition, the system identifies the unimportant part of the image (the background) to give a full description of the scene.
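
The notion of region irregularity can be illustrated with a toy sketch. The thesis' actual local and global measures are not given in the abstract, so everything below is an assumed simplification: irregularity is taken as the deviation of each patch's mean intensity from the global mean:

```python
import numpy as np

def patch_irregularity(img, patch=4):
    """Score each non-overlapping patch by how far its mean intensity
    deviates from the image-wide mean of patch intensities."""
    h, w = img.shape
    ph, pw = h // patch, w // patch
    # Mean intensity of each patch, via a reshape into (rows, patch, cols, patch).
    means = img[:ph * patch, :pw * patch].reshape(ph, patch, pw, patch).mean(axis=(1, 3))
    return np.abs(means - means.mean())  # high deviation = irregular = salient

img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0                      # one bright blob on a dark background
sal = patch_irregularity(img, patch=4)   # 4x4 grid of irregularity scores
```

The blob's patch scores highest because it is the one region unlike its surroundings, which is the intuition the abstract describes.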

    Monitoring and fault diagnosis of rotating machines in the time-frequency domain using neural networks and fuzzy logic

    RÉSUMÉ In recent years, the monitoring and diagnosis of rotating machines have become an effective tool for detecting faults early and tracking their evolution over time. Machine maintenance requires a good understanding of the phenomena linked to the onset and development of faults. Detecting a fault at an early stage and following its evolution is of great industrial interest. There is a wide range of signal processing techniques applied to machine diagnosis, but the general opinion is that these techniques are not sufficiently effective and reliable. The economic interest of establishing an automatic method of predictive maintenance encourages research programmes in signal processing techniques. Signal processing techniques in the time and frequency domains can be used to identify and isolate faults in a rotating machine. Analysing the spectrum of a signal can help detect the onset of a fault, while decomposing that signal in time can reveal the nature and position of the fault. Although these techniques prove very useful in simple cases and allow a quick preliminary diagnosis, they present a number of drawbacks that can often lead to erroneous diagnoses. Locating the origin of shocks and modulation phenomena, and in particular of non-stationary or cyclo-stationary events, requires even more elaborate techniques based on three-dimensional (time-frequency-amplitude) analysis.
In practice, the vibration signatures measured with vibration sensors contain several components that are more or less useful for characterising the signal, making it difficult to interpret the results of these analyses. Faced with this growing complexity, scientific research has turned towards intelligent methods that represent the state of the machine in a high-dimensional space, in order to make that state easier to determine.----------ABSTRACT Machine monitoring and diagnosis using vibration analysis are effective tools for early fault detection and for the continuous tracking of fault evolution over time. Machine maintenance requires a good understanding of the phenomena related to the onset and development of faults. Detecting their occurrence at an early stage and following their evolution is of great interest. There is a wide range of signal processing techniques applied to machine diagnosis, but the general opinion is that these techniques are not sufficiently effective and reliable. The economic interest in developing an automatic method of predictive maintenance promotes research programmes in signal processing techniques. The objective of this thesis is to propose an intelligent detection system for locating, detecting, and even classifying (identifying) faults in rotating machinery components. Previously developed intelligent systems share the same characteristics: they require advanced knowledge in computer science and signal analysis to be extended and exploited, and the establishment of a complex process for their deployment and operation. The architecture proposed in this study for the smart detection prototype seeks to make its use and configuration as easy as possible, in order to minimise the associated costs.
In the first step of this study, we participated in the development of in-house software for signal processing in the time, frequency, time-frequency, and time-scale (wavelet) domains. To demonstrate the efficiency of this program, we carried out experimental tests using a test rig conceived especially for this purpose at École Polytechnique, as well as industrial tests to determine the main causes of damage to different components of rotating machinery. For this purpose, several linear and bilinear distributions provided by the software were used. For the industrial tests, the Choi-Williams representation was selected as the best among all the distributions for transforming temporal signals into the time-frequency domain; indeed, it exhibited the lowest interference and cross-term effects relative to the other representations. In this part of the study, we showed that most conventional methods, such as spectral analysis, are applicable to a single defect on simple machine components, and that none of these methods can provide a precise answer to all machine diagnostic problems. We also demonstrated that time-frequency representation is a solution that can bring many advantages and facilitate diagnosis. Indeed, the choice of a distribution in an industrial application depends on the problem concerned, and none of these distributions can accurately resolve all problems.
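
As a rough illustration of moving a vibration signal into the time-frequency domain, here is a minimal sketch using SciPy's STFT-based spectrogram. The thesis favoured the Choi-Williams distribution, which SciPy does not provide, so the STFT stands in; the signal is synthetic (a 50 Hz shaft line plus a short 200 Hz burst standing in for a transient fault):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000
t = np.arange(0, 1.0, 1 / fs)
# Toy vibration signal: a steady 50 Hz component plus a brief 200 Hz "fault" burst.
sig = np.sin(2 * np.pi * 50 * t)
sig[400:500] += 3 * np.sin(2 * np.pi * 200 * t[400:500])

f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128)
# The dominant frequency of each time slice localises the burst in both
# time and frequency -- the point of a time-frequency representation.
dominant = f[Sxx.argmax(axis=0)]
```

A global spectrum would show both frequencies but not when the burst occurred; the time-frequency map recovers that timing, mirroring the diagnostic argument above.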

    Character Recognition

    Character recognition is one of the most widely used pattern recognition technologies in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.

    Detecting semantic concepts in digital photographs: low-level features vs. non-homogeneous data fusion

    Semantic concepts, such as faces, buildings, and other real world objects, are the most preferred instrument that humans use to navigate through and retrieve visual content from large multimedia databases. Semantic annotation of visual content in large collections is therefore essential if ease of access and use is to be ensured. Classification of images into broad categories such as indoor/outdoor, building/non-building, urban/landscape, people/no-people, etc., allows us to obtain semantic labels without full knowledge of all objects in the scene. Inferring the presence of high-level semantic concepts from low-level visual features is a research topic that has been attracting a significant amount of interest lately. However, the power of low-level visual features alone has been shown to be limited when faced with the task of semantic scene classification in heterogeneous, unconstrained, broad-topic image collections. Multi-modal fusion, or the combination of information from different modalities, has been identified as one possible way of overcoming the limitations of single-mode approaches. In the field of digital photography, the incorporation of readily available camera metadata, i.e. information about the image capture conditions stored in the EXIF header of each image, along with GPS information, offers a way to move towards a better understanding of the imaged scene. In this thesis we focus on the detection of semantic concepts such as artificial text in video and large buildings in digital photographs, and examine how fusion of low-level visual features with selected camera metadata, using a Support Vector Machine as an integration device, affects the performance of the building detector on a genuine personal photo collection. We implemented two approaches to building detection that combine content-based and context-based information, and an approach to indoor/outdoor classification based exclusively on camera metadata.
An outdoor detection rate of 85.6% was obtained using camera metadata only. The first approach to building detection, based on simple edge orientation-based features extracted at three different scales, was tested on a dataset of 1720 outdoor images, with a classification accuracy of 88.22%. The second approach integrates the edge orientation-based features with the camera metadata-based features, both at the feature level and at the decision level. The fusion approaches have been evaluated on an unconstrained dataset of 8000 genuine consumer photographs. The experiments demonstrate that the fusion approaches outperform the visual-features-only approach by 2-3% on average regardless of the operating point chosen, while all the performance measures remain approximately 4% below the upper limit of performance. The early fusion approach consistently improves all performance measures.
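
The early (feature-level) fusion scheme described above can be sketched as follows. The feature values and the "building" label are synthetic stand-ins, and the modality dimensionalities are assumptions; only the structure (concatenate modalities, then classify with an SVM) reflects the thesis:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
# Stand-in low-level visual features: e.g. an 8-bin edge-orientation histogram.
visual = rng.random((n, 8))
# Stand-in EXIF-derived features: e.g. exposure time, aperture, flash fired.
metadata = rng.random((n, 3))
# Toy "building" label that depends on both modalities.
y = (visual[:, 0] + metadata[:, 0] > 1.0).astype(int)

# Early fusion: concatenate the two modalities into one feature vector,
# then let a single SVM act as the integration device.
fused = np.hstack([visual, metadata])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(fused, y)
acc = clf.score(fused, y)  # training accuracy on the toy data
```

Late (decision-level) fusion would instead train one classifier per modality and combine their outputs; the abstract reports experiments with both.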