51 research outputs found

    Learning Interpretable Rules for Scalable Data Representation and Classification

    Rule-based models, e.g., decision trees, are widely used in scenarios demanding high model interpretability, thanks to their transparent inner structures and good model expressivity. However, rule-based models are hard to optimize, especially on large data sets, due to their discrete parameters and structures. Ensemble methods and fuzzy/soft rules are commonly used to improve performance, but they sacrifice model interpretability. To obtain both good scalability and interpretability, we propose a new classifier, named Rule-based Representation Learner (RRL), that automatically learns interpretable non-fuzzy rules for data representation and classification. To train the non-differentiable RRL effectively, we project it to a continuous space and propose a novel training method, called Gradient Grafting, that can directly optimize the discrete model using gradient descent. A novel design of logical activation functions is also devised to increase the scalability of RRL and enable it to discretize continuous features end-to-end. Exhaustive experiments on ten small and four large data sets show that RRL outperforms competitive interpretable approaches and can be easily adjusted to trade off classification accuracy against model complexity for different scenarios. Our code is available at: https://github.com/12wang3/rrl. Comment: Accepted by IEEE TPAMI in October 2023; Interpretable ML; Neuro-Symbolic AI; preliminary conference version (NeurIPS 2021) available at arXiv:2109.1510
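
    The logical activation functions and the Gradient Grafting procedure are only named in this abstract, not defined. As a rough, hypothetical sketch of how a differentiable conjunction/disjunction layer over features in [0, 1] can be built (a generic construction, not the paper's exact RRL formulation), each rule node can multiply the features it selects:

```python
import numpy as np

def conjunction_layer(x, w):
    """Soft logical AND: each rule node multiplies the inputs it selects.

    x: (n_features,) feature values in [0, 1] (e.g., binarized features)
    w: (n_rules, n_features) 0/1 mask of which features each rule uses
    An output is close to 1 only when every selected feature is close to 1.
    """
    # 1 - w[i, j] * (1 - x[j]) equals x[j] where rule i uses feature j, else 1
    return np.prod(1.0 - w * (1.0 - x), axis=1)

def disjunction_layer(x, w):
    """Soft logical OR: 1 minus the product of the complements of selected inputs."""
    return 1.0 - np.prod(1.0 - w * x, axis=1)

x = np.array([1.0, 0.9, 0.1])            # three binarized features
w = np.array([[1, 1, 0],                 # rule 1: feature 0 AND feature 1
              [0, 1, 1]])                # rule 2: feature 1 AND feature 2
print(conjunction_layer(x, w))           # approximately [0.9, 0.09]
```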

    Structure Extraction in Printed Documents Using Neural Approaches

    This paper addresses the problem of layout and logical structure extraction from document images. Two classes of approaches are first studied and discussed in general terms: data-driven and model-driven. In the latter, specific approaches such as rule-based systems or formal grammars are usually applied to very stereotyped documents and give reasonable results, while in the former, artificial neural networks are often used on small patterns, with good results. Our understanding of these techniques leads us to believe that a hybrid model is a more appropriate solution for structure extraction. Based on this standpoint, we propose a Perceptive Neural Network approach that uses a static topology but possesses the characteristics of a dynamic neural network. Thanks to its transparency, it allows a better representation of the model elements and of the relationships between the logical and physical components. Furthermore, it possesses perceptive cycles that provide some capacity for data refinement and correction. Tested on several kinds of documents, it gives better results than a static Multilayer Perceptron.

    Finding Interpretable Class-Specific Patterns through Efficient Neural Search

    Discovering the patterns in data that best describe the differences between classes makes it possible to hypothesize and reason about class-specific mechanisms. In molecular biology, for example, this holds promise for advancing the understanding of cellular processes that differ between tissues or diseases, which could lead to novel treatments. To be useful in practice, methods that tackle the problem of finding such differential patterns have to be readily interpretable by domain experts and scalable to extremely high-dimensional data. In this work, we propose DiffNaps, a novel, inherently interpretable binary neural network architecture that extracts differential patterns from data. DiffNaps is scalable to hundreds of thousands of features and robust to noise, thus overcoming the limitations of current state-of-the-art methods in large-scale applications such as biology. We show on synthetic and real-world data, including three biological applications, that, unlike its competitors, DiffNaps consistently yields accurate, succinct, and interpretable class descriptions.

    Neuro-symbolic Models for Interpretable Time Series Classification using Temporal Logic Description

    Most existing time series classification (TSC) models lack interpretability and are difficult to inspect. Interpretable machine learning models can aid in discovering patterns in data and give easy-to-understand insights to domain specialists. In this study, we present Neuro-Symbolic Time Series Classification (NSTSC), a neuro-symbolic model that leverages signal temporal logic (STL) and neural networks (NNs) to accomplish TSC tasks using a multi-view data representation, and expresses the model as a human-readable, interpretable formula. In NSTSC, each neuron is linked to a symbolic expression, i.e., an STL (sub)formula. The output of NSTSC is thus interpretable as an STL formula akin to natural language, describing the temporal and logical relations hidden in the data. We propose an NSTSC-based classifier that adopts a decision-tree approach to learn formula structures and accomplish a multiclass TSC task. The proposed smooth activation functions for weighted STL (wSTL) allow the model to be learned in an end-to-end fashion. We test NSTSC on a real-world wound-healing dataset from mice and on benchmark datasets from the UCR time-series repository, demonstrating that NSTSC achieves performance comparable to state-of-the-art models. Furthermore, NSTSC can generate interpretable formulas that match domain knowledge.
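
    The smooth activation functions mentioned above are not given in this abstract. A common, generic way to make STL's min/max semantics differentiable, shown here only as an assumed relaxation rather than the NSTSC/wSTL definition, is the log-sum-exp soft maximum:

```python
import numpy as np

def soft_max(r, beta=10.0):
    """Smooth approximation of max over a window (STL 'eventually')."""
    return np.log(np.sum(np.exp(beta * np.asarray(r)))) / beta

def soft_min(r, beta=10.0):
    """Smooth approximation of min over a window (STL 'always')."""
    return -soft_max(-np.asarray(r), beta)

# Robustness of the atomic predicate x(t) > c on a short signal window;
# 'always(x > c)' and 'eventually(x > c)' become smooth aggregations of it.
x = np.array([1.2, 0.8, 1.5, 0.9])
c = 1.0
rho = x - c                      # positive where the predicate holds
print(soft_min(rho), soft_max(rho))
```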

    Observações em redes neuronais (Observations on Neural Networks)

    The many advances that machine learning, and especially its workhorse, deep learning, have provided to our society are undeniable. However, there is an increasing feeling that the field has become poorly understood, with researchers going as far as to draw an analogy with alchemy. There is a need for a deeper understanding of the tools being used, since otherwise one is only making progress in the dark, frequently relying on trial and error. In this thesis, we experiment with feedforward neural networks, trying to deconstruct the phenomena we observe and to find their root causes. We start by experimenting with a synthetic dataset. Using this toy problem, we find that the weights of trained networks show correlations that can be well understood from the structure of the data samples themselves. This insight may be useful in areas such as Explainable Artificial Intelligence, to explain why a model behaves the way it does. We also find that merely changing the activation function used in a layer may cause the nodes of the network to assume fundamentally different roles. This understanding may help to draw firm conclusions regarding the conditions under which Transfer Learning can be applied successfully. While testing on this problem, we also found that the initial configuration of a network's weights may, in some situations, ultimately determine the quality of the minimum (i.e., loss/accuracy) to which the network converges, more so than might initially be suspected. This observation motivated the remainder of our experiments. We continued our tests with the real-world datasets MNIST and HASYv2. We devised an initialization strategy, which we call the Dense sliced initialization, that combines the merits of a sparse initialization with those of a typical random initialization. Afterwards, we found that the initial configuration of a network's weights "sticks" throughout training, suggesting that training does not imply substantial updates; instead, it is, to some extent, a fine-tuning process. We saw this by training networks marked with letters and observing that those marks last throughout hundreds of epochs. Moreover, our results suggest that the small scale of the deviations caused by the training process is a fingerprint (i.e., a necessary condition) of training: as long as the training is successful, the marks remain visible. Based on these observations and our intuition for the reasons behind them, we developed what we call the Filter initialization strategy. It improved the training of the networks tested but, at the same time, worsened their generalization. Understanding the root cause of these observations may prove valuable for devising new initialization methods that generalize better.
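
    As a loose, hypothetical sketch of the marking experiment described above (the thesis's actual networks, datasets, and training procedure are not reproduced; small random perturbations stand in for SGD updates), one can stamp a letter-shaped pattern onto an initial weight matrix and track how visible it remains:

```python
import numpy as np

rng = np.random.default_rng(0)

# A crude 5x5 "L"-shaped mask standing in for the letter marks described above.
mark = np.array([[1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 0, 0, 0, 0],
                 [1, 1, 1, 1, 1]], dtype=float)

# Stamp the mark onto a slice of an otherwise random initial weight matrix.
W = rng.normal(0.0, 0.1, size=(20, 20))
W[:5, :5] += 0.5 * mark

def mark_visibility(W, mark):
    """Correlation between the marked slice and the mask: near 1 means still visible."""
    return np.corrcoef(W[:5, :5].ravel(), mark.ravel())[0, 1]

print(mark_visibility(W, mark))      # high right after initialization

# Stand-in for many small training updates: the mark survives if updates stay small.
for _ in range(200):
    W += rng.normal(0.0, 0.001, size=W.shape)
print(mark_visibility(W, mark))      # still high after the small perturbations
```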

    Applicability and Interpretability of Logical Analysis of Data in Condition Based Maintenance

    This thesis studies the applicability and adaptability of a data-mining artificial intelligence approach called Logical Analysis of Data (LAD), proposed in [Hammer, 1986], to diagnostic applications in Condition Based Maintenance (CBM). Most of the technologies used so far for decision support in CBM tend to automate the diagnostic process without offering any added knowledge that could be helpful to the maintenance operation and maintenance personnel. LAD possesses two key advantages over other decision-making technologies used in CBM: (1) it is a non-statistical approach, so no statistical assumptions are required for the input data, and (2) it generates interpretable patterns that could help solve maintenance problems.
    A study on the implementation of LAD in CBM is presented in this research, whose objectives are to (1) study the applicability of LAD in different CBM situations requiring special considerations regarding the types of input data and maintenance decisions, (2) adapt the LAD methodology to the particular requirements that arise from these applications, and (3) improve the LAD methodology in order to increase diagnostic accuracy and result interpretability. The novelty of the research presented in this thesis is (1) the application of LAD to CBM, for the first time, in applications that stand to benefit from the advantages this technology provides, and (2) innovative modifications to the LAD methodology, particularly in the area of pattern generation, in order to improve its performance within the context of CBM. The research followed an evolutionary approach to achieve the objectives stated above, applying LAD in three applications: (1) the detection of Rogue components within the spare-part inventory of repairable components at a commercial airline, (2) the detection and identification of faults in power transformers using dissolved gas analysis (DGA), and (3) the detection of faults in rotor bearings using vibration signals. This research concludes that LAD is a promising decision-making approach that adds important benefits to the implementation of CBM in industry.
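
    For illustration only: a (pure) LAD pattern is commonly understood as a conjunction of binarized conditions that covers some observations of one class and none of the other; the thesis's actual pattern-generation algorithms are more elaborate. A minimal sketch of that coverage check, with made-up data:

```python
import numpy as np

def is_pure_positive_pattern(literals, X_pos, X_neg):
    """Check whether a conjunction of literals is a pure positive pattern.

    literals: dict {feature_index: required_binary_value}
    X_pos, X_neg: binarized observations (one row per observation) per class
    A pure positive pattern covers at least one positive and no negatives.
    """
    def covers(X):
        mask = np.ones(len(X), dtype=bool)
        for j, v in literals.items():
            mask &= (X[:, j] == v)
        return mask

    return bool(covers(X_pos).any() and not covers(X_neg).any())

# Toy binarized condition-monitoring data: rows are observations.
X_faulty = np.array([[1, 0, 1], [1, 1, 1]])
X_healthy = np.array([[0, 0, 1], [1, 1, 0]])
print(is_pure_positive_pattern({0: 1, 2: 1}, X_faulty, X_healthy))  # True
```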

    Artificial neural network and its applications in quality process control, document recognition and biomedical imaging

    In a computer-vision-based system, a digital image obtained by a digital camera is usually a 24-bit color image. Analyzing an image with that many levels may require complicated image-processing techniques and higher computational cost. In real-time applications, where a part has to be inspected within a few milliseconds, we either have to reduce the image to a more manageable number of gray levels, usually two (a binary image), while retaining all necessary features of the original image, or develop a more complicated technique. A binary image can be obtained by thresholding the original image into two levels; thresholding a given image into a binary image is therefore a necessary step for most image analysis and recognition techniques. In this thesis, we study the effectiveness of using artificial neural networks (ANNs) for image thresholding and classification in pharmaceutical, document recognition, and biomedical imaging applications, and we develop edge-based, ANN-based, and region-growing-based image thresholding techniques to extract low-contrast objects of interest and classify them into their respective classes in these applications. Real-time quality inspection of gelatin capsules in pharmaceutical applications is an important issue for industry productivity and competitiveness. A computer-vision-based automatic quality inspection and control system is one solution to this problem: machine vision systems provide quality control and real-time feedback for industrial processes, overcoming physical limitations and the subjective judgment of humans. In this thesis, we develop an image-processing system using edge-based thresholding techniques for quality inspection that satisfies the industrial requirements of pharmaceutical applications for passing accepted capsules and rejecting defective ones. In document recognition, the success of OCR depends largely on the quality of the thresholded image; non-uniform illumination, low contrast, and complex backgrounds make this application challenging. We propose optimal parameters for an ANN-based local thresholding approach for grayscale composite document images with non-uniform backgrounds. An exhaustive search was conducted to select the optimal features, finding that pixel value, local mean, and entropy are the most significant features at a 3x3 window size in this application. For other applications the optimal features may differ, but the procedure for finding them is the same. The average recognition rate of 99.25% shows that the proposed three features at a 3x3 window size are optimal in terms of recognition rate and PSNR compared with the ANN-based thresholding techniques with different parameters reported in the literature. In biomedical imaging, breast cancer continues to be a public health problem. We present a computer-aided diagnosis (CAD) system for mass detection and classification in digitized mammograms, which performs mass detection on regions of interest (ROIs) followed by benign-malignant classification of the detected masses. A three-layer ANN with seven features is proposed for classifying the marked regions as benign or malignant, achieving 90.91% sensitivity and 83.87% specificity, which is promising compared with a radiologist's sensitivity of 75%.
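
    A minimal sketch of extracting the three per-pixel features named above (pixel value, local mean, and local entropy in a 3x3 window); the thesis's actual network architecture and training are not reproduced, and the histogram binning used for entropy here is an assumption:

```python
import numpy as np

def local_features(img, w=3):
    """Per-pixel inputs for an ANN thresholder: pixel value, local mean, local entropy.

    img: 2-D grayscale array with values in 0..255; w: odd window size (3 here).
    Returns an (H, W, 3) feature array; a small classifier would then map each
    feature vector to a foreground/background decision for that pixel.
    """
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    H, W = img.shape
    feats = np.zeros((H, W, 3))
    for i in range(H):
        for j in range(W):
            win = padded[i:i + w, j:j + w]
            hist, _ = np.histogram(win, bins=16, range=(0, 256))
            p = hist[hist > 0] / hist.sum()
            entropy = -np.sum(p * np.log2(p))
            feats[i, j] = (img[i, j], win.mean(), entropy)
    return feats

img = (np.random.default_rng(0).random((8, 8)) * 255).astype(np.uint8)
print(local_features(img).shape)   # (8, 8, 3)
```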

    Neural Networks for Document Image and Text Processing

    Nowadays, the main libraries and document archives are investing considerable effort in digitizing their collections. Indeed, most of them are scanning the documents and publishing the resulting images without their corresponding transcriptions, which seriously limits the possibilities for exploiting these documents. When a transcription is necessary, it is performed manually by human experts, which is a very expensive and error-prone task; obtaining transcriptions of the required quality demands the intervention of human experts to review and correct the output of the recognition engines. To this end, it is extremely useful to provide interactive tools for obtaining and editing the transcription. Although text recognition is the final goal, several previous steps (known as preprocessing) are necessary to obtain a good transcription from a digitized image. Document cleaning, enhancement, and binarization (where needed) are the first stages of the recognition pipeline. Historical handwritten documents, in addition, show degradations, stains, ink bleed-through, and other artifacts, so more sophisticated and elaborate methods are required when dealing with these kinds of documents, and in some cases expert supervision is needed. Once the images have been cleaned, the main zones of the image have to be detected: those that contain text, and other parts such as images, decorations, and versal letters. Moreover, the relations among them and with the final text have to be detected. These preprocessing steps are critical for the final performance of the system, since an error at this point will be propagated through the rest of the transcription process. The ultimate goal of the Document Image Analysis pipeline is to obtain the transcription of the text (Optical Character Recognition and Handwritten Text Recognition). In this thesis we aimed to improve the main stages of the recognition pipeline, from the scanned documents as input to the final transcription. We focused our effort on applying neural networks and deep learning techniques directly to the document images to extract suitable features for the different tasks addressed in this work: image cleaning and enhancement (document image binarization), layout extraction, text line extraction, text line normalization, and finally decoding (text line recognition). As one can see, this work focuses on incremental improvements across the several Document Image Analysis stages, but it also addresses some of the real challenges: historical manuscripts and documents without clear layouts, or very degraded documents. Neural networks are a central topic of the work collected in this document. Different convolutional models have been applied to document image cleaning and enhancement. Connectionist models have also been used for text line extraction: first, for detecting interest points, combining them into text segments, and finally extracting the lines by means of aggregation techniques; and second, for pixel labeling to extract the main body area of the text and then the limits of the lines. For text line preprocessing, i.e., normalizing the text lines before recognizing them, similar models have been used to detect the main body area and then height-normalize the images, giving more importance to the central area of the text. Finally, Convolutional Neural Networks and deep multilayer perceptrons have been combined with hidden Markov models to improve our transcription engine significantly.
    The suitability of all these approaches has been tested on different corpora for each of the stages addressed, giving competitive results for most of the methodologies presented.
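
    As a minimal, hypothetical stand-in for the kind of convolutional cleaning/binarization model described above (not the thesis's actual architectures, training data, or the later hidden-Markov-model hybrid), a tiny fully convolutional network can map a page crop to per-pixel ink probabilities:

```python
import torch
import torch.nn as nn

# Grayscale page crop in, per-pixel ink probability out (same spatial size).
binarizer = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1), nn.Sigmoid(),
)

page = torch.rand(1, 1, 64, 64)      # stand-in for a scanned page crop (batch, channel, H, W)
ink_prob = binarizer(page)           # values in (0, 1)
binary = (ink_prob > 0.5).float()    # thresholded, i.e., a cleaned/binarized image
print(binary.shape)                  # torch.Size([1, 1, 64, 64])
```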
    Pastor Pellicer, J. (2017). Neural Networks for Document Image and Text Processing [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90443