
    Combining neural networks and clustering techniques for object recognition in indoor video sequences

    This paper presents the results of a real experiment in object recognition on a sequence of images captured by a mobile robot in an indoor environment. Objects are represented simply as an unstructured set of spots (image regions) per frame, obtained from an image segmentation algorithm applied to the whole sequence. In previous work, neural networks were used to classify the spots independently as belonging to one of the objects of interest or to the background, based on different spot features (color, size, and invariant moments). In this work, clustering techniques are applied afterwards, taking into account both the neural net outputs (class probabilities) and geometrical data (spot mass centers). In this way, context information is exploited to improve classification performance. The experimental results of this combined approach are quite promising and better than those obtained using the neural nets alone.
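The combination described in the abstract (per-spot class probabilities plus spot mass centers) can be sketched as a context-aware relabelling step: group spatially close spots, then let each group vote with its averaged class probabilities. The grouping rule, the radius, and the two-class setup below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def refine_labels(centers, probs, radius=1.0):
    """Group spots whose mass centers lie within `radius` of each other
    (single-link clustering via union-find), then re-label every spot in
    a group with the group's averaged class probabilities."""
    centers = np.asarray(centers, float)
    probs = np.asarray(probs, float)
    n = len(centers)

    # union-find with path halving
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # link every pair of spots closer than `radius`
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) <= radius:
                parent[find(i)] = find(j)

    # each spot inherits the dominant class of its group's mean probabilities
    labels = np.empty(n, int)
    for root in {find(i) for i in range(n)}:
        members = [i for i in range(n) if find(i) == root]
        labels[members] = probs[members].mean(0).argmax()
    return labels
```

In this sketch a spot whose individual classification disagrees with its spatial neighbours is overruled by the group vote, which is the kind of context exploitation the abstract describes.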

    Neural Vascular Mechanism for the Cerebral Blood Flow Autoregulation after Hemorrhagic Stroke


    Passive Electric Field Sensing for Ubiquitous and Environmental Perception

    Electric Field Sensing plays an important role in the research branches of Environmental Perception and Ubiquitous Computing. Environmental Perception aims to collect data about the surroundings, while Ubiquitous Computing aims to make computing available at any time; this includes integrating sensors that perceive environmental influences in an unobtrusive way. Electric Field Sensing, also referred to as Capacitive Sensing, is a widely used sensing modality in these research fields, for example to detect the presence of persons or to locate touches and interactions on user interfaces. Electric Field Sensing has a number of advantages over other technologies, such as not requiring a direct line of sight to the object being sensed and allowing compact sensing systems. These advantages facilitate high integrability and allow the collection of data as required in Environmental Perception, as well as the invisible incorporation into a user's environment needed in Ubiquitous Computing. However, several disadvantages are often attributed to Capacitive Sensing, such as a sensing range of only a few centimeters and the need to generate electric fields, which wastes energy and complicates implementation. As shown in this thesis, these drawbacks affect only a subset of this sensing technology, namely the subcategory of active capacitive measurements. This thesis therefore focuses on the largely open area of Passive Electric Field Sensing in the context of Ubiquitous Computing and Environmental Perception, since active Capacitive Sensing already receives considerable research attention. The thesis is structured around three main research questions. First, I address whether and how Passive Electric Field Sensing can be made available in a cost-effective and simple manner. To this end, I present various techniques for reducing installation costs and simplifying the handling of these sensor systems. Second, I examine for which applications passive electric field sensor technology is suitable at all, presenting several fields of application in which Passive Electric Field Sensing data can be collected. Third, taking these fields of application into account, the work is dedicated to optimizing Passive Electric Field Sensing for these use cases. For this purpose, different, already known signal processing methods are investigated for their applicability to Passive Electric Field sensor data. Beyond these software optimizations, hardware optimizations for the improved use of the technology are also presented.
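As one illustrative example of the kind of signal processing such passive sensor data invites, the sketch below removes the slow baseline from a single-channel electrode signal with a moving average and flags presence events where the residual exceeds a robust noise estimate. The window length, threshold factor, and MAD-based noise estimate are my assumptions, not methods taken from the thesis.

```python
import numpy as np

def detect_events(signal, fs, win_s=0.5, k=4.0):
    """Flag presence events in a passive electrode signal: subtract the
    slow baseline (moving average over `win_s` seconds) and threshold
    the residual at `k` times a robust noise estimate (scaled MAD)."""
    win = max(1, int(win_s * fs))
    kernel = np.ones(win) / win
    baseline = np.convolve(signal, kernel, mode="same")  # slow drift
    residual = signal - baseline
    # median absolute deviation, scaled to approximate a std. deviation
    mad = np.median(np.abs(residual - np.median(residual)))
    thresh = k * 1.4826 * mad + 1e-12
    return np.abs(residual) > thresh
```

A MAD-based threshold is used instead of a plain standard deviation so that the events themselves do not inflate the noise estimate.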

    Table recognition in mathematical documents

    While a number of techniques have been developed for table recognition in ordinary text documents, these techniques are often ineffective for tables in mathematical documents, as tables containing mathematical structures can differ quite significantly from ordinary text tables. In fact, it is difficult even to clearly distinguish table recognition in mathematics from layout analysis of mathematical formulas, and it is not straightforward to adapt general layout analysis techniques to mathematical formulas. However, a reliable understanding of formula layout is often a necessary prerequisite for further semantic interpretation of the represented formulae. In this thesis, we present the necessary preprocessing steps towards a table recognition technique that specialises in tables in mathematical documents. It is based on our novel robust line recognition technique for mathematical expressions, which is fully independent of understanding the content or specialist fonts of expressions. We also present a graph representation for complex mathematical table structures. A set of rewriting rules applied to the graph allows for reliable re-composition of cells in order to identify several valid table interpretations. We demonstrate the effectiveness of our technique by applying it to a set of mathematical tables from standard textbooks that has been manually ground-truthed.
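A rewriting rule over a cell graph of the kind described above might look as follows. The specific rule (merge a cell with the one below it when its text ends in a binary operator, suggesting a wrapped formula) and the dictionary-based graph encoding are hypothetical simplifications for illustration, not the thesis's actual rule set.

```python
def merge_wrapped_cells(cells, below):
    """One illustrative rewriting rule for cell re-composition: if a
    cell's text ends with a binary operator, merge it with the cell
    directly beneath it, since the formula likely wraps across rows.
    `cells` maps cell-id -> text; `below` maps cell-id -> id of the
    cell underneath (or None). Rules are applied until none fires."""
    ops = ("+", "-", "=", "*", "/")
    changed = True
    while changed:
        changed = False
        for cid, text in list(cells.items()):
            nxt = below.get(cid)
            if nxt in cells and text.rstrip().endswith(ops):
                # rewrite: absorb the lower cell and inherit its successor
                cells[cid] = text.rstrip() + " " + cells.pop(nxt).lstrip()
                below[cid] = below.get(nxt)
                changed = True
                break
    return cells
```

Applying such rules repeatedly until a fixed point is reached yields one candidate table interpretation; different rule orderings can yield the several valid interpretations the abstract mentions.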

    Contributions to information extraction for Spanish written biomedical text

    Healthcare practice and clinical research produce vast amounts of digitised, unstructured data in multiple languages that are currently underexploited, despite their potential applications in, for example, improving healthcare experiences, supporting trainee education, or enabling biomedical research. To automatically transform those contents into relevant, structured information, advanced Natural Language Processing (NLP) mechanisms are required. In NLP, this task is known as Information Extraction. Our work takes place within this growing field of clinical NLP for the Spanish language, as we tackle three distinct problems. First, we compare several supervised machine learning approaches to the problem of sensitive data detection and classification. Specifically, we study the different approaches and their transferability on two corpora, one synthetic and the other authentic. Second, we present and evaluate UMLSmapper, a knowledge-intensive system for biomedical term identification based on the UMLS Metathesaurus. This system recognises and codifies terms without relying on annotated data or external Named Entity Recognition tools. Although technically naive, it performs on par with more evolved systems and does not deviate considerably from approaches that rely on oracle terms. Finally, we present and exploit a new corpus of real health records manually annotated with negation and uncertainty information: NUBes. This corpus is the basis for two sets of experiments, one on cue and scope detection and the other on assertion classification. Throughout the thesis, we apply and compare techniques of varying levels of sophistication and novelty, reflecting the rapid advancement of the field.
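Dictionary-based term identification of the kind UMLSmapper performs can be illustrated with a greedy longest-match over a toy lexicon. The function, the lexicon, and the whitespace tokenization below are illustrative stand-ins, not the actual UMLSmapper implementation.

```python
def match_terms(tokens, lexicon):
    """Greedy longest-match identification of lexicon terms in a token
    sequence; returns (start, end, term) spans. A toy stand-in for
    knowledge-intensive term identification without annotated data."""
    # longest multiword entry bounds the search window
    max_len = max((len(t.split()) for t in lexicon), default=0)
    spans, i = [], 0
    while i < len(tokens):
        # try the longest candidate first, shrink until a match is found
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + n]).lower()
            if cand in lexicon:
                spans.append((i, i + n, cand))
                i += n
                break
        else:
            i += 1  # no term starts here
    return spans
```

In a real system the matched span would then be codified, i.e. mapped to a concept identifier in the terminology rather than returned as plain text.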

    Estimation of Distribution Algorithms and Minimum Relative Entropy

    In the field of optimization using probabilistic models of the search space, this thesis identifies and elaborates several advancements in which the principles of maximum entropy and minimum relative entropy from information theory are used to estimate a probability distribution. The probability distribution within the search space is represented by a graphical model (factorization, Bayesian network or junction tree). An estimation of distribution algorithm (EDA) is an evolutionary optimization algorithm which uses a graphical model to sample a population within the search space and then estimates a new graphical model from the selected individuals of the population. - So far, the Factorized Distribution Algorithm (FDA) builds a factorization or Bayesian network from a given additive structure of the objective function to be optimized using a greedy algorithm which considers only a subset of the variable dependencies; important connections can be lost by this method. This thesis presents a heuristic subfunction merge algorithm which is able to consider all dependencies between the variables (as long as the marginal distributions of the model do not become too large). On a 2-D grid structure, this algorithm builds a pentavariate factorization which makes it possible to solve the deceptive grid benchmark problem with a much smaller population size than the conventional factorization. Especially for small population sizes, calculating large marginal distributions from smaller ones using maximum entropy and iterative proportional fitting leads to a further improvement. - The second topic is the generalization of graphical models to loopy structures. Using the Bethe-Kikuchi approximation, the loopy graphical model (region graph) can learn the Boltzmann distribution of an objective function by a generalized belief propagation algorithm (GBP). It minimizes the free energy, a notion adopted from statistical physics which is equivalent to the relative entropy to the Boltzmann distribution. Previous attempts to combine the Kikuchi approximation with EDA have relied on an expensive Gibbs sampling procedure for generating a population from this loopy probabilistic model. In this thesis a combination with a factorization is presented which allows more efficient sampling. The free energy is generalized to incorporate the inverse temperature β. The factorization-building algorithm mentioned above can be employed here, too. The dynamics of GBP are investigated, and the method is applied to Ising spin glass ground state search. Small instances (7 x 7) are solved without difficulty. Larger instances (10 x 10 and 15 x 15) do not converge to the true optimum with large β, but sampling from the factorization can find the optimum with about 1000-10000 sampling attempts, depending on the instance. If GBP does not converge, it can be replaced by a concave-convex procedure which guarantees convergence. - Third, if no probabilistic structure is given for the objective function, a Bayesian network can be learned to capture the dependencies in the population. The relative entropy between the population-induced distribution and the Bayesian network distribution is equivalent to the log-likelihood of the model. The log-likelihood has been generalized to the BIC/MDL score, which reduces overfitting by penalizing complicated structures of the Bayesian network. A previous information-theoretic analysis of BIC/MDL in the context of EDA is continued, and empirical evidence is given that the method is able to learn the correct structure of an objective function, given a sufficiently large population. - Finally, a way to reduce the search space of EDA is presented by combining it with a local search heuristic.
The Kernighan-Lin hillclimber, known originally for the traveling salesman problem and graph bipartitioning, is generalized to arbitrary binary problems. It can be applied in a stand-alone manner, as an iterative 1+1 search algorithm, or combined with EDA. On the MAXSAT problem it performs on a similar scale to the specialized SAT solver Walksat. An analysis of the Kernighan-Lin local optima indicates that the combination with an EDA is favorable. The thesis shows how evolutionary optimization can be improved using interdisciplinary results from information theory, statistics, probability calculus and statistical physics. The principles of information theory for estimating probability distributions are applicable in many areas. EDAs are a good application because an improved estimation directly affects the optimization success.
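The EDA loop the abstract describes (sample a population from a probabilistic model, select the best individuals, re-estimate the model from them) can be illustrated with its simplest instance: a univariate marginal model (UMDA) on the OneMax toy problem. The thesis itself uses much richer models (factorizations, Bayesian networks, region graphs), so this is only a sketch of the shared loop, with illustrative parameter choices.

```python
import numpy as np

def umda_onemax(n=30, pop=100, sel=0.5, gens=60, seed=0):
    """Minimal univariate EDA (UMDA) on OneMax: sample a binary
    population from independent per-bit marginals, select the best
    fraction, and re-estimate the marginals from the selected set."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                               # initial marginal model
    for _ in range(gens):
        X = (rng.random((pop, n)) < p).astype(int)    # sample population
        fitness = X.sum(1)                            # OneMax: count of ones
        elite = X[np.argsort(fitness)[-int(sel * pop):]]  # truncation selection
        # re-estimate the model; clipping keeps residual diversity
        p = np.clip(elite.mean(0), 0.02, 0.98)
    return p
```

Replacing the independent marginals with a factorization or Bayesian network, as in the thesis, changes only the model-estimation and sampling steps of this loop.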

    Image Classification of High Variant Objects in Fast Industrial Applications

    Recent advances in machine learning and image processing have expanded the applications of computer vision in many industries. In industrial applications, image classification is a crucial task, and high-variant objects present difficult problems because of their variety and constantly changing attributes. Computer vision algorithms can function effectively in complex environments, working alongside human operators to enhance efficiency and data accuracy. However, many industries still face automation difficulties that have not yet been properly solved and put into practice; they need more accurate, more convenient, and faster methods. These needs drove my interest in combining multiple learning strategies, as well as sensors and image formats, to enable the use of computer vision for these applications. The motivation for this work is to answer a number of research questions that aim to mitigate problems currently hindering practical application, and the work therefore presents solutions that contribute to enabling such applications. I demonstrate why standard methods cannot simply be applied to an existing problem: each method must be customized to the specific application scenario in order to obtain a working solution. One example is face recognition, where classification performance is crucial for the system's ability to correctly identify individuals. Additional features would allow higher accuracy, robustness, and safety, and would make presentation attacks more difficult; the detection of attempted attacks is critical for the acceptance of such systems and significantly impacts the applicability of biometrics. Another application is tailgating detection at automated entrance gates. Especially in high-security environments it is important to prevent authorized persons from taking an unauthorized person into the secured area. A plethora of technologies seem potentially suitable, but several practical factors increase or decrease their applicability depending on which method is used. The third application covered in this thesis is the classification of textiles when they are not spread out. Finding certain properties on them is complex, as these properties might lie inside a fold or differ in appearance because of shadows and position. The first part of this work provides an in-depth analysis of the three individual applications, including the background information needed to understand the research topic and its proposed solutions, and covers the state of the art for all researched applications. In the second part, methods are presented that facilitate or enable the industrial applicability of the presented applications. New image databases are first presented for all three application areas. In the case of biometrics, three methods that identify and improve specific performance parameters are shown, including how melanin face pigmentation (MFP) features can be extracted and used for classification in face recognition and presentation attack detection (PAD) applications. In the entrance control application, the focus is on the sensor information, with six methods presented in detail. This includes the use of thermal images to detect humans based on their body heat, depth images in the form of RGB-D images and 2D image series, as well as data from a floor-mounted sensor grid. For textile defect detection, several methods and a novel classification procedure in free fall are presented. In summary, this work examines computer vision applications for their practical industrial applicability and presents solutions to mitigate the identified problems. In contrast to previous work, the proposed approaches are (a) effective in improving classification performance, (b) fast in execution, and (c) easily integrated into existing processes and equipment.
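The thermal-image approach to entrance control mentioned above can be sketched as warm-blob counting: more than one connected warm region inside the gate area would suggest tailgating. The temperature threshold, minimum blob size, and 4-neighbour flood fill below are illustrative assumptions, not the thesis's actual method.

```python
import numpy as np
from collections import deque

def count_warm_blobs(frame, temp_thresh=30.0, min_pixels=4):
    """Count connected warm regions in a thermal frame (degrees Celsius);
    more than one blob inside a gate area hints at possible tailgating."""
    mask = np.asarray(frame) > temp_thresh
    seen = np.zeros_like(mask, bool)
    blobs = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # BFS flood fill over the 4-neighbourhood of this warm pixel
        queue, size = deque([(sy, sx)]), 0
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            size += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if size >= min_pixels:     # ignore isolated hot pixels (noise)
            blobs += 1
    return blobs
```

A production system would of course fuse this with the other modalities the thesis lists (RGB-D, image series, floor sensors) rather than rely on a single threshold.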