    Process analytical technology in food biotechnology

    Biotechnology is a field in which precision and reproducibility are vital, because its products are very often foods, pharmaceuticals or cosmetics and therefore in close contact with humans. To avoid human error during production and quality evaluation, and to ensure optimal utilization of raw materials, a high degree of automation is desired. The tools used in the food and chemical industries to reach this higher degree of automation are summarized under an initiative called Process Analytical Technology (PAT). The aim of PAT is to provide reliable new measurement technologies for describing processes and realizing closed-loop control. Biotechnological processes are among the most demanding with regard to control, because their rate-determining component is very often biological. Most important for any automation attempt is deep process knowledge, which can only be achieved via appropriate measurements. These measurements can either be carried out directly, measuring a crucial physical, chemical or biological quantity, or, where that quantity is inaccessible due to a lack of technology or a complicated sample state, via a soft-sensor. Even after several years, the ideal aim of the PAT initiative has not been fully implemented in industry and in many production processes. On the one hand, much effort still needs to be put into the development of more general algorithms that are easier to implement and, above all, more reliable. On the other hand, algorithms, control strategies and original approaches for a novel sensor and a soft-sensor have been presented that show great potential, yet not all available advances in this field are employed: potential users tend to stick to approved methods and must set aside their reservations towards new technologies.
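    The soft-sensor idea above, inferring an inaccessible quantity from signals that are easy to measure online, can be illustrated with a minimal data-driven sketch. This is not a method from the work itself: the choice of ridge regression, the signal names and all numbers are illustrative assumptions, with synthetic data standing in for historical batch records.

        # Minimal soft-sensor sketch (illustrative, not the work's method):
        # calibrate a regression offline on batches where the hard-to-measure
        # quantity (here, hypothetically, biomass) was assayed, then estimate
        # it online from easily measured signals only.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)

        # Synthetic stand-in for historical batch records: online signals are
        # off-gas CO2 [%], substrate feed rate [L/h] and pH (assumed names).
        n = 200
        online = rng.uniform([0.5, 0.0, 6.5], [5.0, 2.0, 7.5], size=(n, 3))
        # Toy ground truth: biomass correlates with CO2 evolution and feeding.
        biomass = 2.0 * online[:, 0] + 1.5 * online[:, 1] + rng.normal(0.0, 0.1, n)

        soft_sensor = Ridge(alpha=1.0).fit(online, biomass)  # offline calibration

        # Online phase: the estimate replaces the inaccessible direct measurement
        # and can be fed to a closed-loop controller.
        reading = np.array([[3.2, 1.1, 7.0]])
        print(f"estimated biomass: {soft_sensor.predict(reading)[0]:.2f} g/L")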

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdiscipline between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in sensor equipment, novel filter designs, and viable information-processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book collects representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Vehicle Tracking in Occlusion and Clutter

    Vehicle tracking in environments containing occlusion and clutter is an active research area. Tracking vehicles through such environments presents a variety of challenges: track initialization, tracking an unknown number of targets, and variations in real-world lighting, scene conditions and camera vantage, with scene clutter and target occlusion adding further difficulty. A stochastic framework is proposed which allows vehicle tracks to be identified from a sequence of images. The work focuses on identifying vehicle tracks in transportation scenes, namely vehicle movements at intersections. The framework combines background subtraction and motion-history-based approaches to address the segmentation problem. The tracking problem is solved using a Markov Chain Monte Carlo Data Association (MCMCDA) method, which introduces the novel notion of discrete, independent regions into the MCMC scoring function. Results are presented which show that the framework is capable of tracking vehicles in scenes containing multiple vehicles that occlude one another, and that are occluded by foreground scene objects.
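    The segmentation front-end described above can be sketched generically with OpenCV. The snippet below is not the thesis' exact pipeline, which also fuses motion history and feeds an MCMCDA tracker; the video file name and the thresholds are illustrative assumptions.

        # Generic background-subtraction front-end for vehicle detection.
        import cv2

        cap = cv2.VideoCapture("intersection.mp4")  # hypothetical input video
        backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = backsub.apply(frame)  # per-pixel foreground mask
            # Morphological opening suppresses small clutter responses.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            # Bounding boxes of large blobs are the candidate vehicle
            # detections handed to the data-association stage.
            boxes = [cv2.boundingRect(c) for c in contours
                     if cv2.contourArea(c) > 400]

        cap.release()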

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally derive an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must deliver high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite a certain level of development, image deblurring, especially the blind case, remains limited by complex application conditions that make the blur kernel hard to obtain and spatially variant. This review provides a holistic understanding of and deep insight into image deblurring, together with an analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions.
    Comment: 53 pages, 17 figures
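    The problem setting shared by the reviewed methods is the observation model y = k * x + n, where x is the latent sharp image, k the blur kernel and n noise. As a minimal non-blind illustration (kernel known; not a method from the survey itself), the classical Richardson-Lucy algorithm from scikit-image recovers an estimate of x. The box kernel and noise level below are arbitrary assumptions.

        # Non-blind deblurring under y = k * x + n with a known kernel k.
        import numpy as np
        from scipy.signal import convolve2d
        from skimage import data, restoration

        x = data.camera().astype(float) / 255.0  # latent sharp image

        k = np.ones((9, 9)) / 81.0               # known 9x9 box blur kernel

        rng = np.random.default_rng(0)
        y = convolve2d(x, k, mode="same", boundary="symm")  # blur ...
        y = y + rng.normal(0.0, 0.01, y.shape)              # ... plus noise

        # Richardson-Lucy: a classical Bayesian-inference-style deconvolution.
        x_hat = restoration.richardson_lucy(y, k, num_iter=30)

        print("RMSE blurred: ", float(np.sqrt(np.mean((y - x) ** 2))))
        print("RMSE restored:", float(np.sqrt(np.mean((x_hat - x) ** 2))))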

    Morphologie, Géométrie et Statistiques en imagerie non-standard (Morphology, Geometry and Statistics in Non-Standard Imaging)

    Digital image processing has followed the evolution of electronics and computer science. It is now common to deal with images valued not in {0,1} or in gray-scale, but in manifolds or probability distributions; this is for instance the case for color images or in diffusion tensor imaging (DTI). Each kind of image has its own algebraic, topological and geometric properties, so existing image processing techniques have to be adapted when applied to new imaging modalities. When dealing with new value spaces, former operators can rarely be used as they are: even if the underlying notion still has a meaning, work must be carried out to express it in the new context. The thesis is composed of two independent parts. The first, "Mathematical morphology on non-standard images", concerns the extension of mathematical morphology to cases where the value space of the image does not have a canonical order structure. Chapter 2 formalizes and demonstrates the irregularity issue of total orders in metric spaces; the main result states that for any total order in a multidimensional vector space, there are images for which the morphological dilations and erosions are irregular and inconsistent. Chapter 3 is an attempt to generalize morphology to images valued in a set of unordered labels. The second part, "Probability density estimation on Riemannian spaces", concerns the adaptation of standard density estimation techniques to specific Riemannian manifolds. Chapter 5 is a work on color image histograms under perceptual metrics; its main idea is to compute histograms using local Euclidean approximations of the perceptual metric, rather than a single global Euclidean approximation as in standard perceptual color spaces. Chapter 6 addresses the problem of non-parametric density estimation when the data lie in spaces of Gaussian laws; different techniques are studied, and an expression of kernels is provided for the Wasserstein metric.
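    The difficulty motivating the first part, the absence of a canonical order on multivariate values, can be seen in a small sketch. The snippet below is a generic illustration, not the thesis' construction: it contrasts scalar dilation, which rests on the total order of gray levels, with naive channel-wise color dilation, whose induced partial order creates "false colors".

        # Scalar vs. vector-valued morphological dilation.
        import numpy as np
        from scipy.ndimage import grey_dilation

        # Scalar case: gray levels are totally ordered, dilation is well behaved.
        gray = np.random.default_rng(0).random((8, 8))
        dil_gray = grey_dilation(gray, size=(3, 3))

        # Color case: per-channel (marginal) dilation.
        img = np.zeros((8, 8, 3))
        img[2, 2] = [1.0, 0.0, 0.0]  # a pure red pixel
        img[2, 4] = [0.0, 0.0, 1.0]  # a pure blue pixel two columns away
        dil = np.stack([grey_dilation(img[..., c], size=(3, 3))
                        for c in range(3)], axis=-1)

        # Between the two pixels the channel-wise maximum is magenta [1, 0, 1],
        # a value present nowhere in the input image.
        print(dil[2, 3])  # -> [1. 0. 1.]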

    Segmentation of neuroanatomy in magnetic resonance images

    Segmentation in neurological Magnetic Resonance Imaging (MRI) is necessary for volume measurement, feature extraction and the three-dimensional display of neuroanatomy. This thesis proposes several automated and semi-automated methods which offer considerable advantages over manual methods because of their lack of subjectivity, their data reduction capabilities, and the time savings they provide. Work has concentrated on the use of dual-echo multi-slice spin-echo data sets in order to take advantage of the intrinsically multi-parametric nature of MRI. Such data are widely acquired clinically, so segmentation does not require additional scans. The literature has been reviewed. Factors affecting image non-uniformity for a modern 1.5 Tesla imager have been investigated; these investigations demonstrate that a robust, fast, automatic three-dimensional non-uniformity correction may be applied to data as a pre-processing step. The merit of using an anisotropic smoothing method for noisy data has been demonstrated. Several approaches to neurological MRI segmentation have been developed. Edge-based processing is used to identify the skin (the major outer contour) and the eyes. Edge-focusing, two threshold-based techniques and a fast radial CSF identification approach are proposed to identify the intracranial region contour in each slice of the data set. Once isolated, the intracranial region is further processed to identify CSF and, depending upon the MRI pulse sequence used, the brain itself may be sub-divided into grey matter and white matter using semi-automatic contrast enhancement and clustering methods. The segmentation of Multiple Sclerosis (MS) plaques has also been considered. The utility of the stack, a data-driven multi-resolution approach to segmentation, has been investigated, and several improvements to the method are suggested. The factors affecting the intrinsic accuracy of neurological volume measurement in MRI have been studied and their magnitudes determined for spin-echo imaging. Geometric distortion - both object dependent and object independent - has been considered, as well as slice warp, slice profile, slice position and the partial volume effect. Finally, the accuracy of the approaches to segmentation developed in this thesis has been evaluated. Intracranial volume measurements are within 5% of expert observers' measurements and white matter volumes within 10%, while CSF volumes are consistently lower than the expert observers' measurements due to the observers' inability to take the partial volume effect into account.
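    The grey matter/white matter separation step above relies on the multi-parametric nature of dual-echo data. A generic sketch of intensity-based clustering is given below; it is not the thesis' exact contrast enhancement and clustering scheme, and the tissue means and noise level are synthetic stand-ins.

        # Each voxel of a dual-echo acquisition yields a 2D intensity feature;
        # k-means groups voxels into tissue classes.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # Synthetic dual-echo intensities for three tissue classes; the means
        # (nominally CSF, grey matter, white matter) are invented for this demo.
        means = np.array([[0.2, 0.9], [0.6, 0.6], [0.8, 0.4]])
        voxels = np.vstack([m + rng.normal(0.0, 0.05, (500, 2)) for m in means])

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)

        # Voxel counts per cluster, times voxel size, approximate tissue volumes.
        print(np.bincount(labels))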

    New Methods to Improve Large-Scale Microscopy Image Analysis with Prior Knowledge and Uncertainty

    Multidimensional imaging techniques provide powerful ways to examine various kinds of scientific questions. The routinely produced datasets in the terabyte range, however, can hardly be analyzed manually and require extensive use of automated image analysis. The present thesis introduces a new concept for the estimation and propagation of the uncertainty involved in image analysis operators, together with new segmentation algorithms suitable for terabyte-scale analyses of 3D+t microscopy images.
    Comment: 218 pages, 58 figures, PhD thesis, Department of Mechanical Engineering, Karlsruhe Institute of Technology, published online with KITopen (License: CC BY-SA 3.0, http://dx.doi.org/10.5445/IR/1000057821)
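    Terabyte-scale analysis as targeted above is only feasible out-of-core, processing a volume in blocks that fit in memory. The sketch below illustrates that pattern only; the file names, block size and the threshold "segmentation" are placeholder assumptions, not the thesis' algorithms.

        # Out-of-core blockwise processing of a large 3D volume.
        import numpy as np

        # Hypothetical on-disk stack (z, y, x) for one time point of a 3D+t series.
        shape = (64, 512, 512)
        vol = np.memmap("stack_t000.raw", dtype=np.uint16, mode="w+", shape=shape)
        vol[:] = np.random.default_rng(0).integers(0, 4096, shape, dtype=np.uint16)

        seg = np.memmap("seg_t000.raw", dtype=np.uint8, mode="w+", shape=shape)

        block = 16  # z-slab height chosen to fit in RAM
        for z0 in range(0, shape[0], block):
            slab = np.asarray(vol[z0:z0 + block], dtype=np.float32)
            # Per-block operator; a real pipeline would use overlapping blocks
            # and stitch the resulting labels across block borders.
            seg[z0:z0 + block] = (slab > 2048).astype(np.uint8)

        seg.flush()
        print("foreground voxels:", int(seg.sum()))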