417 research outputs found

    Computer vision reading on stickers and direct part marking on horticultural products : challenges and possible solutions

    Traceability of products from production to the consumer has led to technological advancement in product identification. There has been development from the use of traditional one-dimensional barcodes (EAN-13, Code 128, etc.) to 2D (two-dimensional) barcodes such as QR (Quick Response) and Data Matrix codes. Over the last two decades there has been an increased use of Radio Frequency Identification (RFID) and Direct Part Marking (DPM) using lasers for product identification in agriculture. However, in agriculture there are still considerable challenges to adopting barcode, RFID and DPM technologies, unlike in industry where these technologies have been very successful. This study was divided into three main objectives. Firstly, the effect of speed, dirt, moisture and bar width on barcode detection was determined both in the laboratory and at a flower-producing company, Brandkamp GmbH. This study developed algorithms for the automated detection of Code 128 barcodes under rough production conditions. Secondly, investigations were carried out on the effect of low laser marking energy, barcode size, print growth, colour and contrast on decoding 2D Data Matrix codes printed directly on apples. Three different apple varieties (Golden Delicious, Kanzi and Red Jonaprince) were marked with various levels of energy and different barcode sizes. Image processing using Halcon 11.0.1 (MVTec) was used to evaluate the markings on the apples. Finally, the third objective was to evaluate both algorithms for 1D and 2D barcodes. According to the results, increasing the speed and the angle of inclination of the barcode decreased barcode recognition. Likewise, increasing dirt on the barcode surface reduced successful detection. However, the proposed algorithm achieved 100% detection of the Code 128 barcode at the company’s production speed (0.15 m/s). Overall, the results from the company showed that the image-based system is a promising candidate for automation in horticultural production systems and overcomes the limitations of laser barcode scanners. The results for apples showed that laser energy, barcode size, print growth, type of product, contrast between the markings and the colour of the product, the inertia of the laser system and the days of storage all, singly or in combination, influence the readability of laser-marked Data Matrix codes on apples. Detection of the Data Matrix code on Kanzi and Red Jonaprince was poor due to the low contrast between the markings and their skins. The proposed algorithm currently works successfully on Golden Delicious, with 100% detection over 10 days of storage using an energy of 0.108 J mm⁻² and a barcode size of 10 × 10 mm². This shows that there is a future prospect of marking barcodes not only on apples but also on other agricultural products for real-time production.
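
    The thesis implemented its decoding with Halcon 11.0.1; purely as an illustrative sketch under assumed open-source substitutes (pyzbar for 1D codes, pylibdmtx for Data Matrix) and hypothetical file names, reading both kinds of code from still images could look like this in Python:

        # Illustrative stand-in only, not the Halcon pipeline used in the study.
        from PIL import Image, ImageOps
        from pyzbar.pyzbar import decode as decode_1d          # Code 128, EAN-13, ...
        from pylibdmtx.pylibdmtx import decode as decode_dmtx  # Data Matrix

        def read_code128(path):
            img = ImageOps.autocontrast(ImageOps.grayscale(Image.open(path)))
            return [r.data.decode("utf-8") for r in decode_1d(img)]

        def read_data_matrix(path):
            # Low marking energy means weak contrast; autocontrast is a crude remedy.
            img = ImageOps.autocontrast(ImageOps.grayscale(Image.open(path)))
            return [r.data.decode("utf-8") for r in decode_dmtx(img, timeout=500)]

        print(read_code128("tray_label.png"))      # hypothetical Code 128 sticker image
        print(read_data_matrix("apple_mark.png"))  # hypothetical laser mark on an apple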

    Digital watermarking and novel security devices

    EThOS - Electronic Theses Online Service, United Kingdom

    Ein mobiler Serviceroboter zur Automatisierung der Probenahme und des Probenmanagements in einem biotechnologischen Pilotlabor

    Scherer T. A mobile service robot for automisation of sample taking and sample management in a biotechnological pilot laboratory. Bielefeld (Germany): Bielefeld University; 2004. In biotechnological laboratories, the quality of the typically pharmaceutical product is a literally life-important goal. Historically, the quality of the cell cultivations was ensured by on-line measurements of physical process parameters such as pH and pO2 only. Biological parameters such as cell density and viability were only measured off-line, because the necessary sample management involves highly complicated manipulations and analyses and could therefore not be automated. Various automated devices exist to assist a laboratory technician, but so far there has been no system that automates the entire sample management. In this work a novel type of service robot, consisting of a robot arm mounted on a mobile platform, is presented that closes this gap. This robot has to master a multitude of problems: it must be able to locate its position in the laboratory (localisation), find a collision-free path to the involved devices (path planning with obstacle avoidance), avoid endangering humans or damaging laboratory equipment while moving (collision avoidance), recognize the devices to be manipulated and measure their precise position (computer vision), operate them (arm control), grasp objects (gripper and fingers), and handle them with compliance in order not to damage them (force control). It must be autonomous, so as to require the least possible amount of user intervention, and yet controllable by a laboratory control program in order to allow intervention. Finally, it must be easily maintainable by non-expert personnel. All these aspects are covered by the novel robot system presented in this thesis.
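
    The thesis does not reproduce its controller code here; as a minimal sketch of the compliance idea mentioned above (force control during grasping), assuming hypothetical hardware hooks read_gripper_force and set_gripper_width, a proportional force loop could look like this:

        # Minimal sketch of proportional force compliance for a gripper.
        # read_gripper_force() and set_gripper_width() are hypothetical hardware hooks.
        import time

        TARGET_FORCE_N = 2.0    # assumed safe grasp force for a sample vial
        GAIN_M_PER_N = 5e-4     # assumed proportional gain (metres per newton of error)
        DT_S = 0.01             # 100 Hz control loop

        def compliant_grasp(read_gripper_force, set_gripper_width, width_m):
            """Close the gripper until the measured force settles near the target."""
            for _ in range(1000):                # hard iteration limit as a safeguard
                error = TARGET_FORCE_N - read_gripper_force()
                if abs(error) < 0.05:            # within 50 mN of the target force
                    break
                width_m -= GAIN_M_PER_N * error  # close if force too low, open if too high
                set_gripper_width(width_m)
                time.sleep(DT_S)
            return width_m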

    Computer Vision and Image Processing Techniques for Mobile Applications

    Camera phones have penetrated every corner of society and have become a focal point for communications. In our research we extend the traditional use of such devices to help bridge the gap between the physical and digital worlds. Their combined image acquisition, processing, storage, and communication capabilities in a compact, portable device make them an ideal platform for embedding computer vision and image processing capabilities in the pursuit of new mobile applications. This dissertation is presented as a series of computer vision and image processing techniques together with their applications on the mobile device. We have developed a set of techniques for ego-motion estimation, enhancement, feature extraction, perspective correction, object detection, and document retrieval that serve as a basis for such applications. Our applications include a dynamic video barcode that can transfer significant amounts of information visually, a document retrieval system that can retrieve documents from low-resolution snapshots, and a series of applications for users with visual disabilities, such as a currency reader. Solutions for mobile devices require a fundamentally different approach than traditional vision techniques that run on desktop computers, so we consider user-device interaction and the fact that these algorithms must execute in a resource-constrained environment. For each problem we perform both theoretical and empirical analysis in an attempt to optimize performance and usability. The thesis makes contributions related to the efficient implementation of image processing and computer vision techniques, information-theoretic analysis, feature extraction and analysis of low-quality images, and device usability.
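
    As one concrete illustration of the perspective-correction step mentioned above (a generic OpenCV sketch with assumed corner coordinates, not the dissertation's own implementation), rectifying a document snapshot reduces to a four-point homography:

        # Generic perspective correction of a document snapshot with OpenCV.
        # The corner coordinates are assumed; in practice they would come from
        # a quadrilateral detector running on the phone image.
        import cv2
        import numpy as np

        def rectify_document(image, corners, out_w=850, out_h=1100):
            """Warp the quadrilateral `corners` (TL, TR, BR, BL) onto a flat page."""
            src = np.asarray(corners, dtype=np.float32)
            dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
            H = cv2.getPerspectiveTransform(src, dst)
            return cv2.warpPerspective(image, H, (out_w, out_h))

        img = cv2.imread("snapshot.jpg")  # hypothetical camera-phone snapshot
        flat = rectify_document(img, [(120, 80), (720, 95), (760, 980), (90, 960)])
        cv2.imwrite("rectified.png", flat)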

    Strain Mapping of Single Nanowires using Nano X-ray Diffraction

    Nanowires are explored as basic components for a large range of electronic devices. The nanowire format offers several benefits, including reduced material consumption and increased potential for combining materials to form novel heterostructures. Several factors, such as mechanical stress from contacting or a lattice mismatch in a heterostructure, can strain and change the lattice tilt. The strain is often intertwined with small gradients of composition. The strain relaxation can differ significantly from bulk due to the small diameters, but the mechanisms are not fully comprehended. X-rays have a penetrating power that makes it possible to investigate embedded samples without preparation or slicing. The high flux of coherent X-ray beams from synchrotron radiation facilities, combined with the nano-focus capabilities developed in recent years, has made it possible to probe nano-crystals. The 4th generation of synchrotrons, including MAX IV in Lund, Sweden, has even higher brilliance than previous sources. Diffraction imaging techniques using synchrotron radiation can reveal small strains down to 10⁻⁴ to 10⁻⁵. The field of coherent imaging pushes the limits of resolution below the size of the focus. With Bragg ptychography, the displacement field in a crystal can be probed with resolution beyond the probe focus by numerically reconstructing the phase.
    This thesis includes the development of X-ray nano-diffraction methods for the characterization of nanowires, including GaInP/InP barcode nanowires, p-i-n InP nanowire devices and metal halide perovskite CsPbBr3 nanowires. It includes a theoretical background of the scattering mechanisms in Thomson scattering in nano-crystals, goes through the formalism for coherent diffraction imaging, crystal structure and deformation in nano-objects, and the technical aspects of the experimental setup and measurement. Moreover, theoretical modelling of elastic strain relaxation in these nanowires was performed with finite element modelling.
    Single III-V nanowire heterostructures and III-V nanowire devices were probed with scanning XRD and Bragg projection ptychography (BPP). How the techniques compare to each other and how the results are affected by the different approximations made in each technique was explored. Finite element simulations combined with nano-diffraction revealed that the lattice mismatch of 1.5% could be relaxed elastically for a diameter of 180 nm. From the strain mapping of the nanowire device, we found how the contacting of the nanowire bends the nanowire, resulting in a tilt normal to the substrate.
    Single metal-halide perovskite CsPb(Br(1-x)Clx)3 nanowire heterostructures were characterized with scanning nano-XRD and XRF, which showed that the lattice spacing was affected by composition and strain. Composition gradients revealed that Cl diffusion had taken place within the heterostructure. Furthermore, extracting the lattice tilts from shifts of the Bragg peak revealed a ferroelastic domain structure with simultaneously existing lattice tilts. These findings are beneficial for the further development of MHP nanowire devices.
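
    For orientation, the textbook relations behind such strain mapping (standard diffraction formulas, not equations quoted from the thesis) connect the measured Bragg angle to the lattice spacing and a small peak shift to strain:

        % Bragg's law and the strain obtained from a small shift of the Bragg peak
        \lambda = 2 d \sin\theta
        \;\;\Rightarrow\;\;
        d = \frac{\lambda}{2\sin\theta},
        \qquad
        \varepsilon = \frac{d - d_0}{d_0} \approx -\cot\theta_0 \, \Delta\theta

    Shifts of the peak in the other angular directions instead reflect lattice tilts, which is how the ferroelastic domain structure mentioned above can be read off the diffraction data.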

    Picture processing for enhancement and recognition

    Recent years have been characterized by an incredible growth in computing power, storage capabilities, communication speed and bandwidth availability, for both desktop platforms and mobile devices. The combination of these factors has led to a new era of multimedia applications: browsing of huge image archives, consultation of online video databases, location-based services and many others. Multimedia is almost everywhere and requires high-quality data, easy retrieval of multimedia content, and an increase in network access capacity and bandwidth per user. To meet all these requirements, many efforts have to be made in various research areas, ranging from signal processing and image and video analysis to communication protocols. The research activity developed during these three years concerns the field of multimedia signal processing, with particular attention to image and video analysis and processing. Two main topics have been addressed: the first relates to image and video reconstruction/restoration (using super-resolution techniques) in web-based applications for the consumption of multimedia content; the second relates to image analysis for location-based systems in an indoor scenario.
    The first topic concerns image and video processing; in particular, the focus has been put on the development of algorithms for super-resolution reconstruction of images and video sequences in order to make the consumption of multimedia data over the web easier. On one hand, recent years have been characterized by an incredible proliferation and surprising success of user-generated multimedia content, as well as distributed and collaborative multimedia databases over the web. This has led to serious issues related to their management and maintenance: bandwidth limitations and service costs are important factors when dealing with mobile multimedia consumption. On the other hand, the current multimedia consumer market has been characterized by the advent of cheap but rather high-quality high-definition displays. However, this trend is only partially supported by the deployment of high-resolution multimedia services; thus the resulting disparity between content and display formats has to be addressed, and older productions need to be either re-mastered or post-processed in order to be broadcast for HD exploitation. In the presented scenario, super-resolution reconstruction represents a major solution. Image and video super-resolution techniques allow restoring the original spatial resolution from low-resolution compressed data. In this way, both content and service providers, not to mention the final users, are relieved from the burden of providing and supporting large multimedia data transfers.
    The second topic addressed during my PhD research activity is the implementation of an image-based positioning system for an indoor navigator. As modern mobile devices become faster, classical signal processing can be applied to new applications, such as location-based services. The exponential growth of wearable devices such as smartphones and PDAs, equipped with embedded motion (accelerometer) and rotation (gyroscope) sensors, an Internet connection and high-resolution cameras, makes them ideal for INS (Inertial Navigation System) applications that support the localization/navigation of objects and/or users in indoor environments where common localization systems, such as GPS (Global Positioning System), fail; hence the need for alternative positioning techniques. A series of intensive tests has been carried out, showing how modern signal-processing techniques can be successfully applied in different scenarios, from image and video enhancement up to image recognition for localization purposes, providing low-cost solutions and ensuring real-time performance.
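
    As background to the super-resolution part, the standard multi-frame observation model from the literature (not a formula quoted from the thesis) treats each low-resolution frame y_k as a warped, blurred, decimated and noisy view of the high-resolution image x, and reconstruction inverts it via regularized least squares:

        % Standard multi-frame super-resolution observation model and reconstruction
        y_k = D \, B \, W_k \, x + n_k , \qquad k = 1, \dots, K
        \qquad
        \hat{x} = \arg\min_x \sum_{k=1}^{K} \lVert y_k - D B W_k x \rVert_2^2 + \lambda R(x)
        % W_k: geometric warp (motion), B: blur (PSF), D: decimation, n_k: noise, R: regularizer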

    Geometric, Semantic, and System-Level Scene Understanding for Improved Construction and Operation of the Built Environment

    Recent advances in robotics and enabling fields such as computer vision, deep learning, and low-latency data passing offer significant potential for developing efficient and low-cost solutions for improved construction and operation of the built environment. Examples of such potential solutions include the introduction of automation in environment monitoring, infrastructure inspections, asset management, and building performance analyses. In an effort to advance the fundamental computational building blocks for such applications, this dissertation explored three categories of scene understanding capabilities: 1) Localization and mapping for geometric scene understanding that enables a mobile agent (e.g., robot) to locate itself in an environment, map the geometry of the environment, and navigate through it; 2) Object recognition for semantic scene understanding that allows for automatic asset information extraction for asset tracking and resource management; 3) Distributed coupling analysis for system-level scene understanding that allows for discovery of interdependencies between different built-environment processes for system-level performance analyses and response-planning. First, this dissertation advanced Simultaneous Localization and Mapping (SLAM) techniques for convenient and low-cost locating capabilities compared with previous work. To provide a versatile Real-Time Location System (RTLS), an occupancy grid mapping enhanced visual SLAM (vSLAM) was developed to support path planning and continuous navigation that cannot be implemented directly on vSLAM’s original feature map. The system’s localization accuracy was experimentally evaluated with a set of visual landmarks. The achieved marker position measurement accuracy ranges from 0.039m to 0.186m, proving the method’s feasibility and applicability in providing real-time localization for a wide range of applications. In addition, a Self-Adaptive Feature Transform (SAFT) was proposed to improve such an RTLS’s robustness in challenging environments. As an example implementation, the SAFT descriptor was implemented with a learning-based descriptor and integrated into a vSLAM for experimentation. The evaluation results on two public datasets proved the feasibility and effectiveness of SAFT in improving the matching performance of learning-based descriptors for locating applications. Second, this dissertation explored vision-based 1D barcode marker extraction for automated object recognition and asset tracking that is more convenient and efficient than the traditional methods of using barcode or asset scanners. As an example application in inventory management, a 1D barcode extraction framework was designed to extract 1D barcodes from video scan of a built environment. The performance of the framework was evaluated with video scan data collected from an active logistics warehouse near Detroit Metropolitan Airport (DTW), demonstrating its applicability in automating inventory tracking and management applications. Finally, this dissertation explored distributed coupling analysis for understanding interdependencies between processes affecting the built environment and its occupants, allowing for accurate performance and response analyses compared with previous research. In this research, a Lightweight Communications and Marshalling (LCM)-based distributed coupling analysis framework and a message wrapper were designed. 
    This proposed framework and message wrapper were tested with analysis models from wind engineering and structural engineering, where they demonstrated the ability to link analysis models from different domains and reveal key interdependencies between the involved built-environment processes.
    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155042/1/lichaox_1.pd
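
    The dissertation's own extraction framework is not reproduced here; as a rough stand-in under assumed libraries (OpenCV for frame access, pyzbar for decoding) and a hypothetical input file, a frame-by-frame 1D barcode pass over a video scan can be sketched as:

        # Rough stand-in for a 1D barcode extraction pass over a video scan.
        # "warehouse_scan.mp4" is a hypothetical input, not data from the dissertation.
        import cv2
        from pyzbar.pyzbar import decode, ZBarSymbol

        def extract_barcodes(video_path):
            seen = set()
            cap = cv2.VideoCapture(video_path)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                for result in decode(gray, symbols=[ZBarSymbol.CODE128, ZBarSymbol.EAN13]):
                    seen.add(result.data.decode("utf-8"))
            cap.release()
            return sorted(seen)

        print(extract_barcodes("warehouse_scan.mp4"))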

    Interprétation visuelle de gestes pour l'interaction homme-machine

    Nowadays, people want to interact with machines more naturally, and one of the most powerful communication channels is hand gesture. The vision-based approach has attracted many researchers because it does not require any extra device. One of the key problems to resolve is hand posture recognition in RGB images, because it can be used directly or integrated into a multi-cue hand gesture recognition system. The main challenges of this problem are illumination differences, cluttered backgrounds, background changes, high intra-class variation, and high inter-class similarity. This thesis proposes a hand posture recognition system consisting of two phases: hand detection and hand posture recognition. In the hand detection step, we employ a Viola-Jones detector with an AdaBoost cascade classifier and the proposed internal Haar-like features. The proposed hand detector works in real time on frames captured in real, complex environments, avoids unwanted effects of the background, and outperforms the original Viola-Jones detector based on traditional Haar-like features. In the hand posture recognition step, we propose a new hand representation based on a good generic descriptor, the kernel descriptor (KDES). When applying KDES to hand posture recognition, we propose three improvements to make it more robust: adaptive patch generation, which makes it robust to scale change; normalization of gradient orientations within patches, which makes the patch-level features invariant to rotation; and a hand pyramid structure, which fits the final representation to the structure of the hand. Based on these improvements, the proposed method obtains better results than the original KDES and a state-of-the-art method. The integration of these two methods into an application demonstrates, in a real-world situation, the effectiveness, usefulness, and feasibility of deploying such a system for human-robot interaction using hand gestures.
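
    For illustration only, running a Viola-Jones style cascade with OpenCV looks like the sketch below; "hand_cascade.xml" stands for a hypothetical cascade trained on hand images, and it relies on standard Haar-like features rather than the internal Haar-like features proposed in the thesis:

        # Illustration of Viola-Jones style detection with OpenCV.
        # "hand_cascade.xml" is a hypothetical cascade trained for hands.
        import cv2

        cascade = cv2.CascadeClassifier("hand_cascade.xml")
        cap = cv2.VideoCapture(0)  # webcam stream

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.equalizeHist(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                             minSize=(48, 48))
            for (x, y, w, h) in hands:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("detections", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()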