
    Nonlinear Adaptive Diffusion Models for Image Denoising

    Most digital image applications demand high image quality. Unfortunately, images are often degraded by noise during the formation, transmission, and recording processes; hence, image denoising is an essential processing step preceding visual and automated analyses. However, denoising methods can reduce image contrast and create block or ring artifacts. In this dissertation, we develop high-performance nonlinear-diffusion-based image denoising methods capable of preserving edges and maintaining high visual quality. This is attained through several approaches. First, a nonlinear diffusion scheme is presented with robust M-estimators as diffusivity functions. Second, knowledge of textons derived from Local Binary Patterns (LBP), which unify divergent statistical and structural models of region analysis, is used to adjust the time step of the diffusion process. Next, the role of nonlinear diffusion that adapts to the local context in the wavelet domain is investigated, and a stationary wavelet context-based diffusion (SWCD) method is developed for performing iterative shrinkage. Finally, we develop a locally- and feature-adaptive diffusion (LFAD) method, in which each image patch/region is diffused individually and the diffusivity function is modified to incorporate the Inverse Difference Moment as a local estimate of the gradient. Experiments have been conducted to evaluate the performance of each developed method and to compare it to a reference group and to state-of-the-art methods.
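    To make the central idea concrete, the sketch below shows one ingredient named above: a Perona-Malik-style nonlinear diffusion step whose edge-stopping diffusivity is derived from a robust M-estimator. The Tukey biweight used here and all parameter values are illustrative assumptions, not the dissertation's exact formulation.

    import numpy as np

    def tukey_diffusivity(grad, sigma):
        """Tukey biweight diffusivity: suppresses diffusion across strong edges."""
        g = np.zeros_like(grad)
        mask = np.abs(grad) <= sigma
        g[mask] = 0.5 * (1.0 - (grad[mask] / sigma) ** 2) ** 2
        return g

    def nonlinear_diffusion(img, n_iter=20, dt=0.15, sigma=20.0):
        """Explicit iterative nonlinear diffusion on a 2-D grayscale image."""
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Differences in the four compass directions (periodic borders for brevity)
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            # Weight each difference by the robust diffusivity before updating
            u += dt * (tukey_diffusivity(dn, sigma) * dn +
                       tukey_diffusivity(ds, sigma) * ds +
                       tukey_diffusivity(de, sigma) * de +
                       tukey_diffusivity(dw, sigma) * dw)
        return u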

    An Automatic Learning of an Algerian Dialect Lexicon by using Multilingual Word Embeddings

    The goal of this work is to automatically build an Algerian dialect lexicon from a social network (YouTube). Each entry of this lexicon consists of a word written in Arabic script (Modern Standard Arabic or dialect) or Latin script (Arabizi, French or English). For each word, several transliterations are proposed, written in a script different from the one used for the word itself. To do so, we harvested and aligned an Algerian dialect corpus using an iterative method based on multilingual word embedding representations. The multilinguality of the corpus is due to the fact that Algerian people use several languages to post comments on social networks: Modern Standard Arabic (MSA), Algerian dialect, French and sometimes English. In addition, users of social networks write freely, without regard to the grammar of these languages. We tested the proposed method on a test lexicon; it achieves a score of 73% in terms of F-measure.
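    As a rough illustration of how a shared multilingual embedding space can propose transliterations, the sketch below retrieves, for a word in one script, its nearest neighbours among words of the other script by cosine similarity. The variable names and the pre-trained vectors are assumptions; the paper's full iterative corpus-alignment procedure is not reproduced here.

    import numpy as np

    def transliteration_candidates(word, src_vecs, tgt_vecs, k=3):
        """Return the k nearest target-script words to `word` by cosine similarity,
        assuming both vocabularies are embedded in a shared multilingual space."""
        v = src_vecs[word]
        v = v / np.linalg.norm(v)
        tgt_words = list(tgt_vecs)
        mat = np.stack([tgt_vecs[w] / np.linalg.norm(tgt_vecs[w]) for w in tgt_words])
        sims = mat @ v                      # cosine similarity to every target word
        best = np.argsort(-sims)[:k]
        return [(tgt_words[i], float(sims[i])) for i in best]

    # Hypothetical usage: src_vecs maps Arabizi tokens to vectors, tgt_vecs maps
    # Arabic-script tokens to vectors, both learned on the harvested YouTube corpus.
    # candidates = transliteration_candidates("mlih", src_vecs, tgt_vecs)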

    Vehicle make and model recognition for intelligent transportation monitoring and surveillance.

    Vehicle Make and Model Recognition (VMMR) has evolved into a significant subject of study due to its importance in numerous Intelligent Transportation Systems (ITS), such as autonomous navigation, traffic analysis, and traffic surveillance and security systems. A highly accurate, real-time VMMR system significantly reduces the overhead cost of resources otherwise required. The VMMR problem is a multi-class classification task with a peculiar set of issues and challenges, such as multiplicity and inter- and intra-make ambiguity among vehicle makes and models, which must be solved efficiently and reliably to achieve a highly robust VMMR system. In this dissertation, motivated by the growing importance of vehicle make and model recognition, we present a VMMR system that provides very high accuracy rates and is robust to several challenges. We demonstrate that the VMMR problem can be addressed by locating discriminative parts, where the most significant appearance variations occur in each category, and by learning expressive appearance descriptors. Given these insights, we consider two data-driven frameworks: a Multiple-Instance Learning (MIL) based system using hand-crafted features, and an extended application of deep neural networks using MIL. Our approach requires only image-level class labels, and the discriminative parts of each target class are selected in a fully unsupervised manner without any part annotations or segmentation masks, which may be costly to obtain. This advantage makes our system more intelligent, scalable, and applicable to other fine-grained recognition tasks. We constructed a dataset with 291,752 images representing 9,170 different vehicles to validate and evaluate our approach. Experimental results demonstrate that localizing parts and distinguishing their discriminative power for categorization improves the performance of fine-grained categorization. Extensive experiments conducted using our approaches yield superior results for images that were occluded, under low illumination, in partial camera views, or even in non-frontal views, all available in our real-world VMMR dataset. The approaches presented herewith provide a highly accurate VMMR system for real-time applications in realistic environments. We also validate our system with a significant ITS application of VMMR involving automated vehicular surveillance. We show that our application can provide law enforcement agencies with efficient tools to search for a specific vehicle type, make, or model, and to track the path of a given vehicle using the positions of multiple cameras.
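    The following is a minimal, generic sketch of the multiple-instance idea referred to above: each image is treated as a bag of patch descriptors, every patch is scored, and the scores are aggregated with a max, so only image-level labels are needed and the maximizing patch indicates a discriminative part. The PyTorch module name, feature dimension and max-pooling choice are illustrative assumptions, not the dissertation's architecture.

    import torch
    import torch.nn as nn

    class MaxPoolingMIL(nn.Module):
        """Bag-level classifier: scores each patch (instance) and aggregates with max,
        so only image-level (bag) labels are needed during training."""
        def __init__(self, feat_dim, n_classes):
            super().__init__()
            self.instance_scorer = nn.Linear(feat_dim, n_classes)

        def forward(self, bag):                                  # bag: (n_patches, feat_dim)
            instance_logits = self.instance_scorer(bag)          # (n_patches, n_classes)
            bag_logits, best_patch = instance_logits.max(dim=0)  # max over instances
            return bag_logits, best_patch                        # best_patch localizes parts

    # Hypothetical usage with a bag of 12 patch descriptors of dimension 256:
    # model = MaxPoolingMIL(256, n_classes=50)
    # logits, parts = model(torch.randn(12, 256))
    # loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([3]))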

    A Highly Efficient Biometrics Approach for Unconstrained Iris Segmentation and Recognition

    This dissertation develops an innovative approach towards less-constrained iris biometrics. Two major contributions are made in this research endeavor: (1) an award-winning segmentation algorithm for the less-constrained environment, where images are acquired from subjects on the move under visible lighting conditions, and (2) a pioneering iris biometrics method coupling segmentation and recognition of the iris based on video of moving persons under different acquisition scenarios. The first part of the dissertation introduces a robust and fast segmentation approach using still images from the UBIRIS (version 2) noisy iris database. The results show accuracy estimated at 98% when using 500 randomly selected images from the UBIRIS.v2 partial database, and at 97% in the Noisy Iris Challenge Evaluation (NICE.I), an international competition that involved 97 participants from 35 countries, ranking this research group sixth. This accuracy is achieved with a processing speed nearing real time. The second part of this dissertation presents an innovative segmentation and recognition approach using video-based iris images. Following the segmentation stage, which delineates the iris region through a novel segmentation strategy, pioneering experiments on the recognition stage of less-constrained video iris biometrics were carried out. In video-based, less-constrained iris recognition, the test (subject) iris videos/images and the enrolled iris images are acquired with different acquisition systems. In the matching step, the verification/identification result is obtained by comparing the similarity distance of the encoded signature from the test images with each signature in the dataset of enrolled iris images. With these improvements, the results proved highly accurate in the more challenging unconstrained environment, leading to a false acceptance rate (FAR) of 0% and a false rejection rate (FRR) of 17.64% for 85 tested users with 305 test images from video, which shows great promise and high practical implications for iris biometrics research and system design.
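    For the matching step, binary iris signatures are commonly compared with a masked fractional Hamming distance, and FAR/FRR follow from a decision threshold over genuine and impostor scores. The sketch below illustrates that generic computation; the encoding and distance actually used in the dissertation are not specified here.

    import numpy as np

    def hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris codes,
        counting only bits that are valid (unmasked) in both codes."""
        valid = mask_a & mask_b
        if valid.sum() == 0:
            return 1.0
        return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

    def far_frr(genuine_scores, impostor_scores, threshold):
        """False acceptance / false rejection rates at a given decision threshold."""
        genuine = np.asarray(genuine_scores)
        impostor = np.asarray(impostor_scores)
        far = np.mean(impostor <= threshold)   # impostors wrongly accepted
        frr = np.mean(genuine > threshold)     # genuine users wrongly rejected
        return far, frr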

    High-throughput phenotyping of yield parameters for modern grapevine breeding

    Grapevine is grown on about 1% of the German agricultural area, yet this area requires one third of all fungicides applied in German agriculture, a consequence of pathogens introduced in the 19th century. For sustainable viticulture, a reduction of fungicide use is urgently necessary. This objective can be achieved by breeding and growing fungus-resistant grapevine cultivars. The development of new cultivars, however, is very time-consuming, taking 20 to 25 years. In recent years the efficiency of the breeding process has been increased considerably by marker-assisted selection (MAS), and further improvements will come with faster and more cost-efficient high-throughput (HT) genotyping methods. Compared with these genotyping techniques, the quality, objectivity and precision of current phenotyping methods is limited, and HT phenotyping methods need to be developed to further increase the efficiency of grapevine breeding through sensor-assisted selection.
Many different types of sensor technologies are available, ranging from visible-light (RGB) cameras, multispectral, hyperspectral, thermal, and fluorescence cameras to three-dimensional (3D) cameras and laser-scanning approaches. Phenotyping can be done under controlled environments (growth chamber, greenhouse) or in the field, with a decreasing level of standardization. Except for young seedlings, grapevine as a perennial crop ultimately needs to be screened in the field. From a methodological point of view, a variety of challenges must be considered, such as variable light conditions, the similarity of foreground and background, and traits hidden in the canopy. The assessment of phenotypic data in grapevine breeding is traditionally done directly in the field by visual estimation: the BBCH scale is used to classify the stages of annual plant development, and OIV descriptors are applied to assign phenotypes to classes. Such phenotyping is strongly limited by time, cost and the subjectivity of the records, so only a comparably small set of genotypes is evaluated for certain traits within the breeding process. Because of this limitation, automation, precision and objectivity of phenotypic data evaluation are crucial in order to (1) reduce the existing phenotyping bottleneck, (2) increase the efficiency of grapevine breeding, (3) assist further genetic studies and (4) improve vineyard management. In this thesis, emphasis was put on the following aspects. Balanced and stable yields are important for high-quality wine production and therefore play a key role in grapevine breeding. The main focus of this study is thus on phenotyping yield parameters such as berry size, number of berries per cluster, and number of clusters per vine; related traits like cluster architecture and vine balance (the relation between vegetative and generative growth) were also considered. Quantifying yield parameters at the single-vine level is challenging: complex shapes and slight variations between genotypes make it difficult and very time-consuming. As a first step towards HT phenotyping of yield parameters, two fully automatic image interpretation tools were developed for application under controlled laboratory conditions. Using the Cluster Analysis Tool (CAT), four important phenotypic traits can be detected in one image: cluster length, cluster width, berry size and cluster compactness. The Berry Analysis Tool (BAT) provides information on the number, size (length and width), and volume of grapevine berries. Both tools offer a fast, user-friendly and inexpensive procedure that delivers several precise phenotypic features of berries and clusters at once, with dimensional units, in a shorter time than manual measurements. The similarity of foreground and background in images captured under field conditions is a particular difficulty for image analysis at early grapevine developmental stages, because the canopy is missing. To estimate the dormant pruning wood weight, which partly determines vine balance, a fast and non-invasive tool for objective data acquisition in the field was developed; in an innovative approach it combines depth-map calculation and image segmentation to subtract the background and obtain the pruning area visible in the image. For the implementation of HT field phenotyping in grapevine breeding, a phenotyping pipeline has been set up.
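    A minimal sketch of the depth-map idea described above, under assumed inputs: a per-pixel depth map and a binary segmentation of woody pixels are combined so that only foreground wood is kept, and its visible area is reported. The function name, depth cut-off and pixel calibration are illustrative, not the published tool.

    import numpy as np

    def visible_wood_area(depth_map, wood_mask, max_depth, pixel_area_cm2):
        """Combine a depth map with a binary segmentation of woody pixels to keep
        only foreground wood (the vine in the front row) and return its visible area."""
        foreground = depth_map < max_depth      # drop background rows and canopy
        wood = wood_mask & foreground
        return wood.sum() * pixel_area_cm2      # area in cm^2 per calibrated pixel

    # Hypothetical usage: depth in metres from a stereo pair, mask from a colour-based
    # classifier, 2 m cut-off, 0.01 cm^2 per pixel at the imaging distance.
    # area = visible_wood_area(depth, mask, max_depth=2.0, pixel_area_cm2=0.01)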
It ranges from automated image acquisition directly in the field using the PHENObot, to data management, data analysis and the interpretation of the obtained phenotypic data for grapevine breeding. The PHENObot consists of an automatically guided tracked vehicle, a calibrated multi-camera system, a Real-Time Kinematic (RTK) GPS system and a computer for image data handling. Purpose-built software was used to acquire geo-referenced images directly in the vineyard; the geo-reference is afterwards used for post-processing data management in a database. The phenotypic traits analysed within the pipeline were the detection of berries and the determination of berry size and colour. The HT phenotyping pipeline was tested in the grapevine repository at Geilweilerhof, extracting berry size and berry colour with the Berries In Vineyards (BIVcolor) tool. Image acquisition took about 20 seconds per vine and was followed by automatic image analysis to extract objective and precise phenotypic data. It was possible to capture images of 2700 vines within 12 hours using the PHENObot and subsequently to analyse the images automatically, extracting berry size and berry colour. With this analysis, proof of principle was demonstrated. The pilot pipeline provides the basis for further development of additional evaluation modules as well as the integration of other sensors.
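    As an illustration of the kind of per-berry measurements reported by tools such as BAT or BIVcolor, the sketch below labels connected components in an assumed berry mask and derives an equivalent-circle diameter and mean colour for each berry. The segmentation itself and the calibration constant are assumed inputs; this is not the published algorithm.

    import numpy as np
    from scipy import ndimage

    def berry_stats(rgb_image, berry_mask, mm_per_pixel):
        """Given a binary mask of berry pixels (from any segmentation step), label the
        connected components and report per-berry diameter and mean RGB colour."""
        labels, n = ndimage.label(berry_mask)
        stats = []
        for i in range(1, n + 1):
            component = labels == i
            area_px = component.sum()
            diameter_mm = 2.0 * np.sqrt(area_px / np.pi) * mm_per_pixel  # equivalent circle
            mean_rgb = rgb_image[component].mean(axis=0)
            stats.append({"diameter_mm": diameter_mm, "mean_rgb": mean_rgb})
        return stats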