28 research outputs found

    An Efficient Codebook Initialization Approach for LBG Algorithm

    A VQ-based image compression technique has three major steps, namely (i) codebook design, (ii) VQ encoding and (iii) VQ decoding. The performance of a VQ-based image compression technique depends on the constructed codebook. A widely used technique for VQ codebook design is the Linde-Buzo-Gray (LBG) algorithm. However, the performance of the standard LBG algorithm is highly dependent on the choice of the initial codebook. In this paper, we propose a simple and very effective codebook initialization approach for the LBG algorithm. Simulation results show that the proposed scheme is computationally efficient and delivers the expected performance compared with the standard LBG algorithm.
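The LBG loop itself can be sketched in a few lines of numpy — a minimal version, assuming a plain initialization (evenly spaced training vectors, not the paper's proposed scheme):

```python
import numpy as np

def lbg(training, codebook_size, n_iter=20):
    """Standard LBG codebook design: alternate nearest-neighbour
    partitioning and centroid updates."""
    # Simple initialization: evenly spaced vectors of the training set
    # sorted by total intensity (illustrative only; the paper proposes
    # its own initialization scheme).
    order = np.argsort(training.sum(axis=1))
    idx = np.linspace(0, len(training) - 1, codebook_size).astype(int)
    codebook = training[order][idx].astype(float)
    for _ in range(n_iter):
        # Assign each training vector to its nearest codeword.
        d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d.argmin(axis=1)
        # Move each codeword to the centroid of its cell.
        for k in range(codebook_size):
            cell = training[nearest == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook

# Toy data: 4x4-pixel blocks drawn from two clusters, two codewords.
rng = np.random.default_rng(0)
blocks = np.vstack([rng.normal(10, 1, (50, 16)), rng.normal(200, 1, (50, 16))])
cb = lbg(blocks, 2)
```

With well-separated clusters the two codewords converge to the cluster centroids regardless of the simple initialization; the point of a good initialization is precisely the harder, overlapping cases.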

    Graph-based word spotting by inexact matching techniques

    Throughout this project a new word-spotting method has been developed that takes the structure of the searched words strongly into account. Word-spotting techniques locate handwritten words from a given example; the presented technique is intended for use on historical documents. An indexing scheme is then presented to speed up the search step: it quickly finds a set of candidates in large document collections on which the word-spotting techniques are applied. Finally, an example application of the developed techniques for Android devices is shown.
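The candidate-indexing idea can be illustrated with a deliberately simple sketch: describe each word image with a cheap vector (here a resampled projection profile — an assumption for illustration, not the thesis's graph-based representation) and keep only the nearest items for the expensive inexact-matching stage:

```python
import numpy as np

def profile_descriptor(word_img, bins=16):
    """Cheap descriptor: column-wise ink projection profile,
    resampled to a fixed length and L1-normalized."""
    profile = word_img.sum(axis=0).astype(float)
    resampled = np.interp(np.linspace(0, len(profile) - 1, bins),
                          np.arange(len(profile)), profile)
    return resampled / (resampled.sum() + 1e-9)

def candidate_index(query, collection, k=2):
    """Return indices of the k collection words closest to the query;
    only these would be passed to the (expensive) inexact graph
    matching stage."""
    q = profile_descriptor(query)
    dists = [np.abs(profile_descriptor(w) - q).sum() for w in collection]
    return list(np.argsort(dists)[:k])

# Toy binary word images: ink in different column ranges.
query = np.zeros((8, 20)); query[:, 5:10] = 1
other = np.zeros((8, 20)); other[:, 12:18] = 1
candidates = candidate_index(query, [other, query.copy()], k=1)
```

The identical word ranks first, so the costly matcher only ever sees a short candidate list instead of the whole collection.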

    Data fusion by using machine learning and computational intelligence techniques for medical image analysis and classification

    Data fusion is the process of integrating information from multiple sources to produce specific, comprehensive, unified data about an entity. Data fusion is categorized as low-level, feature-level and decision-level. This research focuses on investigating and developing feature- and decision-level data fusion for automated image analysis and classification. The common procedure for solving these problems can be described as: 1) process the image for region-of-interest detection, 2) extract features from the region of interest and 3) create a learning model based on the feature data. Image-processing techniques using edge detection, a histogram threshold and a color-drop algorithm were applied to determine the region of interest. The extracted features were low-level features, including textural, color and symmetry features. For image analysis and classification, feature- and decision-level data fusion techniques are investigated for model learning, using and integrating computational intelligence and machine learning techniques. These techniques include artificial neural networks, evolutionary algorithms, particle swarm optimization, decision trees, clustering algorithms, fuzzy logic inference and voting algorithms. This work presents both the investigation and development of data fusion techniques for the application areas of dermoscopy skin lesion discrimination, content-based image retrieval, and graphic image type classification --Abstract, page v
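Decision-level fusion by voting, one of the techniques listed above, reduces to a few lines. This is a generic sketch, not the thesis's exact scheme; the weighted variant and its weights are illustrative assumptions:

```python
from collections import Counter

def majority_vote(decisions):
    """Decision-level fusion: each classifier casts one label;
    the fused decision is the most common label."""
    return Counter(decisions).most_common(1)[0][0]

def weighted_vote(decisions, weights):
    """Variant: weight each classifier's vote, e.g. by its validation
    accuracy (the weights here are hypothetical)."""
    scores = {}
    for label, w in zip(decisions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Three hypothetical lesion classifiers disagree; fusion resolves it.
fused = majority_vote(["benign", "malignant", "benign"])
```

Feature-level fusion, by contrast, concatenates or combines the extracted feature vectors *before* any classifier sees them, trading robustness for a single, richer learning problem.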

    Multibiometric security in wireless communication systems

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/08/2010. This thesis has aimed to explore an application of multibiometrics to secured wireless communications. The media studied included Wi-Fi, 3G and WiMAX, over which simulations and experimental studies were carried out to assess performance. Specifically, restriction of access to authorized users only is provided by a technique referred to hereafter as a multibiometric cryptosystem. In brief, the system is built upon a complete challenge/response methodology to obtain a high level of security, based on user identification by fingerprint and further confirmation through text-dependent speaker recognition. First is the enrolment phase, in which a database of fingerprints watermarked with memorable texts, along with voice features based on the same texts, is created by sending them to the server over the wireless channel. Then comes the verification stage, at which claimed users, those claiming to be genuine, are verified against the database; it consists of five steps. At the identification level, the user first presents a fingerprint and a memorable word, the latter watermarked into the former, so that the system can authenticate the fingerprint, verify its validity and retrieve the challenge for an accepted user. The following three steps involve speaker recognition: the user responds to the challenge with text-dependent speech, the server authenticates the response, and finally the server accepts or rejects the user. To implement fingerprint watermarking, i.e. incorporating the memorable word as a watermark message into the fingerprint image, a five-step algorithm has been developed. The first three novel steps, concerning fingerprint image enhancement (CLAHE with a clip limit, standard-deviation analysis and a sliding neighborhood), are followed by two further steps for embedding and extracting the watermark into and from the enhanced fingerprint image utilising the Discrete Wavelet Transform (DWT). In the speaker-recognition stage, the limitations of this technique over wireless channels have been addressed by sending voice features (cepstral coefficients) instead of raw samples. This scheme reaps the advantages of reduced transmission time and reduced dependency of the data on the communication channel, together with no packet loss. Finally, the obtained results have verified the claims.
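The DWT embed/extract step can be sketched generically. This is not the thesis's five-step algorithm (no CLAHE or sliding-neighborhood enhancement); it is a sign-based embedding into the HH subband of a hand-rolled one-level Haar DWT, kept dependency-free for illustration:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    L, H = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    LL, LH = (L[0::2] + L[1::2]) / 2, (L[0::2] - L[1::2]) / 2
    HL, HH = (H[0::2] + H[1::2]) / 2, (H[0::2] - H[1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    L = np.empty((LL.shape[0] * 2, LL.shape[1]))
    H = np.empty_like(L)
    L[0::2], L[1::2] = LL + LH, LL - LH
    H[0::2], H[1::2] = HL + HH, HL - HH
    out = np.empty((L.shape[0], L.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = L + H, L - H
    return out

def embed(img, bits, alpha=8.0):
    """Write each watermark bit into the sign of one HH coefficient."""
    LL, LH, HL, HH = haar2d(img)
    flat = HH.copy().ravel()
    for i, b in enumerate(bits):
        flat[i] = alpha if b else -alpha
    return ihaar2d(LL, LH, HL, flat.reshape(LL.shape))

def extract(img, n_bits):
    """Read the bits back from the HH coefficient signs."""
    HH = haar2d(img)[3]
    return [1 if c > 0 else 0 for c in HH.ravel()[:n_bits]]

img = np.arange(64, dtype=float).reshape(8, 8)   # stand-in "fingerprint"
bits = [1, 0, 1, 1]
watermarked = embed(img, bits)
```

Because the Haar pair above is an exact inverse, the embedded bits survive the round trip unchanged; in a real system quantization and channel noise would force a larger `alpha` or error-correcting codes.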

    ACDC: Automated Cell Detection and Counting for Time-Lapse Fluorescence Microscopy.

    Advances in microscopy imaging technologies have enabled the visualization of live-cell dynamic processes using time-lapse microscopy imaging. However, modern methods exhibit several limitations related to their training phases and to time constraints, hindering their application in laboratory practice. In this work, we present a novel method, named Automated Cell Detection and Counting (ACDC), designed for activity detection of fluorescently labeled cell nuclei in time-lapse microscopy. ACDC overcomes the limitations of literature methods by first applying bilateral filtering to the original image, smoothing the input cell images while preserving edge sharpness, and then exploiting the watershed transform and morphological filtering. Moreover, ACDC represents a feasible solution for laboratory practice, as it can leverage multi-core architectures in computer clusters to efficiently handle large-scale imaging datasets. Indeed, our Parent-Workers implementation of ACDC achieves up to a 3.7× speed-up compared to the sequential counterpart. ACDC was tested on two distinct cell imaging datasets to assess its accuracy and effectiveness on images with different characteristics. We achieved accurate cell counts and nuclei segmentation without relying on large-scale annotated datasets, a result confirmed by average Dice Similarity Coefficients of 76.84 and 88.64 and Pearson coefficients of 0.99 and 0.96, calculated against manual cell counting on the two tested datasets.
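The counting stage can be illustrated with a stand-in pipeline: thresholding followed by 4-connected component labelling. This is a simplification for illustration; ACDC itself uses bilateral filtering, the watershed transform and morphological filtering:

```python
import numpy as np
from collections import deque

def count_nuclei(img, thresh=0.5):
    """Count connected bright regions after thresholding, using
    plain 4-connected flood-fill labelling (no training data needed,
    which is the spirit of the annotation-free approach above)."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    count = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new component found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                        # flood-fill it
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

# Synthetic frame with two bright "nuclei" on a dark background.
frame = np.zeros((10, 10))
frame[1:3, 1:3] = 1.0
frame[6:9, 6:9] = 1.0
```

Watershed improves on this exactly where plain labelling fails: touching nuclei merge into one component, whereas the watershed transform splits them along intensity ridges.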

    Investigating Polynomial Fitting Schemes for Image Compression

    Image compression is a means to perform transmission or storage of visual data in the most economical way. Though many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission or storage. This research work explores and implements polynomial fitting techniques as a means to perform block-based lossy image compression. In an attempt to investigate nonpolynomial models, a region-based scheme is implemented to fit the whole image using bell-shaped functions. The idea is simply to view an image as a 3D geographical map consisting of hills and valleys. However, the scheme suffers from high computational demands and is inferior to many available image compression schemes. Hence, only polynomial models receive further consideration. A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. It is shown that a compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristics. Inter-block prediction can substantially improve the compression performance of the plane model, reaching a compression ratio of 112:1 at 27.9 dB. This improvement, however, slightly increases computational complexity and reduces pipelining capability. Although JPEG2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme compares favourably with JPEG2000, both computationally and qualitatively; more experiments are needed, however, for a more concrete comparison. To reduce blocking artefacts, a new postprocessing scheme, based on Weber's law, is employed. It is reported that images postprocessed using this scheme are subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber's law is also modified to perform edge detection and quality assessment tasks. These results motivate the exploration of higher-order polynomials using three parameters to maintain comparable compression performance. To investigate the impact of higher-order polynomials through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the performance of higher-order polynomial approximation schemes is comparable to that of the plane model. This clearly demonstrates the powerful approximation capability of the plane model. As such, the proposed linear mapping scheme constitutes a new approach to image modeling and is hence worth future consideration.
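The core of the plane model — fitting z = ax + by + c to each block and keeping only three parameters — can be sketched with ordinary least squares (the thesis's multiplication- and division-free variant is not reproduced here):

```python
import numpy as np

def fit_plane(block):
    """Fit z = a*x + b*y + c to a block's intensities by least squares.
    The three parameters (a, b, c) are what gets quantized and coded."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return params

def reconstruct(params, h, w):
    """Decoder side: rebuild the block from its three parameters."""
    a, b, c = params
    y, x = np.mgrid[0:h, 0:w]
    return a * x + b * y + c

# An 8x8 block that happens to be an exact plane is recovered perfectly;
# real blocks incur a residual, which is the lossy part of the scheme.
y, x = np.mgrid[0:8, 0:8]
block = 2 * x + 3 * y + 5
a, b, c = fit_plane(block)
```

Sending 3 parameters instead of 64 pixel values per 8x8 block is where the high compression ratios quoted above come from.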

    Information embedding and retrieval in 3D printed objects

    Deep learning and convolutional neural networks have become the main tools of computer vision. These techniques are good at using supervised learning to learn complex representations from data. In particular, under limited settings, image recognition models now perform better than the human baseline. However, computer vision science aims to build machines that can see, which requires models able to extract more valuable information from images and videos than recognition alone. Generally, it is much more challenging to carry these deep learning models over from recognition to other problems in computer vision. This thesis presents end-to-end deep learning architectures for a new computer vision task: watermark retrieval from 3D printed objects. As it is a new area, there is no state of the art on many challenging benchmarks. Hence, we first define the problems and introduce a traditional approach, the Local Binary Pattern method, to set our baseline for further study. Our neural networks are straightforward yet effective, outperforming the traditional approaches, and they generalize well. However, because our research field is new, we face not only various unpredictable parameters but also limited and low-quality training data. To address this, we make two observations: (i) we do not need to learn everything from scratch, since much is already known about image segmentation, and (ii) we cannot learn everything from data, so our models should be aware of the key features they should learn. This thesis explores these ideas and more. We show how to use end-to-end deep learning models to learn to retrieve watermark bumps and handle covariates from a few training images. Secondly, we use synthetic image data and domain randomization to augment the training data and to understand the various covariates that may affect the retrieval of real-world 3D watermark bumps. We also show how illumination in synthetic image data affects, and can even improve, retrieval accuracy in real-world recognition applications.
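The Local Binary Pattern baseline mentioned above can be sketched in a few lines: each pixel gets an 8-bit code from comparisons with its neighbours, and the codes are pooled into a histogram descriptor. This is the textbook LBP, not the thesis's exact configuration:

```python
import numpy as np

def lbp_code(img, y, x):
    """8-bit LBP at (y, x): each neighbour that is >= the centre
    pixel contributes one bit."""
    centre = img[y, x]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if img[y + dy, x + dx] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin LBP histogram over interior pixels — the kind of
    texture descriptor used as a traditional baseline before
    switching to end-to-end networks."""
    hist = np.zeros(256, dtype=int)
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            hist[lbp_code(img, y, x)] += 1
    return hist
```

Because the codes depend only on intensity *ordering*, the descriptor is robust to monotonic lighting changes — exactly the illumination covariate the thesis studies for 3D-printed surfaces.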

    On the automatic detection of otolith features for fish species identification and their age estimation

    This thesis deals with the automatic detection of features in signals, either extracted from photographs or captured by means of electronic sensors, and its possible application to the detection of morphological structures in fish otoliths, so as to identify species and estimate their age at death. From a more biological perspective, otoliths, which are calcified structures located in the auditory system of all teleostean fish, constitute one of the main elements employed in the study and management of marine ecology. In this sense, the application of Fourier descriptors to otolith images, combined with component analysis, is habitually a first and key step towards characterizing their morphology and identifying fish species. However, some of the main limitations arise from the poor interpretability of this representation and from the use made of the coefficients, which are generally selected manually for classification purposes, both in number and in representativity. The automatic detection of irregularities in signals, and their interpretation, was first addressed in the so-called Best-Basis paradigm. In this sense, Saito's Local Discriminant Bases (LDB) algorithm uses the Discrete Wavelet Packet Transform (DWPT) as the main descriptive tool for positioning the irregularities in the time-frequency space, and an energy-based discriminant measure to guide the automatic search for relevant features in this domain. Current density-based proposals have tried to overcome the limitations of the energy-based functions with relatively little success. However, other measurement strategies more consistent with the true classification capability, and which can provide generalization while reducing the dimensionality of features, are yet to be developed. The proposal of this work focuses on a new framework for one-dimensional signals. An important conclusion extracted therein is that such generalization involves a measure system of bounded values representing the density where no class overlaps. This severely constrains the selection of features and the vector size needed for proper class identification, which must be implemented not only on the basis of global discriminant values but also on complementary information regarding the distribution of samples in the domain. The new tools have been used in the biological study of different hake species, yielding good classification results. However, a major contribution lies in the further interpretation of features the tool performs, including the structure of irregularities, time-frequency position, extension support and degree of importance, which is highlighted automatically on the same images or signals. As for aging applications, a new demodulation strategy has been developed to compensate for the nonlinear growth effect on the intensity profile. Although the method is, in principle, able to adapt automatically to the specific growth of individual specimens, preliminary results with LDB-based techniques suggest studying the effect of lighting conditions on the otoliths in order to design more reliable techniques for reducing image contrast variation. In the meantime, a new theoretical framework for otolith-based fish age estimation has been presented. This theory suggests that if the true fish growth curve is known, the regular periodicity of age structures in the demodulated profile is related to the radial length from which the original intensity profile is extracted. Therefore, if this periodicity can be measured, it is possible to infer the exact fish age, omitting feature extractors and classifiers.
    This could have important implications for the use of computational resources and for current aging approaches.
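The final claim — that age can be read directly from the regular periodicity of a demodulated profile — can be illustrated with a synthetic example. The profile below is fabricated for illustration; real profiles would first need the growth-compensating demodulation the thesis describes:

```python
import numpy as np

def dominant_cycle_count(profile):
    """Count regular rings in a demodulated intensity profile by
    locating the dominant non-DC frequency of its spectrum.
    Under the theory above, with growth compensated, the ring
    count of annual structures approximates the age."""
    detrended = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(detrended))
    # rfft bin k corresponds to k full cycles over the profile length;
    # skip bin 0 (DC) and report the strongest remaining bin.
    return int(np.argmax(spectrum[1:]) + 1)

# Synthetic demodulated profile: 7 regular "annual rings" plus a
# weak low-frequency background trend.
t = np.linspace(0, 1, 512, endpoint=False)
profile = np.cos(2 * np.pi * 7 * t) + 0.1 * np.sin(2 * np.pi * t)
```

No feature extractor or classifier appears anywhere in this estimate, which is precisely the computational saving the abstract points to.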

    Image similarity in medical images
