
    Offline signatures matching using haar wavelet subbands

    The complexity of multimedia content is increasing significantly, which creates a pressing demand for highly effective systems to satisfy human needs. To this day, the handwritten signature is considered an important means of proving identity in banks and businesses, so many works have tried to develop methods for signature recognition. This paper introduces an efficient technique for offline signature recognition based on extracting local features using Haar wavelet subbands and energy. Three different sets of features are obtained by partitioning the signature image into non-overlapping blocks, where different block sizes are used. The CEDAR signature database is used as the dataset for testing. The results achieved by this technique indicate high performance in signature recognition
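    As a sketch of the kind of feature the abstract describes, the following hypothetical Python snippet computes a single-level 2-D Haar decomposition and the energy of non-overlapping blocks in a subband. The block size, the normalization, and the exact energy definition are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def haar_subbands(img):
    """Single-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # approximation (low-low)
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def block_energies(subband, block=8):
    """Energy (mean squared coefficient) of each non-overlapping block."""
    h, w = subband.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = subband[i:i + block, j:j + block]
            feats.append(float(np.mean(blk ** 2)))
    return feats
```

    Concatenating block energies across subbands yields a fixed-length feature vector that a standard classifier can match against enrolled signatures.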

    Entropy in Image Analysis II

    Image analysis is a fundamental task for any application where extracting information from images is required. The analysis requires highly sophisticated numerical and analytical methods, particularly for those applications in medicine, security, and other fields where the results of the processing consist of data of vital importance. This fact is evident from all the articles composing the Special Issue "Entropy in Image Analysis II", in which the authors used widely tested methods to verify their results. In the process of reading the present volume, the reader will appreciate the richness of their methods and applications, in particular for medical imaging and image security, and a remarkable cross-fertilization among the proposed research areas

    CleanPage: Fast and Clean Document and Whiteboard Capture

    The move from paper to online is not only necessary for remote working but also significantly more sustainable. This trend has seen a rising need for the high-quality digitization of content from pages and whiteboards into sharable online material. However, capturing this information is not always easy, nor are the results always satisfactory. Available scanning apps vary in their usability and do not always produce clean results, retaining surface imperfections from the page or whiteboard in their output images. CleanPage, a novel smartphone-based document and whiteboard scanning system, is presented. CleanPage requires one button-tap to capture, identify, crop, and clean an image of a page or whiteboard. Unlike equivalent systems, no user intervention is required during processing, and the result is a high-contrast, low-noise image with a clean homogeneous background. Results are presented for a selection of scenarios showing the versatility of the design. CleanPage is compared with two market-leading scanning apps using two testing approaches: real paper scans and ground-truth comparisons. These comparisons are achieved by a new testing methodology that allows scans to be compared with unscanned counterparts by using synthesized images. Real paper scans are tested using image quality measures. An evaluation of standard image quality assessments is included in this work, and a novel quality measure for scanned images is proposed and validated. The user experience for each scanning app is assessed, showing CleanPage to be faster and easier to use
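    The background-cleaning step such a scanner needs can be illustrated with a minimal sketch. The snippet below is a hypothetical per-tile illumination flattening, not the CleanPage algorithm itself (the abstract does not describe the pipeline in reproducible detail): each tile's bright level is estimated from a high percentile and the tile is normalized against it, so shadows and paper texture wash out while dark ink stays dark:

```python
import numpy as np

def clean_scan(gray, tile=16, eps=1e-6):
    """Flatten uneven illumination in a greyscale page image (values in [0, 1]).

    Hypothetical sketch: estimate each tile's background as its 90th-percentile
    brightness, then divide the tile by that estimate and clip to [0, 1].
    """
    h, w = gray.shape
    out = gray.astype(float).copy()
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            blk = out[i:i + tile, j:j + tile]
            bg = np.percentile(blk, 90) + eps   # bright paper level in this tile
            out[i:i + tile, j:j + tile] = np.clip(blk / bg, 0.0, 1.0)
    return out
```

    After normalization, the paper background sits near 1.0 everywhere, so a single global threshold suffices to produce the high-contrast, low-noise output the abstract describes.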

    Character Recognition

    Character recognition is one of the pattern recognition technologies that are most widely used in practical applications. This book presents recent advances that are relevant to character recognition, from technical topics such as image processing, feature extraction or classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field

    A system for analyzing asymmetry in CT images to improve pathology detection procedures

    Background. Dementia is a brain disorder that affects the normal functioning of the brain due to the loss of neurons or their functionality. Dementia can include a cluster of symptoms such as memory loss, lack of reasoning and judgment, problems with speech and understanding language, and changes in personality. A total of 46.8 million people worldwide have dementia, and approximately 9.9 million new cases are reported each year. The share of dementia among the population aged 60 and older is 7.1%. Objective. Development of an algorithm and construction of a computer system for the automatic detection and visualization of asymmetry in CT images by comparing them with their mirror image about an optimally constructed axis of symmetry. Methods. The study is based on the hypothesis that brain asymmetry changes as a result of the development of early and progressive dementia. Assessment of asymmetry in the cerebral cortex is based on structural magnetic resonance imaging (MRI). This study aims to investigate the patterns of these changes using MRI and computer vision techniques. The article proposes an algorithm for segmenting and visualizing differences in the symmetry of the right and left hemispheres of the brain and for generating asymmetry features. Results. The algorithm helps to evaluate asymmetric areas of the brain and to determine the location and shape of the pathology. Conclusions. An algorithm was developed and a computer system was built for the automatic detection and visualization of asymmetric areas in CT/MRI/PET images. Visualization consists of highlighting the corresponding areas with color. The interface provides flexible settings for the sensitivity of the algorithm to the amplitude and size parameters of asymmetric details
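    The core comparison can be sketched in a few lines. This simplified illustration assumes the symmetry axis coincides with the vertical midline of the image, whereas the paper fits an optimally constructed axis; the threshold value is likewise an assumption standing in for the system's configurable sensitivity settings:

```python
import numpy as np

def asymmetry_map(slice2d, thresh=0.1):
    """Flag pixels where a brain slice differs from its mirror image.

    Simplified sketch: mirrors about the image's own vertical midline
    (the paper instead fits an optimal symmetry axis first) and marks
    pixels whose absolute left/right difference exceeds `thresh`.
    """
    mirrored = slice2d[:, ::-1]                               # reflect left-right
    diff = np.abs(slice2d.astype(float) - mirrored.astype(float))
    return diff > thresh                                      # boolean asymmetry mask
```

    The fraction of flagged pixels gives a crude per-slice asymmetry score, and the mask itself is what the system highlights in color for the operator.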

    Text Detection in Natural Scenes and Technical Diagrams with Convolutional Feature Learning and Cascaded Classification

    An enormous number of digital images are generated and stored every day. Understanding the text in these images is an important challenge, with large impacts on academic, industrial, and domestic applications. Recent studies address the difficulty of separating text targets from noise and background, all of which vary greatly in natural scenes. To tackle this problem, we develop a text detection system that analyzes and utilizes visual information in a data-driven, automatic, and intelligent way. The proposed method incorporates features learned from data, including patch-based coarse-to-fine detection (Text-Conv), connected component extraction using region growing, and graph-based word segmentation (Word-Graph). Text-Conv is a sliding-window-based detector with convolution masks learned using the Convolutional k-means algorithm (Coates et al., 2011). Unlike convolutional neural networks (CNNs), a single vector/layer of convolution mask responses is used to classify patches. An initial coarse detection considers both local and neighboring patch responses, followed by refinement using varying aspect ratios and rotations for a smaller local detection window. Different levels of visual detail from the ground truth are utilized in each step, first using constraints on bounding box intersections, and then a combination of bounding box and pixel intersections. Combining masks from different Convolutional k-means initializations, e.g., seeded using random vectors and then support vectors, improves performance. The Word-Graph algorithm uses contextual information to improve word segmentation and to prune false character detections based on visual features and spatial context. Our system obtains pixel, character, and word detection f-measures of 93.14%, 90.26%, and 86.77%, respectively, on the ICDAR 2015 Robust Reading Focused Scene Text dataset, outperforming state-of-the-art systems and producing highly accurate text detection masks at the pixel level.
    To investigate the utility of our feature learning approach for other image types, we perform tests on 8-bit greyscale USPTO patent drawing diagram images. An ensemble of AdaBoost classifiers with different convolutional features (MetaBoost) is used to classify patches as text or background. The Tesseract OCR system is used to recognize characters in detected labels and enhance performance. With appropriate pre-processing and post-processing, f-measures of 82% for part label location, and 73% for valid part label locations and strings, are obtained, which are the best obtained to date for the USPTO patent diagram dataset used in our experiments. To sum up, an intelligent refinement of Convolutional k-means-based feature learning and novel automatic classification methods are proposed for text detection, obtaining state-of-the-art results without the need for strong prior knowledge. Different ground truth representations, along with features including edges, color, shape, and spatial relationships, are used coherently to improve accuracy. Different variations of feature learning are explored, e.g., support vector-seeded clustering and MetaBoost, with results suggesting that increased diversity in learned features benefits convolution-based text detectors
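    The Convolutional k-means step the abstract references can be sketched as spherical k-means over normalized patches, in the spirit of Coates et al. (2011): patches and centroids are kept unit-length, and assignment uses the dot product rather than Euclidean distance. The cluster count, iteration count, and normalization details below are illustrative assumptions:

```python
import numpy as np

def convolutional_kmeans(patches, k=4, iters=10, seed=0):
    """Learn k unit-norm convolution masks from flattened image patches.

    Spherical k-means sketch: patches are mean-subtracted and normalized,
    centroids are unit vectors, and each patch is assigned to the centroid
    with the largest dot-product response.
    """
    rng = np.random.default_rng(seed)
    X = patches - patches.mean(axis=1, keepdims=True)        # contrast-normalize
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    D = rng.standard_normal((k, X.shape[1]))                 # random init (the thesis
    D /= np.linalg.norm(D, axis=1, keepdims=True)            # also tries SV seeding)
    for _ in range(iters):
        assign = np.argmax(X @ D.T, axis=1)                  # nearest by dot product
        for c in range(k):
            members = X[assign == c]
            if len(members):
                D[c] = members.sum(axis=0)                   # update toward members
        D /= np.linalg.norm(D, axis=1, keepdims=True) + 1e-8 # re-project to unit norm
    return D
```

    Reshaping each learned row back to the patch shape yields a convolution mask; the detector then classifies a window from the vector of its mask responses, without the stacked layers of a CNN.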

    Advances in Character Recognition

    This book presents advances in character recognition, and it consists of 12 chapters covering a wide range of topics on different aspects of character recognition. Hopefully, this book will serve as a reference source for academic research, for professionals working in the character recognition field, and for everyone interested in the subject

    Deep learning of brain asymmetry digital biomarkers to support early diagnosis of cognitive decline and dementia

    Early identification of degenerative processes in the human brain is essential for proper care and treatment. This may involve different instrumental diagnostic methods, including the most popular computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) scans. These technologies provide detailed information about the shape, size, and function of the human brain. Structural and functional cerebral changes can be detected by computational algorithms and used to diagnose dementia and its stages (amnestic early mild cognitive impairment - EMCI, Alzheimer's Disease - AD), and they can help monitor the progress of the disease. Shifts in the degree of asymmetry between the left and right hemispheres indicate the initiation or development of a pathological process in the brain. In this vein, this study proposes a new digital biomarker for the diagnosis of early dementia based on the detection of image asymmetries and a cross-sectional comparison of NC (cognitively normal), EMCI, and AD subjects. Features of brain asymmetry extracted from MRI scans in the ADNI and OASIS databases are used to analyze structural brain changes and for machine learning classification of the pathology. The experimental part of the study includes results from supervised machine learning algorithms and transfer learning architectures of convolutional neural networks for distinguishing between cognitively normal subjects and patients with early or progressive dementia. The proposed pipeline offers a low-cost imaging biomarker for the classification of dementia. It may also prove helpful for other brain degenerative disorders accompanied by changes in brain asymmetry