
    A PCNN Framework for Blood Cell Image Segmentation

    This research presents novel methods for segmenting digital blood cell images within a Pulse Coupled Neural Network (PCNN) framework. A blood cell image contains the different types of blood cells found in the peripheral blood stream: red blood cells (RBCs), white blood cells (WBCs), and platelets. WBCs can be classified into five normal types (neutrophil, monocyte, lymphocyte, eosinophil, and basophil) as well as abnormal types such as lymphoblasts. This research focuses on identifying and counting RBCs, the normal WBC types, and lymphoblasts. The total RBC and WBC counts, together with the classification of WBCs, have important medical significance: they provide a physician with valuable information for diagnosing diseases such as leukemia. The approach comprises two phases, segmentation and cell separation, followed by classification of WBC types including detection of lymphoblasts. The first phase presents two PCNN-based segmentation methods, one using a standard PCNN and one using a region-growing PCNN with a maximum region size, followed by a separation method that combines the Circular Hough Transform (CHT) with a separation algorithm to locate each RBC and WBC object and extract it into its own subimage. The second phase presents a PCNN-based WBC classification method. It uses a PCNN to capture the texture features of an image as a sequence of entropy values known as a texture vector. First, the parameters of the texture-vector PCNN are defined; the network is then used to produce texture vectors for the training images, so that each cell type is represented by several texture vectors across its instances. Given a test image to be classified, the same PCNN captures its texture vector, which is compared against the training vectors for classification.
    This two-phase approach yields metrics based on RBC and WBC counts, WBC classification, and identification of lymphoblasts. Both the standard and region-growing PCNNs successfully segmented RBC and WBC objects, with better accuracy from the standard PCNN. The separation method introduced in this research provided accurate WBC counts but less accurate RBC counts. The WBC subimages created by the separation method facilitated cell counting and WBC classification. Using a standard PCNN as a WBC classifier, also introduced in this research, proved successful for both classification and lymphoblast detection. While RBC accuracy was low, WBC accuracy for total counts, classification, and lymphoblast detection was above 96% overall.
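The texture-vector idea described in the abstract can be sketched in a few lines: a simplified PCNN is iterated over the image, and the binary entropy of each pulse output is recorded as one element of the texture vector. This is a minimal sketch, not the thesis's exact network; the parameter values (`beta`, `decay`, `v_theta`) and the 4-neighbor linking are assumptions made for illustration.

```python
import numpy as np

def pcnn_texture_vector(img, n_iter=20, beta=0.2, decay=0.8, v_theta=5.0):
    """Entropy sequence of PCNN pulse outputs (simplified model, assumed parameters)."""
    F = img.astype(float) / max(img.max(), 1e-9)    # feeding input = normalized intensity
    Y = np.zeros_like(F)                            # binary pulse output
    theta = np.ones_like(F)                         # dynamic threshold
    vec = []
    for _ in range(n_iter):
        # Linking input: sum of pulses from the 4-connected neighbors.
        L = (np.roll(Y, 1, 0) + np.roll(Y, -1, 0) +
             np.roll(Y, 1, 1) + np.roll(Y, -1, 1))
        U = F * (1.0 + beta * L)                    # internal activity
        Y = (U > theta).astype(float)               # neurons pulse when activity exceeds threshold
        theta = decay * theta + v_theta * Y         # threshold decays, then jumps after a pulse
        p = Y.mean()                                # fraction of pulsing pixels
        # Binary entropy of the pulse image becomes one texture-vector element.
        H = 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)
        vec.append(H)
    return np.array(vec)
```

A test image would then be classified by comparing its vector against the stored training vectors, for example by minimum Euclidean distance.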

    Advanced Computational Methods for Oncological Image Analysis

    Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence), together with clinicians’ unique knowledge, can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve the repeatability of results in disease diagnosis and guide clinicians toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased in order to effectively perform image processing operations (segmentation, co-registration, classification, and dimensionality reduction) and multi-omics data integration.

    Multi-scale automatic segmentation of high-spatial-resolution satellite images using watersheds

    Extracting useful information from satellite images for higher-level applications, such as road network extraction and updating or city planning, is a very important and active research area. Pixel-based techniques are becoming insufficient for this task as the spatial resolution of satellite imaging sensors increases. Therefore, the use of object-based techniques becomes indispensable, and the choice of segmentation method is crucial for such techniques. In this thesis, various segmentation algorithms applied in the remote sensing literature are presented, and a segmentation process based on watersheds and multi-scale segmentation is proposed for use as the segmentation step of an object-based classifier. For every step of the proposed segmentation process, qualitative and quantitative comparisons with alternative approaches are made, and the ones that provide the best performance are incorporated into the proposed algorithm. Also, an unsupervised segmentation accuracy metric is proposed to determine all parameters of the algorithm, making the proposed segmentation algorithm fully automatic. Experiments on a database of images taken from Google Earth® software provide promising results. M.S. - Master of Science
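The watershed step at the core of the proposed process can be illustrated with a toy example. This is not the thesis's multi-scale pipeline; it is a minimal marker-based watershed using SciPy's `watershed_ift` on synthetic data, with the marker positions and the inverted-intensity "elevation" surface chosen purely for illustration.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic tile: two bright objects on a dark background.
img = np.zeros((40, 40), dtype=np.uint8)
img[5:15, 5:15] = 200
img[25:35, 25:35] = 200

# Watershed floods an "elevation" surface; invert so objects are basins.
elevation = (255 - img).astype(np.uint8)

# One positive marker inside each object, plus a negative background marker.
markers = np.zeros(img.shape, dtype=np.int16)
markers[10, 10] = 1
markers[30, 30] = 2
markers[0, 0] = -1

# Each pixel is assigned to the marker reachable at the lowest flooding cost.
labels = ndi.watershed_ift(elevation, markers)
```

A multi-scale variant would repeat this at several smoothing scales and select among the resulting segmentations, which is where an unsupervised accuracy metric such as the one proposed becomes useful.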

    Geostatistical and statistical classification of sea-ice properties and provinces from SAR data

    Recent drastic reductions in the Arctic sea-ice cover have raised interest in understanding the role of sea ice in the global system and pointed out a need to understand the physical processes that lead to such changes. Satellite remote-sensing data provide important information about remote ice areas, and Synthetic Aperture Radar (SAR) data have the advantages of penetrating the omnipresent cloud cover and of high spatial resolution. A challenge addressed in this paper is how to extract information on sea-ice types and sea-ice processes from SAR data. We introduce, validate and apply geostatistical and statistical approaches to automated classification of sea ice from SAR data, to be used as individual tools for mapping sea-ice properties and provinces or in combination. A key concept of the geostatistical classification method is the analysis of spatial surface structures and their anisotropies, and more generally of spatial surface roughness, at variable, intermediate-sized scales. The geostatistical approach utilizes vario parameters extracted from directional vario functions; the parameters can be mapped or combined into feature vectors for classification. The method is flexible with respect to window sizes and parameter types and detects anisotropies. In two applications to RADARSAT and ERS-2 SAR data from the area near Point Barrow, Alaska, it is demonstrated that vario-parameter maps may be utilized to distinguish regions of different sea-ice characteristics in the Beaufort Sea, the Chukchi Sea and Elson Lagoon. In a third and a fourth case study, the analysis is taken further by utilizing multi-parameter feature vectors as inputs for unsupervised and supervised statistical classification. Field measurements and high-resolution aerial observations serve as the basis for validation of the geostatistical-statistical classification methods.
    A combination of supervised classification and vario-parameter mapping yields the best results, correctly identifying several sea-ice provinces in the shore-fast ice and the pack ice. Notably, sea ice does not have to be static to be classifiable with respect to spatial structures. Consequently, the geostatistical-statistical classification may be applied to detect changes in ice dynamics, kinematics or environmental conditions, such as increased melt ponding, increased snowfall or changes in the equilibrium line.
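The directional vario functions underlying the method can be illustrated with an empirical semivariogram computed directly from image values along one axis, plus a simple anisotropy ratio. This is a sketch of the general construction only; the function names and the single-lag anisotropy ratio are illustrative, not the paper's exact parameter set.

```python
import numpy as np

def directional_variogram(z, axis, max_lag=8):
    """Empirical semivariogram gamma(h) for lags h = 1..max_lag along one image axis."""
    z = np.asarray(z, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        if axis == 0:
            d = z[h:, :] - z[:-h, :]     # vertical pixel pairs at lag h
        else:
            d = z[:, h:] - z[:, :-h]     # horizontal pixel pairs at lag h
        gammas.append(0.5 * np.mean(d ** 2))
    return np.array(gammas)

def anisotropy_ratio(z, lag=1):
    """Ratio of vertical to horizontal semivariance at a given lag (directional anisotropy)."""
    gv = directional_variogram(z, axis=0, max_lag=lag)[-1]
    gh = directional_variogram(z, axis=1, max_lag=lag)[-1]
    return gv / max(gh, 1e-12)
```

For a surface with ridges running along one direction, the variogram across the ridges rises much faster than the one along them, which is the kind of directional signature the vario parameters capture for classification.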

    Towards Automated Analysis of Urban Infrastructure after Natural Disasters using Remote Sensing

    Natural disasters, such as earthquakes and hurricanes, are an unpreventable component of the complex and changing environment we live in. Continued research and advancement in disaster mitigation through prediction of and preparation for impacts have undoubtedly saved many lives and prevented significant amounts of damage, but it is inevitable that some events will cause destruction and loss of life due to their sheer magnitude and proximity to built-up areas. Consequently, the development of effective and efficient disaster response methodologies is a research topic of great interest. A successful emergency response depends on a comprehensive understanding of the scenario at hand. It is crucial to assess the state of the infrastructure and transportation network so that resources can be allocated efficiently. Obstructions to the roadways are one of the biggest inhibitors of effective emergency response. To this end, airborne and satellite remote sensing platforms have been used extensively to collect overhead imagery and other types of data in the event of a natural disaster. The ability of these platforms to rapidly probe large areas is ideal in a situation where a timely response could save lives. Typically, imagery is delivered to emergency management officials, who then visually inspect it to determine where roads are obstructed and buildings have collapsed. Manual interpretation of imagery is a slow process and is limited by the quality of the imagery and what the human eye can perceive. To overcome the time and resource limitations of manual interpretation, this dissertation investigated the feasibility of performing fully automated post-disaster analysis of roadways and buildings using airborne remote sensing data. First, a novel algorithm for detecting roadway debris piles from airborne light detection and ranging (lidar) point clouds and estimating their volumes is presented.
    Next, a method for detecting roadway flooding in aerial imagery and estimating the depth of the water using digital elevation models (DEMs) is introduced. Finally, a technique for assessing building damage from airborne lidar point clouds is presented. All three methods are demonstrated using remotely sensed data collected in the wake of recent natural disasters. The research presented in this dissertation builds a case for the automatic, algorithmic analysis of road networks and buildings after a disaster. By reducing the latency between the disaster and the delivery of the damage maps needed to make executive decisions about resource allocation and to perform search and rescue missions, significant reductions in losses could be achieved.
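The DEM-based depth-estimation step can be sketched simply: given a flood mask (assumed here to be already detected from imagery) and a DEM, take the water surface elevation from the DEM values along the mask boundary and subtract the ground elevation inside the mask. The median-of-boundary estimator and the function below are illustrative assumptions, not necessarily the dissertation's exact formulation.

```python
import numpy as np

def flood_depth(dem, flood_mask):
    """Estimate water depth inside a flood mask from a DEM (simplified sketch).

    The water surface elevation is taken as the median DEM value along the
    flood-mask boundary; depth = surface - ground inside the mask, else 0.
    """
    mask = np.asarray(flood_mask, dtype=bool)
    # Pad with True so the image border itself does not count as a boundary.
    p = np.pad(mask, 1, constant_values=True)
    # A mask pixel is interior if all four 4-neighbors are also flooded.
    interior = p[2:, 1:-1] & p[:-2, 1:-1] & p[1:-1, 2:] & p[1:-1, :-2]
    boundary = mask & ~interior
    level = np.median(dem[boundary])                       # estimated water surface elevation
    depth = np.where(mask, np.clip(level - dem, 0.0, None), 0.0)
    return level, depth
```

On a sloping DEM where everything below a contour is flooded, the boundary sits on that contour, so the recovered level matches the contour elevation and the deepest water appears at the lowest ground.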

    Novel neural network-based algorithms for urban classification and change detection from satellite imagery

    Human activity dominates the Earth's ecosystems with structural modifications. The rapid population growth over recent decades and the concentration of this population in and around urban areas have significantly impacted the environment. Although urban areas represent a small fraction of the land surface, they affect large areas due to the magnitude of the associated energy, food, water, and raw material demands. Reliable information on populated areas is essential for urban planning and strategic decision making, for example by civil protection departments in cases of emergency. Remote sensing is increasingly being used as a timely and cost-effective source of information in a wide number of applications, from environment monitoring to location-aware systems.
    However, mapping human settlements represents one of the most challenging areas for the remote sensing community due to their high spatial and spectral diversity. From the physical-composition point of view, several different materials can be used for the same man-made element (for example, building roofs can be made of clay tiles, metal, asphalt, concrete, plastic, grass or stones). On the other hand, the same material can be used for different purposes (for example, concrete can be found in paved roads or building roofs). Moreover, urban areas are often made up of materials present in the surrounding region, making them indistinguishable from natural or agricultural areas (examples include unpaved roads and bare soil, clay tiles and bare soil, or parks and vegetated open spaces) [1]. During the last two decades, significant progress has been made in developing and launching satellites with instruments, in both the optical/infrared and microwave regions of the spectrum, well suited for Earth observation with increasingly fine spatial, spectral and temporal resolution. Fine spatial sensors with metric or sub-metric resolution allow the detection of small-scale objects, such as elements of residential housing, commercial buildings, transportation systems and utilities. Multi-spectral and hyper-spectral remote sensing systems provide additional discriminative features for classes that are spectrally similar, due to their higher spectral resolution. The temporal component, integrated with the spectral and spatial dimensions, provides essential information, for example on vegetation dynamics. Moreover, the delineation of temporally homogeneous patches reduces the effect of local spatial heterogeneity that often masks larger spatial patterns. Nevertheless, higher-resolution (spatial, spectral or temporal) imagery comes with limits and challenges that equal its advantages and improvements, and this is valid for both optical and synthetic aperture radar data [2].
    This thesis addresses the different aspects of mapping and change detection of human settlements, discussing the main issues related to the use of optical and synthetic aperture radar data. Novel approaches and techniques are proposed and critically discussed to cope with the challenges of urban areas, including data fusion, image information mining, and active learning. The chapters are subdivided into three main parts. Part I addresses the theoretical aspects of neural networks, including their different architectures, design, and training. The proposed neural network-based algorithms, their applications to classification and change detection problems, and the experimental results are described in Parts II and III.
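For the change-detection task on SAR data, a common classical baseline that neural approaches build on is the log-ratio operator between two co-registered acquisitions. The sketch below shows only this baseline on synthetic values; it is not the thesis's neural method, and the threshold value is an assumption chosen for the toy example.

```python
import numpy as np

def log_ratio_change(before, after, thresh=1.0, eps=1e-6):
    """Classical log-ratio change operator for a pair of SAR intensity images."""
    lr = np.abs(np.log((after + eps) / (before + eps)))
    return lr > thresh          # True where backscatter changed strongly

# Toy example: a 3x3 patch quadruples in intensity, so log(4) ~= 1.39 > 1.
before = np.full((8, 8), 10.0)
after = before.copy()
after[2:5, 2:5] = 40.0
change = log_ratio_change(before, after)
```

The log ratio is preferred over a plain difference for SAR because speckle is multiplicative, so the ratio statistic is less sensitive to the local backscatter level.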

    Advances in Image Processing, Analysis and Recognition Technology

    For many decades, researchers have been trying to make computer analysis of images as effective as human vision. For this purpose, many algorithms and systems have been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment but quite often significantly increase our safety. In fact, the range of practical implementations of image processing algorithms is particularly wide. Moreover, the rapid growth of computational complexity and computer efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues remain, resulting in the need for the development of novel approaches.

    Fine Art Pattern Extraction and Recognition

    This is a reprint of articles from the Special Issue published online in the open access journal Journal of Imaging (ISSN 2313-433X), available at: https://www.mdpi.com/journal/jimaging/special issues/faper2020.