    Hybrid Swarm Intelligence Method for Post Clustering Content Based Image Retrieval

    Content Based Image Retrieval (CBIR) is one of the most promising methods for image retrieval, where searching and retrieving images from a large-scale image database is a critical task. In CBIR, visual features such as color, shape, and texture are extracted in order to match a query image against the stored database images. Matching the query image against every image in a large-scale database requires a large number of disc scans, which in turn slows down the system's performance. The proposed work suggests an approach for post-clustering CBIR, in which the database images are grouped into optimized clusters before the retrieval process. Various clustering algorithms were implemented and their results compared. Among them, the hybrid ACPSO algorithm was found to perform better than basic algorithms such as k-means, ACO, and PSO. Hybrid ACPSO is able to produce good cluster initialization and form globally coherent clusters. This paper describes work in progress: the clustering module has been implemented and intermediate results are reported. The resulting clusters will subsequently be used for effective Content Based Image Retrieval.
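    The hybrid idea sketched in this abstract (a swarm search for good centroids, followed by conventional clustering) can be illustrated with a short, hedged example. The snippet below is not the authors' ACPSO: it is a minimal PSO-seeded k-means in Python, and the feature dimensions, swarm parameters and helper names are illustrative assumptions.

```python
# Minimal sketch, not the authors' exact ACPSO: a particle swarm searches for
# good cluster centroids, and the best particle seeds a standard k-means
# refinement. Feature vectors and parameter values are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def sse(centroids, X):
    """Sum of squared distances from each point to its nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((d.min(axis=1) ** 2).sum())

def pso_kmeans(X, k, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    # Each particle encodes k candidate centroids drawn from the data.
    pos = X[rng.integers(0, n, size=(n_particles, k))].copy()
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([sse(p, X) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        fit = np.array([sse(p, X) for p in pos])
        improved = fit < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmin()].copy()

    # Refine the swarm's best centroids with standard k-means.
    return KMeans(n_clusters=k, init=gbest, n_init=1).fit(X)

# Usage on random stand-in feature vectors (e.g. colour/texture descriptors per image):
features = np.random.rand(500, 32)
model = pso_kmeans(features, k=8)
print(model.inertia_)
```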

    Data fusion by using machine learning and computational intelligence techniques for medical image analysis and classification

    Data fusion is the process of integrating information from multiple sources to produce specific, comprehensive, unified data about an entity. Data fusion is categorized as low level, feature level, or decision level. This research focuses on investigating and developing feature- and decision-level data fusion for automated image analysis and classification. The common procedure for solving these problems can be described as: 1) process the image for region-of-interest detection, 2) extract features from the region of interest, and 3) create a learning model based on the feature data. Image processing techniques, including edge detection, histogram thresholding, and a color drop algorithm, were used to determine the region of interest. The extracted features were low-level features, including textural, color, and symmetry features. For image analysis and classification, feature- and decision-level data fusion techniques are investigated for model learning, using and integrating computational intelligence and machine learning techniques. These techniques include artificial neural networks, evolutionary algorithms, particle swarm optimization, decision trees, clustering algorithms, fuzzy logic inference, and voting algorithms. This work presents both the investigation and development of data fusion techniques for the application areas of dermoscopy skin lesion discrimination, content-based image retrieval, and graphic image type classification.
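    The distinction between feature-level and decision-level fusion described above can be illustrated with a brief, hedged sketch; it is not the dissertation's pipeline, and the random feature arrays, classifier choices and parameters are placeholders.

```python
# Hedged sketch of the two fusion levels: feature-level fusion concatenates
# descriptors from two sources, decision-level fusion combines independently
# trained classifiers by voting. Data and parameters are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
color_feats = rng.random((300, 16))      # e.g. colour histogram per image
texture_feats = rng.random((300, 24))    # e.g. texture descriptor per image
labels = rng.integers(0, 2, 300)         # binary class labels

# Feature-level fusion: concatenate the per-source feature vectors.
fused = np.hstack([color_feats, texture_feats])
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)

# Decision-level fusion: majority vote over independently trained models.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("voting accuracy:", ensemble.score(X_te, y_te))
```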

    Multi-layer Architecture For Storing Visual Data Based on WCF and Microsoft SQL Server Database

    In this paper we present a novel architecture for storing visual data. Effective storing, browsing and searching of image collections is one of the most important challenges of computer science. Designing an architecture for storing such data requires a set of tools and frameworks, such as SQL database management systems and service-oriented frameworks. The proposed solution is based on a multi-layer architecture, which allows any component to be replaced without recompiling the other components. The approach contains five components, i.e. Model, Base Engine, Concrete Engine, CBIR service and Presentation. They are based on two well-known design patterns: Dependency Injection and Inversion of Control. For experimental purposes we implemented the SURF local interest point detector as the feature extractor and k-means clustering as the indexer. The presented architecture is intended for content-based retrieval system simulation purposes as well as for real-world CBIR tasks. Comment: Accepted for the 14th International Conference on Artificial Intelligence and Soft Computing, ICAISC, June 14-18, 2015, Zakopane, Poland.
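    The swappable-component design described above can be sketched in a hedged, language-neutral way (the paper's implementation targets WCF/C# and Microsoft SQL Server). In the Python sketch below, the class names FeatureExtractor, Indexer, RandomPatchExtractor, KMeansIndexer and CBIRService are hypothetical; the point is only that the service depends on injected abstractions, so the extractor or indexer can be replaced without touching the rest.

```python
# Hedged sketch of Dependency Injection / Inversion of Control for a CBIR
# pipeline. All class names are hypothetical stand-ins for the paper's layers.
from abc import ABC, abstractmethod
import numpy as np
from sklearn.cluster import KMeans

class FeatureExtractor(ABC):
    @abstractmethod
    def extract(self, image: np.ndarray) -> np.ndarray: ...

class Indexer(ABC):
    @abstractmethod
    def build(self, descriptors: np.ndarray) -> None: ...
    @abstractmethod
    def assign(self, descriptors: np.ndarray) -> np.ndarray: ...

class RandomPatchExtractor(FeatureExtractor):
    """Placeholder extractor; a SURF/ORB-based one would slot in identically."""
    def extract(self, image):
        rng = np.random.default_rng(int(image.sum()) % (2**32))
        return rng.random((50, 64))          # 50 local descriptors of length 64

class KMeansIndexer(Indexer):
    """Bag-of-visual-words style indexer built on k-means."""
    def __init__(self, n_words=32):
        self.kmeans = KMeans(n_clusters=n_words, n_init=10)
    def build(self, descriptors):
        self.kmeans.fit(descriptors)
    def assign(self, descriptors):
        return np.bincount(self.kmeans.predict(descriptors),
                           minlength=self.kmeans.n_clusters)

class CBIRService:
    """Depends only on the injected abstractions (Dependency Injection)."""
    def __init__(self, extractor: FeatureExtractor, indexer: Indexer):
        self.extractor, self.indexer = extractor, indexer
    def index_collection(self, images):
        all_desc = np.vstack([self.extractor.extract(img) for img in images])
        self.indexer.build(all_desc)
        return [self.indexer.assign(self.extractor.extract(img)) for img in images]

# Usage: swap RandomPatchExtractor for a real SURF wrapper without changing CBIRService.
images = [np.random.rand(64, 64) for _ in range(10)]
service = CBIRService(RandomPatchExtractor(), KMeansIndexer(n_words=16))
histograms = service.index_collection(images)
print(len(histograms), histograms[0].shape)
```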

    Image retrieval based on colour and improved NMI texture features

    This paper proposes an improved method for extracting NMI features. The method first uses Particle Swarm Optimization to optimize the two-dimensional maximum between-class variance threshold (2OTSU); the optimized 2OTSU is then introduced into the Pulse Coupled Neural Network (PCNN) to automatically determine the number of loop iterations, and the improved PCNN is used to extract the NMI features of the image. To address the low accuracy of single-feature retrieval, the paper also proposes a new multi-feature fusion method for image retrieval. It combines HSV colour features with texture features, where the texture feature extraction methods include the Grey Level Co-occurrence Matrix (GLCM), Local Binary Pattern (LBP) and the improved PCNN. The experimental results show that, compared with similar algorithms, the retrieval accuracy of this method is improved by 13.6% on the Corel-1k dataset, by 13.4% on the AT&T dataset, and by 17.7% on the FD-XJ dataset. The proposed algorithm therefore offers better retrieval performance and robustness than existing image retrieval algorithms based on multi-feature fusion.
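    The standard components of the fusion described above (HSV colour histogram, GLCM statistics and an LBP histogram) can be shown in a short, hedged sketch; the paper's improved-PCNN NMI feature is not reproduced here, and the bin counts and GLCM parameters are illustrative choices.

```python
# Hedged sketch of multi-feature fusion limited to the standard GLCM, LBP and
# HSV components. Bin counts and GLCM parameters are illustrative choices.
import numpy as np
from skimage.color import rgb2hsv, rgb2gray
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def fused_descriptor(rgb):
    gray = (rgb2gray(rgb) * 255).astype(np.uint8)

    # HSV colour histogram (8 bins per channel, concatenated).
    hsv = rgb2hsv(rgb)
    hsv_hist = np.concatenate(
        [np.histogram(hsv[..., c], bins=8, range=(0, 1), density=True)[0]
         for c in range(3)])

    # GLCM texture statistics at one distance and four angles.
    glcm = graycomatrix(gray, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.concatenate(
        [graycoprops(glcm, p).ravel()
         for p in ("contrast", "homogeneity", "energy", "correlation")])

    # Uniform LBP histogram (P=8 neighbours, radius 1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([hsv_hist, glcm_feats, lbp_hist])

# Usage on a random stand-in image; retrieval would compare such descriptors
# (e.g. by Euclidean or histogram distance) between query and database images.
image = np.random.rand(128, 128, 3)
print(fused_descriptor(image).shape)
```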

    Reverse Engineering of Mechanical Parts: a Template-Based Approach

    Template-based reverse engineering approaches represent a relatively poorly explored strategy in the field of CAD reconstruction from polygonal models. Inspired by recent works suggesting the possibility of exploiting a parametric description (i.e. a CAD template) of the object to be reconstructed in order to retrieve a meaningful digital representation, a novel reverse engineering approach for reconstructing CAD models from 3D mesh data is proposed. The reconstruction is performed using a CAD template whose feature tree and geometric constraints are defined according to a priori information about the physical object. The CAD template is fitted to the mesh data, optimizing its dimensional parameters and position/orientation by means of a particle swarm optimization algorithm. As a result, a parametric CAD model that fully satisfies the imposed geometric relations is produced, and a feature tree defining an associative modelling history is available to the reverse engineer. The proposed implementation exploits a cooperation between a CAD software package (Siemens NX) and a numerical software environment (MATLAB). Five reconstruction tests, covering both synthetic and real-scanned mesh data, are presented and discussed in the manuscript; the results are compared with models generated by state-of-the-art reverse engineering software, and key aspects to be addressed in future work are outlined.
    Highlights: A novel CAD reconstruction method fitting a CAD template model to mesh data. A feature-based, parametric-associative modelling history is retrieved. The fitting process is controlled by a particle swarm optimization algorithm. The accuracy of reconstructed models is comparable to or better than state-of-the-art results. Computational costs and required time are currently considerable.
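    The core fitting loop described above (sample the parametric template, measure its distance to the mesh, optimize the dimensional parameters) can be illustrated with a hedged sketch. The paper drives the fit with particle swarm optimization inside Siemens NX/MATLAB; the sketch below deliberately swaps in a generic SciPy optimizer and uses a parametric cylinder as a stand-in template, so every geometric and numerical detail is an assumption.

```python
# Hedged sketch of template fitting: a parametric cylinder stands in for the
# CAD template, a symmetric point-to-point error measures template/mesh
# deviation, and a generic Nelder-Mead optimizer replaces the paper's PSO.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def sample_cylinder(radius, height, n=400, seed=0):
    """Sample points on the lateral surface of an axis-aligned cylinder."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    z = rng.uniform(0, height, n)
    return np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])

def fit_template(mesh_points):
    mesh_tree = cKDTree(mesh_points)

    def cost(params):
        radius, height = params
        if radius <= 0 or height <= 0:               # keep parameters physically valid
            return 1e9
        sampled = sample_cylinder(radius, height)
        d_t2m, _ = mesh_tree.query(sampled)          # template -> mesh distances
        d_m2t, _ = cKDTree(sampled).query(mesh_points)  # mesh -> template distances
        return float(np.mean(d_t2m ** 2) + np.mean(d_m2t ** 2))

    return minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")

# Usage: a noisy "scanned" cylinder (radius 2, height 5) as stand-in mesh data.
true_pts = sample_cylinder(2.0, 5.0, n=2000, seed=1)
mesh_points = true_pts + np.random.normal(scale=0.01, size=true_pts.shape)
result = fit_template(mesh_points)
print("estimated radius/height:", result.x)
```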

    HSO: A Hybrid Swarm Optimization Algorithm for Reducing Energy Consumption in the Cloudlets

    Mobile Cloud Computing (MCC) is an emerging technology for improving mobile service quality. MCC resources are dynamically allocated to users who pay for them based on their needs. The drawback of this process is that it is prone to failure and demands a high energy input. Resource providers mainly focus on resource performance and utilization, with particular consideration of service level agreement (SLA) constraints. Resource performance can be achieved through virtualization techniques, which facilitate the sharing of resource providers' information between different virtual machines. To address these issues, this study sets forth a novel algorithm (HSO) that optimizes energy-efficient resource management in the cloud; the proposed method uses a cost- and runtime-effective model to create a minimum-energy configuration of the cloud compute nodes while guaranteeing that all minimum performance requirements are maintained. The cost functions cover energy, performance and reliability concerns. With the proposed model, the performance of the hybrid swarm algorithm increased significantly when the number of tasks was optimized in simulation (power consumption was reduced by 42%). The simulation studies also showed that including the presented algorithms reduced the number of required calculations by about 20% compared with the traditional static approach. Node loss also decreased, allowing the optimization algorithm to impose minimal overhead on cloud compute resources while still saving significant energy. In conclusion, this study presents an energy-aware optimization model that describes the required system constraints, together with proposed techniques for determining the best overall solution.
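    The kind of weighted cost model described above (energy, performance and reliability terms evaluated over task-to-node assignments) can be illustrated with a hedged sketch. It is not the paper's HSO algorithm: a simple random search stands in for the hybrid swarm optimizer, and every figure in the model is an illustrative assumption.

```python
# Hedged sketch of an energy-aware cost model over task-to-node assignments.
# A random search stands in for the hybrid swarm optimizer; all figures,
# weights and term definitions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_nodes = 40, 6
task_load = rng.uniform(1.0, 4.0, n_tasks)        # CPU demand per task
idle_power = rng.uniform(80, 120, n_nodes)        # watts when a node is on
power_per_load = rng.uniform(10, 20, n_nodes)     # extra watts per unit load
failure_rate = rng.uniform(0.01, 0.05, n_nodes)   # per-node failure probability

def cost(assignment, w_energy=1.0, w_perf=50.0, w_rel=500.0):
    node_load = np.bincount(assignment, weights=task_load, minlength=n_nodes)
    active = node_load > 0
    energy = np.sum(idle_power[active]) + np.sum(power_per_load * node_load)
    perf = node_load.max() - node_load[active].min()   # load imbalance penalty
    rel = np.sum(failure_rate[assignment])             # expected task failures
    return w_energy * energy + w_perf * perf + w_rel * rel

# Random-search stand-in for the swarm: keep the best of many candidate placements.
best = rng.integers(0, n_nodes, n_tasks)
for _ in range(2000):
    candidate = best.copy()
    candidate[rng.integers(n_tasks)] = rng.integers(n_nodes)   # mutate one task
    if cost(candidate) < cost(best):
        best = candidate

print("best cost:", round(cost(best), 1), "active nodes:", len(set(best)))
```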