
    Local Image Patterns for Counterfeit Coin Detection and Automatic Coin Grading

    Coins are an essential part of our life, and we still use them for everyday transactions. Counterfeiting of coins has always been a problem, but it has worsened over time as counterfeiting technology has advanced, making detection more difficult. In this thesis, we propose a counterfeit coin detection method that is robust and applicable to all types of coins, whether they carry letters, images, or both. We use two different feature extraction methods: SIFT (Scale Invariant Feature Transform) features and RFR (Rotation and Flipping invariant Regional Binary Patterns) features, which together make our system complete and very generic. The feature extraction methods used here are scale, rotation, illumination, and flipping invariant. We concatenate both feature sets and use them to train our classifiers. The two feature sets complement each other: SIFT provides the most discriminative features, which are scale and rotation invariant but lose spatial information when clustered, while the second feature set captures the spatial structure of each coin image. We train SVM classifiers on the two sets of features extracted from each image. The method achieves an accuracy of 99.61% with both high and low-resolution images. We also photographed the coins at 90° and 45° angles with a mobile phone camera to check the robustness of the proposed method, and we achieved promising results even with these low-resolution pictures. We also address coin grading, another problem in numismatic studies. The algorithm proposed above is customized for the coin grading problem: it estimates the wear on a coin and assigns it a grade. This grade can be used to remove low-quality coins from the system, which are otherwise sold online to coin collectors for considerable prices. Coin grading is currently done manually by coin experts and is a time-consuming and expensive process. We use digital images and apply computer vision and machine learning algorithms to calculate the wear on the coin and then assign it a grade based on its quality level. Our method calculates the amount of wear on coins and assigns them a label, achieving an accuracy of 98.5%.
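    As a rough illustration of the pipeline described in this abstract (not the authors' exact implementation), the Python sketch below extracts SIFT descriptors with OpenCV, quantizes them into a bag-of-words histogram, concatenates that histogram with a placeholder spatial descriptor standing in for the RFR features, and trains an SVM. The helper regional_binary_patterns, the vocabulary size, and the dataset layout (grayscale images plus genuine/counterfeit labels) are all assumptions made for the example.

        import cv2
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVC

        def sift_bow_histogram(image, kmeans):
            # Detect SIFT keypoints and quantize descriptors into a fixed-length histogram.
            sift = cv2.SIFT_create()
            _, desc = sift.detectAndCompute(image, None)
            hist = np.zeros(kmeans.n_clusters)
            if desc is not None:
                for word in kmeans.predict(desc.astype(np.float32)):
                    hist[word] += 1
            return hist / max(hist.sum(), 1)

        def regional_binary_patterns(image, grid=(8, 8)):
            # Placeholder for the rotation/flip-invariant regional binary patterns:
            # per-cell mean intensities used here purely as a spatial stand-in.
            h, w = image.shape
            cells = [image[i*h//grid[0]:(i+1)*h//grid[0], j*w//grid[1]:(j+1)*w//grid[1]].mean()
                     for i in range(grid[0]) for j in range(grid[1])]
            return np.array(cells) / 255.0

        def train_detector(images, labels, vocab_size=64):
            # images: grayscale coin images; labels: 1 = genuine, 0 = counterfeit (assumed).
            sift = cv2.SIFT_create()
            all_desc = []
            for img in images:
                _, d = sift.detectAndCompute(img, None)
                if d is not None:
                    all_desc.append(d)
            kmeans = KMeans(n_clusters=vocab_size).fit(np.vstack(all_desc))
            feats = [np.concatenate([sift_bow_histogram(img, kmeans),
                                     regional_binary_patterns(img)]) for img in images]
            return SVC(kernel='rbf').fit(feats, labels), kmeans

    The concatenation step is what gives the combined descriptor both the discriminative power of SIFT and a notion of where on the coin each region lies, which is the complementarity the abstract emphasizes.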

    Selection of Robust Features for Coin Recognition and Counterfeit Coin Detection

    Enormous numbers of coins have been used in daily life since ancient times. Aside from being a medium of exchange for goods and services, coins are among the most collected items worldwide. Alongside the increasing number of coins in use, the number of counterfeit coins released into circulation is on the rise. Some countries have started to take different security measures to detect and eliminate counterfeit coins. However, the current measures are expensive and often ineffective, as in the UK, which recently decided to replace the whole coin design and release a new coin incorporating a set of security features. The demand for a cost-effective and robust computer-aided system to classify and authenticate coins has increased as a result. In this thesis, the design and implementation of coin recognition and counterfeit coin detection methods are proposed. This involves studying different coin stamp features and analyzing the sets of features that can uniquely and precisely differentiate coins of different countries and reject counterfeit coins. In addition, a new character segmentation method tailored to characters in coin images is proposed in this thesis. The proposed method for character segmentation is independent of the language of those characters. The experiments were performed on different coins with various characters and languages, and the results show the effectiveness of the method in extracting characters from different coins. The proposed method is the first to address character segmentation from coins. Coin recognition has been investigated in several research studies and different features have been selected for that purpose. This thesis proposes a new coin recognition method that focuses on small parts of the coin (characters) instead of extracting features from the whole coin image as proposed by other researchers. The method is evaluated on coins from different countries with different complexities, sizes, and qualities. The experimental results show that the proposed method compares favorably with other methods and requires lower computational costs. Counterfeit coin detection is more challenging than coin recognition, since the differences between genuine and counterfeit coins are much smaller. High-quality forged coins are very similar to genuine coins, yet the coin stamp features are never identical. This thesis discusses two counterfeit coin detection methods based on different features. The first method consists of an ensemble of three classifiers, where a fine-tuned convolutional neural network is used to extract features from coins to train two classifiers, and the third classifier is trained on features extracted from the textual area of the coin. The second method uses sets of edge-based measures to track differences in the coin stamp's edges between the test coin and a set of reference coins; a binary classifier is then trained on the results of those measures. Finally, a series of experimental evaluations and tests was performed to assess the effectiveness of the proposed methods, showing that promising results have been achieved.
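    The second detection method described above compares a test coin's edge structure with that of reference coins and feeds the resulting measures to a binary classifier. The sketch below is a simplified interpretation, not the thesis's actual measures: Canny edge maps, edge-pixel overlap, and edge-density difference stand in for the edge-based measures, and logistic regression stands in for the binary classifier; edge_measures, train_edge_classifier, and the data layout are hypothetical.

        import cv2
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def edge_measures(test_img, ref_imgs, low=50, high=150):
            # Compare the test coin's edge map against each reference coin using two
            # simple edge-based measures: edge-pixel overlap and edge-density difference.
            test_edges = cv2.Canny(test_img, low, high) > 0
            feats = []
            for ref in ref_imgs:
                ref_edges = cv2.Canny(ref, low, high) > 0
                overlap = np.logical_and(test_edges, ref_edges).sum() / max(ref_edges.sum(), 1)
                density_diff = abs(test_edges.mean() - ref_edges.mean())
                feats.extend([overlap, density_diff])
            return np.array(feats)

        def train_edge_classifier(coin_imgs, refs, labels):
            # coin_imgs: uint8 grayscale coins to authenticate; refs: genuine reference coins;
            # labels: 1 = genuine, 0 = counterfeit (assumed layout).
            X = np.stack([edge_measures(img, refs) for img in coin_imgs])
            return LogisticRegression(max_iter=1000).fit(X, labels)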

    COINSTAC: A Privacy Enabled Model and Prototype for Leveraging and Processing Decentralized Brain Imaging Data

    The field of neuroimaging has embraced the need for sharing and collaboration. Data sharing mandates from public funding agencies and major journal publishers have spurred the development of data repositories and neuroinformatics consortia. However, efficient and effective data sharing still faces several hurdles. For example, open data sharing is on the rise but is not suitable for sensitive data that are not easily shared, such as genetics. Current approaches can be cumbersome (such as negotiating multiple data sharing agreements). There are also significant data transfer, organization, and computational challenges. Centralized repositories only partially address the issues. We propose a dynamic, decentralized platform for large-scale analyses called the Collaborative Informatics and Neuroimaging Suite Toolkit for Anonymous Computation (COINSTAC). The COINSTAC solution can include data missing from central repositories, allows pooling of both open and "closed" repositories by developing privacy-preserving versions of widely used algorithms, and incorporates the tools within an easy-to-use platform enabling distributed computation. We present an initial prototype system which we demonstrate on two multi-site data sets, without aggregating the data. In addition, by iterating across sites, the COINSTAC model enables meta-analytic solutions to converge to "pooled-data" solutions (i.e., as if the entire data were in hand). More advanced approaches such as feature generation, matrix factorization models, and preprocessing can be incorporated into such a model. In sum, COINSTAC enables access to the many currently unavailable data sets, a user-friendly privacy-enabled interface for decentralized analysis, and a powerful solution that complements existing data sharing solutions.
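    To make the "iterating across sites" idea concrete, here is a minimal toy sketch of a decentralized analysis in the COINSTAC spirit, not the actual COINSTAC software or API: each site keeps its data locally and only shares gradients of a ridge-regression loss, and the coordinator averages them so the shared weights approach the pooled-data solution. The function name, the data layout, and the hyperparameters are illustrative assumptions.

        import numpy as np

        def decentralized_ridge(site_data, dim, lr=0.01, lam=0.1, iters=500):
            # site_data: list of (X_i, y_i) arrays held privately by each site (assumed layout).
            w = np.zeros(dim)
            for _ in range(iters):
                grads = []
                for X, y in site_data:                      # computed locally at each site
                    residual = X @ w - y
                    grads.append(X.T @ residual / len(y) + lam * w)
                w -= lr * np.mean(grads, axis=0)            # only gradients leave the sites
            return w

    The key property is that raw imaging or genetic data never leave a site; only low-dimensional model updates are exchanged, which is what allows "closed" repositories to participate.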

    Machine Learning And Image Processing For Noise Removal And Robust Edge Detection In The Presence Of Mixed Noise

    The central goal of this dissertation is to design and model a smoothing filter, based on random single and mixed noise distributions, that attenuates the effect of noise while preserving edge details. Only then can robust, integrated, and resilient edge detection methods be deployed to overcome the ubiquitous presence of random noise in images. Random noise effects are modeled as those that could emanate from impulse noise, Gaussian noise, and speckle noise. In the first step, methods are evaluated through an exhaustive review of the different types of denoising methods that focus on impulse noise, Gaussian noise, and their related denoising filters. These include spatial filters (linear, non-linear, and combinations of the two), transform-domain filters, neural network-based filters, numerical filters, fuzzy-based filters, morphological filters, statistical filters, and supervised learning-based filters. In the second step, a switching adaptive median and fixed weighted mean filter (SAMFWMF), a combination of linear and non-linear filters, is introduced to detect and remove impulse noise. A robust edge detection method is then applied, relying on an integrated process of non-maximum suppression, maximum sequence, thresholding, and morphological operations. The results are obtained on MRI and natural images. In the third step, a transform-domain filter combining the dual-tree complex wavelet transform (DT-CWT) with total variation is introduced to detect and remove Gaussian noise as well as mixed Gaussian and speckle noise, followed by robust edge detection to track the true edges. The results are obtained on medical ultrasound and natural images. In the fourth step, a smoothing filter based on a deep feed-forward convolutional neural network (CNN) is introduced, supported by a specific learning algorithm, l2 loss minimization, a regularization method, and batch normalization, all integrated to detect and remove impulse noise as well as mixed impulse and Gaussian noise; robust edge detection is then applied to track the true edges. The results are obtained on natural images for both specific and non-specific noise levels.
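    The following Python sketch illustrates the general idea of a switching filter of the kind SAMFWMF belongs to; it is a crude simplification, not the dissertation's filter: pixels flagged as impulse-corrupted are replaced by the local median, the remaining pixels by a fixed-weight local mean. The impulse detector, window size, and threshold are all assumptions.

        import numpy as np
        from scipy.ndimage import median_filter, uniform_filter

        def switching_denoise(img, impulse_thresh=40):
            # Simplified switching filter in the spirit of SAMFWMF (not the exact method):
            # large deviations from the local median are treated as impulse noise and
            # replaced by the median; other pixels get a fixed-weight local mean.
            img = img.astype(np.float64)
            med = median_filter(img, size=3)
            mean = uniform_filter(img, size=3)
            impulse_mask = np.abs(img - med) > impulse_thresh   # crude impulse detector
            return np.where(impulse_mask, med, mean).astype(np.uint8)

    The switching step is what lets the filter remove outlier pixels aggressively while smoothing the rest only lightly, which is why such filters preserve edges better than applying a single filter everywhere.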

    GradientCoin: A Peer-to-Peer Decentralized Large Language Models

    Since the proposal of the Bitcoin electronic cash system in 2008, Bitcoin has fundamentally changed the economic system over the last decade. Since 2022, large language models (LLMs) such as GPT have outperformed humans in many real-life tasks. However, these large language models have several practical issues. For example, the model is centralized and controlled by a specific unit. One weakness is that if that unit decides to shut down the model, it cannot be used anymore. The second weakness is the lack of guaranteed discrepancy behind this model, as certain dishonest units may design their own models and feed them unhealthy training data. In this work, we propose a purely theoretical design of a decentralized LLM that operates similarly to the Bitcoin cash system. However, implementing such a system might encounter various practical difficulties. Furthermore, this new system is unlikely to perform better than the standard Bitcoin system in economics, so the motivation for designing such a system is limited. It is likely that only two types of people would be interested in setting up a practical system for it: those who prefer to use decentralized ChatGPT-like software, and those who believe that the purpose of carbon-based life is to create silicon-based life, such as Optimus Prime in Transformers. The second type of people may be interested because it is possible that one day an AI system like this will awaken and become the next level of intelligence on this planet.

    Enabling AI in Future Wireless Networks: A Data Life Cycle Perspective

    Recent years have seen rapid deployment of mobile computing and Internet of Things (IoT) networks, which can be mostly attributed to the increasing communication and sensing capabilities of wireless systems. Big data analysis, pervasive computing, and eventually artificial intelligence (AI) are envisaged to be deployed on top of the IoT and to create a new world of data-driven AI. In this context, a novel paradigm of merging AI and wireless communications, called Wireless AI, which pushes AI frontiers to the network edge, is widely regarded as a key enabler for future intelligent network evolution. To this end, we present a comprehensive survey of the latest studies in wireless AI from the data-driven perspective. Specifically, we first propose a novel Wireless AI architecture that covers five key data-driven AI themes in wireless networks: Sensing AI, Network Device AI, Access AI, User Device AI, and Data-provenance AI. Then, for each data-driven AI theme, we present an overview of the use of AI approaches to solve the emerging data-related problems and show how AI can empower wireless network functionalities. In particular, compared to other related survey papers, we provide an in-depth discussion of Wireless AI applications in various data-driven domains wherein AI proves extremely useful for wireless network design and optimization. Finally, research challenges and future visions are also discussed to spur further research in this promising area. (Accepted at IEEE Communications Surveys & Tutorials, 42 pages.)

    Method for solving nonlinearity in recognising tropical wood species

    Classifying tropical wood species poses a considerable economic challenge, and failure to classify wood species accurately can have significant effects on the timber industry. Hence, an automatic tropical wood species recognition system was developed at the Centre for Artificial Intelligence and Robotics (CAIRO), Universiti Teknologi Malaysia. The system classifies wood species based on texture analysis: wood surface images are captured, and features extracted from these images are used for classification. Previous research on tropical wood species recognition considered classification methods based on linear features. Since wood species are known to exhibit nonlinear features, a Kernel-Genetic Algorithm (Kernel-GA) is proposed in this thesis to perform nonlinear feature selection. This method combines the Kernel Discriminant Analysis (KDA) technique with a Genetic Algorithm (GA) to generate nonlinear wood features and to reduce the dimension of the wood database. The proposed system achieved a classification accuracy of 98.69%, a marked improvement over previous work. In addition, a fuzzy logic-based pre-classifier is proposed in this thesis to mimic human interpretation of wood pores, which has been shown to ease the data acquisition bottleneck and to serve as a clustering mechanism for the large database, simplifying classification. The fuzzy logic-based pre-classifier reduced the processing time for training and testing by more than 75% and 26%, respectively. Finally, the fuzzy pre-classifier is combined with the Kernel-GA algorithm to improve the performance of the tropical wood species recognition system. The experimental results show that the combination of the fuzzy pre-classifier and nonlinear feature selection improves the performance of the tropical wood species recognition system in terms of memory space, processing time, and classification accuracy.
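    For readers unfamiliar with GA-driven feature selection, the sketch below shows the general pattern described above under loose assumptions: binary chromosomes encode feature subsets, and a fitness function scores each subset. An RBF-kernel SVM cross-validation score stands in for the Kernel Discriminant Analysis criterion used in the thesis; ga_feature_selection and all hyperparameters are hypothetical.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def ga_feature_selection(X, y, pop_size=20, generations=30, mut_rate=0.05):
            # Each individual is a binary mask over the feature columns of X.
            rng = np.random.default_rng(0)
            n_feat = X.shape[1]
            pop = rng.integers(0, 2, size=(pop_size, n_feat))

            def fitness(mask):
                if mask.sum() == 0:
                    return 0.0
                # Stand-in for the KDA-based criterion: cross-validated kernel-SVM accuracy.
                return cross_val_score(SVC(kernel='rbf'), X[:, mask.astype(bool)], y, cv=3).mean()

            for _ in range(generations):
                scores = np.array([fitness(ind) for ind in pop])
                parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the fittest half
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n_feat)
                    child = np.concatenate([a[:cut], b[cut:]])            # one-point crossover
                    flip = rng.random(n_feat) < mut_rate                  # bit-flip mutation
                    child[flip] = 1 - child[flip]
                    children.append(child)
                pop = np.vstack([parents, children])
            return pop[np.argmax([fitness(ind) for ind in pop])].astype(bool)

    The returned boolean mask selects the feature subset the GA judged most discriminative, which also reduces the dimensionality of the wood feature database, mirroring the dual goal stated in the abstract.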

    Smart Buildings

    This talk presents an efficient cyber-physical platform for the smart management of smart buildings (http://www.deepint.net). It is efficient because it facilitates the implementation of data acquisition and data management methods, as well as data representation and dashboard configuration. The platform allows for the use of any type of data source, ranging from the measurements of multi-functional IoT sensing devices to relational and non-relational databases. It is also smart because it incorporates a complete artificial intelligence suite for data analysis, including techniques for data classification, clustering, forecasting, optimization, visualization, and more. It is also compatible with the edge computing concept, allowing for the distribution of intelligence and the use of intelligent sensors. The concept of the smart building is evolving and adapting to new applications; the trend of creating intelligent neighbourhoods, districts, or territories is becoming increasingly popular, as opposed to the previous approach of managing an entire megacity. In this paper, the platform is presented, and its architecture and functionalities are described. Moreover, its operation has been validated in a case study at Salamanca - Ecocasa. This platform could enable smart buildings to develop adapted knowledge management systems, adapt them to new requirements, use multiple types of data, and execute efficient computational and artificial intelligence algorithms. The platform optimizes the decisions taken by human experts through explainable artificial intelligence models that obtain data from IoT sensors, databases, the Internet, and other sources. The global intelligence of the platform could potentially coordinate its decision-making processes with intelligent nodes installed at the edge, which would use the most advanced data processing techniques.

    Smart territories

    The concept of smart cities is relatively new in research. Thanks to the colossal advances in Artificial Intelligence made over the last decade, we are now able to do what we once thought impossible: build cities driven by information and technologies. In this keynote, we look at the success stories of smart city-related projects and analyse the factors that led to their success. The development of interactive, reliable, and secure systems, both connectionist and symbolic, is often a time-consuming process in which numerous experts are involved. However, intuitive and automated tools like “Deep Intelligence”, developed by DCSc and BISITE, facilitate this process. Furthermore, in this talk we analyse the importance of complementary technologies such as IoT and Blockchain in the development of intelligent systems, as well as the use of edge platforms and fog computing.

    On edge detection of images using ant colony optimization and fisher ratio

    Edge detection is one of the important parts of image processing. It is essentially involved in the pre-processing stage of image analysis and computer vision. It detects the contours of an image and thus provides important details about it, reducing the content to be processed by high-level tasks such as object recognition and image segmentation. The most important step in edge detection, on which the success of generating a true edge map depends, is the determination of the threshold. In this work, edge detection is performed with Ant Colony Optimisation (ACO), inspired by the behaviour of ant colonies, and a novel Fisher ratio (F-ratio) technique is used to determine the threshold. The success of the method is assessed visually on test images and evaluated empirically on the basis of several statistical comparison parameters. De-noising is the process of extracting the important features present in an image while removing as much of the unnecessary information, present in the form of noise, as possible. Many denoising methods have been developed in this field, but the most trusted and widely used among them is wavelet thresholding with hard thresholding. The proposed novel method presented in this thesis is also tested on the denoised images. The edge maps obtained from the denoised images show better results than those of other conventional edge detectors.
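    Two ingredients of the method above lend themselves to a short sketch: a Fisher-ratio threshold selection and wavelet hard-thresholding denoising. The code below is a minimal illustration under assumptions, not the thesis's implementation: one common form of the Fisher ratio is used (the thesis may define it differently), the input to fisher_ratio_threshold is assumed to be the ACO pheromone intensities flattened into a 1-D array, and PyWavelets with a db4 wavelet and the universal threshold is assumed for the denoising step.

        import numpy as np
        import pywt

        def fisher_ratio_threshold(values, bins=256):
            # Pick the cut that maximizes the Fisher ratio between the two classes
            # (edge vs. non-edge) it induces; 'values' are assumed pheromone intensities.
            candidates = np.linspace(values.min(), values.max(), bins)[1:-1]
            best_t, best_score = candidates[0], -np.inf
            for t in candidates:
                lo, hi = values[values <= t], values[values > t]
                if lo.size == 0 or hi.size == 0:
                    continue
                score = (lo.mean() - hi.mean()) ** 2 / (lo.var() + hi.var() + 1e-12)
                if score > best_score:
                    best_t, best_score = t, score
            return best_t

        def wavelet_hard_denoise(img, wavelet='db4', level=2):
            # Hard-thresholding wavelet denoising: detail coefficients below a
            # universal threshold are zeroed before reconstruction.
            coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745        # noise estimate (finest diagonal band)
            thresh = sigma * np.sqrt(2 * np.log(img.size))
            denoised = [coeffs[0]] + [tuple(pywt.threshold(c, thresh, mode='hard') for c in lvl)
                                      for lvl in coeffs[1:]]
            return pywt.waverec2(denoised, wavelet)

    In the workflow the abstract describes, an image would first be denoised and the ACO pheromone map computed on the result, after which the Fisher-ratio threshold separates edge from non-edge responses.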