22 research outputs found

    Vehicle logo classification using bag of word descriptor and support vector machine classifier

    Intelligent Transportation Systems play an important role in traffic management, for example in recording vehicular traffic data. To improve transportation safety and security, a system that can automatically extract and recognize a vehicle is needed in addition to the existing plate number recognition systems. Detecting and recognizing the vehicle type or model can help determine whether a vehicle is registered with the department of motor vehicles. Hence, this project aims to provide extra information about a vehicle by determining its make. The classification system is trained with 10 training images per vehicle manufacturer. Common features of each logo are extracted using the Speeded-Up Robust Features (SURF) algorithm; the feature points are then grouped into a Bag of Words representation whose visual vocabulary is built with K-means clustering. A Support Vector Machine classifier then classifies and identifies the vehicle's logo. In the experiments, the classification system achieved 87% accuracy for front-view images and 77% for side-view images with 1500 clusters.
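    A minimal sketch of this kind of SURF + Bag of Words + SVM pipeline is shown below. It assumes opencv-contrib-python for SURF and hypothetical train_paths / train_labels lists; the 1500-cluster vocabulary mirrors the figure reported above.

```python
# Sketch of a SURF + Bag of Words + SVM logo classifier (assumptions:
# opencv-contrib-python for SURF, hypothetical train_paths / train_labels).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def image_descriptors(paths):
    """Extract SURF descriptors for every image."""
    out = []
    for path in paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = surf.detectAndCompute(img, None)
        if desc is not None:
            out.append(desc)
    return out

def bow_histogram(desc, vocab):
    """Quantise an image's descriptors against the visual vocabulary."""
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# train_paths / train_labels: hypothetical lists of logo images and vehicle makes.
train_desc = image_descriptors(train_paths)
vocab = KMeans(n_clusters=1500, n_init=4).fit(np.vstack(train_desc))
X = np.array([bow_histogram(d, vocab) for d in train_desc])
clf = SVC(kernel="linear").fit(X, train_labels)
```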

    Vehicle Logo Recognition by Spatial-SIFT Combined with Logistic Regression

    An efficient recognition framework requires both a good feature representation and an effective classification method. This paper proposes such a framework based on a spatial Scale Invariant Feature Transform (SIFT) combined with a logistic regression classifier. The performance of the proposed framework is compared with that of state-of-the-art methods based on Histogram of Oriented Gradients and SIFT features with Support Vector Machine and K-Nearest Neighbours classifiers. Tested on the largest vehicle logo dataset, the proposed framework achieves a classification accuracy of 99.93%, the best among all studied methods. Moreover, the proposed framework remains robust when noise is added to both training and testing images.
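    One plausible way to pair SIFT with spatial information and a logistic regression classifier is sketched below; the exact spatial encoding used in the paper is not described here, so this simply pools descriptors over a coarse spatial grid (an assumption, not the authors' recipe), and image_paths / labels are placeholders.

```python
# Sketch: SIFT descriptors pooled over a coarse spatial grid, then a
# logistic regression classifier. The grid pooling is an assumed stand-in
# for the paper's spatial-SIFT encoding.
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

sift = cv2.SIFT_create()

def spatial_sift_vector(path, grid=4):
    """Average SIFT descriptors per spatial cell and concatenate the cells."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = img.shape
    kps, desc = sift.detectAndCompute(img, None)
    pooled = np.zeros((grid, grid, 128), dtype=np.float32)
    counts = np.zeros((grid, grid, 1), dtype=np.float32)
    if desc is not None:
        for kp, d in zip(kps, desc):
            col = min(int(kp.pt[0] / w * grid), grid - 1)
            row = min(int(kp.pt[1] / h * grid), grid - 1)
            pooled[row, col] += d
            counts[row, col] += 1
    pooled /= np.maximum(counts, 1)
    return pooled.ravel()

# image_paths / labels: placeholder lists of logo images and class labels.
X = np.array([spatial_sift_vector(p) for p in image_paths])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```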

    Vehicle make and model recognition for intelligent transportation monitoring and surveillance.

    Vehicle Make and Model Recognition (VMMR) has evolved into a significant subject of study due to its importance in numerous Intelligent Transportation Systems (ITS), such as autonomous navigation, traffic analysis, traffic surveillance and security systems. A highly accurate, real-time VMMR system significantly reduces the overhead cost of resources otherwise required. The VMMR problem is a multi-class classification task with a peculiar set of issues and challenges, such as multiplicity and inter- and intra-make ambiguity among various vehicle makes and models, which need to be solved in an efficient and reliable manner to achieve a highly robust VMMR system. In this dissertation, facing the growing importance of make and model recognition of vehicles, we present a VMMR system that provides very high accuracy rates and is robust to several challenges. We demonstrate that the VMMR problem can be addressed by locating discriminative parts where the most significant appearance variations occur in each category, and by learning expressive appearance descriptors. Given these insights, we consider two data-driven frameworks: a Multiple-Instance Learning (MIL) system using hand-crafted features, and an extended application of deep neural networks using MIL. Our approach requires only image-level class labels, and the discriminative parts of each target class are selected in a fully unsupervised manner without any part annotations or segmentation masks, which may be costly to obtain. This advantage makes our system more intelligent, scalable, and applicable to other fine-grained recognition tasks. We constructed a dataset with 291,752 images representing 9,170 different vehicles to validate and evaluate our approach. Experimental results demonstrate that localizing parts and distinguishing their discriminative power for categorization improves the performance of fine-grained categorization. Extensive experiments conducted with our approaches yield superior results for images that are occluded, captured under low illumination, show only partial camera views, or are non-frontal, all available in our real-world VMMR dataset. The approaches presented herewith provide a highly accurate VMMR system for real-time applications in realistic environments. We also validate our system with a significant application of VMMR to ITS that involves automated vehicular surveillance. We show that our application can provide law enforcement agencies with efficient tools to search for a specific vehicle type, make, or model, and to track the path of a given vehicle using the positions of multiple cameras.
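    The multiple-instance idea described above, where only image-level labels are available and the most discriminative part is picked out automatically, can be illustrated with a small PyTorch sketch. The patch encoder, tensor shapes, and max-pooling over instance scores are illustrative assumptions, not the dissertation's architecture.

```python
# Toy multiple-instance learning model: each image is a bag of patch
# "instances"; max-pooling over per-patch scores lets the image-level label
# supervise the most discriminative part without part annotations.
import torch
import torch.nn as nn

class MILMaxPool(nn.Module):
    def __init__(self, num_classes, feat_dim=512):
        super().__init__()
        # Any patch feature extractor would do; a tiny conv stack stands in here.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim), nn.ReLU(),
        )
        self.scorer = nn.Linear(feat_dim, num_classes)

    def forward(self, bags):
        # bags: (batch, n_patches, 3, H, W) -- patches cropped from each image.
        b, n, c, h, w = bags.shape
        feats = self.encoder(bags.view(b * n, c, h, w))
        scores = self.scorer(feats).view(b, n, -1)
        # The image-level prediction is driven by the single highest-scoring patch.
        return scores.max(dim=1).values

model = MILMaxPool(num_classes=50)
logits = model(torch.randn(2, 8, 3, 64, 64))   # 2 images, 8 patches each
```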

    Machine Learning Methods for Autonomous Object Recognition and Restoration in Images

    Image recognition and image restoration are important tasks in the field of image processing. Image recognition has become very popular due to state-of-the-art deep learning methods; however, these models usually require big datasets and high computational resources, which can be challenging. This thesis proposes an online learning framework that deals with both small and big datasets. For small datasets, a Cauchy-prior logistic regression classifier is proposed to provide quick convergence, and the online weight-updating scheme is efficient because previously trained weights are reused. For big datasets, a convolutional neural network can be used. Non-parametric classifiers such as K-nearest neighbours are often used for image recognition; however, they are vulnerable to noise and high-dimensional features. This thesis proposes a non-parametric classifier based on Bayesian compressive sensing; the developed classifier is robust and does not need a training stage. Image restoration is usually performed before image recognition as a preprocessing step, and this thesis proposes a joint framework that performs image recognition and restoration simultaneously. In image restoration, image rotation and occlusion are common problems, but convolutional neural networks are not well suited to solving them due to the limitations of the convolution and pooling operations. This thesis therefore develops a joint framework based on capsule networks, which achieves good results on recognition, image de-noising, recovering rotation and removing occlusion. The developed algorithms have been evaluated for vehicle logo restoration and recognition, but they are transferable to other applications. This thesis also develops, for the first time, an automatic detection and recognition framework for badger monitoring. Badgers play a key role in the transmission of bovine tuberculosis, which is described by the government as the most pressing animal health problem in the UK. An automatic badger monitoring system could help researchers understand the transmission mechanisms and thereby develop methods to deal with transmission between species.
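    As a rough illustration of the online small-data idea above, the sketch below implements a logistic regression classifier with a Cauchy prior whose weights are updated incrementally, so earlier training is reused when new data arrives. The learning rate and prior scale are placeholder values, not those of the thesis.

```python
# Online logistic regression with a Cauchy prior (MAP gradient steps).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class CauchyOnlineLogReg:
    def __init__(self, dim, scale=1.0, lr=0.05):
        self.w = np.zeros(dim)
        self.scale = scale   # Cauchy prior scale (illustrative value)
        self.lr = lr

    def partial_fit(self, X, y):
        """One gradient step on a new mini-batch; the weights carry over."""
        p = sigmoid(X @ self.w)
        grad_nll = X.T @ (p - y) / len(y)
        # Gradient of the negative log Cauchy prior: 2w / (scale^2 + w^2).
        grad_prior = 2 * self.w / (self.scale ** 2 + self.w ** 2)
        self.w -= self.lr * (grad_nll + grad_prior)

    def predict_proba(self, X):
        return sigmoid(X @ self.w)

rng = np.random.default_rng(0)
model = CauchyOnlineLogReg(dim=3)
model.partial_fit(rng.normal(size=(16, 3)), rng.integers(0, 2, 16))  # toy batch
```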

    Information embedding and retrieval in 3D printed objects

    Deep learning and convolutional neural networks have become the main tools of computer vision. These techniques are good at using supervised learning to learn complex representations from data; in particular, under limited settings, image recognition models now perform better than the human baseline. However, computer vision aims to build machines that can see, which requires models to extract more valuable information from images and videos than recognition alone. In general, it is much more challenging to transfer these deep learning models from recognition to other problems in computer vision. This thesis presents end-to-end deep learning architectures for a new computer vision problem: watermark retrieval from 3D printed objects. As it is a new area, there is no established state of the art on challenging benchmarks. We therefore first define the problems and introduce a traditional approach, the Local Binary Pattern method, to set a baseline for further study. Our neural networks are simple yet effective, outperform the traditional approaches, and generalize well. However, because the research field is new, the challenges include not only many unpredictable covariates but also limited and low-quality training data. To address this, we make two observations: (i) we do not need to learn everything from scratch, since much is already known about image segmentation; and (ii) we cannot learn everything from data, so our models should be aware of which key features they should learn. This thesis explores these ideas and goes further. We show how to use end-to-end deep learning models to retrieve watermark bumps and tackle covariates from only a few training images. We then use synthetic image data and domain randomization to augment the training data and to understand the covariates that may affect retrieval of real-world 3D watermark bumps. We also show how the illumination in synthetic image data affects, and can even improve, retrieval accuracy in real-world recognition applications.
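    A minimal baseline in the spirit of the Local Binary Pattern method mentioned above might look like the sketch below; the toy patches and the two-entry reference codebook are assumptions for illustration, and scikit-image provides the LBP operator.

```python
# LBP-histogram baseline: describe a surface patch by its texture and retrieve
# the closest reference texture (e.g. raised watermark bump vs. flat surface).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, points=8, radius=1):
    """Uniform LBP histogram describing one grayscale surface patch."""
    codes = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(points + 3), density=True)
    return hist

def retrieve_code(patch, reference_hists):
    """Index of the closest reference texture."""
    query = lbp_histogram(patch)
    return int(np.argmin([np.linalg.norm(query - r) for r in reference_hists]))

# Toy data: rough "flat" texture vs. smoother "bump" texture.
flat = (np.random.rand(32, 32) * 255).astype(np.uint8)
bump = gaussian_filter(flat, sigma=2)
references = [lbp_histogram(flat), lbp_histogram(bump)]
query_patch = gaussian_filter((np.random.rand(32, 32) * 255).astype(np.uint8), sigma=2)
print(retrieve_code(query_patch, references))   # expected: 1 (bump-like texture)
```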

    A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex

    We describe a quantitative theory to account for the computations performed by the feedforward path of the ventral stream of visual cortex and the local circuits implementing them. We show that a model instantiating the theory is capable of performing recognition on datasets of complex images at the level of human observers in rapid categorization tasks. We also show that the theory is consistent with (and in some cases has predicted) several properties of neurons in V1, V4, IT and PFC. The theory seems sufficiently comprehensive, detailed and satisfactory to represent an interesting challenge for physiologists and modelers: either disprove its basic features or propose alternative theories of equivalent scope. The theory suggests a number of open questions for visual physiology and psychophysics.
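    Feedforward models of this kind are commonly formalised as alternating template-matching ("simple-cell-like") and max-pooling ("complex-cell-like") stages. The toy sketch below illustrates that general pattern only; the filters, pooling sizes, and two-stage depth are simplifications, not the parameters of the theory described here.

```python
# Toy alternating S (template matching) / C (max pooling) hierarchy.
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import convolve2d

def s_layer(image, filters):
    """Template matching: respond to oriented patterns (simple-cell-like)."""
    return np.stack([np.abs(convolve2d(image, f, mode="same")) for f in filters])

def c_layer(responses, pool=4):
    """Local max pooling: position/scale tolerance (complex-cell-like)."""
    return np.stack([maximum_filter(r, size=pool)[::pool, ::pool] for r in responses])

# Four oriented bar filters stand in for a Gabor bank.
bars = [np.eye(5), np.fliplr(np.eye(5)), np.ones((1, 5)), np.ones((5, 1))]
image = np.random.rand(64, 64)
c1 = c_layer(s_layer(image, bars))             # first S/C stage
c2 = c_layer(s_layer(c1.mean(axis=0), bars))   # second S/C stage
```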

    Exploiting Spatio-Temporal Coherence for Video Object Detection in Robotics

    This paper proposes a method to enhance video object detection in indoor environments for robotics. Concretely, it exploits knowledge about the camera motion between frames to propagate previously detected objects to successive frames. The proposal relies on planar homography to propose regions of interest in which to find objects, and on recursive Bayesian filtering to integrate observations over time. The proposal is evaluated on six virtual indoor environments, covering the detection of nine object classes over a total of ∼7k frames. Results show that our proposal improves recall and F1-score by factors of 1.41 and 1.27, respectively, and achieves a significant reduction of the object categorization entropy (58.8%) compared to a two-stage video object detection method used as baseline, at the cost of a small time overhead (120 ms) and a slight precision loss (0.92).
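    The two ingredients, homography-based box propagation and recursive Bayesian integration of class evidence, can be sketched as follows under assumed inputs: a 3x3 frame-to-frame homography H and a per-class score vector from the detector (both placeholders below).

```python
# Propagate a detection box with a planar homography and fuse class evidence
# over time with a recursive Bayesian update.
import cv2
import numpy as np

def propagate_box(box, H):
    """Warp an (x1, y1, x2, y2) box into the next frame with homography H."""
    x1, y1, x2, y2 = box
    corners = np.float32([[x1, y1], [x2, y1], [x2, y2], [x1, y2]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    return (warped[:, 0].min(), warped[:, 1].min(),
            warped[:, 0].max(), warped[:, 1].max())

def bayes_update(belief, likelihood):
    """Fuse the propagated class belief with the detector's new per-class scores."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

H = np.eye(3, dtype=np.float32)            # placeholder frame-to-frame homography
roi = propagate_box((10, 20, 60, 80), H)   # region of interest in the next frame
belief = np.full(9, 1 / 9)                 # uniform prior over the 9 classes
belief = bayes_update(belief, np.array([0.02] * 8 + [0.84]))  # toy detector scores
```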

    WEATHER LORE VALIDATION TOOL USING FUZZY COGNITIVE MAPS BASED ON COMPUTER VISION

    Published thesis. The creation of scientific weather forecasts is troubled by many technological challenges (Stern & Easterling, 1999), while their utilization is generally dismal. Consequently, the majority of small-scale farmers in Africa continue to consult some form of weather lore to reach various cropping decisions (Baliscan, 2001). Weather lore is a body of informal folklore (Enock, 2013) associated with the prediction of the weather and based on indigenous knowledge and human observation of the environment. As such, it tends to be more holistic and more localized to the farmers' context. However, weather lore has limitations; for instance, it cannot offer forecasts beyond a season. Different types of weather lore exist, utilizing almost all available human senses (feel, smell, sight and hearing). Of all the types of weather lore in existence, it is the visual or observed weather lore that is mostly used by indigenous societies to come up with weather predictions. On the other hand, meteorologists continue to treat this knowledge as superstition, partly because there is no means to scientifically evaluate and validate it. The visualization and characterization of visual sky objects (such as the moon, clouds, stars, and rainbows) in forecasting weather are significant subjects of research. To realize the integration of visual weather lore in modern weather forecasting systems, there is a need to represent and scientifically substantiate this form of knowledge. This research was aimed at developing a method for verifying the visual weather lore that traditional communities use to predict weather conditions. To realize this verification, fuzzy cognitive mapping was used to model and represent causal relationships between selected visual weather lore concepts and weather conditions. The traditional knowledge used to produce these maps was obtained through case studies of two communities (in Kenya and South Africa). These case studies were aimed at understanding the weather lore domain as well as the causal effects between meteorological and visual weather lore. In this study, common astronomical weather lore factors related to cloud physics were identified as: bright stars, dispersed clouds, dry weather, dull stars, feathery clouds, gathering clouds, grey clouds, high clouds, layered clouds, low clouds, stars, medium clouds, and rounded clouds. Relationships between the concepts were also identified and formally represented using fuzzy cognitive maps. In implementing the verification tool, machine vision was used to recognize sky objects captured with a sky camera, while pattern recognition was employed in benchmarking and scoring the objects. A wireless weather station was used to capture real-time weather parameters. The verification tool was then designed and realized as a software artefact that integrated computer vision and fuzzy cognitive mapping for experimenting with visual weather lore and verifying it using various statistical forecast skills and metrics. The tool consists of four main sub-components: (1) machine vision, which recognizes sky objects with support vector machine classifiers using shape-based feature descriptors; (2) pattern recognition, to benchmark and score objects using pixel orientations, Euclidean distance, Canny edge detection and the grey-level co-occurrence matrix;
(3) fuzzy cognitive mapping, used to represent knowledge (an Active Hebbian Learning algorithm was used to learn until convergence); and (4) a statistical computing component used for verification and forecast skills, including the Brier score and contingency tables for deterministic forecasts. Rigorous evaluation of the verification tool was carried out using independent real-time images (not used in the training and testing phases) from Bloemfontein, South Africa, and Voi, Kenya. The real-time images were captured using a sky camera with GPS location services. The results of the implementation were tested for the selected weather conditions (for example rain, heat, cold, and dry conditions) and found to be acceptable: the verified prediction accuracies were over 80%. The recommendation in this study is to apply the implemented method towards verifying all other types of visual weather lore. In addition, using the developed method also requires the implementation of modules for processing and verifying other types of weather lore, such as sounds and symbols of nature. Since time immemorial, from Australia to Asia and Africa to Latin America, local communities have relied on weather lore observations to predict seasonal weather as well as its effects on their livelihoods (Alcock, 2014). This is mainly based on many years of personal experience in observing weather conditions. However, when it comes to predictions for longer lead times (i.e. over a season), weather lore is uncertain (Hornidge & Antweiler, 2012). This uncertainty has partly contributed to the current situation in which meteorologists and other scientists continue to treat weather lore as superstition (United-Nations, 2004) that is not capable of predicting weather. One of the problems in testing the confidence in weather lore for predicting weather is the wide variety of weather lore found in the details of indigenous sayings, which are tightly coupled to locality and pattern variations (Oviedo et al., 2008). This traditional knowledge is entrenched within the day-to-day socio-economic activities of the communities using it and is not globally available for comparison and validation (Huntington, Callaghan, Fox, & Krupnik, 2004). Further, this knowledge is based on local experience that lacks benchmarking techniques, so harmonizing and integrating it within science-based weather forecasting systems is a daunting task (Hornidge & Antweiler, 2012). It is partly for this reason that the question of validating weather lore has not yet been substantially investigated. Sufficiently expanded processes of gathering weather observations, combined with comparison and validation, can produce useful information. Since forecasting weather accurately is a challenge even with the latest supercomputers (BBC News Magazine, 2013), validated weather lore can be useful if it is incorporated into modern weather prediction systems. Validation of traditional knowledge is a necessary step in building integrated knowledge-based systems. Traditional knowledge incorporated into knowledge-based systems has to be verified to enhance the systems' reliability. Weather lore knowledge exists in different forms as identified by traditional communities; hence it needs to be tied together for comparison and validation.
The development of a weather lore validation tool that integrates a framework for acquiring weather data with methods of representing weather lore in verifiable forms can be a significant step in validating weather lore against actual weather records obtained with conventional weather-observing instruments. Successful validation of weather lore could create the opportunity to integrate acceptable weather lore with modern systems of weather prediction, improving actionable information for decision making that relies on seasonal weather prediction. In this study a hybrid method is developed that combines computer vision and fuzzy cognitive mapping techniques for verifying visual weather lore. The verification tool was designed with forecasting based on mimicking visual perception and fuzzy thinking based on the cognitive knowledge of humans. The method gives meaning to humanly perceivable sky objects so that computers can understand, interpret, and approximate visual weather outcomes. Questionnaires were administered in two case study locations (KwaZulu-Natal province in South Africa, and Taita-Taveta County in Kenya) between March and July 2015. The two case studies were conducted by interviewing respondents on how visual astronomical and meteorological weather concepts cause weather outcomes, and were used to identify the causal effects of visual astronomical and meteorological objects on weather conditions. This was followed by finding variations and comparisons between the visual weather lore knowledge in the two case studies. The results from the two case studies were aggregated in terms of seasonal knowledge. The causal links between visual weather concepts were investigated using these two case studies; the results were compared and aggregated to build up common knowledge, and the joint averages of the majority of responses were determined for each set of interacting concepts. The weather lore verification tool consists of input, processing, and output components. The input data to the system are sky image scenes and actual weather observations from wireless weather sensors. The image recognition component performs three sub-tasks: detecting objects (concepts) in image scenes, extracting the detected objects, and approximating the presence of the concepts by comparing extracted objects to ideal objects. The prediction process uses the approximated concepts generated in the recognition component to simulate scenarios using the knowledge represented in the fuzzy cognitive maps. The verification component evaluates the variation between the predictions and the actual weather observations to determine prediction errors and accuracy. To evaluate the tool, daily system simulations were run to predict and record probabilities of weather outcomes (i.e. rain, heat index/hotness, dryness, cold index). Weather observations were captured periodically using a wireless weather station. This process was repeated several times until there was sufficient data for the verification process. To match the range of the predicted weather outcomes, the actual weather observations (measurements) were transformed and normalized to the range [0, 1]. In the verification process, comparisons were made between the actual observations and the predicted weather outcome values by computing residuals (error values) from the observations.
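A hedged sketch of the fuzzy-cognitive-map simulation step used in the prediction process might look like the following; the concepts and causal weights are illustrative, not the values learned in the study.

```python
# Fuzzy cognitive map simulation: push concept activations through the
# weighted causal links until the map settles.
import numpy as np

def fcm_simulate(weights, activation, steps=50, tol=1e-4):
    """Iterate A <- sigmoid(A + A @ W) until convergence (or max steps)."""
    for _ in range(steps):
        nxt = 1.0 / (1.0 + np.exp(-(activation + activation @ weights)))
        if np.max(np.abs(nxt - activation)) < tol:
            return nxt
        activation = nxt
    return activation

# Two visual concepts driving one weather outcome ("rain"), as a toy example.
W = np.array([[0.0, 0.0, 0.7],    # gathering clouds -> rain
              [0.0, 0.0, 0.5],    # low clouds       -> rain
              [0.0, 0.0, 0.0]])   # rain (outcome concept)
state = fcm_simulate(W, np.array([0.8, 0.6, 0.0]))
print(state[-1])                  # simulated belief in the "rain" outcome
```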
The error values and the squared errors were used to compute the Mean Squared Error (MSE) and the Root Mean Squared Error (RMSE) for each predicted weather outcome. Finally, the validity of the visual weather lore verification model was assessed using data from a different geographical location: actual data in the form of daily sky scenes and weather parameters were acquired from Voi, Kenya, from December 2015 to January 2016. The results on the use of hybrid techniques for the verification of weather lore are expected to provide an incentive for integrating indigenous knowledge on weather with modern numerical weather prediction systems for accurate and downscaled weather forecasts.
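The verification arithmetic described above (residuals, MSE, RMSE, and a Brier score for probabilistic forecasts) reduces to a few lines; the arrays below are placeholder daily values in the [0, 1] range, not data from the study.

```python
# Residuals, MSE, RMSE, and Brier score on placeholder forecast/observation pairs.
import numpy as np

predicted = np.array([0.82, 0.10, 0.55, 0.70])   # e.g. daily rain probability
observed  = np.array([1.00, 0.00, 0.40, 0.90])   # normalised observations

residuals = observed - predicted
mse = np.mean(residuals ** 2)
rmse = np.sqrt(mse)
# The Brier score compares probabilities against 0/1 event occurrence.
occurred = (observed >= 0.5).astype(float)
brier = np.mean((predicted - occurred) ** 2)
print(mse, rmse, brier)
```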

    Personality Identification from Social Media Using Deep Learning: A Review

    Social media helps people scattered around the world share ideas and information, and thus helps create communities, groups, and virtual networks. Identifying personality is significant in many types of applications, such as detecting the mental state or character of a person, predicting job satisfaction and professional and personal relationship success, and building recommendation systems. Personality is also an important factor in determining individual variation in thoughts, feelings, and conduct. According to the Global social media research survey of 2018, there were approximately 3.196 billion social media users worldwide, a number estimated to grow rapidly with the spread of mobile smart devices and advances in technology. Support vector machines (SVM), Naive Bayes (NB), multilayer perceptron neural networks, and convolutional neural networks (CNN) are some of the machine learning techniques used for personality identification in the literature. This paper presents various studies conducted to identify the personality of social media users with the help of machine learning approaches, and reviews recent studies that aimed to predict the personality of online social media (OSM) users.
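    As a purely illustrative example of the kind of pipeline such studies use, the sketch below feeds social-media text to one of the classifiers listed above (a linear SVM over TF-IDF features); the posts and trait labels are made-up placeholders, not data from any cited work.

```python
# Toy text-based personality classifier: TF-IDF features + linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

posts = ["love meeting new people at big events!",
         "quiet weekend alone with my books"]
traits = ["extraversion", "introversion"]          # toy trait labels

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(posts, traits)
print(model.predict(["meeting new people tonight"]))
```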