
    DART: Distribution Aware Retinal Transform for Event-based Cameras

    We introduce a generic visual descriptor, termed the distribution aware retinal transform (DART), that encodes the structural context using log-polar grids for event cameras. The DART descriptor is applied to four different problems, namely object classification, tracking, detection, and feature matching: (1) The DART features are directly employed as local descriptors in a bag-of-features classification framework, and testing is carried out on four standard event-based object datasets (N-MNIST, MNIST-DVS, CIFAR10-DVS, NCaltech-101). (2) Extending the classification system, tracking is demonstrated using two key novelties: (i) to overcome the low-sample problem in the one-shot learning of a binary classifier, statistical bootstrapping is leveraged with online learning; (ii) to achieve tracker robustness, the scale and rotation equivariance of the DART descriptors is exploited for the one-shot learning. (3) To solve the long-term object tracking problem, an object detector is designed using the principle of cluster majority voting. The detection scheme is then combined with the tracker, resulting in a high intersection-over-union score against augmented ground-truth annotations on the publicly available event camera dataset. (4) Finally, the event context encoded by DART greatly simplifies the feature correspondence problem, especially for spatio-temporal slices far apart in time, which has not been explicitly tackled in the event-based vision domain. Comment: 12 pages, revision submitted to TPAMI in Nov 201
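The exact DART construction is not reproduced in the abstract; the core idea of binning an event's spatial neighbourhood into a log-polar grid can be sketched as follows (the grid sizes, `r_max`, and function names are illustrative assumptions, not the authors' implementation):

```python
import math

def log_polar_bin(dx, dy, num_rings=4, num_wedges=8, r_max=16.0):
    """Map a spatial offset (dx, dy) from the descriptor centre to a
    (ring, wedge) cell of a log-polar grid; None if outside the grid."""
    r = math.hypot(dx, dy)
    if r == 0 or r > r_max:
        return None
    # Logarithmic radial binning: ring boundaries grow geometrically.
    ring = int(num_rings * math.log(r) / math.log(r_max))
    ring = min(max(ring, 0), num_rings - 1)
    # Uniform angular binning.
    theta = math.atan2(dy, dx) % (2 * math.pi)
    wedge = int(num_wedges * theta / (2 * math.pi))
    return ring, min(wedge, num_wedges - 1)

def dart_like_histogram(offsets, num_rings=4, num_wedges=8, r_max=16.0):
    """Accumulate neighbouring event offsets into a flattened
    log-polar histogram (the descriptor vector)."""
    hist = [0] * (num_rings * num_wedges)
    for dx, dy in offsets:
        cell = log_polar_bin(dx, dy, num_rings, num_wedges, r_max)
        if cell is not None:
            ring, wedge = cell
            hist[ring * num_wedges + wedge] += 1
    return hist
```

Because the radial bins grow logarithmically, scale changes shift activations across rings and rotations shift them across wedges, which is the equivariance property the tracker exploits.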

    The Dual Codebook: Combining Bags of Visual Words in Image Classification

    In this paper, we evaluate the performance of two conventional bag-of-words approaches, using two basic local feature descriptors, to perform image classification. These approaches are compared to a novel design which combines two bags of visual words, using two different feature descriptors. The system extends earlier work in which a bag-of-visual-words approach with an L2 support vector machine classifier outperforms several alternatives. The descriptors we test are raw pixel intensities and the Histogram of Oriented Gradients. Using a novel Primal Support Vector Machine as a classifier, we perform image classification on the CIFAR-10 and MNIST datasets. Results show that the dual-codebook implementation successfully utilizes the potential contributive information encapsulated by an alternative feature descriptor and increases performance, improving classification by 5-18% on CIFAR-10 and by 0.22-1.03% on MNIST compared to the simple bag-of-words approaches.
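A minimal sketch of the dual-codebook idea, concatenating two bag-of-words histograms built from different descriptors into one feature vector (the codebooks and function names here are hypothetical; the paper's pipeline uses learned codebooks and an SVM classifier on top):

```python
def nearest_word(descriptor, codebook):
    """Index of the closest codeword (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(descriptor, codebook[i]))

def bow_histogram(descriptors, codebook):
    """Hard-assignment bag-of-words histogram, L1-normalised."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def dual_codebook_vector(pix_desc, hog_desc, pix_codebook, hog_codebook):
    """Concatenate the per-descriptor histograms into one feature vector,
    as in a dual-codebook scheme."""
    return bow_histogram(pix_desc, pix_codebook) + bow_histogram(hog_desc, hog_codebook)
```

The concatenated vector lets the classifier weight whichever codebook carries more discriminative information for a given class.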

    The First Sign: Detecting Future Financial Fraud from the IPO Prospectus

    In this study, I examine whether it is possible to predict future financial statement fraud using disclosure content prior to the fraud. Specifically, I employ a machine learning algorithm to construct a unique measure based on the lexical cues embedded within a firm’s first public disclosure, the Management’s Discussion and Analysis section of the S-1 filing, during the Initial Public Offering (IPO) process. I use this measure to predict whether a firm that is not already committing fraud will commit fraud within five years of the IPO that results in an Accounting and Auditing Enforcement Release (AAER). I find there is information within the S-1 filing that is useful in the prediction of out-of-sample fraud. Additionally, I find that the measure performs better than both benchmark measures from prior literature and a new measure using quantitative information, when using information available at the S-1 date. Furthermore, the lexical cues measure performs well in predicting fraud relative to the benchmark measures even after updating the benchmark measures with misstated annual filings to aid their (but not my measure’s) fraud detection abilities. I find that my new measure is not limited to predicting AAER-based misconduct: the out-of-sample results hold when using an alternate sample based on 10(b)-5 filings as well as a comprehensive set of quantitative variables. Lastly, my measure identifies firms that are more likely to manage earnings to meet or beat analyst forecasts and firms that experience higher levels of information asymmetry around earnings announcements within the five years following the IPO, and it has some predictive ability over future abnormal returns.
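The learned measure itself is not specified in the abstract; a toy sketch of scoring a disclosure by weighted lexical-cue frequencies might look like the following (the cue words and weights are entirely hypothetical stand-ins for the learned model):

```python
import re
from collections import Counter

def cue_frequencies(text, cue_words):
    """Relative frequency of each lexical cue in a disclosure text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {w: counts[w] / total for w in cue_words}

def fraud_score(text, cue_weights):
    """Weighted sum of cue frequencies -- a stand-in for a learned
    lexical-cues measure (cue_weights here are hypothetical)."""
    freqs = cue_frequencies(text, cue_weights.keys())
    return sum(cue_weights[w] * freqs[w] for w in cue_weights)
```

In the study, the weighting over cues would be fit by the machine learning algorithm on labelled fraud/non-fraud filings rather than set by hand.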

    Shape classification using invariant features and contextual information in the bag-of-words model

    In this paper, we describe a classification framework for binary shapes that exhibit scale, rotation and strong viewpoint variations. To this end, we develop several novel techniques. First, we employ the spectral magnitude of the log-polar transform as a local feature in the bag-of-words model. Second, we incorporate contextual information in the bag-of-words model using a novel method to extract bi-grams from the spatial co-occurrence matrix. Third, a novel metric termed ‘weighted gain ratio’ is proposed to select a suitable codebook size in the bag-of-words model. The proposed metric is generic, and hence it can be used for any clustering quality evaluation task. Fourth, a joint learning framework is proposed to learn features in a data-driven manner, and thus avoid manual fine-tuning of the model parameters. We test our shape classification system on the animal shapes dataset and significantly outperform state-of-the-art methods in the literature.
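One simple way to realize the second technique, reading bi-grams off the spatial co-occurrence of visual words, can be sketched as follows (the 4-neighbourhood definition is an assumption; the paper's exact extraction may differ):

```python
from collections import Counter

def spatial_bigrams(word_grid):
    """Count unordered visual-word pairs that co-occur at horizontally or
    vertically adjacent cells of a grid of visual-word labels -- one way
    to read bi-grams off a spatial co-occurrence matrix."""
    bigrams = Counter()
    rows, cols = len(word_grid), len(word_grid[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    pair = tuple(sorted((word_grid[r][c], word_grid[rr][cc])))
                    bigrams[pair] += 1
    return bigrams
```

The bi-gram counts can then be appended to the uni-gram bag-of-words histogram so the classifier sees which words occur next to each other, not just how often each occurs.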

    Weather Lore Validation Tool Using Fuzzy Cognitive Maps Based on Computer Vision

    The creation of scientific weather forecasts is troubled by many technological challenges (Stern & Easterling, 1999), while their utilization is generally poor. Consequently, the majority of small-scale farmers in Africa continue to consult some form of weather lore to reach various cropping decisions (Baliscan, 2001). Weather lore is a body of informal folklore (Enock, 2013) associated with the prediction of the weather and based on indigenous knowledge and human observation of the environment. As such, it tends to be more holistic and more localized to the farmers’ context. However, weather lore has limitations; for instance, it is unable to offer forecasts beyond a season. Different types of weather lore exist, utilizing almost all available human senses (feel, smell, sight and hearing). Of all the types of weather lore in existence, it is the visual, or observed, weather lore that is mostly used by indigenous societies to come up with weather predictions. On the other hand, meteorologists continue to treat this knowledge as superstition, partly because there is no means to scientifically evaluate and validate it. The visualization and characterization of visual sky objects (such as the moon, clouds, stars, and rainbows) in forecasting weather are significant subjects of research. To realize the integration of visual weather lore in modern weather forecasting systems, there is a need to represent and scientifically substantiate this form of knowledge. This research was aimed at developing a method for verifying the visual weather lore that is used by traditional communities to predict weather conditions. To realize this verification, fuzzy cognitive mapping was used to model and represent causal relationships between selected visual weather lore concepts and weather conditions. 
The traditional knowledge used to produce these maps was attained through case studies of two communities (in Kenya and South Africa). These case studies were aimed at understanding the weather lore domain as well as the causal effects between meteorological and visual weather lore. In this study, common astronomical weather lore factors related to cloud physics were identified as: bright stars, dispersed clouds, dry weather, dull stars, feathery clouds, gathering clouds, grey clouds, high clouds, layered clouds, low clouds, stars, medium clouds, and rounded clouds. Relationships between the concepts were also identified and formally represented using fuzzy cognitive maps. In implementing the verification tool, machine vision was used to recognize sky objects captured using a sky camera, while pattern recognition was employed in benchmarking and scoring the objects. A wireless weather station was used to capture real-time weather parameters. The verification tool was then designed and realized in the form of a software artefact, which integrated both computer vision and fuzzy cognitive mapping for experimenting with visual weather lore and verifying it using various statistical forecast skills and metrics. The tool consists of four main sub-components: (1) machine vision, which recognizes sky objects using support vector machine classifiers with shape-based feature descriptors; (2) pattern recognition, to benchmark and score objects using pixel orientations, Euclidean distance, Canny edge detection, and the grey-level co-occurrence matrix; (3) fuzzy cognitive mapping, used to represent knowledge (an active Hebbian learning algorithm was run until convergence); and (4) a statistical computing component used for verification and forecast skills, including the Brier score and contingency tables for deterministic forecasts. 
Rigorous evaluation of the verification tool was carried out using independent real-time images (not used in the training and testing phases) from Bloemfontein, South Africa, and Voi, Kenya. The real-time images were captured using a sky camera with GPS location services. The results of the implementation were tested for the selected weather conditions (for example, rain, heat, cold, and dry conditions) and found to be acceptable (the verified prediction accuracies were over 80%). The recommendation in this study is to apply the implemented method to further processing tasks, towards verifying all other types of visual weather lore. In addition, the use of the method developed also requires the implementation of modules for processing and verifying other types of weather lore, such as sounds and symbols of nature. Since time immemorial, from Australia to Asia and Africa to Latin America, local communities have relied on weather lore observations to predict seasonal weather as well as its effects on their livelihoods (Alcock, 2014). This is mainly based on many years of personal experience in observing weather conditions. However, when it comes to predictions with longer lead-times (i.e. over a season), weather lore is uncertain (Hornidge & Antweiler, 2012). This uncertainty has partly contributed to the current status in which meteorologists and other scientists continue to treat weather lore as superstition (United Nations, 2004), incapable of predicting weather. One of the problems in testing confidence in weather lore for predicting weather is the wide variety of weather lore found in the details of indigenous sayings, which are tightly coupled to locality and pattern variations (Oviedo et al., 2008). This traditional knowledge is entrenched within the day-to-day socio-economic activities of the communities using it and is not globally available for comparison and validation (Huntington, Callaghan, Fox, & Krupnik, 2004). 
Further, this knowledge is based on local experience that lacks benchmarking techniques, so harmonizing and integrating it within science-based weather forecasting systems is a daunting task (Hornidge & Antweiler, 2012). It is partly for this reason that the question of validating weather lore has not yet been substantially investigated. Sufficiently expanded processes of gathering weather observations, combined with comparison and validation, can produce some useful information. Since forecasting weather accurately is a challenge even with the latest supercomputers (BBC News Magazine, 2013), validated weather lore can be useful if it is incorporated into modern weather prediction systems. Validation of traditional knowledge is a necessary step in building integrated knowledge-based systems. Traditional knowledge incorporated into knowledge-based systems has to be verified to enhance system reliability. Weather lore knowledge exists in different forms as identified by traditional communities; hence it needs to be tied together for comparison and validation. The development of a weather lore validation tool that integrates a framework for acquiring weather data with methods of representing weather lore in verifiable forms can be a significant step towards validating weather lore against actual weather records using conventional weather-observing instruments. Success in validating weather lore could open the opportunity to integrate acceptable weather lore with modern systems of weather prediction, improving actionable information for decision making that relies on seasonal weather prediction. In this study a hybrid method is developed that includes computer vision and fuzzy cognitive mapping techniques for verifying visual weather lore. The verification tool was designed with forecasting based on mimicking visual perception and fuzzy thinking based on the cognitive knowledge of humans. 
The method provides meaning to humanly perceivable sky objects so that computers can understand, interpret, and approximate visual weather outcomes. Questionnaires were administered in two case study locations (KwaZulu-Natal province in South Africa, and Taita-Taveta County in Kenya) between March and July 2015. The case studies were conducted by interviewing respondents on how visual astronomical and meteorological weather concepts cause weather outcomes, and were used to identify the causal effects of visual astronomical and meteorological objects on weather conditions. This was followed by finding variations and comparisons between the visual weather lore knowledge in the two case studies. The results from the two case studies were aggregated in terms of seasonal knowledge: the causal links between visual weather concepts were compared and aggregated to build up common knowledge, and the joint averages of the majority of responses were determined for each set of interacting concepts. The modelling of the weather lore verification tool consists of input, processing, and output components. The input data to the system are sky image scenes and actual weather observations from wireless weather sensors. The image recognition component performs three sub-tasks: detection of objects (concepts) from image scenes, extraction of detected objects, and approximation of the presence of the concepts by comparing extracted objects to ideal objects. The prediction process involves using the approximated concepts generated in the recognition component to simulate scenarios using the knowledge represented in the fuzzy cognitive maps. The verification component evaluates the variation between the predictions and actual weather observations to determine prediction errors and accuracy. 
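The prediction step described above, simulating scenarios over a fuzzy cognitive map, can be sketched as a simple activation-update iteration (the sigmoid squashing function and the example weights below are illustrative assumptions; the thesis learns its maps with an active Hebbian learning algorithm):

```python
import math

def sigmoid(x, lam=1.0):
    """Squash an aggregate activation into the fuzzy range [0, 1]."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(state, weights, lam=1.0):
    """One fuzzy-cognitive-map update: each concept aggregates the
    weighted activations of its causes, then is squashed to [0, 1].
    weights[j][i] is the causal influence of concept j on concept i."""
    n = len(state)
    return [sigmoid(state[i] + sum(state[j] * weights[j][i] for j in range(n)), lam)
            for i in range(n)]

def fcm_simulate(state, weights, steps=50, tol=1e-5):
    """Iterate until the activation vector converges (or steps run out)."""
    for _ in range(steps):
        new = fcm_step(state, weights)
        if max(abs(a - b) for a, b in zip(new, state)) < tol:
            return new
        state = new
    return state
```

In this setting, the initial state would come from the machine-vision component's approximated sky-object concepts, and the converged activations of the weather-outcome concepts would be read off as the prediction.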
To evaluate the tool, daily system simulations were run to predict and record probabilities of weather outcomes (i.e. rain, heat index/hotness, dryness, cold index). Weather observations were captured periodically using a wireless weather station. This process was repeated several times until there was sufficient data for the verification process. To match the range of the predicted weather outcomes, the actual weather observations (measurements) were transformed and normalized to the range [0, 1]. In the verification process, comparisons were made between the actual observations and the predicted weather outcome values by computing residuals (error values) from the observations. The error values and the squared errors were used to compute the Mean Squared Error (MSE) and the Root Mean Squared Error (RMSE) for each predicted weather outcome. Finally, the validity of the visual weather lore verification model was assessed using data from a different geographical location: daily sky scenes and weather parameters were acquired from Voi, Kenya, from December 2015 to January 2016. The results on the use of hybrid techniques for the verification of weather lore are expected to provide an incentive for integrating indigenous knowledge on weather with modern numerical weather prediction systems for accurate and downscaled weather forecasts.
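The residual-based verification described above reduces to computing MSE and RMSE between the predicted outcome probabilities and the normalised observations, e.g.:

```python
def verification_errors(predicted, observed):
    """MSE and RMSE between predicted outcome probabilities and
    normalised observations (both sequences of values in [0, 1])."""
    residuals = [p - o for p, o in zip(predicted, observed)]
    mse = sum(e * e for e in residuals) / len(residuals)
    return mse, mse ** 0.5
```

One MSE/RMSE pair per weather outcome (rain, heat, cold, dryness) then summarises how far that outcome's predictions sit from the station measurements.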

    Large scale visual search

    With the ever-growing amount of image data on the web, much attention has been devoted to large scale image search. It is one of the most challenging problems in computer vision, for several reasons. First, it must address various appearance transformations, such as changes in perspective, rotation and scale, existing in the huge amount of image data. Second, it needs to minimize memory requirements and computational cost when generating image representations. Finally, it needs to construct an efficient index space and a suitable similarity measure to reduce the response time to the users. This thesis aims to provide robust image representations that are less sensitive to the above-mentioned appearance transformations and are suitable for large scale image retrieval. Alongside its substantial contributions to large scale image retrieval, the thesis also presents remaining challenges and directions for future research.

    Deep learning for animal recognition

    Deep learning has achieved many successes in different computer vision tasks, such as the classification, detection, and segmentation of objects or faces. Many of these successes can be ascribed to training deep convolutional neural network architectures on datasets containing many images. Limited research has explored deep learning methods for performing recognition or detection of animals using a limited number of images. This thesis examines the use of different deep learning techniques and conventional computer vision methods for performing animal recognition or detection with relatively small training datasets, and has the following objectives: 1) analyse the performance of deep learning systems compared to classical approaches when there exists a limited number of images of animals; 2) develop an algorithm for effectively dealing with the rotation variation naturally present in aerial images; 3) construct a computer vision system that is more robust to illumination variation; 4) analyse how important the use of different color spaces is in deep learning; 5) compare different deep convolutional neural-network algorithms for detecting and recognizing individual instances (identities) in a group of animals, for example badgers. For most of the experiments, effectively reduced neural network recognition systems are used, which are derived from existing architectures. These reduced systems are compared to standard architectures and classical computer vision methods. We also propose a color transformation algorithm, a novel rotation-matrix data-augmentation algorithm, and a hybrid variant of the latter that factors in color constancy, with the aim of enhancing images and constructing a system that is more robust to different kinds of visual appearances. The results show that our proposed algorithms help deep learning systems become more accurate in classifying animals across a large number of different animal datasets. 
Furthermore, the developed systems yield performances that significantly surpass classical computer vision techniques, even with limited numbers of images available for training.
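The thesis's rotation-matrix data-augmentation algorithm is not detailed in the abstract; a generic sketch of rotating a small grayscale image with a 2x2 rotation matrix and nearest-neighbour sampling (the function names and the sampling choice are assumptions, not the thesis's method) might look like:

```python
import math

def rotate_point(x, y, angle_deg, cx, cy):
    """Rotate (x, y) about centre (cx, cy) with a 2x2 rotation matrix."""
    a = math.radians(angle_deg)
    dx, dy = x - cx, y - cy
    return (cx + dx * math.cos(a) - dy * math.sin(a),
            cy + dx * math.sin(a) + dy * math.cos(a))

def rotate_image_nn(img, angle_deg):
    """Rotate a small grayscale image (list of rows) by inverse mapping
    with nearest-neighbour sampling; out-of-bounds pixels become 0."""
    h, w = len(img), len(img[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse map: find which source pixel lands at (x, y).
            sx, sy = rotate_point(x, y, -angle_deg, cx, cy)
            sxi, syi = round(sx), round(sy)
            if 0 <= sxi < w and 0 <= syi < h:
                out[y][x] = img[syi][sxi]
    return out
```

Generating several randomly rotated copies of each training image in this way is a standard remedy for the rotation variation of aerial imagery, since objects seen from above have no canonical orientation.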