    Image classification by visual bag-of-words refinement and reduction

    This paper presents a new framework for visual bag-of-words (BOW) refinement and reduction that overcomes the drawbacks of the visual BOW model widely used for image classification. Although very influential in the literature, the traditional visual BOW model has two distinct drawbacks. First, for efficiency, the visual vocabulary is commonly constructed by directly clustering the low-level visual feature vectors extracted from local keypoints, without considering the high-level semantics of images. The visual BOW model therefore still suffers from the semantic gap and may lead to significant performance degradation in more challenging tasks (e.g. social image classification). Second, thousands of visual words are typically generated to obtain better performance on a relatively large image dataset, and with such a large vocabulary the subsequent image classification can take a prohibitive amount of time. To overcome the first drawback, we develop a graph-based method for visual BOW refinement that exploits the tags of social images, which are easy to access although noisy. More notably, for efficient image classification, we further reduce the refined visual BOW model to a much smaller size through semantic spectral clustering. Extensive experimental results show the promising performance of the proposed framework for visual BOW refinement and reduction.
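    As a minimal sketch of the reduction step only (not the paper's implementation), one could cluster the visual words with spectral clustering and merge histogram bins cluster-wise; the affinity matrix, vocabulary sizes, and toy data below are illustrative assumptions.

```python
# Hedged sketch: shrink a visual vocabulary by spectral clustering of visual words,
# then merge the corresponding bins of each image's BOW histogram.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
V, V_small = 200, 20                       # original and reduced vocabulary sizes (toy values)
word_vecs = rng.random((V, 64))            # placeholder high-level word representations
word_affinity = rbf_kernel(word_vecs)      # V x V similarity between visual words (assumption)

# Group the V visual words into V_small semantically coherent clusters.
labels = SpectralClustering(
    n_clusters=V_small, affinity="precomputed", random_state=0
).fit_predict(word_affinity)

def reduce_bow(hist, labels, n_clusters):
    """Map a V-dimensional BOW histogram onto the reduced vocabulary."""
    reduced = np.zeros(n_clusters)
    np.add.at(reduced, labels, hist)       # sum counts of words that share a cluster
    return reduced

image_bow = rng.poisson(1.0, size=V).astype(float)   # toy image histogram
small_bow = reduce_bow(image_bow, labels, V_small)
```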

    Simple Modelling of Undrained Shear Response of Granular Materials

    A vast number of past experimental investigations have reported that the peak friction angle of sand is jointly governed by density and effective stress level, and several relationships between these quantities have been proposed. The dependence of dilatancy characteristics on the internal state of a granular material was examined and revealed. A simple constitutive model framework was established on the basis of several well-established relationships for granular materials to simulate their undrained shear behaviour. A basic hardening law connecting the evolution of the stress ratio with shear strain was employed. The model is capable of predicting the undrained monotonic stress-strain relationship of granular materials at different densities and various confining pressures. A series of parametric studies is conducted to investigate the sensitivity of the simulation results to the selected parameters. The simulation results also confirm the strong influence of dilatancy and deformability on the shear characteristics of granular materials at the critical state.
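    The abstract does not give the hardening law itself; a common simple form of the kind described is a hyperbolic relation in which the stress ratio rises with shear strain toward a limiting value. The sketch below uses that assumed form, with illustrative parameter names and values rather than the paper's calibration.

```python
# Illustrative hyperbolic hardening law: stress ratio eta = q/p' grows with shear strain.
# The functional form and parameters are assumptions, not the paper's model.
import numpy as np

def stress_ratio(gamma, eta_ult=1.25, a=0.005):
    """Hyperbolic hardening: eta approaches eta_ult as shear strain gamma grows.

    gamma   : shear strain (decimal, e.g. 0.01 = 1%)
    eta_ult : limiting stress ratio at large strain
    a       : reference strain controlling the initial stiffness
    """
    return eta_ult * gamma / (a + gamma)

gamma = np.linspace(0.0, 0.10, 200)
eta = stress_ratio(gamma)
print(f"stress ratio at 1% shear strain: {stress_ratio(0.01):.3f}")
```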

    Evaluation of soybean cyst nematode (SCN) resistance in perennial glycine species and genome-wide association mapping and genomic prediction study for SCN resistance in common bean and prediction of the short distance movement of soybean rust urediniospores through machine learning

    Since agriculture began, plant diseases of crops have repeatedly had a severe impact on human activities. From the famine caused by potato late blight (Phytophthora infestans) in Ireland in 1846, to the dramatic economic losses caused by downy mildew of grapes (Plasmopara viticola) in the Mediterranean in 1865, to the loss of the valuable banana cultivar ‘Gros Michel’ caused by Fusarium oxysporum Schlect. f. sp. cubense, plant diseases have been of major historical and economic importance. The goal of plant disease management is to reduce the economic and aesthetic damage caused by plant diseases, and my thesis centers on studying diseases and their pathogens to support long-term, effective management strategies for important diseases of soybean.

    Soybean cyst nematode (SCN; Heterodera glycines; HG) is a widely occurring and damaging pathogen with a wide host range. SCN is the leading cause of soybean yield loss in the US, and it will likely become a major yield-limiting threat to common bean (Phaseolus vulgaris L.), another highly susceptible host. Developing resistant cultivars is the most cost-effective method for managing this disease. In the first chapter of my thesis, I focused on identifying additional sources of resistance to SCN in perennial Glycine species that could potentially be used to improve the resistance of soybean to SCN. A set of 282 PIs from 13 perennial Glycine species was first inoculated with HG types 0, 2, and 1.2.3, and 36 of these PIs were then further evaluated by inoculation with HG type 1.2.3.4.5.6.7, a population that overcomes all of the resistance genes in soybean. The Glycine species evaluated contain many PIs that are highly resistant to SCN, with 10 species classified as immune or highly resistant to the three HG types, indicating much broader resistance in these PIs. With additional work on hybridizing the perennial Glycine species with soybean, along with gene cloning and gene transfer techniques, many of the genes in the perennial Glycine species could be used to develop additional soybean genotypes with SCN resistance.

    In the second chapter of my thesis, a genome-wide association study (GWAS) was used to detect SNPs significantly associated with SCN resistance in the core collection of P. vulgaris and to make genomic predictions (GPs) of resistance to two HG types. GWAS identified SNPs significantly associated with resistance to both HG types, and GP for resistance to the two SCN HG types achieved high prediction accuracy. These findings demonstrate that GWAS and GP are valuable tools for developing new common bean varieties with SCN resistance.

    Epidemiological studies of the environmental and biological factors affecting disease entry, establishment, and development are also extremely important for successful disease management. The third chapter of my thesis focuses on developing mathematical models that use environmental and biological variables to predict epidemics of soybean rust (Phakopsora pachyrhizi), another devastating fungal disease of soybean that establishes and develops rapidly in the field. Four machine learning models, including the Least Absolute Shrinkage and Selection Operator (LASSO), zero-inflated Poisson/regular Poisson regression, random forest, and a neural network, were built and compared to describe the deposition of urediniospores collected in passive and active traps. The high prediction accuracy of some of the models demonstrates the applicability of machine learning to disease risk assessment, and the findings of this project can help guide farmers in making appropriate and timely disease management decisions.
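    To make the third-chapter workflow concrete, here is a minimal, hedged sketch of fitting two of the named model families (LASSO and random forest) to a count-valued response such as trapped spores per day; the feature names and synthetic data are placeholders, not the thesis dataset.

```python
# Hedged sketch: compare LASSO and random forest regression on a toy count response.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
X = pd.DataFrame({
    "temperature": rng.normal(24, 4, n),          # placeholder weather covariates
    "relative_humidity": rng.uniform(40, 100, n),
    "rainfall_mm": rng.exponential(3, n),
    "wind_speed": rng.gamma(2, 1.5, n),
})
# Toy deposition counts driven mainly by humidity and rainfall (assumption).
rate = np.exp(0.02 * X["relative_humidity"] + 0.05 * X["rainfall_mm"] - 1.5)
y = rng.poisson(rate)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("LASSO", LassoCV(cv=5)),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.2f}")
```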

    Robust Image Analysis by L1-Norm Semi-supervised Learning

    This paper presents a novel L1-norm semi-supervised learning algorithm for robust image analysis, based on a new L1-norm formulation of Laplacian regularization, the key step of graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is defined directly over the eigenvectors of the normalized Laplacian matrix, we can formulate semi-supervised learning as an L1-norm linear reconstruction problem that can be effectively solved with sparse coding. By working with only a small subset of eigenvectors, we further develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Owing to the sparsity induced by sparse coding, the proposed algorithm can deal with noise in the data to some extent and thus has important applications in robust image analysis, such as noise-robust image classification and noise reduction for visual and textual bag-of-words (BOW) models. In particular, this paper is the first attempt to obtain a robust image representation by sparse co-refinement of visual and textual BOW models. The experimental results show the promising performance of the proposed algorithm. Comment: This is an extension of our long paper in ACM MM 201
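    A rough sketch of the core step as described (a small subset of normalized-Laplacian eigenvectors, then an L1-regularized reconstruction of the label signal) is shown below. The graph construction, the Lasso solver, all parameters, and the simplification of treating unlabeled points as zero targets are assumptions, not the authors' formulation.

```python
# Hedged sketch: L1-norm reconstruction of a label signal in a graph-Laplacian eigenbasis.
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.linear_model import Lasso
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                        # toy feature vectors
y = np.zeros(300)
y[:10], y[10:20] = 1.0, -1.0                          # a few labeled points; rest left at 0

# k-NN graph and its normalized Laplacian.
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)                                   # symmetrize
L = laplacian(W, normed=True).toarray()

# Eigenvectors with the smallest eigenvalues span the smoothest functions on the graph.
eigvals, eigvecs = np.linalg.eigh(L)
U = eigvecs[:, :15]                                   # small subset of eigenvectors

# L1-norm (sparse-coding style) reconstruction of the label signal in this basis.
coder = Lasso(alpha=1e-3, fit_intercept=False).fit(U, y)
scores = U @ coder.coef_                              # propagated soft labels for all points
```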

    Lower Sodium Intake and Risk of Headaches: Results From the Trial of Nonpharmacologic Interventions in the Elderly.

    Objectives: To determine the effect of sodium (Na) reduction on the occurrence of headaches. Methods: In the Trial of Nonpharmacologic Interventions in the Elderly, 975 men and women (aged 60-80 years) with hypertension were randomized to a Na-reduction intervention or a control group and were followed for up to 36 months. The study was conducted between 1992 and 1995 at 4 clinical centers (Johns Hopkins University, Wake Forest University School of Medicine, Robert Wood Johnson Medical School, and the University of Tennessee). Results: The mean difference in Na excretion between the Na-reduction intervention and control groups was significant at each follow-up visit (P < .001), with an average difference of 38.8 millimoles per 24 hours. The occurrence of headaches was significantly lower in the Na-reduction intervention group (10.5%) than in the control group (14.3%), with a hazard ratio of 0.59 (95% confidence interval = 0.40, 0.88; P = .009). The risk of headaches was significantly associated with the average level of Na excretion during follow-up, independent of the most recent blood pressure. The relationship appeared to be nonlinear, captured by a spline with a knot at 150 millimoles per 24 hours. Conclusions: Reduced sodium intake, currently recommended for blood pressure control, may also reduce the occurrence of headaches in older persons with hypertension.
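    For readers unfamiliar with the spline detail, the sketch below shows one way a piecewise-linear sodium term with a knot at 150 mmol/24 h could enter a proportional-hazards model. It is not the trial's analysis code: the lifelines package, the column names, and the synthetic data are all assumptions.

```python
# Hedged sketch: Cox model with a piecewise-linear spline in Na excretion (knot at 150).
# Requires the `lifelines` package; data and variable names are placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 975
na = rng.normal(140, 35, n).clip(40, 280)          # 24-h urinary Na excretion (mmol), toy values
df = pd.DataFrame({
    "na_excretion": na,
    "na_above_150": np.maximum(na - 150, 0),       # spline basis: extra slope beyond the knot
    "time_months": rng.uniform(1, 36, n),          # toy follow-up times
    "headache": rng.integers(0, 2, n),             # toy event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="headache")
cph.print_summary()
```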

    Efficient and Joint Hyperparameter and Architecture Search for Collaborative Filtering

    Automated Machine Learning (AutoML) techniques have recently been introduced to design Collaborative Filtering (CF) models in a data-specific manner. However, existing works search either architectures or hyperparameters, ignoring the fact that they are intrinsically related and should be considered together. This motivates us to develop a joint hyperparameter and architecture search method for designing CF models. This is not easy, however, because of the large search space and high evaluation cost. To address these challenges, we reduce the space by screening out less useful hyperparameter choices through a comprehensive understanding of individual hyperparameters. Next, we propose a two-stage search algorithm to find proper configurations in the reduced space. In the first stage, we leverage knowledge from subsampled datasets to reduce evaluation cost; in the second stage, we efficiently fine-tune the top candidate models on the whole dataset. Extensive experiments on real-world datasets show that better performance can be achieved compared with both hand-designed and previously searched models. In addition, ablation and case studies demonstrate the effectiveness of our search framework. Comment: Accepted by KDD 202
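    A bare-bones sketch of the two-stage idea (cheap screening on a subsample, then re-evaluating only the top candidates on the full data) follows. The configuration space, the helper functions, and the placeholder scoring are illustrative assumptions, not the paper's search framework.

```python
# Hedged skeleton of a two-stage joint hyperparameter/architecture search.
import random

SPACE = {
    "embedding_dim": [8, 16, 32, 64],
    "learning_rate": [1e-3, 3e-3, 1e-2],
    "interaction": ["dot", "mlp"],            # a toy "architecture" choice
}

def sample_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def train_and_score(config, data_fraction):
    """Placeholder: train a CF model on a fraction of the data, return a validation score."""
    random.seed(str((sorted(config.items()), data_fraction)))
    return random.random()                     # stand-in for a real metric such as NDCG

# Stage 1: cheap screening of many sampled configurations on a 10% subsample.
candidates = [sample_config() for _ in range(50)]
scored = sorted(candidates, key=lambda c: train_and_score(c, 0.1), reverse=True)

# Stage 2: re-evaluate (fine-tune) only the top-5 candidates on the full dataset.
best = max(scored[:5], key=lambda c: train_and_score(c, 1.0))
print("selected configuration:", best)
```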