30 research outputs found

    Feature Weaken: Vicinal Data Augmentation for Classification

    Full text link
    Deep learning typically relies on large-scale training data to achieve good performance, yet over-fitting to the training data remains a persistent problem. Researchers have proposed various strategies, such as feature dropping and feature mixing, to improve generalization. For the same purpose, we propose a novel training method, Feature Weaken, which can be regarded as a data augmentation method. Feature Weaken constructs a vicinal data distribution for model training by weakening the features of the original samples while preserving their cosine similarity. In particular, Feature Weaken changes the spatial distribution of samples, adjusts sample boundaries, and reduces the gradient magnitude in back-propagation. This not only improves the classification performance and generalization of the model, but also stabilizes training and accelerates convergence. We conduct extensive experiments on classical deep convolutional neural models with five common image classification datasets, and on the BERT model with four common text classification datasets. Compared with the classical models and with generalization-improvement methods such as Dropout, Mixup, Cutout, and CutMix, Feature Weaken shows good compatibility and performance. We also perform robustness experiments with adversarial samples; the results show that Feature Weaken effectively improves model robustness. (Comment: 9 pages, 6 figures)
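
    The abstract does not give the exact weakening operator, but uniformly scaling a feature vector by a factor in (0, 1) is one operation that shrinks feature magnitude while leaving the cosine similarity with the original sample unchanged. The sketch below illustrates that reading; the function name `feature_weaken` and the default `alpha` are assumptions, not the authors' implementation.

```python
import torch

def feature_weaken(features: torch.Tensor, alpha: float = 0.9) -> torch.Tensor:
    """Weaken features by uniform scaling (illustrative sketch, not the paper's code).

    Scaling every component by the same alpha in (0, 1) shrinks the feature
    magnitude but keeps the cosine similarity with the original vector at 1,
    matching the abstract's "same cosine similarity" property.
    """
    assert 0.0 < alpha < 1.0, "alpha outside (0, 1) would not weaken the features"
    return alpha * features

# Usage: apply to intermediate representations during training only.
x = torch.randn(32, 128)                 # a batch of 128-dim features
x_weak = feature_weaken(x, alpha=0.8)
cos = torch.nn.functional.cosine_similarity(x, x_weak, dim=1)
assert torch.allclose(cos, torch.ones(32))   # cosine similarity is preserved
```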

    Assessing Lodging Severity over an Experimental Maize (Zea mays L.) Field Using UAS Images

    No full text
    Lodging has been recognized as one of the major destructive factors for crop quality and yield, resulting in an increasing need for cost-efficient and accurate methods to detect crop lodging in a routine manner. Using structure-from-motion (SfM) and novel geospatial computing algorithms, this study investigated the potential of high-resolution imaging with unmanned aircraft system (UAS) technology for detecting and assessing lodging severity over an experimental maize field at the Texas A&M AgriLife Research and Extension Center in Corpus Christi, Texas, during the 2016 growing season. The proposed method not only detects the occurrence of lodging at the field scale, but also quantitatively estimates the number of lodged plants and the lodging rate within individual rows. Nadir-view images of the field trial were taken routinely by multiple UAS platforms equipped with consumer-grade red, green, and blue (RGB) and near-infrared (NIR) cameras, enabling timely observation of plant growth until harvest. Models of canopy structure were reconstructed via an SfM photogrammetric workflow. UAS-estimated maize height was characterized by polygons developed and expanded from individual row centerlines, and agreed reliably with field-measured heights obtained on multiple dates. The proposed method then segmented individual maize rows into multiple grid cells and determined lodging severity from height percentiles evaluated against preset thresholds within each grid cell. The UAS-based lodging results were generally comparable in accuracy to ground measurements collected by a human observer, for both the number of lodged plants (R2 = 0.48) and the lodging rate (R2 = 0.50) on a per-row basis. The results also showed a negative relationship between ground-measured yield and both the UAS-estimated and ground-measured lodging rates.
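
    As a rough illustration of the per-row grid-cell step, the sketch below splits a row's UAS-estimated canopy heights into cells and flags a cell as lodged when a height percentile falls below a threshold. The cell size, percentile, and height threshold are placeholder assumptions, since the abstract does not report the values used.

```python
import numpy as np

def row_lodging_rate(heights, cell_size=25, pct=90, height_thresh=0.6):
    """Estimate the lodging rate of one maize row from canopy heights (meters).

    heights       : 1-D array of height samples ordered along the row centerline
    cell_size     : number of samples per grid cell (assumed value)
    pct           : height percentile evaluated per cell (assumed value)
    height_thresh : cells whose percentile height falls below this are
                    counted as lodged (assumed value)
    """
    heights = np.asarray(heights, dtype=float)
    cells = [heights[i:i + cell_size] for i in range(0, len(heights), cell_size)]
    lodged = [np.percentile(c, pct) < height_thresh for c in cells]
    return sum(lodged) / len(cells)   # fraction of lodged cells in the row
```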

    Inferring Human Activity in Mobile Devices by Computing Multiple Contexts

    No full text
    This paper introduces a framework for inferring human activities on mobile devices by computing spatial, temporal, spatiotemporal, and user contexts. A spatial context is a significant location defined as a geofence, which can be a node associated with a circle or a polygon; a temporal context contains time-related information, e.g., a local time tag, a time difference between geographical locations, or a timespan; a spatiotemporal context is defined as the dwelling length at a particular spatial context; and a user context includes user-related information such as the user’s mobility, environmental, psychological, or social contexts. Using measurements from the built-in sensors and radio signals of mobile devices, a contextual tuple covering the aforementioned contexts can be captured every second. Given a contextual tuple, the framework evaluates the posterior probability of each candidate activity in real time using a Naïve Bayes classifier. A large dataset containing 710,436 contextual tuples was recorded over one week in an experiment carried out at Texas A&M University-Corpus Christi with three participants. The test results demonstrate that the multi-context solution significantly outperforms the spatial-context-only solution: a classification accuracy of 61.7% is achieved for the spatial-context-only solution, while 88.8% is achieved for the multi-context solution.
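
    The classifier itself is standard Naïve Bayes over discrete context features. Below is a minimal sketch, assuming categorical context values and add-one (Laplace) smoothing; the class name, feature encoding, and smoothing scheme are illustrative assumptions, as the abstract does not specify them.

```python
import math
from collections import defaultdict

class ContextNaiveBayes:
    """Naive Bayes over discrete contextual tuples (sketch; smoothing assumed)."""

    def fit(self, tuples, activities):
        self.prior = defaultdict(int)     # activity counts
        self.counts = defaultdict(int)    # (activity, slot, value) counts
        self.values = defaultdict(set)    # observed values per context slot
        for x, y in zip(tuples, activities):
            self.prior[y] += 1
            for i, v in enumerate(x):
                self.counts[(y, i, v)] += 1
                self.values[i].add(v)
        self.n = len(tuples)
        return self

    def predict(self, x):
        def log_post(y):
            lp = math.log(self.prior[y] / self.n)          # log P(activity)
            for i, v in enumerate(x):                      # + sum log P(context_i | activity)
                lp += math.log((self.counts[(y, i, v)] + 1) /
                               (self.prior[y] + len(self.values[i])))
            return lp
        return max(self.prior, key=log_post)

# Usage with a toy tuple (geofence id, time bucket, dwell bucket, mobility):
clf = ContextNaiveBayes().fit(
    [("campus", "morning", "long", "still"),
     ("gym", "evening", "short", "walking")],
    ["studying", "exercising"])
print(clf.predict(("campus", "morning", "long", "still")))   # -> "studying"
```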