
    Improving Classification Accuracy Using Clustering Technique

    Product classification is a key issue in e-commerce domains. Many products are released to the market rapidly, and selecting the correct taxonomy category for each product has become a challenging task. Classification models can be applied to categorize products precisely. This study proposes applying clustering prior to classification and uses a large-scale real-world data set to assess how well the clustering technique improves the classification model. Conventional text classification steps, such as preprocessing, feature extraction, and feature selection, are carried out before the clustering technique is applied. Results show that clustering improves the accuracy of the classification model. The best model under all three approaches, classification only, classification with hierarchical clustering, and classification with K-means clustering, is the K-Nearest Neighbor (KNN) model. Although the accuracy of the KNN model is the same across the approaches, the KNN model with K-means clustering has the shortest execution time. Hence, applying K-means clustering prior to the KNN model helps reduce computation time.
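    As a rough sketch of the pipeline described above, clustering prior to KNN classification can be set up with scikit-learn along the following lines; the parameter values, the TF-IDF and chi-square feature steps, and the per-cluster KNN strategy are illustrative assumptions rather than the study's exact configuration.

        # Sketch of clustering-then-classification for product categorization.
        # Parameter values and the per-cluster KNN strategy are assumptions.
        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.feature_selection import SelectKBest, chi2
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        def fit_clustered_knn(texts, labels, n_clusters=10, k_features=5000, n_neighbors=5):
            # Preprocessing + feature extraction: TF-IDF over product titles/descriptions.
            vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
            X = vectorizer.fit_transform(texts)

            # Feature selection: keep the features most associated with the categories.
            selector = SelectKBest(chi2, k=min(k_features, X.shape[1]))
            X = selector.fit_transform(X, labels)

            # K-means clustering applied prior to classification.
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

            # One KNN classifier per cluster; at prediction time only the nearest
            # cluster's points are searched, which is where the speed-up comes from.
            labels = np.asarray(labels)
            knns = {}
            for c in range(n_clusters):
                idx = np.where(km.labels_ == c)[0]
                knns[c] = KNeighborsClassifier(
                    n_neighbors=min(n_neighbors, len(idx))).fit(X[idx], labels[idx])
            return vectorizer, selector, km, knns

        def predict_clustered_knn(texts, vectorizer, selector, km, knns):
            # Route each new product to its nearest cluster and query only that cluster's KNN.
            X = selector.transform(vectorizer.transform(texts))
            clusters = km.predict(X)
            return np.array([knns[c].predict(X[i])[0] for i, c in enumerate(clusters)])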

    Research on the Construction of Sales Forecasting Model of Fashion Products Based on Feature Representation of Multimodal and Deep Learning

    By improving the accuracy of sales forecasting, this paper provides support for fashion product sales enterprises to make better inventory management and operational decisions. A deep neural network is introduced into the construction of multimodal features, and the internal structure of the different modalities, such as historical sales features, picture features, and basic product attribute features, is fully considered; on this basis, a sales forecasting model for fashion products based on multimodal feature fusion is constructed. In addition, using real enterprise data, the proposed model is compared with an exponential regression model and a shallow neural network model. The paper finds that the multimodal feature and deep learning representation method performs better than the traditional methods (exponential regression and shallow neural network) in the task of predicting sales of fashion products. The results help enterprises use deep learning methods and multimodal data to make accurate sales forecasts.
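    A minimal sketch of such a multimodal fusion network, written with Keras, is given below; the input dimensions, layer sizes, and the assumption that product image features are pre-extracted by a CNN backbone are illustrative choices rather than the architecture reported in the paper.

        # Sketch of a sales forecaster fusing three modalities: historical sales,
        # image features, and basic product attributes. Dimensions are assumptions.
        import tensorflow as tf
        from tensorflow.keras import layers, Model

        def build_fusion_model(sales_window=12, img_dim=512, attr_dim=20):
            # Historical sales sequence (e.g. weekly sales over the past year).
            sales_in = layers.Input(shape=(sales_window, 1), name="sales_history")
            sales_feat = layers.LSTM(32)(sales_in)

            # Pre-extracted product image features (e.g. from a CNN backbone).
            img_in = layers.Input(shape=(img_dim,), name="image_features")
            img_feat = layers.Dense(64, activation="relu")(img_in)

            # Basic attribute features (category, colour, price band, ...), already encoded.
            attr_in = layers.Input(shape=(attr_dim,), name="attribute_features")
            attr_feat = layers.Dense(16, activation="relu")(attr_in)

            # Fuse the modalities and regress the next-period sales volume.
            fused = layers.Concatenate()([sales_feat, img_feat, attr_feat])
            hidden = layers.Dense(64, activation="relu")(fused)
            output = layers.Dense(1, name="sales_forecast")(hidden)

            model = Model(inputs=[sales_in, img_in, attr_in], outputs=output)
            model.compile(optimizer="adam", loss="mse")
            return model

        model = build_fusion_model()
        model.summary()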

    Essays on Representation Learning for Political Science Research

    This dissertation consists of three papers on leveraging representation learning for political science research. Representation learning refers to techniques that learn a mapping between input data and a feature vector or tensor with respect to a task, such as classification or regression. These vectors or tensors capture abstract and relevant concepts in the data, making it easier to extract information. Across the three papers, I show how representation learning allows political scientists to work effectively with complex data such as text and images.

    In the first paper, I propose using word embeddings to calculate partisan associations from Twitter users' bios. The method only requires that some users in the corpus of tweets use partisan words in their bios. Intuitively, the word embeddings learn associations between non-partisan and partisan words from bios and extend those associations to all users. I apply the method to a collection of users who tweeted about election incidents during the 2016 United States general election. Which partisan accounts users retweet, favorite, and follow, and which partisan hashtags they use, correlate closely with the partisan association scores. I also apply the method to users who tweeted about masks during the COVID-19 pandemic and find that users with more Democratic-leaning partisan association scores are more likely to use health advocacy hashtags, such as #MaskUp.

    In the second paper, I look at the automated classification of observations with both images and text. Most state-of-the-art vision-and-language models are unusable for most political science research because they require every observation to have both an image and text and rely on computationally expensive pretraining. This paper proposes a novel vision-and-language framework called multimodal representations using modality translation, or MARMOT. MARMOT makes two methodological contributions: it constructs representations for observations missing image or text, and it replaces computationally expensive pretraining with modality translation, which learns the patterns between images and their captions. MARMOT outperforms an ensemble text-only classifier in 19 of 20 categories in multilabel classifications of tweets reporting election incidents during the 2016 U.S. general election. MARMOT also shows significant improvements over benchmark multimodal models on the Hateful Memes dataset, improving the best accuracy and area under the receiver operating characteristic curve (AUC) set by VisualBERT from 0.6473 to 0.6760 and from 0.7141 to 0.7530, respectively.

    In the third paper, I turn to the problem of computationally studying how language usage evolves over time. The corpora that political scientists typically work with are much smaller than the extensive corpora used in natural language processing research, and the usual approach of training a separate word embedding space for each period worsens the problem by splitting the corpus into even smaller corpora. This paper proposes the pretrained-augmented embeddings (PAE) framework, which combines pretrained and non-pretrained embeddings to learn time-specific word embeddings. In the first application, I apply the PAE framework to a corpus of New York Times text data spanning several decades; the PAE framework matches human judgments of how specific words evolve in their usage much more closely than existing methods. In the second application, I apply the PAE framework to a corpus of tweets written during the COVID-19 pandemic about masking; the PAE framework automatically detects shifts in discussions about specific events during the pandemic vis-a-vis the keyword of interest.

    PhD, Political Science, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/169642/1/pywu_1.pd
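    As an illustration of the first paper's idea, one plausible way to compute a bio-based partisan association score from word embeddings is sketched below; the seed words, the Word2Vec settings, and the projection-onto-an-axis scoring rule are hypothetical choices and may differ from the dissertation's actual procedure.

        # Hypothetical sketch: score Twitter bios on a partisan axis learned from
        # word embeddings. Seed words and the scoring rule are illustrative assumptions.
        import numpy as np
        from gensim.models import Word2Vec

        DEM_SEEDS = ["democrat", "liberal", "progressive"]
        REP_SEEDS = ["republican", "conservative", "maga"]

        def partisan_axis(wv, dem_seeds=DEM_SEEDS, rep_seeds=REP_SEEDS):
            # Axis from the mean Democratic-seed vector to the mean Republican-seed vector.
            dem = np.mean([wv[w] for w in dem_seeds if w in wv], axis=0)
            rep = np.mean([wv[w] for w in rep_seeds if w in wv], axis=0)
            axis = rep - dem
            return axis / np.linalg.norm(axis)

        def score_bio(tokens, wv, axis):
            # Average the bio's word vectors and project onto the partisan axis;
            # here positive scores lean Republican and negative scores lean Democratic.
            vecs = [wv[t] for t in tokens if t in wv]
            if not vecs:
                return 0.0
            bio_vec = np.mean(vecs, axis=0)
            return float(np.dot(bio_vec / np.linalg.norm(bio_vec), axis))

        # Toy usage: `bios` stands in for a corpus of tokenised Twitter bios.
        bios = [["proud", "democrat", "and", "dog", "mom"],
                ["conservative", "veteran", "maga"],
                ["coffee", "lover", "and", "runner"]]
        model = Word2Vec(sentences=bios, vector_size=50, window=5, min_count=1, sg=1, seed=0)
        axis = partisan_axis(model.wv)
        scores = [score_bio(b, model.wv, axis) for b in bios]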