
    On the Graceful Cartesian Product of Alpha-Trees

    A \emph{graceful labeling} of a graph $G$ of size $n$ is an injective assignment of integers from the set $\{0,1,\dots,n\}$ to the vertices of $G$ such that, when each edge is assigned a \emph{weight} given by the absolute value of the difference of the labels of its end vertices, all the weights are distinct. A graceful labeling is called an $\alpha$-labeling when the graph $G$ is bipartite, with stable sets $A$ and $B$, and the labels assigned to the vertices in $A$ are smaller than the labels assigned to the vertices in $B$. In this work we study graceful and $\alpha$-labelings of graphs. We prove that the Cartesian product of two $\alpha$-trees results in an $\alpha$-tree when both trees admit $\alpha$-labelings and their stable sets are balanced. In addition, we present a tree with the property that when any number of pendant vertices are attached to the vertices of any subset of its smaller stable set, the resulting graph is an $\alpha$-tree. We also prove the existence of an $\alpha$-labeling of three types of graphs obtained by connecting, sequentially, any number of paths of equal size.
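    To make the two definitions above concrete, here is a minimal sketch (not from the paper) that checks whether a labeling of a small graph is graceful, and whether it is an $\alpha$-labeling for a given bipartition; the example path and labels are illustrative only.

        def is_graceful(edges, labels):
            """A labeling of a size-n graph with integers from {0,...,n} is
            graceful if it is injective and all edge weights
            |label(u) - label(v)| are distinct."""
            n = len(edges)
            if not all(0 <= labels[v] <= n for v in labels):
                return False
            if len(set(labels.values())) != len(labels):   # injectivity
                return False
            weights = [abs(labels[u] - labels[v]) for u, v in edges]
            return len(set(weights)) == n

        def is_alpha(edges, labels, A, B):
            """An alpha-labeling is a graceful labeling of a bipartite graph
            in which every label in stable set A is smaller than every label
            in stable set B."""
            return (is_graceful(edges, labels)
                    and max(labels[v] for v in A) < min(labels[v] for v in B))

        # Path a-b-c-d (3 edges) with stable sets A = {a, c}, B = {b, d}.
        edges = [("a", "b"), ("b", "c"), ("c", "d")]
        labels = {"a": 0, "b": 3, "c": 1, "d": 2}   # edge weights 3, 2, 1
        print(is_alpha(edges, labels, A={"a", "c"}, B={"b", "d"}))   # True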

    Measuring Possible Future Selves: Using Natural Language Processing for Automated Analysis of Posts about Life Concerns

    Individuals have specific perceptions regarding their lives: how well they are doing in particular life domains, what their ideas are, and what they want to pursue in the future. These concepts are called possible future selves (PFS), a schema that contains people's ideas of who they currently are and who they wish to be in the future. The goal of this research project is to create a program that captures PFS using natural language processing. This program allows automated analysis to measure people's perceptions and goals in a particular life domain and to assess how important they consider each part of their PFS. The data used in this study were drawn from Kennard, Willis, Robinson, and Knobloch-Westerwick (2015), in which 214 women aged 21-35 years viewed magazine portrayals of women in gender-congruent and gender-incongruent roles. The participants were prompted to write about their PFS with the questions: "Over the past 7 days, how much have you thought about your current life situation and your future? What were your thoughts? How much have you thought about your goals in life and your relationships? What were your thoughts?" The written PFS responses were then coded by human coders for mentions of different life domains and for the emotions explicitly expressed in the text. Combinations of machine learning techniques were used to show how robustly machine learning can predict PFS: Long Short-Term Memory networks (LSTM), Convolutional Neural Networks (CNN), and decision trees were combined in an ensemble model. Two different training and evaluation methods were compared to find the best-performing approach for analyzing PFS. The machine learning approach predicted PFS with high accuracy, labeling a person's PFS concerns the same way the human coders did in The Allure of Aphrodite. While the models were less accurate on some measures, for example labeling a person's present career concerns with around 60% accuracy, they identified concerns about a person's past romantic life with above 95% accuracy. Overall, accuracy was around 83% for life-domain concerns.
    Undergraduate Research Scholarship by the College of Engineering
    No embargo
    Academic Major: Computer Science and Engineering
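    The general shape of the approach described above, several text classifiers whose probability estimates are averaged (soft voting) to label a response with a life-domain concern, can be sketched as follows. The abstract's LSTM and CNN components are replaced here with simpler scikit-learn models, and the tiny dataset and binary labels are invented purely for illustration.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier

        texts = [
            "I keep thinking about my career and where my job is heading",
            "my relationship with my partner has been on my mind all week",
            "I want to go back to school and finish my degree",
            "we talked about moving in together next year",
        ]
        labels = [1, 0, 1, 0]   # 1 = career/education concern, 0 = relationship concern

        # Two simple stand-in classifiers; each pipeline turns raw text into
        # TF-IDF features before classification.
        models = [
            make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
            make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(max_depth=3)),
        ]
        for m in models:
            m.fit(texts, labels)

        def ensemble_predict(new_texts):
            # Soft voting: average the class-probability estimates of all models.
            # Assumes the classes are 0 and 1, so the argmax column index
            # equals the predicted label.
            probs = np.mean([m.predict_proba(new_texts) for m in models], axis=0)
            return probs.argmax(axis=1)

        print(ensemble_predict(["thinking a lot about my future job"]))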

    Potts model, parametric maxflow and k-submodular functions

    The problem of minimizing the Potts energy function frequently occurs in computer vision applications. One way to tackle this NP-hard problem was proposed by Kovtun [19,20]. It identifies a part of an optimal solution by running $k$ maxflow computations, where $k$ is the number of labels. The number of "labeled" pixels can be significant in some applications, e.g. 50-93% in our tests for stereo. We show how to reduce the runtime to $O(\log k)$ maxflow computations (or one \emph{parametric maxflow} computation). Furthermore, the output of our algorithm allows one to speed up the subsequent alpha expansion for the unlabeled part, or can be used as-is for time-critical applications. To derive our technique, we generalize the algorithm of Felzenszwalb et al. [7] for \emph{Tree Metrics}. We also show a connection to \emph{$k$-submodular functions} from combinatorial optimization, and discuss \emph{$k$-submodular relaxations} for general energy functions.
    Comment: Accepted to ICCV 201
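    For context, a standard form of the Potts energy referred to above (notation assumed here, not taken from the paper: pixels $\mathcal{P}$, neighborhood system $\mathcal{N}$, labels $x_p \in \{1,\dots,k\}$) is

        E(x) = \sum_{p \in \mathcal{P}} \theta_p(x_p)
             + \sum_{(p,q) \in \mathcal{N}} w_{pq}\,[x_p \neq x_q],
        \qquad w_{pq} \ge 0,

    where $\theta_p$ are unary data terms and $[\cdot]$ is the Iverson bracket; the pairwise term charges the same penalty $w_{pq}$ whenever two neighboring pixels receive different labels, regardless of which two labels they are.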

    Multilingual Twitter Sentiment Classification: The Role of Human Annotators

    What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements, since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered.
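    The paper's specific agreement measures are not reproduced here; as a minimal illustration of quantifying annotator agreement, the sketch below computes Cohen's kappa for two hypothetical annotators, including a weighted variant that treats the three sentiment classes as ordered (negative < neutral < positive), so confusing "negative" with "positive" is penalized more than confusing "negative" with "neutral". The labels are invented.

        from sklearn.metrics import cohen_kappa_score

        annotator_1 = ["neg", "neu", "pos", "pos", "neu", "neg", "pos", "neu"]
        annotator_2 = ["neg", "pos", "pos", "neu", "neu", "neg", "pos", "neg"]

        # Map the classes to integers reflecting their assumed ordering.
        order = {"neg": 0, "neu": 1, "pos": 2}
        y1 = [order[a] for a in annotator_1]
        y2 = [order[a] for a in annotator_2]

        print("unweighted kappa:     ", cohen_kappa_score(y1, y2))
        print("linear-weighted kappa:", cohen_kappa_score(y1, y2, weights="linear"))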