
    Person re-identification using appearance

    The aim of this project is to apply a person re-identification algorithm that uses simple appearance cues such as color and texture, and to achieve satisfactory performance without training on the various datasets used by the lab. First, a literature review was carried out and the tracking results were visualised. We then explored three methods for person re-identification: (i) dominant colors, (ii) color histograms, and (iii) color invariants. We applied these methods to two sports datasets, a volleyball sequence and a soccer sequence, and to one pedestrian dataset, a video shot in the lab at EPFL. Based on the accuracies obtained, we concluded that the signatures used in the color invariants method produce the best results, since they are parts-based signatures that take spatial information into account. The dominant colors and color histograms methods do not work very well, since they are holistic approaches. The color histograms approach usually gives better results than the dominant colors approach; however, dominant colors can work better when illumination changes are small and consistent dominant colors can be obtained to describe a person or team, as in the soccer dataset. Person re-identification based on appearance is a challenging problem, and it works better when the clothes worn by the different people to be re-identified differ strongly in color and/or texture. In future work, it would be interesting to see whether superpixels can be used to divide the person into meaningful parts that can then be matched for re-identification. Finally, the report ends with acknowledgements and a synopsis of my personal experience.
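
    The color histogram method lends itself to a compact sketch. The following is a minimal illustration, not the project's actual code: it assumes OpenCV and NumPy, and the function names are invented for the example. It builds an HSV histogram signature per person crop, plus a parts-based variant that concatenates per-stripe histograms so the signature retains the spatial information identified above as decisive.

        import cv2
        import numpy as np

        def hsv_signature(person_crop, bins=(8, 8, 4)):
            # Histogram over the H, S, V channels of one cropped detection (BGR input).
            hsv = cv2.cvtColor(person_crop, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                                [0, 180, 0, 256, 0, 256])
            cv2.normalize(hist, hist)  # comparable across different crop sizes
            return hist.flatten()

        def parts_signature(person_crop, n_stripes=3):
            # Concatenate per-stripe histograms so the signature keeps spatial layout.
            h = person_crop.shape[0]
            stripes = [person_crop[i * h // n_stripes:(i + 1) * h // n_stripes]
                       for i in range(n_stripes)]
            return np.concatenate([hsv_signature(s) for s in stripes])

        def match_score(sig_a, sig_b):
            # Lower Bhattacharyya distance = more similar appearance.
            return cv2.compareHist(sig_a, sig_b, cv2.HISTCMP_BHATTACHARYYA)

    Re-identification then reduces to ranking gallery crops by match_score against a query signature; the striped variant approximates the parts-based behaviour credited to the color invariants method.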

    Modelling public sentiment in Twitter

    People often use social media as an outlet for their emotions and opinions. Analysing social media text to extract sentiment can help reveal the thoughts and opinions people have about the world they live in. This thesis contributes to the field of Sentiment Analysis, which aims to understand how people convey sentiment in order to ultimately deduce their emotions and opinions. While several sentiment classification methods have been devised, the increasing magnitude and complexity of social data call for scrutiny and advancement of these methods. The scope of this project is to improve traditional supervised learning methods for Twitter polarity detection by using rule-based classifiers, linguistic patterns, and common-sense knowledge. This thesis begins by introducing some terminology and challenges pertaining to sentiment analysis, followed by the sub-tasks or goals of sentiment analysis and a survey of commonly used approaches.

    In the first phase of the project, we propose a sentiment analysis system that combines a rule-based classifier with supervised learning to classify tweets into positive, negative, and neutral, using the training set provided by SemEval 2015 and the test sets provided by SemEval 2013. We find that the average positive and negative F-measure improves by 0.5 units when we add a rule-based classification layer to the supervised learning classifier. This demonstrates that combining high-confidence linguistic rules with supervised learning can improve classification.

    In the second phase of the project, we extend this work by proposing a sentiment analysis system that leverages complex linguistic rules and common-sense-based sentic computing resources to enhance supervised learning, and classifies tweets into positive and negative. We train our classifier on the training set provided by Sentiment140 and test it on positive and negative tweets from the test sets provided by SemEval 2013 and SemEval 2014. We find that our system achieves an average positive and negative F-measure that is 4.47 units and 3.32 units higher than the standard n-grams model for the two datasets, respectively. Supervised learning classifiers often misclassify tweets containing conjunctions like "but" and conditionals like "if", due to their special linguistic characteristics. These classifiers also assign a decision score very close to the decision boundary for a large number of tweets, which suggests that they are simply unsure, rather than completely wrong, about these tweets. The second system proposed in this thesis attempts to enhance supervised classification by countering these two challenges. An online real-time system (http://www.twitter.gelbukh.com/) has also been implemented to demonstrate the results, though it is still primitive and a work in progress. Bachelor of Engineering (Computer Science)
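
    The first-phase design, a rule-based layer in front of a supervised classifier, can be sketched compactly. This is an illustrative outline under assumptions, not the thesis's actual rules or features: the lexicons are toy stand-ins and the learner is a plain n-gram model from scikit-learn.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        POSITIVE = {"love", "great", "awesome"}   # toy lexicons for illustration
        NEGATIVE = {"hate", "awful", "terrible"}

        def rule_label(tweet):
            # Fire only on unambiguous lexicon hits; return None to defer.
            tokens = set(tweet.lower().split())
            pos, neg = tokens & POSITIVE, tokens & NEGATIVE
            if pos and not neg:
                return "positive"
            if neg and not pos:
                return "negative"
            return None  # ambiguous: fall through to the learned model

        ngram_model = make_pipeline(
            CountVectorizer(ngram_range=(1, 2)),  # unigrams + bigrams
            LogisticRegression(max_iter=1000),
        )

        def classify(tweet):
            # High-confidence rules take precedence; the n-gram model handles the rest.
            return rule_label(tweet) or ngram_model.predict([tweet])[0]

    After ngram_model.fit(train_texts, train_labels), the rule layer overrides the learner only where its evidence is unambiguous, which is the intuition behind the reported 0.5-unit F-measure gain.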

    Creation and animation of a talking head with lip sync and expressions

    A Talking Head is an animated model of a human head with synchronized lip movements and expressions. Back in the 1970s, Talking Heads were a major breakthrough in Human-Computer Interaction. Ever since, they have been used as conversational agents in offline and web-based applications and as a teaching tool to enhance learning in students. Animated talking guides, like the ones used in MS Office, demonstrate a friendly and effective way to assist inexperienced users. Though this topic has already been explored, certain linguistic challenges impede the creation of an accurate, realistic, and expressive Talking Head. With further advancements, it may be possible to generate accurate Talking Heads that can be lip-read by the hearing impaired. Moreover, recent research shows that emotionally expressive avatars facilitate learning in people with autism spectrum disorder. [2nd Award]
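
    As a rough illustration of the lip-sync half of the problem (a toy sketch; real systems use full phoneme inventories and model coarticulation), lip sync is commonly driven by mapping timed phonemes to mouth shapes, or visemes:

        # Simplified phoneme-to-viseme mapping; the table below is illustrative.
        PHONEME_TO_VISEME = {
            "AA": "open", "IY": "wide", "UW": "round",
            "M": "closed", "B": "closed", "P": "closed",
            "F": "teeth-on-lip", "V": "teeth-on-lip",
        }

        def viseme_track(timed_phonemes):
            # Convert (phoneme, start_s, end_s) triples into viseme keyframes.
            return [(PHONEME_TO_VISEME.get(ph, "rest"), start, end)
                    for ph, start, end in timed_phonemes]

        # Example: the word "map" -> M AA P
        print(viseme_track([("M", 0.00, 0.08), ("AA", 0.08, 0.22), ("P", 0.22, 0.30)]))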

    Speaking out of turn: How video conferencing reduces vocal synchrony and collective intelligence

    Collective intelligence (CI) is the ability of a group to solve a wide range of problems. Synchrony in nonverbal cues is critically important to the development of CI; however, extant findings are mostly based on studies conducted face-to-face. Given how much collaboration takes place via the internet, does nonverbal synchrony still matter, and can it be achieved when collaborators are physically separated? Here, we hypothesize and test the effect of nonverbal synchrony on CI as it develops through visual and audio cues in physically separated teammates. We show that, contrary to popular belief, the presence of visual cues has no effect on CI; furthermore, teams without visual cues are more successful in synchronizing their vocal cues and speaking turns, and when they do so, they have higher CI. Our findings show that nonverbal synchrony is important in distributed collaboration and call into question the necessity of video support.
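
    One way to make "synchronizing their vocal cues and speaking turns" concrete (an illustrative measure, not the paper's actual method) is to rasterize each speaker's turns into a binary speech-activity series and take the peak lagged correlation between speakers:

        import numpy as np

        def speech_activity(turns, duration_s, fps=10):
            # Rasterize (start_s, end_s) speaking turns into a 0/1 series.
            series = np.zeros(int(duration_s * fps))
            for start, end in turns:
                series[int(start * fps):int(end * fps)] = 1.0
            return series

        def max_lagged_correlation(a, b, max_lag=20):
            # Peak Pearson correlation of b against a over small time shifts
            # (np.roll wrap-around at the edges is ignored for simplicity).
            best = -1.0
            for lag in range(-max_lag, max_lag + 1):
                shifted = np.roll(b, lag)
                if a.std() > 0 and shifted.std() > 0:
                    best = max(best, float(np.corrcoef(a, shifted)[0, 1]))
            return best

    A higher peak at a small lag would indicate tighter turn-taking coordination between two teammates.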

    Objective Measurement of Hyperactivity Using Mobile Sensing and Machine Learning: Pilot Study

    Background: Although hyperactivity is a core symptom of attention-deficit/hyperactivity disorder (ADHD), there are no objective measures that are widely used in clinical settings. Objective: We describe the development of a smartwatch app to measure hyperactivity in school-age children. The LemurDx prototype is a software system for smartwatches that uses wearable sensor technology and machine learning to measure hyperactivity. The goal is to differentiate children with ADHD combined presentation (a combination of inattentive and hyperactive/impulsive presentations) or predominantly hyperactive/impulsive presentation from children with typical levels of activity. Methods: In this pilot study, we recruited 30 children, aged 6 to 11 years, to wear a smartwatch with the LemurDx app for 2 days. Parents also provided activity labels for 30-minute intervals to help train the algorithm. Half of the participants had ADHD combined presentation or predominantly hyperactive/impulsive presentation (n=15), and half were in the healthy control group (n=15). Results: The results indicated high usability scores and an overall diagnostic accuracy of 0.89 (sensitivity=0.93; specificity=0.86) when the motion sensor output was paired with the activity labels. Conclusions: State-of-the-art sensors and machine learning may provide a promising avenue for the objective measurement of hyperactivity.
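
    The sensing-to-classification pipeline described here can be sketched in outline (an illustrative sketch: the feature set, window length, and model are assumptions, not LemurDx's implementation):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def window_features(accel_xyz, fs=50, window_s=10):
            # Split an (N, 3) accelerometer stream into windows of summary stats.
            magnitude = np.linalg.norm(accel_xyz, axis=1)  # orientation-free motion
            step = fs * window_s
            feats = []
            for i in range(0, len(magnitude) - step + 1, step):
                w = magnitude[i:i + step]
                feats.append([w.mean(), w.std(), w.max(), np.abs(np.diff(w)).mean()])
            return np.array(feats)

        # Each parent-labeled 30-minute interval supplies one label, broadcast to
        # the windows it contains, before fitting:
        # clf = RandomForestClassifier().fit(X_train, y_train)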