
    Automatic event-level textual emotion sensing using mutual action histogram between entities

    Automatic emotion sensing in textual data is crucial for the development of intelligent interfaces in many interactive computer applications. This paper describes a high-precision, knowledge-base-independent approach for automatically sensing the emotions of the subjects of events embedded within sentences. The proposed approach is based on the probability distribution of common mutual actions between the subject and the object of an event. We have incorporated web-based text mining and semantic role labeling techniques, together with a number of reference entity pairs and hand-crafted emotion generation rules, to realize an event emotion detection system. The evaluation reveals a satisfactory result of about 85% accuracy in detecting positive, negative, and neutral emotions.
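    As a rough illustration of the idea (not the authors' implementation), the Python sketch below classifies the emotion of an event's subject from a histogram of mutual actions between subject and object. The action lexicon, the polarity labels, and the neutral margin are hypothetical placeholders for the web-mined probability distributions and hand-crafted rules described above.

        from collections import Counter

        # Hypothetical polarity lexicon for mutual actions between an event's
        # subject and object; the paper derives such distributions from
        # web-based text mining over reference entity pairs.
        ACTION_POLARITY = {
            "help": "positive", "praise": "positive", "rescue": "positive",
            "attack": "negative", "cheat": "negative", "insult": "negative",
        }

        def sense_event_emotion(mutual_actions, neutral_margin=0.1):
            """Classify the subject's emotion from a histogram of mutual actions."""
            histogram = Counter(mutual_actions)            # action -> frequency
            total = sum(histogram.values()) or 1
            pos = sum(n for a, n in histogram.items()
                      if ACTION_POLARITY.get(a) == "positive") / total
            neg = sum(n for a, n in histogram.items()
                      if ACTION_POLARITY.get(a) == "negative") / total
            if abs(pos - neg) < neutral_margin:            # no dominant polarity
                return "neutral"
            return "positive" if pos > neg else "negative"

        print(sense_event_emotion(["help", "praise", "attack"]))  # -> positive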

    Sentiment Mining on Products Features based on Part of Speech Tagging Approach

    In today's competitive business environment, paying attention to customer feedback has become a valuable asset for organizations. Organizations have found that satisfied customers are not only repeat buyers but also advocates for the organization. Therefore, correct analysis of their feedback with information technology tools is a key element of commercial success. People generally share their opinions about purchased goods on websites or in social networks, and the extraction of these opinions is a branch of text mining known as sentiment mining. Although this field is relatively new, extensive research has been carried out in recent years on sentiment analysis and intention classification. This paper therefore proposes a sentiment-mining model able to extract users' opinions and product features. A dataset of customer comments about specific digital products was collected from a website. The paragraph-level opinions are split into sentences, and the sentences are separated into two categories, subjective and objective. Next, users' opinions and product features are extracted from the subjective sentences using the StanfordPOStagger, with TF-IDF weighting for product features and SentiWordNet for opinion polarity. In this way, user satisfaction with specific features of a product can be detected. For evaluation, Recall, Precision, and F-measure indicate the accuracy of each part of this research.
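    The following sketch illustrates the kind of pipeline described above, using NLTK in place of the StanfordPOStagger and scikit-learn for TF-IDF. The toy reviews, the noun/adjective heuristics, and the scoring are illustrative assumptions, not the paper's actual configuration.

        # Requires NLTK data: punkt, averaged_perceptron_tagger, sentiwordnet, wordnet.
        import nltk
        from nltk.corpus import sentiwordnet as swn
        from sklearn.feature_extraction.text import TfidfVectorizer

        reviews = [
            "The battery life is excellent but the screen is dim.",
            "The battery drains fast and the camera quality is poor.",
        ]

        # 1) Candidate product features: nouns ranked by their TF-IDF weight.
        vectorizer = TfidfVectorizer()
        tfidf = vectorizer.fit_transform(reviews)
        weights = dict(zip(vectorizer.get_feature_names_out(), tfidf.sum(axis=0).A1))

        def candidate_features(text, top_k=3):
            tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
            nouns = {w for w, tag in tagged if tag.startswith("NN")}
            return sorted(nouns, key=lambda w: weights.get(w, 0.0), reverse=True)[:top_k]

        # 2) Opinion polarity of adjectives via SentiWordNet.
        def polarity(word):
            synsets = list(swn.senti_synsets(word, "a"))   # adjective senses only
            if not synsets:
                return 0.0
            return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

        for review in reviews:
            tagged = nltk.pos_tag(nltk.word_tokenize(review.lower()))
            adjectives = [w for w, tag in tagged if tag.startswith("JJ")]
            score = sum(polarity(w) for w in adjectives)
            print(candidate_features(review), "->", "positive" if score > 0 else "negative")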

    EMOTION RECOGNITION IN POLISH-LANGUAGE TEXTS USING THE KEYWORD METHOD

    The dynamic development of social networks has made the Internet the most popular communication medium. The vast majority of messages are exchanged in text form and very often reflect the authors' emotional states. Detection of emotions in text is widely used in e-commerce and telemedicine and is becoming a milestone in the field of human-computer interaction. The paper presents a method of emotion recognition in Polish-language texts based on a keyword detection algorithm with lemmatization. The obtained accuracy is about 60%. The first Polish-language database of keywords expressing emotions has also been developed.
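    A minimal sketch of keyword-based emotion detection with lemmatization follows. The lemma map and the keyword set are tiny hypothetical stand-ins for a Polish morphological analyser and the keyword database developed by the authors.

        # Toy lemma map and emotion keyword base; a real system would use a
        # morphological analyser and the authors' Polish keyword database.
        LEMMAS = {"happier": "happy", "angrily": "angry", "delighted": "glad"}

        EMOTION_KEYWORDS = {
            "joy": {"happy", "glad"},
            "anger": {"angry", "furious"},
        }

        def detect_emotion(text):
            tokens = [LEMMAS.get(t, t) for t in text.lower().split()]  # lemmatize
            scores = {emotion: sum(t in keywords for t in tokens)
                      for emotion, keywords in EMOTION_KEYWORDS.items()}
            best = max(scores, key=scores.get)
            return best if scores[best] > 0 else "neutral"

        print(detect_emotion("He said he was happier with the new phone"))  # -> joy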

    24th International Conference on Information Modelling and Knowledge Bases

    In the last three decades, information modelling and knowledge bases have become essential subjects not only in academic communities related to information systems and computer science but also in business areas where information technology is applied. The series of European-Japanese Conferences on Information Modelling and Knowledge Bases (EJC) originally started as a co-operation initiative between Japan and Finland in 1982. The practical operations were then organised by Professor Ohsuga in Japan and Professors Hannu Kangassalo and Hannu Jaakkola in Finland (Nordic countries). The geographical scope has since expanded to cover Europe and other countries. The conference retains a workshop character: discussion, ample time for presentations, and a limited number of participants (50) and papers (30). Suggested topics include, but are not limited to: 1. Conceptual modelling: modelling and specification languages; domain-specific conceptual modelling; concepts, concept theories and ontologies; conceptual modelling of large and heterogeneous systems; conceptual modelling of spatial, temporal and biological data; methods for developing, validating and communicating conceptual models. 2. Knowledge and information modelling and discovery: knowledge discovery, knowledge representation and knowledge management; advanced data mining and analysis methods; conceptions of knowledge and information; modelling information requirements; intelligent information systems; information recognition and information modelling. 3. Linguistic modelling: models of HCI; information delivery to users; intelligent informal querying; linguistic foundations of information and knowledge; fuzzy linguistic models; philosophical and linguistic foundations of conceptual models. 4. Cross-cultural communication and social computing: cross-cultural support systems; integration, evolution and migration of systems; collaborative societies; multicultural web-based software systems; intercultural collaboration and support systems; social computing, behavioural modelling and prediction. 5. Environmental modelling and engineering: environmental information systems (architecture); spatial, temporal and observational information systems; large-scale environmental systems; collaborative knowledge base systems; agent concepts and conceptualisation; hazard prediction, prevention and steering systems. 6. Multimedia data modelling and systems: modelling multimedia information and knowledge; content-based multimedia data management; content-based multimedia retrieval; privacy and context-enhancing technologies; semantics and pragmatics of multimedia data; metadata for multimedia information systems. Overall, we received 56 submissions. After careful evaluation, 16 papers were selected as long papers, 17 as short papers, 5 as position papers, and 3 for presentation of perspective challenges. We thank all colleagues for their support of this issue of the EJC conference, especially the programme committee, the organising committee, and the programme coordination team. The long and short papers presented at the conference are revised after the conference and published in the series "Frontiers in Artificial Intelligence" by IOS Press (Amsterdam). The books "Information Modelling and Knowledge Bases" are edited by the Editing Committee of the conference. We believe that the conference will be productive and fruitful in advancing research on and the application of information modelling and knowledge bases. Bernhard Thalheim, Hannu Jaakkola, Yasushi Kiyoki

    Controversy trend detection in social media

    In this research, we focus on the early prediction of whether topics are likely to generate significant controversy in social media (in the form of comments, blogs, etc.). Controversy trend detection is important to companies, governments, national security agencies, and marketing groups because it can be used to identify which issues the public is having problems with and to develop strategies to remedy them. For example, companies can monitor their press releases to find out how the public is reacting and decide whether any additional public relations action is required, social media moderators can step in if discussions start becoming abusive and getting out of control, and government agencies can monitor their public policies and adjust them to address public concerns. An algorithm was developed to predict controversy trends by taking into account the sentiment expressed in comments, the burstiness of comments, and a controversy score. To train and test the algorithm, an annotated corpus was developed consisting of 728 news articles and over 500,000 comments on these articles made by viewers of CNN.com. This study achieved an average F-score of 71.3% across all time spans in detecting controversial versus non-controversial topics. The results suggest that early prediction of controversy trends is possible by leveraging social media.
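    The sketch below illustrates one plausible way to combine the three signals named above (comment sentiment, burstiness, and a controversy score). The disagreement proxy, the weights, and the threshold are assumptions made for illustration; the paper trains its model on the annotated CNN.com corpus rather than using fixed values.

        from collections import Counter
        import statistics

        def burstiness(timestamps, window=3600):
            """Peak comment count per time window divided by the mean count."""
            if not timestamps:
                return 0.0
            start = min(timestamps)
            counts = Counter(int((t - start) // window) for t in timestamps)
            values = list(counts.values())
            return max(values) / (sum(values) / len(values))

        def controversy_score(sentiments):
            """Disagreement proxy: variance of per-comment sentiment in [-1, 1]."""
            return statistics.pvariance(sentiments) if len(sentiments) > 1 else 0.0

        def is_controversial(sentiments, timestamps, w_sent=0.6, w_burst=0.4, threshold=0.5):
            # Hypothetical linear combination and threshold.
            score = (w_sent * controversy_score(sentiments)
                     + w_burst * min(burstiness(timestamps) / 10.0, 1.0))
            return score >= threshold

        # Polarised sentiment plus a burst of comments in the first hour.
        print(is_controversial([-1, 1, -1, 1, -0.8], [0, 60, 120, 4000, 7300]))  # -> True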

    Music as complex emergent behaviour: an approach to interactive music systems

    This thesis suggests a new model of human-machine interaction in the domain of non-idiomatic musical improvisation. Musical results are viewed as emergent phenomena issuing from complex internal system behaviour in relation to input from a single human performer. We investigate the prospect of rewarding interaction whereby a system modifies itself in coherent though non-trivial ways as a result of exposure to a human interactor. In addition, we explore whether such interactions can be sustained over extended time spans. These objectives translate into four criteria for evaluation: maximisation of human influence; blending of human and machine influence in the creation of machine responses; the maintenance of independent machine motivations in order to support machine autonomy; and, finally, a combination of global emergent behaviour and variable behaviour in the long run. Our implementation is heavily inspired by ideas and engineering approaches from the discipline of Artificial Life. However, we also address a collection of representative existing systems from the field of interactive composing, some of which are implemented using techniques of conventional Artificial Intelligence. All of these systems serve as a contextual background and comparative framework helping the assessment of the work reported here. This thesis advocates a networked model incorporating functionality for listening, playing and the synthesis of machine motivations. The latter incorporate dynamic relationships instructing the machine either to integrate with a musical context suggested by the human performer or, in contrast, to perform as an individual musical character irrespective of context. Techniques of evolutionary computing are used to optimise system components over time. Evolution proceeds based on an implicit fitness measure: the melodic distance between consecutive musical statements made by human and machine in relation to the currently prevailing machine motivation. A substantial number of systematic experiments reveal complex emergent behaviour inside and between the various system modules. Music scores document how global system behaviour is rendered into actual musical output. The concluding chapter offers evidence of how the research criteria were accomplished and proposes recommendations for future research.
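    As a loose illustration of the implicit fitness measure described above, the sketch below scores the melodic distance between consecutive human and machine phrases (here, a pitch-interval distance over MIDI note numbers) and rewards either similarity or contrast depending on the prevailing machine motivation. The distance metric and the motivation labels are assumptions, not the thesis's exact formulation.

        def melodic_distance(phrase_a, phrase_b):
            """Mean absolute difference between the pitch-interval sequences of two phrases."""
            ints_a = [b - a for a, b in zip(phrase_a, phrase_a[1:])]
            ints_b = [b - a for a, b in zip(phrase_b, phrase_b[1:])]
            n = min(len(ints_a), len(ints_b))
            if n == 0:
                return 0.0
            return sum(abs(x - y) for x, y in zip(ints_a, ints_b)) / n

        def implicit_fitness(human_phrase, machine_phrase, motivation="integrate"):
            # Integrating with the human context rewards small distances; acting
            # as an independent musical character rewards large ones.
            d = melodic_distance(human_phrase, machine_phrase)
            return -d if motivation == "integrate" else d

        print(implicit_fitness([60, 62, 64], [60, 61, 64]))                 # -> -1.0
        print(implicit_fitness([60, 62, 64], [60, 61, 64], "individual"))   # ->  1.0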

    State of the art of audio- and video-based solutions for AAL

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need to take action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than wearable sensors, whose presence may hinder one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Resolving pronominal anaphora using commonsense knowledge

    Coreference resolution is the task of resolving all expressions in a text that refer to the same entity. Such expressions are often used in writing and speech as shortcuts to avoid repetition. The most frequent form of coreference is the anaphor. Resolving anaphora requires not only grammatical and syntactical strategies but also semantic approaches. This dissertation presents a framework for automatically resolving pronominal anaphora by integrating recent findings from the field of linguistics with new semantic features. Commonsense knowledge is the routine knowledge people have of the everyday world. Because such knowledge is widely shared, it is frequently omitted from social communications such as texts. It is understandable that without this knowledge computers will have difficulty making sense of textual information. In this dissertation a new set of computational and linguistic features is used in a supervised learning approach to resolve pronominal anaphora in documents. Commonsense knowledge sources such as ConceptNet and WordNet are used, and similarity measures are extracted to uncover the elaborative information embedded in words that can help in the process of anaphora resolution. The anaphora resolution system is tested on 350 Wall Street Journal articles from the BBN corpus. When compared with other available systems such as BART (Versley et al., 2008) and Charniak and Elsner (2009), our system performed better and also resolved a much wider range of anaphora. We were able to achieve a 92% F-measure on the BBN corpus and an average of 85% F-measure when tested on other genres of documents, such as children's stories and short stories selected from the web.
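    The sketch below shows one semantic feature of the kind described (a WordNet similarity score for a candidate antecedent) combined with grammatical features in a supervised classifier. The feature set, the toy training data, and the omission of the ConceptNet lookup are simplifying assumptions, not the dissertation's actual system.

        # Requires the NLTK 'wordnet' corpus and scikit-learn.
        from nltk.corpus import wordnet as wn
        from sklearn.linear_model import LogisticRegression

        def wordnet_similarity(word_a, word_b):
            """Maximum path similarity over the noun senses of two words."""
            scores = [a.path_similarity(b)
                      for a in wn.synsets(word_a, pos=wn.NOUN)
                      for b in wn.synsets(word_b, pos=wn.NOUN)]
            return max((s for s in scores if s is not None), default=0.0)

        def features(candidate, pronoun_context):
            # Hypothetical feature vector for one (pronoun, candidate) pair.
            return [
                wordnet_similarity(candidate["head"], pronoun_context["verb"]),
                float(candidate["number_agrees"]),      # grammatical agreement
                float(candidate["gender_agrees"]),
                float(candidate["distance_in_sentences"]),
            ]

        # Toy training data; in practice X comes from annotated pronoun/antecedent pairs.
        X = [[0.30, 1.0, 1.0, 0.0], [0.05, 0.0, 1.0, 2.0]]
        y = [1, 0]                                      # 1 = correct antecedent
        model = LogisticRegression().fit(X, y)
        print(model.predict([[0.25, 1.0, 1.0, 1.0]]))   # predicted label for a new pair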
