522 research outputs found

    YouTube AV 50K: An Annotated Corpus for Comments in Autonomous Vehicles

    Full text link
    With one billion monthly viewers and millions of users discussing and sharing opinions, comments below YouTube videos are rich sources of data for opinion mining and sentiment analysis. We introduce the YouTube AV 50K dataset, a freely available collection of more than 50,000 YouTube comments and metadata below autonomous vehicle (AV)-related videos. We describe its creation process, its content and data format, and discuss its possible uses. In particular, we conduct a case study of the first self-driving car fatality to evaluate the dataset, and show how it can be used to better understand public attitudes toward self-driving cars and public reactions to the accident. Future developments of the dataset are also discussed.
    Comment: in Proceedings of the Thirteenth International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2018)
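
    One way to run the kind of event-centered case study the abstract describes is to compare comment sentiment before and after the accident date. The sketch below is not the authors' pipeline; it is a minimal illustration assuming the third-party vaderSentiment package and comments given as (timestamp, text) pairs.

    from statistics import mean

    # Third-party lexicon-based sentiment analyzer (pip install vaderSentiment)
    from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

    def attitude_shift(comments, event_date):
        """Compare mean VADER compound sentiment of comments posted
        before vs. after an event such as the first AV fatality.
        comments: iterable of (timestamp, text); timestamps must be
        comparable to event_date (e.g. datetime objects)."""
        analyzer = SentimentIntensityAnalyzer()
        before, after = [], []
        for ts, text in comments:
            score = analyzer.polarity_scores(text)["compound"]  # in [-1, 1]
            (before if ts < event_date else after).append(score)
        # Return (pre-event mean, post-event mean); 0.0 if a side is empty
        return (mean(before) if before else 0.0,
                mean(after) if after else 0.0)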

    Toward a Robust Diversity-Based Model to Detect Changes of Context

    Get PDF
    Being able to automatically and quickly understand the user context during a session is a major issue for recommender systems. As a first step toward that goal, we propose a model that observes in real time the diversity brought by each item relative to a short sequence of consultations, corresponding to the recent user history. Our model runs in constant time per item and is generic, since it applies to any type of item within an online service (e.g. profiles, products, music tracks) and any application domain (e-commerce, social networks, music streaming), as long as partial item descriptions are available. Observing the diversity level over time allows us to detect implicit changes of context. In the long term, we plan to characterize the context, i.e. to find common features among a contiguous sub-sequence of items between two changes of context determined by our model. This will allow us to make context-aware and privacy-preserving recommendations, and to explain them to users. As this is ongoing research, the first step here consists in studying the robustness of our model while detecting changes of context. To do so, we use a music corpus of 100 users and more than 210,000 consultations (the number of songs played in the global history). We validate the relevance of our detections by finding connections between changes of context and events such as ends of sessions. Of course, these events are only a subset of the possible changes of context, since there might be several contexts within a session. We degraded the quality of our corpus in several ways to test the performance of our model when confronted with sparsity and different types of items. The results show that our model is robust and constitutes a promising approach.
    Comment: 27th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2015), Nov 2015, Vietri sul Mare, Italy
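
    The abstract does not give the diversity measure itself, but a constant-time, window-based detector can be sketched as follows. This is a hypothetical illustration, assuming items come with partial descriptions as feature sets and using Jaccard similarity; the window size and threshold are made-up parameters.

    from collections import deque

    def jaccard(a, b):
        """Jaccard similarity between two feature sets."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    class ContextChangeDetector:
        """Flags a change of context when a new item's diversity with
        respect to a fixed-size window of recent consultations exceeds
        a threshold. The window size is constant, so each update costs
        constant time, matching the complexity claimed in the abstract."""

        def __init__(self, window_size=10, threshold=0.8):
            self.window = deque(maxlen=window_size)  # recent user history
            self.threshold = threshold

        def consume(self, item_features):
            """item_features: set of partial descriptors (e.g. genre,
            artist, tags). Returns True if a change of context is detected."""
            if not self.window:
                self.window.append(item_features)
                return False
            # Diversity = 1 - mean similarity to the recent history
            mean_sim = sum(jaccard(item_features, past)
                           for past in self.window) / len(self.window)
            diversity = 1.0 - mean_sim
            self.window.append(item_features)
            return diversity > self.threshold

    Fed a user's listening history song by song, such a detector can be validated the way the paper suggests: by checking how often flagged changes line up with session boundaries.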

    Searching, Selecting, and Synthesizing Source Code Components

    Get PDF
    As programmers develop software, they instinctively sense that source code exists that could be reused if found --- many programming tasks are common to many software projects across different domains. Oftentimes, a programmer will attempt to create new software from this existing source code, such as third-party libraries or code from online repositories. Unfortunately, several major challenges make it difficult to locate the relevant source code and to reuse it. First, there is a fundamental mismatch between the high-level intent reflected in the descriptions of source code and the low-level implementation details. This mismatch is known as the concept assignment problem, and refers to the frequent case when the keywords from comments or identifiers in code do not match the features implemented in the code. Second, even if relevant source code is found, programmers must invest significant intellectual effort into understanding how to reuse the different functions, classes, or other components present in the source code. These components may be specific to a particular application, and difficult to reuse.
    One key source of information that programmers use to understand source code is the set of relationships among the source code components. These relationships are typically structural data, such as function calls or class instantiations. This structural data has been repeatedly suggested as an alternative to textual analysis for search and reuse; however, as yet no comprehensive strategy exists for locating relevant and reusable source code. In my research program, I harness this structural data in a unified approach to creating and evolving software from existing components. For locating relevant source code, I present a search engine for finding applications based on the underlying Application Programming Interface (API) calls, and a technique for finding chains of relevant function invocations from repositories of millions of lines of code. Next, for reusing source code, I introduce a system to facilitate building software prototypes from existing packages, and an approach to detecting similar software applications.
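
    To make the structural-search idea concrete: one simple, hypothetical realization of an API-call-based application search indexes each application by the set of API calls it invokes and ranks by the rarity of shared calls, so that distinctive APIs weigh more than ubiquitous ones. This illustrates the general idea, not the dissertation's actual engine.

    import math
    from collections import defaultdict

    class ApiCallSearchEngine:
        """Toy structural search: applications are indexed by the set of
        fully-qualified API calls they invoke, and ranked by the summed
        IDF of calls shared with the query (rarer calls score higher)."""

        def __init__(self, corpus):
            # corpus: {app_name: set of API call names}
            self.corpus = corpus
            self.doc_freq = defaultdict(int)
            for calls in corpus.values():
                for call in calls:
                    self.doc_freq[call] += 1
            self.n_apps = len(corpus)

        def search(self, query_calls, top_k=5):
            """Rank applications by IDF-weighted overlap with query_calls
            (a set of API call names)."""
            scores = {}
            for app, calls in self.corpus.items():
                shared = calls & query_calls
                # Calls used by every app contribute log(1) = 0
                score = sum(math.log(self.n_apps / self.doc_freq[c])
                            for c in shared)
                if score > 0:
                    scores[app] = score
            return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]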

    Personalized Search

    Full text link
    As the volume of electronically available information grows, relevant items become harder to find. This work presents an approach to personalizing search results in scientific publication databases, focusing on re-ranking search results from existing search engines like Solr or ElasticSearch. It also includes the development of Obelix, a new recommendation system used to re-rank search results. The project was proposed and carried out at CERN, using the scientific publications available on the CERN Document Server (CDS). The work experiments with re-ranking using offline and online evaluation of users and documents in CDS. The experiments conclude that personalized search results outperform both latest-first and word-similarity rankings in terms of click position in the search results for global search in CDS.
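
    Re-ranking of this kind is typically a thin layer on top of the engine: take the engine's scored hit list and blend in a per-user score. The sketch below is a generic illustration of that pattern, not Obelix itself; the blending weight alpha and the assumption that user scores are already normalized to [0, 1] are made up.

    def rerank(results, user_scores, alpha=0.7):
        """Blend an engine's relevance scores with per-user recommendation
        scores, then sort by the combined score.
        results: non-empty list of (doc_id, engine_score);
        user_scores: {doc_id: personalization score in [0, 1]};
        alpha: weight of relevance vs. personalization."""
        engine = [s for _, s in results]
        lo, hi = min(engine), max(engine)

        def norm(s):
            # Min-max normalize engine scores to [0, 1]
            return (s - lo) / (hi - lo) if hi > lo else 0.0

        combined = [(doc, alpha * norm(s) + (1 - alpha) * user_scores.get(doc, 0.0))
                    for doc, s in results]
        return sorted(combined, key=lambda kv: -kv[1])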

    Holistic recommender systems for software engineering

    Get PDF
    The knowledge possessed by developers is often not sufficient to overcome a programming problem. Short of talking to teammates, when available, developers often gather additional knowledge from development artifacts (e.g., project documentation), as well as online resources. The web has become an essential component in the modern developer’s daily life, providing a plethora of information from sources like forums, tutorials, Q&A websites, API documentation, and even video tutorials. Recommender Systems for Software Engineering (RSSE) provide developers with assistance to navigate the information space, automatically suggest useful items, and reduce the time required to locate the needed information. Current RSSEs consider development artifacts as containers of homogeneous information in the form of pure text. However, text is a means to represent heterogeneous information provided by, for example, natural language, source code, interchange formats (e.g., XML, JSON), and stack traces. Interpreting the information from a purely textual point of view misses the intrinsic heterogeneity of the artifacts, thus leading to a reductionist approach. We propose the concept of Holistic Recommender Systems for Software Engineering (H-RSSE), i.e., RSSEs that go beyond the textual interpretation of the information contained in development artifacts. Our thesis is that modeling and aggregating information in a holistic fashion enables novel and advanced analyses of development artifacts. To validate our thesis we developed a framework to extract, model, and analyze information contained in development artifacts in a reusable meta-information model. We show how RSSEs benefit from a meta-information model, since it enables customized and novel analyses built on top of our framework. The information can thus be reinterpreted from a holistic point of view, preserving its multi-dimensionality and opening the path towards the concept of holistic recommender systems for software engineering.
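
    A first step toward such a meta-information model is simply recognizing which kind of information each fragment of an artifact carries. The heuristics below are invented for illustration (the thesis's framework is surely richer); they show the shape of the idea: tag fragments instead of treating the artifact as one undifferentiated string.

    import json
    import re

    def classify_fragment(fragment):
        """Naive heuristics to tag a fragment of a development artifact
        with the kind of information it carries."""
        try:
            json.loads(fragment)
            return "json"          # well-formed JSON (also matches bare literals)
        except ValueError:
            pass
        if re.search(r'^\s+at [\w.$]+\(.*\)', fragment, re.M):
            return "stack_trace"   # Java-style "at pkg.Class.method(File.java:42)"
        if re.search(r'(def |class |public |return )', fragment):
            return "source_code"   # crude keyword-based guess
        return "natural_language"

    def model_artifact(text):
        """Build a tiny meta-model: a list of (kind, fragment) pairs that
        downstream, type-aware analyses can query instead of raw text."""
        return [(classify_fragment(p), p) for p in text.split("\n\n") if p.strip()]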

    An Adapted Approach for User Profiling in a Recommendation System: Application to Industrial Diagnosis

    Get PDF
    In this paper, we propose the global architecture of a recommender tool that forms part of an existing collaborative platform. This tool provides diagnostic documents to industrial operators. The recommendation process considered here is composed of three steps: collecting and filtering information; prediction (recommendation); and evaluation and improvement. In this work, we focus on the collecting and filtering step. We mainly use information resulting from collaborative sessions and from documents describing the solutions attributed to complex diagnostic problems. The developed tool is based on collaborative filtering, which operates on users' preferences and similar responses.
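
    The collaborative-filtering core such a tool relies on can be sketched in a few lines. This is a generic user-based variant, assuming preferences are stored as sparse {user: {document: rating}} dictionaries; it is not the paper's implementation.

    import math

    def cosine(u, v):
        """Cosine similarity between two sparse rating dicts."""
        shared = set(u) & set(v)
        num = sum(u[k] * v[k] for k in shared)
        den = (math.sqrt(sum(x * x for x in u.values()))
               * math.sqrt(sum(x * x for x in v.values())))
        return num / den if den else 0.0

    def recommend(target, ratings, top_k=3):
        """Score diagnostic documents unseen by `target` with the
        similarity-weighted ratings of users who responded similarly.
        ratings: {user: {doc: rating}}; target: a key of ratings."""
        sims = {u: cosine(ratings[target], r)
                for u, r in ratings.items() if u != target}
        scores = {}
        for u, sim in sims.items():
            for doc, rating in ratings[u].items():
                if doc not in ratings[target]:
                    scores[doc] = scores.get(doc, 0.0) + sim * rating
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]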