
    Contextual Object Detection with a Few Relevant Neighbors

    A natural way to improve the detection of objects is to consider the contextual constraints imposed by the detection of additional objects in a given scene. In this work, we exploit the spatial relations between objects in order to improve detection capacity, and we analyze various properties of the contextual object detection problem. To precisely calculate context-based probabilities of objects, we developed a model that examines the interactions between objects in an exact probabilistic setting, in contrast to previous methods that typically rely on approximations based on pairwise interactions. Such a scheme is facilitated by the realistic assumption that the existence of an object in any given location is influenced by only a few informative locations in space. Based on this assumption, we suggest a method for identifying these relevant locations and integrating them into a mostly exact calculation of probability from their raw detector responses. This scheme is shown to improve detection results and provides unique insights into the process of contextual inference for object detection. We show that it is generally difficult to learn that a particular object reduces the probability of another, and that in cases where the context and the detector strongly disagree this learning becomes virtually impossible for the purpose of improving an object detector's results. Finally, we demonstrate improved detection results with our approach on the PASCAL VOC and COCO datasets.
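
    As a minimal illustration of the core idea, a few relevant neighboring detections adjusting a raw detector score, here is a hedged Python sketch. The log-odds combination, the weights, and the example scores are assumptions for illustration only, not the authors' exact probabilistic calculation.

```python
import numpy as np

def contextual_score(raw_score, neighbor_scores, cooccurrence_weights):
    """Re-score one candidate detection using a few relevant neighbors.

    raw_score            : detector confidence in [0, 1] for the candidate object
    neighbor_scores      : confidences of the k most informative neighboring detections
    cooccurrence_weights : illustrative weights (>0 boosts, <0 suppresses) per neighbor

    This is only a simple log-odds combination, not the exact probabilistic
    model described in the abstract.
    """
    eps = 1e-6
    logit = np.log(raw_score + eps) - np.log(1.0 - raw_score + eps)
    for score, weight in zip(neighbor_scores, cooccurrence_weights):
        logit += weight * score            # each relevant neighbor nudges the evidence
    return 1.0 / (1.0 + np.exp(-logit))    # back to a probability

# e.g. a weak "chair" detection supported by two confident "table" detections
print(contextual_score(0.4, [0.9, 0.8], [1.2, 1.0]))
```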

    Identification of Consumer Adverse Drug Reaction Messages on Social Media

    The prevalence of social media has resulted in a surge of data on the Internet that can potentially assist in many aspects of human life. One prospective use of these data is the development of an early warning system to monitor consumer Adverse Drug Reactions (ADRs). The direct reporting of ADRs by consumers is playing an increasingly important role in the world of pharmacovigilance. Social media provides patients with a platform to exchange their experiences regarding the use of certain drugs. However, the messages posted on these social media networks contain both ADR-related messages (positive examples) and non-ADR-related messages (negative examples). In this paper, we integrate text mining and partially supervised learning methods to automatically extract and classify messages posted on social media networks into positive and negative examples. Our findings provide managerial insights into how social media analytics can improve not only postmarketing surveillance but also other problem domains where large quantities of user-generated content are available.
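
    As a hedged illustration of the classification step, the sketch below trains a simple text classifier on ADR-related versus unlabelled posts with scikit-learn. Treating unlabelled posts as provisional negatives is only a naive partially supervised baseline, and the toy posts and labels are invented; the paper's integrated text-mining pipeline is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: posts labelled ADR-related (1) vs. unlabelled (0).
posts = [
    "started drug X and got a terrible rash",        # ADR-related
    "drug X gave me severe headaches for a week",    # ADR-related
    "where can I buy drug X cheaply?",               # unlabelled
    "my doctor switched me to drug Y yesterday",     # unlabelled
]
labels = [1, 1, 0, 0]

# Naive baseline: treat unlabelled posts as provisional negatives and use the
# resulting probabilities to rank new messages by how ADR-related they look.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(posts)
classifier = LogisticRegression().fit(X, labels)

new_post = ["drug X caused swelling in my legs"]
print(classifier.predict_proba(vectorizer.transform(new_post))[0, 1])  # P(ADR-related)
```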

    Environmental Scanning for Customer Complaint Identification in Social Media

    Social media provides a platform for dissatisfied and frustrated customers to discuss matters of common concern and share experiences about products and services. While listening to and learning from customers has long been recognized as an important marketing charge, identifying customer complaints on social media is a nontrivial task. Customer complaint messages are widely dispersed across social media, while non-complaint messages are unspecific and topically diverse. It is costly and time-consuming to manually label a large number of customer complaint messages (positive examples) and non-complaint messages (negative examples) for training classification systems. Nevertheless, it is relatively easy to obtain large volumes of unlabeled content on social media. In this paper, we propose a partially supervised learning approach to automatically extract high-quality positive and negative examples from an unlabeled dataset. The empirical evaluation suggests that the proposed approach generally outperforms the benchmark techniques and exhibits more stable performance.
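
    One common way to realise "extracting high-quality positive and negative examples from an unlabelled dataset" in the partially supervised learning literature is the spy technique. The sketch below shows that generic heuristic with scikit-learn; it is not necessarily the procedure used in the paper, and the function name, the spy_frac parameter, and the toy messages are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def reliable_negatives(positive_texts, unlabeled_texts, spy_frac=0.15, seed=0):
    """Spy technique: hide a fraction of the positives in the unlabeled pool,
    train a provisional classifier, and keep as 'reliable negatives' the
    unlabeled texts scored below the lowest-scoring spy."""
    rng = np.random.default_rng(seed)
    spy_idx = set(rng.choice(len(positive_texts),
                             max(1, int(spy_frac * len(positive_texts))),
                             replace=False))
    spies = [t for i, t in enumerate(positive_texts) if i in spy_idx]
    positives = [t for i, t in enumerate(positive_texts) if i not in spy_idx]

    texts = positives + unlabeled_texts + spies
    labels = [1] * len(positives) + [0] * (len(unlabeled_texts) + len(spies))

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(texts)
    scores = MultinomialNB().fit(X, labels).predict_proba(X)[:, 1]

    spy_threshold = scores[len(positives) + len(unlabeled_texts):].min()
    unlabeled_scores = scores[len(positives):len(positives) + len(unlabeled_texts)]
    return [t for t, s in zip(unlabeled_texts, unlabeled_scores) if s < spy_threshold]

# Hypothetical usage with made-up complaint (positive) and unlabelled messages.
complaints = ["the delivery was late and support ignored my emails",
              "totally disappointed, the product broke after two days",
              "worst customer service I have ever experienced",
              "my refund request has been pending for a month"]
unlabeled = ["just ordered the new model, excited to try it",
             "does anyone know if this comes in blue?",
             "the app keeps crashing since the last update"]
print(reliable_negatives(complaints, unlabeled))
```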

    An Approach for Optimal Feature Subset Selection using a New Term Weighting Scheme and Mutual Information

    With the growth of the web, vast numbers of documents are available on the Internet, and their volume is increasing drastically day by day. Automatic text categorization therefore becomes more and more important for dealing with such massive data. The major problem of document categorization, however, is the high dimensionality of the feature space. Techniques that reduce the feature dimensionality without degrading recognition performance are known as feature extraction or feature selection. Working with a reduced, relevant feature set can be more efficient and effective. The objective of feature selection is to find a subset of features that preserves the characteristics of the full feature set. Dependency among features is also important for classification, and over the past years various metrics have been proposed to measure the dependency among different features. A popular approach is maximal-relevance feature selection: selecting the features with the highest relevance to the target class. The new feature weighting scheme we propose yields substantial improvements in dimensionality reduction of the feature space. The experimental results clearly show that this integrated method outperforms the alternatives.
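
    As a rough stand-in for the proposed scheme, the sketch below performs maximal-relevance selection with off-the-shelf mutual information in scikit-learn. Plain term counts replace the new term weighting scheme described in the abstract, and the toy corpus, labels, and k value are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Tiny hypothetical corpus standing in for a large document collection.
docs = [
    "stock market prices fell sharply today",
    "the market rallied after strong earnings reports",
    "the team scored twice in the second half",
    "the striker scored a late winning goal",
]
labels = [0, 0, 1, 1]  # 0 = finance, 1 = sports

# Plain counts here; the paper proposes its own term weighting scheme instead.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Maximal relevance: keep the k terms with the highest mutual information
# with the class label.
selector = SelectKBest(mutual_info_classif, k=5).fit(X, labels)
selected = [term for term, keep in
            zip(vectorizer.get_feature_names_out(), selector.get_support()) if keep]
print(selected)
```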

    Proceedings of the 2nd Computer Science Student Workshop: Microsoft Istanbul, Turkey, April 9, 2011


    Improving average ranking precision in user searches for biomedical research datasets

    The availability of research datasets is a keystone of health and life science study reproducibility and scientific progress. Due to the heterogeneity and complexity of these data, a main challenge for research data management systems is to provide users with the best answers to their search queries. In the context of the 2016 bioCADDIE Dataset Retrieval Challenge, we investigate a novel ranking pipeline to improve the search for datasets used in biomedical experiments. Our system comprises a query expansion model based on word embeddings, a similarity measure algorithm that takes into consideration the relevance of the query terms, and a dataset categorisation method that boosts the rank of datasets matching query constraints. The system was evaluated on a corpus of 800k datasets and 21 annotated user queries, and it provides competitive results compared to the other challenge participants. In the official run, it achieved the highest infAP among the participants, +22.3% above the median infAP of the participants' best submissions. Overall, it ranks in the top 2 when an aggregated metric using the best official measures per participant is considered. The query expansion method had a positive impact on the system's performance, improving our baseline by up to +5.0% and +3.4% for the infAP and infNDCG metrics, respectively. Our similarity measure algorithm appears robust, showing smaller performance variations under different training conditions than the Divergence From Randomness framework. Finally, the result categorisation did not have a significant impact on the system's performance. We believe that our solution could be used to enhance biomedical dataset management systems; in particular, data-driven query expansion methods could be an alternative to the complexity of biomedical terminologies.
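
    As a generic illustration of embedding-based query expansion (not the challenge pipeline itself), the sketch below expands a query term with its nearest neighbours in a tiny, made-up embedding table; in practice the vectors would come from word embeddings trained on the dataset corpus.

```python
import numpy as np

def expand_query(query_terms, embeddings, top_n=2):
    """Add, for each query term, its top_n nearest neighbours in embedding space.

    embeddings: dict mapping term -> unit-normalised vector (hypothetical here;
    in practice trained on the target corpus, e.g. with word2vec)."""
    expanded = list(query_terms)
    for term in query_terms:
        if term not in embeddings:
            continue
        sims = {w: float(np.dot(embeddings[term], v))
                for w, v in embeddings.items() if w != term}
        expanded += [w for w, _ in sorted(sims.items(), key=lambda kv: -kv[1])[:top_n]]
    return expanded

# Made-up 2-d vectors purely for illustration.
raw = {"rna": [1.0, 0.1], "transcriptome": [0.9, 0.2],
       "sequencing": [0.8, 0.4], "protein": [0.1, 1.0]}
embeddings = {w: np.array(v) / np.linalg.norm(v) for w, v in raw.items()}
print(expand_query(["rna"], embeddings))  # -> ['rna', 'transcriptome', 'sequencing']
```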