
    Crowd Opinion Mining And Scoring

    A system and method are disclosed for mining and rating one or more crowd opinions. The system uses a machine learning approach for crowd opinion mining and scoring. The machine learning algorithm creates and updates concepts of the target query on the server while simultaneously mining the web to update opinion scores. A search interface is provided for finding concepts and opinions on the targets; based on the search, the system retrieves the target-related opinions from the server and sends crowd-sourced opinions or answers to users. Crowd knowledge is utilised to find opinions, and the scores are displayed in a central place. Bias in community-based opinions is mitigated.
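    The abstract leaves the scoring model unspecified. As a purely hypothetical sketch of the bias-mitigation idea, one could aggregate crowd opinion scores while down-weighting scores that deviate sharply from the naive consensus; the function name and weighting rule below are assumptions for illustration only:

        def score_target(scores):
            """scores: crowd opinion scores (e.g. 1-5 stars) for one target concept."""
            naive = sum(scores) / len(scores)
            # down-weight scores far from the naive consensus to damp biased raters
            weights = [1.0 / (1.0 + abs(s - naive)) for s in scores]
            return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

        print(score_target([4.5, 4.0, 1.0]))  # the low outlier is damped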

    Web Mining for Social Network Analysis: A Review, Direction and Future Vision.

    Although the web is rich in data, gathering that data and making sense of it is extremely difficult due to its unorganised nature. Existing data mining techniques can therefore be applied to extract information from web data, and the knowledge thus extracted can be used for the analysis of social networks and online communities. This paper gives a brief insight into web mining and the link analysis used in social network analysis, and reviews algorithms such as HITS, PageRank, SALSA, PHITS, CLEVER and INDEGREE, which provide measures for identifying online communities over social networks. The most common of these algorithms are PageRank and HITS. PageRank measures the importance of a page efficiently, in little time, using only inlinks, while HITS uses both inlinks and outlinks to measure the importance of a web page and is sensitive to the user query. Various extensions to these algorithms also exist to refine query-based search results. This opens many doors for future research to uncover undiscovered knowledge of existing online communities over various social networks.
    Keywords: Web Structure Mining, Link Analysis, Link Mining, Online Community Mining
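    To make the contrast between the two most common algorithms concrete, here is a minimal Python sketch, on a hypothetical four-page toy graph, of PageRank's inlink-based power iteration and the alternating hub/authority updates of HITS:

        import numpy as np

        # Adjacency: L[i][j] = 1 if page i links to page j (toy example)
        pages = ["A", "B", "C", "D"]
        L = np.array([[0, 1, 1, 0],
                      [0, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 0, 1, 0]], dtype=float)

        def pagerank(L, d=0.85, iters=50):
            """Power iteration over the inlink structure with damping factor d."""
            n = L.shape[0]
            out = L.sum(axis=1)
            out[out == 0] = 1.0                  # guard against sink pages
            M = (L / out[:, None]).T             # column-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            for _ in range(iters):
                r = (1 - d) / n + d * M @ r
            return r

        def hits(L, iters=50):
            """Alternate hub and authority updates, normalising each step."""
            n = L.shape[0]
            hub, auth = np.ones(n), np.ones(n)
            for _ in range(iters):
                auth = L.T @ hub                 # good authorities: linked to by good hubs
                auth /= np.linalg.norm(auth)
                hub = L @ auth                   # good hubs: link to good authorities
                hub /= np.linalg.norm(hub)
            return hub, auth

        print(dict(zip(pages, pagerank(L).round(3))))
        hub, auth = hits(L)
        print(dict(zip(pages, auth.round(3))))   # authority scores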

    Learning Deep Visual Object Models From Noisy Web Data: How to Make it Work

    Deep networks thrive when trained on large scale data collections. This has given ImageNet a central role in the development of deep architectures for visual object classification. However, ImageNet was created during a specific period in time, and as such it is prone to aging, as well as to dataset bias issues. Moving beyond fixed training datasets will lead to more robust visual systems, especially when deployed on robots in new environments, which must train on the objects they encounter there. To make this possible, it is important to break free from the need for manual annotators. Recent work has begun to investigate how to use the massive amount of images available on the Web in place of manual image annotations. We contribute to this research thread with two findings: (1) a study correlating a given level of label noise with the expected drop in accuracy, for two deep architectures and two different types of noise, which clearly identifies GoogLeNet as a suitable architecture for learning from Web data; (2) a recipe for the creation of Web datasets with minimal noise and maximum visual variability, based on a visual and natural language processing concept expansion strategy. By combining these two results, we obtain a method for learning powerful deep object models automatically from the Web. We confirm the effectiveness of our approach through object categorization experiments using our Web-derived version of ImageNet on a popular robot vision benchmark database, and on a lifelong object discovery task on a mobile robot.
    Comment: 8 pages, 7 figures, 3 tables
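    As an illustration of the setup behind the first finding, the sketch below injects a controlled level of label noise into a training set so the resulting accuracy drop can be measured. The two noise models shown, uniform flips and flips within a predefined confusable-class pair, are assumptions for illustration, not the paper's exact protocol:

        import numpy as np

        rng = np.random.default_rng(0)

        def inject_noise(labels, rate, n_classes, confusable=None):
            """Return a copy of `labels` with roughly `rate` of them relabelled."""
            noisy = labels.copy()
            flip = rng.random(len(labels)) < rate
            for i in np.where(flip)[0]:
                if confusable is not None:
                    # structured noise: swap within a confusable class pair
                    noisy[i] = confusable.get(int(labels[i]), labels[i])
                else:
                    # uniform noise: any other class, chosen at random
                    choices = [c for c in range(n_classes) if c != labels[i]]
                    noisy[i] = rng.choice(choices)
            return noisy

        labels = rng.integers(0, 5, size=1000)
        noisy = inject_noise(labels, rate=0.2, n_classes=5)
        print("actual flip rate:", (noisy != labels).mean())  # close to 0.2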

    Social Search with Missing Data: Which Ranking Algorithm?

    Online social networking tools are extremely popular, but can miss potential discoveries latent in the social 'fabric'. Matchmaking services that do naive profile matching with old database technology are too brittle in the absence of key data, and even modern ontological markup, though powerful, can be onerous at data-input time. In this paper, we present a system called BuddyFinder which can automatically identify buddies who best match a user's search requirements specified in a term-based query, even in the absence of stored user profiles. We deploy and compare five statistical measures, namely our own CORDER, mutual information (MI), phi-squared, improved MI and Z-score, plus two TF/IDF-based baseline methods, to find online users who best match the search requirements based on 'inferred profiles' of these users in the form of scavenged web pages. These measures identify statistically significant relationships between online users and a term-based query. Our user evaluation on two groups of users shows that BuddyFinder can find users highly relevant to search queries, and that CORDER achieved the best average ranking correlations among all seven algorithms and improved the performance of both baseline methods.
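    Two of the standard measures compared here can be computed from a 2x2 contingency table of a query term t against a candidate user u over N scavenged pages. The sketch below shows pointwise mutual information and phi-squared; CORDER, the authors' own measure, is not reproduced, and the cell counts are made up for illustration:

        import math

        def association(a, b, c, d):
            """a: pages with both t and u; b: t only; c: u only; d: neither."""
            N = a + b + c + d
            # pointwise mutual information between t and u
            pmi = math.log((a * N) / ((a + b) * (a + c)))
            # phi-squared: chi-squared divided by N
            phi2 = (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
            return pmi, phi2

        pmi, phi2 = association(a=12, b=30, c=25, d=933)
        print(f"PMI={pmi:.3f}, phi^2={phi2:.4f}")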