Datamining for Web-Enabled Electronic Business Applications
Web-enabled electronic business is generating massive amounts of data on customer purchases, browsing patterns, usage times, and preferences at an increasing rate. Data mining techniques can be applied to all of this data to obtain useful information. This chapter presents issues associated with data mining for web-enabled electronic business.
Science Concierge: A fast content-based recommendation system for scientific publications
Finding relevant publications is important for scientists who must cope
with an exponentially increasing amount of scholarly material. Algorithms can
help with this task, as they do for music, movie, and product recommendations.
However, we know little about how these algorithms perform on scholarly
material. Here, we develop an algorithm, and an accompanying Python library,
that implements a recommendation system based on the content of articles. Its
design principles are to adapt to new content, provide near-real-time
suggestions, and be open source. We tested the library on 15K posters from the
Society for Neuroscience Conference 2015. Human-curated topics were used to
cross-validate parameters in the algorithm and to produce a similarity metric
that maximally correlates with human judgments. We show that our algorithm
significantly outperformed suggestions based on keywords. The work presented
here promises to make the exploration of scholarly material faster and more
accurate.
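A content-based recommender of this kind can be sketched with plain TF-IDF and cosine similarity. This is a minimal illustration, not the library's actual implementation (which uses richer topic-based representations); all function names and the toy corpus below are assumptions.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors (as sparse dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    idf = {t: math.log(n / df[t]) for t in df}   # inverse document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(query_idx, vecs, k=2):
    """Indices of the k documents most similar to vecs[query_idx]."""
    sims = [(cosine(vecs[query_idx], v), i)
            for i, v in enumerate(vecs) if i != query_idx]
    return [i for _, i in sorted(sims, reverse=True)[:k]]

docs = [
    "neural coding in visual cortex".split(),
    "visual cortex receptive fields".split(),
    "market dynamics of stock prices".split(),
]
print(recommend(0, tfidf_vectors(docs), k=1))  # → [1]
```

The first two toy abstracts share low-IDF neuroscience terms, so the neuroscience poster is recommended over the finance one.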
Structure and Dynamics of Information Pathways in Online Media
Diffusion of information, spread of rumors and infectious diseases are all
instances of stochastic processes that occur over the edges of an underlying
network. Often, the networks over which contagions spread are unobserved, and
they are frequently dynamic, changing over time. In this paper, we
investigate the problem of inferring dynamic networks based on information
diffusion data. We assume there is an unobserved dynamic network that changes
over time, while we observe the results of a dynamic process spreading over the
edges of the network. The task then is to infer the edges and the dynamics of
the underlying network.
We develop an on-line algorithm that relies on stochastic convex optimization
to efficiently solve the dynamic network inference problem. We apply our
algorithm to information diffusion among 3.3 million mainstream media and blog
sites and experiment with more than 179 million different pieces of information
spreading over the network in a one-year period. We study the evolution of
information pathways in the online media space and find several notable
patterns. Information pathways for general recurrent topics are more stable
across time than for ongoing news events. Clusters of news media sites and
blogs often emerge and vanish in a matter of days for ongoing news events.
Major social movements and events involving the civil population, such as the
Libyan civil war or the Syrian uprising, lead to an increased number of
information pathways among blogs, as well as an overall increase in the
network centrality of blogs and social media sites. (To appear at the 6th
International Conference on Web Search and Data Mining, WSDM '13.)
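The inference setting can be illustrated with a toy surrogate: given observed infection times per cascade, score a candidate edge (u, v) by how often u is infected shortly before v. The paper itself solves a likelihood-based stochastic convex program; the counting heuristic, names, and toy cascades below are simplifications for illustration only.

```python
from collections import defaultdict

def edge_scores(cascades, window=5.0):
    """Score candidate edges by how often u is infected shortly before v.
    A crude counting surrogate for the likelihood-based convex program
    used in the paper; not the actual algorithm."""
    scores = defaultdict(int)
    for times in cascades:               # each cascade: {node: infection time}
        for u, tu in times.items():
            for v, tv in times.items():
                if u != v and 0 < tv - tu <= window:
                    scores[(u, v)] += 1  # u could plausibly have infected v
    return dict(scores)

cascades = [
    {"a": 0.0, "b": 1.0, "c": 2.0},  # a infected first, then b, then c
    {"a": 0.0, "b": 0.5},
]
print(edge_scores(cascades)[("a", "b")])  # → 2: a precedes b in both cascades
```

A real online estimator would update transmission-rate parameters per cascade via stochastic gradient steps on the diffusion likelihood, which also lets edges appear and decay over time.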
Random Indexing K-tree
Random Indexing (RI) K-tree combines two algorithms for clustering. Document
clustering presents many large-scale problems. RI K-tree scales well with
large inputs due to its low complexity, and it exhibits features that are
useful for managing a changing collection. Furthermore, it solves previous
issues with sparse document vectors when using K-tree. The algorithms and data
structures are defined, explained, and motivated, and specific modifications
to K-tree are made for use with RI. Experiments were executed to measure
quality. The results indicate that RI K-tree improves document cluster quality
over the original K-tree algorithm. (ADCS 2009.)
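The Random Indexing side of the method can be sketched as follows: each term receives a sparse random index vector, and a document is represented by the sum of its terms' vectors, giving fixed-dimension, non-sparse document representations. This is a generic RI sketch under assumed parameters, not the paper's exact implementation.

```python
import random

def index_vector(dim=100, nnz=4, seed=None):
    """Sparse random index vector: nnz positions set to +1 or -1, rest zero."""
    rng = random.Random(seed)
    vec = [0] * dim
    for pos in rng.sample(range(dim), nnz):
        vec[pos] = rng.choice((-1, 1))
    return vec

def document_vector(tokens, term_vectors, dim=100):
    """Document representation: elementwise sum of its terms' index vectors.
    New terms get a fresh index vector on first sight (incremental indexing)."""
    doc = [0] * dim
    for t in tokens:
        if t not in term_vectors:
            term_vectors[t] = index_vector(dim, seed=hash(t) % (2**32))
        for i, x in enumerate(term_vectors[t]):
            doc[i] += x
    return doc

terms = {}
v = document_vector("clustering large document collections".split(), terms)
print(len(v), len(terms))  # → 100 4: a 100-dim vector built from 4 term vectors
```

Because every document vector has the same fixed dimensionality regardless of vocabulary size, these vectors avoid the sparse-vector issues K-tree has with raw term-document representations, and new terms can be indexed without reprocessing the collection.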
Market basket analysis of library circulation data
“Market Basket Analysis” algorithms have recently seen widespread use in analyzing consumer purchasing patterns, specifically in detecting products that are frequently purchased together. We apply the Apriori market basket analysis tool to the task of detecting subject classification categories that co-occur in transaction records of books borrowed from a university library. This information can be useful in directing users to additional portions of the collection that may contain documents relevant to their information needs, and in determining a library's physical layout. These results can also provide insight into the degree of “scatter” that the classification scheme induces in a particular collection of documents.
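A minimal Apriori-style pass over circulation records might look like the sketch below. The toy subject codes and the restriction to item pairs are assumptions for illustration; the full Apriori algorithm iterates to larger itemsets.

```python
from itertools import combinations
from collections import Counter

def apriori_pairs(transactions, min_support=2):
    """Minimal Apriori sketch: find frequent single items, then count pairs
    built only from frequent items (the Apriori pruning step), and keep
    pairs meeting the support threshold."""
    item_counts = Counter(i for t in transactions for i in t)
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}
    pair_counts = Counter()
    for t in transactions:
        for a, b in combinations(sorted(set(t) & frequent_items), 2):
            pair_counts[(a, b)] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_support}

loans = [  # each row: subject classes appearing in one borrower's transaction
    {"QA76", "Z665"}, {"QA76", "Z665"}, {"QA76", "HF5548"},
]
print(apriori_pairs(loans))  # → {('QA76', 'Z665'): 2}
```

Here the co-occurring pair (computer science, library science) surfaces as a frequent itemset, the kind of cross-category association the study mines from borrowing records.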
Discovery-led refinement in e-discovery investigations: sensemaking, cognitive ergonomics and system design.
Given the very large numbers of documents involved in e-discovery investigations, lawyers face a considerable challenge of collaborative sensemaking. We report findings from three workplace studies which looked at different aspects of how this challenge was met. From a sociotechnical perspective, the studies aimed to understand how investigators collectively and individually worked with information to support sensemaking and decision making. Here, we focus on discovery-led refinement; specifically, how engaging with the materials of the investigations led to discoveries that supported refinement of the problems and new strategies for addressing them. These refinements were essential for tractability. We begin with observations which show how new lines of enquiry were recursively embedded. We then analyse the conceptual structure of a line of enquiry and consider how reflecting this in e-discovery support systems might support scalability and group collaboration. We then focus on the individual activity of manual document review, where refinement corresponded with the inductive identification of classes of irrelevant and relevant documents within a collection. Our observations point to the effects of priming on dealing with these efficiently and to issues of cognitive ergonomics at the human–computer interface. We use these observations to introduce visualisations that might enable reviewers to deal with such refinements more efficiently.
Topic Similarity Networks: Visual Analytics for Large Document Sets
We investigate ways in which to improve the interpretability of LDA topic
models by better analyzing and visualizing their outputs. We focus on examining
what we refer to as topic similarity networks: graphs in which nodes represent
latent topics in text collections and links represent similarity among topics.
We describe efficient and effective approaches to both building and labeling
such networks. Visualizations of topic models based on these networks are shown
to be a powerful means of exploring, characterizing, and summarizing large
collections of unstructured text documents. They help to "tease out"
non-obvious connections among different sets of documents and provide insights
into how topics form larger themes. We demonstrate the efficacy and
practicality of these approaches through two case studies: 1) NSF grants for
basic research spanning a 14-year period and 2) the entire English portion of
Wikipedia. (2014 IEEE International Conference on Big Data, IEEE BigData 2014.)
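Building such a topic similarity network can be sketched by linking topics whose word distributions are sufficiently similar. The paper describes its own similarity and labeling methods; the cosine measure, threshold, and toy topic-word distributions below are illustrative assumptions.

```python
import math

def cosine(p, q):
    """Cosine similarity between two topic-word distributions (sparse dicts)."""
    dot = sum(p[w] * q.get(w, 0.0) for w in p)
    norm_p = math.sqrt(sum(x * x for x in p.values()))
    norm_q = math.sqrt(sum(x * x for x in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def topic_network(topics, threshold=0.5):
    """Topic similarity network: nodes are topics, edges link topic pairs
    whose word-distribution similarity meets the threshold."""
    edges = []
    ids = sorted(topics)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            s = cosine(topics[a], topics[b])
            if s >= threshold:
                edges.append((a, b, round(s, 3)))
    return edges

topics = {  # toy topic-word distributions, as produced by an LDA model
    "t0": {"gene": 0.5, "protein": 0.4, "cell": 0.1},
    "t1": {"protein": 0.5, "cell": 0.3, "gene": 0.2},
    "t2": {"market": 0.6, "price": 0.4},
}
print(topic_network(topics))  # links only the two biology-flavored topics
```

Clusters in the resulting graph correspond to larger themes: here the two biology topics connect while the finance topic stays isolated, the kind of non-obvious grouping the visualizations are meant to surface.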
- …