18,880 research outputs found
Computational Approaches to Measuring the Similarity of Short Contexts: A Review of Applications and Methods
Measuring the similarity of short written contexts is a fundamental problem
in Natural Language Processing. This article provides a unifying framework by
which short context problems can be categorized both by their intended
application and proposed solution. The goal is to show that various problems
and methodologies that appear quite different on the surface are in fact very
closely related. The axes by which these categorizations are made include the
format of the contexts (headed versus headless), the way in which the contexts
are to be measured (first-order versus second-order similarity), and the
information used to represent the features in the contexts (micro versus macro
views). The unifying thread that binds together many short context applications
and methods is the fact that similarity decisions must be made between contexts
that share few (if any) words in common. Comment: 23 pages
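The first-order/second-order distinction can be made concrete with a small sketch (not drawn from the article; the corpus and contexts are invented): first-order similarity compares the words two contexts share directly, while second-order similarity compares the words their words co-occur with, so contexts with no words in common can still be judged similar.

```python
# Illustrative sketch: first-order vs. second-order similarity of short
# contexts. All data is invented for the example.
from collections import Counter
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two sparse count vectors (Counters).
    num = sum(a[k] * b[k] for k in a.keys() & b.keys())
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Tiny background corpus used to build a co-occurrence profile per word.
corpus = [
    "the doctor treated the patient",
    "the physician treated the patient",
]
cooc = {}
for sent in corpus:
    words = sent.split()
    for w in words:
        cooc.setdefault(w, Counter()).update(x for x in words if x != w)

ctx1 = "doctor visit".split()
ctx2 = "physician appointment".split()

# First-order: direct word overlap -- zero here, the contexts share no words.
first_order = cosine(Counter(ctx1), Counter(ctx2))

# Second-order: sum each context word's co-occurrence vector, then compare.
def profile(ctx):
    p = Counter()
    for w in ctx:
        p.update(cooc.get(w, Counter()))
    return p

second_order = cosine(profile(ctx1), profile(ctx2))
print(first_order, second_order)  # prints 0.0 1.0
```

Here "doctor" and "physician" never co-occur, but they co-occur with the same words ("treated", "patient"), so their second-order profiles match even though first-order overlap is zero.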
Harnessing Collaborative Technologies: Helping Funders Work Together Better
This report was produced through a joint research project of the Monitor Institute and the Foundation Center. The research included an extensive literature review on collaboration in philanthropy, detailed analysis of trends from a recent Foundation Center survey of the largest U.S. foundations, interviews with 37 leading philanthropy professionals and technology experts, and a review of over 170 online tools. The report is a story about how new tools are changing the way funders collaborate. It includes three primary sections: an introduction to emerging technologies and the changing context for philanthropic collaboration; an overview of collaborative needs and tools; and recommendations for improving the collaborative technology landscape. A "Key Findings" executive summary serves as a companion piece to this full report
Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure
Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenge have not been
recognized, and its own methodology has not been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and science paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing. Comment: 59 pages
The personality systems framework: Current theory and development
The personality systems framework is a fieldwide outline for organizing the contemporary science of personality. I examine the theoretical impact of systems thinking on the discipline and, drawing on ideas from general systems theory, argue that personality psychologists understand individuals’ personalities by studying four topics: (a) personality’s definition, (b) its parts (e.g., traits, schemas), (c) its organization, and (d) its development. This framework draws on theories from the field to create a global view of personality, including its position and major areas of function. The global view gives rise to new theories such as personal intelligence—the idea that people guide themselves with a broad intelligence they use to reason about personalities
Transforming Graph Representations for Statistical Relational Learning
Relational data representations have become an increasingly important topic
due to the recent proliferation of network datasets (e.g., social, biological,
information networks) and a corresponding increase in the application of
statistical relational learning (SRL) algorithms to these domains. In this
article, we examine a range of representation issues for graph-based relational
data. Since the choice of relational data representation for the nodes, links,
and features can dramatically affect the capabilities of SRL algorithms, we
survey approaches and opportunities for relational representation
transformation designed to improve the performance of these algorithms. This
leads us to introduce an intuitive taxonomy for data representation
transformations in relational domains that incorporates link transformation and
node transformation as symmetric representation tasks. In particular, the
transformation tasks for both nodes and links include (i) predicting their
existence, (ii) predicting their label or type, (iii) estimating their weight
or importance, and (iv) systematically constructing their relevant features. We
motivate our taxonomy through detailed examples and use it to survey and
compare competing approaches for each of these tasks. We also discuss general
conditions for transforming links, nodes, and features. Finally, we highlight
challenges that remain to be addressed
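Task (i) for links, predicting link existence, can be sketched with a toy example (the graph and the scoring heuristic are illustrative choices, not the article's method): score each non-adjacent node pair by its number of common neighbors and propose the highest-scoring pairs as new links.

```python
# Toy sketch of link-existence prediction: rank absent links in an
# undirected graph by the common-neighbors heuristic. The graph is
# invented for illustration.
from itertools import combinations

edges = {("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")}
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def common_neighbors(u, v):
    # Number of nodes adjacent to both u and v.
    return len(adj.get(u, set()) & adj.get(v, set()))

# Candidate links are node pairs with no existing edge.
candidates = [
    (u, v) for u, v in combinations(sorted(adj), 2)
    if v not in adj[u]
]
ranked = sorted(candidates, key=lambda p: -common_neighbors(*p))
print(ranked)  # prints [('a', 'd')]: a and d share two neighbors, b and c
```

The same pattern generalizes: swapping the scoring function (e.g., for a learned model over node features) changes the transformation from a heuristic to a statistical one, while the surrounding candidate-generation and ranking structure stays the same.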
The best of both worlds: highlighting the synergies of combining manual and automatic knowledge organization methods to improve information search and discovery.
Research suggests organizations across all sectors waste a significant amount of time looking for information and often fail to leverage the information they have. In response, many organizations have deployed some form of enterprise search to improve the 'findability' of information. Debates persist as to whether thesauri and manual indexing or automated machine learning techniques should be used to enhance discovery of information. In addition, the extent to which a knowledge organization system (KOS) enhances discoveries or indeed blinds us to new ones remains a moot point. The oil and gas industry was used as a case study, drawing on a representative organization. Building on prior research, a theoretical model is presented which aims to overcome the shortcomings of each approach. This synergistic model could help to re-conceptualize the 'manual' versus 'automatic' debate in many enterprises, accommodating a broader range of information needs. This may enable enterprises to develop more effective information and knowledge management strategies and ease the tension between what are often perceived as mutually exclusive competing approaches. Certain aspects of the theoretical model may be transferable to other industries, which is an area for further research
Planning Technologies for the Web Environment: Perspectives and Research Issues
This work explores and motivates perspectives and research issues related to the application of automated planning technologies in support of innovative web applications. The target of the technology transfer, i.e. the web and, in a broader sense, the new Information Technologies (IT), is one of the most rapidly changing and evolving areas of current computer science. Many sub-areas in this field could benefit from Planning and Scheduling (P&S) technologies, and in some cases technology transfer has already started. This paper considers a set of topics, guidelines and objectives for implementing the technology transfer, along with the new challenges, requirements and research issues for planning that emerge from the web and IT industry.
Sample scenarios are depicted to clarify the potential applications and limits of current planning technology. Finally, we point out new P&S research challenges that must be addressed to meet more advanced application goals