A CD-ROM Based Agricultural Information Retrieval System
An information retrieval system for agricultural extension was developed using CD-ROM technology as the primary medium for information delivery. Object-oriented database techniques were used to organize the information. Conventional retrieval techniques, including hypertext, full-text searching, and relational databases, together with decision support programs such as expert systems, were integrated into a complete package for accessing information stored on the CD-ROM. A multimedia user interface was developed to provide a variety of capabilities, including computer graphics and high-resolution digitized images. Information for the disk was gathered and entered from extension publications that were tagged using an SGML-based document markup language. The fully operational CD-ROM system has been implemented in all 67 county extension offices in Florida.
Window based Enterprise Expert Search
This is the first year of participation by the City University Centre for Interactive Systems Research (CISR) in the Expert Search Task. In this paper, we describe an expert search experiment based on window-based techniques: we build a profile for each expert using the information surrounding the expert's name and email address in the documents, and then apply traditional IR techniques to search and rank the experts. Our experiments were run on Okapi, with BM25 as the ranking model. Results show that the parameter b does affect retrieval effectiveness, and that smaller values of b produce better results.
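The role of the parameter b in the BM25 ranking model described above can be sketched as follows. This is a minimal illustration of the standard BM25 term-scoring formula, not the authors' Okapi configuration; all numbers are invented for the example. With a smaller b, long expert profiles are penalised less for their length, which is consistent with the abstract's finding.

```python
import math

def bm25_score(tf, doc_len, avg_doc_len, df, n_docs, k1=1.2, b=0.75):
    """Score one query term for one document with the BM25 formula.

    b controls document-length normalisation: b=1 fully normalises
    by length, b=0 ignores document length entirely.
    """
    idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
    norm = k1 * ((1 - b) + b * doc_len / avg_doc_len)
    return idf * (tf * (k1 + 1)) / (tf + norm)

# A long "profile" document is penalised less when b is smaller:
score_b_high = bm25_score(tf=5, doc_len=2000, avg_doc_len=500,
                          df=10, n_docs=10000, b=0.75)
score_b_low = bm25_score(tf=5, doc_len=2000, avg_doc_len=500,
                         df=10, n_docs=10000, b=0.25)
```

Here the same long document scores higher under b=0.25 than under b=0.75, showing why tuning b matters when expert profiles are built from concatenated document windows of varying size.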
Retrieving Ambiguous Sounds Using Perceptual Timbral Attributes in Audio Production Environments
For over a decade, one of the well-identified problems within audio production environments has been the effective retrieval and management of sound libraries. Most self-recorded and commercially produced sound libraries are well structured in terms of metadata and textual descriptions, allowing traditional text-based retrieval approaches to obtain satisfactory results. However, traditional information retrieval techniques have limitations when retrieving ambiguous sound collections (i.e. sounds with no identifiable origin, foley sounds, synthesized sound effects, abstract sounds), because such sounds are difficult to describe textually and are psychoacoustically complex. Early psychoacoustic studies propose perceptual acoustic qualities as an effective way of describing this category of sounds [1]. In Music Information Retrieval (MIR), this problem has mostly been studied and explored in the context of content-based audio retrieval. However, we observed that most commercially available systems neither integrate advanced content-based sound descriptions nor the visualization and interface design approaches that have evolved in recent years.
Our research aimed to investigate two things: 1. the development of an audio retrieval system incorporating high-level timbral features as search parameters; 2. a user-centered approach to integrating these features into audio production pipelines through expert-user studies. In this project, we present a prototype similar to traditional sound browsers (list-based browsing), with the added functionality of filtering and ranking sounds by perceptual timbral features such as brightness, depth, roughness, and hardness. Our main focus was the retrieval process driven by timbral features. Inspired by the recent focus on user-centered systems ([2], [3]) in the MIR community, we conducted in-depth interviews and a qualitative evaluation of the system with expert users in order to identify the underlying problems. Our studies demonstrated potential applications of high-level perceptual timbral features in audio production pipelines, using a probe system and expert-user studies. We also outline future guidelines and possible improvements to the system based on the outcomes of this research.
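Ranking sounds by perceptual timbral attributes, as the prototype above does, can be sketched as a weighted nearest-neighbour query over per-sound attribute scores. This is an illustrative sketch only: the sound names, attribute values, and distance function are invented assumptions, not the authors' system.

```python
# Hypothetical catalogue: each sound carries timbral attribute
# scores in [0, 1], e.g. produced by a feature extractor.
sounds = [
    {"name": "whoosh_01", "brightness": 0.8, "roughness": 0.2, "hardness": 0.4},
    {"name": "drone_07",  "brightness": 0.3, "roughness": 0.6, "hardness": 0.1},
    {"name": "impact_13", "brightness": 0.6, "roughness": 0.7, "hardness": 0.9},
]

def rank_by_timbre(sounds, target, weights=None):
    """Rank sounds by weighted squared distance to the target
    timbral attribute values; closest sound comes first."""
    weights = weights or {k: 1.0 for k in target}
    def distance(s):
        return sum(weights[k] * (s[k] - v) ** 2 for k, v in target.items())
    return sorted(sounds, key=distance)

# "Find something bright and moderately hard":
ranked = rank_by_timbre(sounds, {"brightness": 0.9, "hardness": 0.5})
```

A real system would compute the attribute scores from audio content; the ranking step itself stays this simple.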
Implementing an Intelligent Retrieval System: The CODER System, Version 1.0
For individuals requiring interactive access to online text, information storage and retrieval systems provide a way to retrieve desired documents and/or text passages. The CODER (COmposite Document Expert/effective/extended Retrieval) system is a testbed for determining how useful various artificial intelligence techniques are for increasing the effectiveness of information storage and retrieval systems. The system, designed previously, has three components: an analysis subsystem for analyzing and storing document contents, a central spine for manipulation and storage of world and domain knowledge, and a retrieval subsystem for matching user queries to relevant documents. This thesis discusses the implementation of the retrieval subsystem and portions of the spine and analysis subsystem. It illustrates that logic programming, specifically with the Prolog language, is suitable for the development of an intelligent information retrieval system. Furthermore, it shows that system modularity provides a flexible research testbed, allowing many individuals to work on different parts of the system which may later be quickly integrated. The retrieval subsystem has been implemented in a modular fashion so that new approaches to information retrieval can easily be compared with more traditional ones. A powerful knowledge representation language, a comprehensive lexicon, and individually tailored experts using standardized blackboard modules for communication and control allowed rapid prototyping, incremental development, and ready adaptability to change. The system executes on a DEC VAX 11/785 running ULTRIX (TM), a variant of 4.2 BSD UNIX. It has been implemented as a set of MU-Prolog and C modules communicating through TCP/IP sockets.
Automatic transcription of polyphonic music exploiting temporal evolution
Automatic music transcription is the process of converting an audio recording into a symbolic representation using musical notation. It has numerous applications in music information retrieval, computational musicology, and the creation of interactive systems. Even for expert musicians, transcribing polyphonic pieces of music is not a trivial task, and while the problem of automatic pitch estimation for monophonic signals is considered to be solved, the creation of an automated system able to transcribe polyphonic music without setting restrictions on the degree of polyphony and the instrument type still remains open.
In this thesis, research on automatic transcription is performed by explicitly incorporating information on the temporal evolution of sounds. First efforts address the problem by focusing on signal processing techniques and by proposing audio features utilising temporal characteristics. Techniques for note onset and offset detection are also utilised to improve transcription performance. Subsequent approaches propose transcription models based on shift-invariant probabilistic latent component analysis (SI-PLCA), modelling the temporal evolution of notes in a multiple-instrument case and supporting frequency modulations in produced notes. Datasets and annotations for transcription research have also been created during this work. The proposed systems have been evaluated both privately and publicly within the Music Information Retrieval Evaluation eXchange (MIREX) framework, and have been shown to outperform several state-of-the-art transcription approaches.
The developed techniques have also been employed for other tasks related to music technology, such as key modulation detection, temperament estimation, and automatic piano tutoring. Finally, the proposed music transcription models have also been utilised in a wider context, namely for modelling acoustic scenes.
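The note onset detection mentioned in the abstract is commonly done with spectral flux: summing the positive magnitude increases between consecutive spectrogram frames and thresholding. The sketch below illustrates that general technique on a toy spectrogram; the frame values and threshold are invented, and this is not the thesis's specific detector.

```python
def spectral_flux_onsets(mag_frames, threshold=0.5):
    """Detect onset frames via spectral flux: the sum of positive
    per-bin magnitude increases between consecutive frames,
    compared against a fixed threshold."""
    onsets = []
    for i in range(1, len(mag_frames)):
        flux = sum(max(cur - prev, 0.0)
                   for prev, cur in zip(mag_frames[i - 1], mag_frames[i]))
        if flux > threshold:
            onsets.append(i)
    return onsets

# Toy magnitude spectrogram (frames x bins): energy appears
# abruptly at frame 2, so one onset should be detected there.
frames = [[0.0, 0.0, 0.0],
          [0.1, 0.0, 0.1],
          [0.9, 0.8, 0.7],
          [0.9, 0.8, 0.7]]
onset_frames = spectral_flux_onsets(frames)
```

Real systems add adaptive thresholding and peak picking, but the core flux computation is exactly this rectified frame-to-frame difference.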
Modeling social information skills
In a modern economy, the most important resource is human talent: competent, knowledgeable people. Locating the right person for the task is often a prerequisite to complex problem-solving, and experienced professionals possess the social skills required to find appropriate human expertise. These skills can increasingly be reproduced with specific computer software, an approach that defines the new field of social information retrieval. We analyze the social skills involved and show how to model them computationally. Current methods are described, notably information retrieval techniques and social network theory. A generic architecture and its functions are outlined and compared with recent work. In this way we try to assess the prospects of this recent domain.
Voting for candidates: adapting data fusion techniques for an expert search task
In an expert search task, the user's need is to identify people who have relevant expertise on a topic of interest. An expert search system predicts and ranks the expertise of a set of candidate persons with respect to the user's query. In this paper, we propose a novel approach for predicting and ranking candidate expertise with respect to a query. We see the problem of ranking experts as a voting problem, which we model by adapting eleven data fusion techniques. We investigate the effectiveness of the voting approach and the associated data fusion techniques across a range of document weighting models, in the context of the TREC 2005 Enterprise track. The evaluation results show that the voting paradigm is very effective, without using any collection-specific heuristics. Moreover, we show that improving the quality of the underlying document representation can significantly improve the retrieval performance of the data fusion techniques on an expert search task. In particular, we demonstrate that applying field-based weighting models improves the ranking of candidates. Finally, we demonstrate that the relative performance of the adapted data fusion techniques for the proposed approach is stable regardless of the weighting model used.
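Two of the classic data fusion techniques that can be adapted as "votes" in the way the abstract describes are CombSUM and CombMNZ: a candidate's score is aggregated from the retrieval scores of their associated documents. The sketch below shows the general idea with invented candidate names, document IDs, and scores; it is not the paper's exact formulation, which adapts eleven such techniques.

```python
from collections import defaultdict

def comb_sum(doc_scores, candidate_docs):
    """CombSUM vote: a candidate's score is the sum of the retrieval
    scores of their associated documents in the document ranking."""
    scores = defaultdict(float)
    for cand, docs in candidate_docs.items():
        for d in docs:
            scores[cand] += doc_scores.get(d, 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def comb_mnz(doc_scores, candidate_docs):
    """CombMNZ vote: CombSUM multiplied by the number of the
    candidate's documents that appear in the ranking, rewarding
    candidates with broad supporting evidence."""
    ranking = []
    for cand, docs in candidate_docs.items():
        hits = [doc_scores[d] for d in docs if d in doc_scores]
        if hits:
            ranking.append((cand, sum(hits) * len(hits)))
    return sorted(ranking, key=lambda kv: kv[1], reverse=True)

# Retrieval scores for a query, and candidate-to-document associations:
doc_scores = {"d1": 2.5, "d2": 1.2, "d3": 1.2}
candidate_docs = {"alice": ["d1"], "bob": ["d2", "d3"]}
sum_ranking = comb_sum(doc_scores, candidate_docs)
mnz_ranking = comb_mnz(doc_scores, candidate_docs)
```

Note how the two votes can disagree: CombSUM prefers alice's single strong document, while CombMNZ prefers bob's two weaker ones, which is why comparing many fusion techniques, as the paper does, is informative.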
TRECVid 2007 experiments at Dublin City University
In this paper we describe our retrieval system and experiments performed for the automatic search task in TRECVid 2007. We submitted the following six automatic runs:
• F A 1 DCU-TextOnly6: Baseline run using only ASR/MT text features.
• F A 1 DCU-ImgBaseline4: Baseline visual expert only run, no ASR/MT used. Made use of query-time generation of retrieval expert coefficients for fusion.
• F A 2 DCU-ImgOnlyEnt5: Automatic generation of retrieval expert coefficients for fusion at index time.
• F A 2 DCU-imgOnlyEntHigh3: Combination of coefficient generation which combined the coefficients generated by the query-time approach and the index-time approach, with greater weight given to the index-time coefficient.
• F A 2 DCU-imgOnlyEntAuto2: As above, except that greater weight is given to the query-time coefficient.
• F A 2 DCU-autoMixed1: Query-time expert coefficient generation that used both visual and text experts.
Using relevance feedback in expert search
In Enterprise settings, expert search is considered an important task. In this search task, the user has a need for expertise; for instance, they require assistance from someone about a topic of interest. An expert search system assists users with their "expertise need" by suggesting people with relevant expertise on the topic of interest. In this work, we apply an expert search approach that does not explicitly rank candidates in response to a query, but instead implicitly ranks candidates by taking into account a ranking of documents with respect to the query topic. Pseudo-relevance feedback, also known as query expansion, has been shown to improve retrieval performance in ad hoc search tasks. In this work, we investigate to what extent query expansion can be applied in an expert search task to improve the accuracy of the generated ranking of candidates. We define two approaches for query expansion: one based on the initial ranking of documents for the query topic, and a second based on the final ranking of candidates. The aims of this paper are two-fold: firstly, to determine if query expansion can be successfully applied in the expert search task, and secondly, to ascertain if either of the two forms of query expansion can provide robust, improved retrieval performance. We perform a thorough evaluation contrasting the two query expansion approaches in the context of the TREC 2005 and 2006 Enterprise tracks.
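The pseudo-relevance feedback idea underlying both of the abstract's expansion approaches can be sketched very simply: take the top-ranked items for the initial query, extract their most frequent terms, and append those terms to the query. The documents and parameters below are invented for illustration; real systems weight expansion terms (e.g. with divergence-based term scoring) rather than using raw counts, and handle stopwords.

```python
from collections import Counter

def expand_query(query_terms, ranked_docs, top_docs=3, top_terms=5):
    """Naive pseudo-relevance feedback: add the most frequent
    non-query terms of the top-ranked documents to the query."""
    counts = Counter()
    for doc in ranked_docs[:top_docs]:
        counts.update(t for t in doc.split() if t not in query_terms)
    expansion = [term for term, _ in counts.most_common(top_terms)]
    return list(query_terms) + expansion

# Toy ranking of documents for the query ["java"]:
docs = ["java compiler bytecode jvm",
        "java jvm garbage collection",
        "python interpreter"]
expanded = expand_query(["java"], docs, top_docs=2, top_terms=2)
```

In the candidate-based variant the abstract describes, the "documents" fed to this step would instead be the profiles of the top-ranked candidates.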
Finding Academic Experts on a MultiSensor Approach using Shannon's Entropy
Expert finding is an information retrieval task concerned with the search for the most knowledgeable people on some topic, based on documents describing people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise regarding the query. This paper introduces a novel approach for combining multiple estimators of expertise, based on a multisensor data fusion framework together with the Dempster-Shafer theory of evidence and Shannon's entropy. More specifically, we define three sensors which detect heterogeneous information derived from the textual contents, from the graph structure of the citation patterns of the community of experts, and from profile information about the academic experts. Given the evidence collected, each sensor may identify different candidates as experts and consequently the sensors may not agree on a final ranking decision. To deal with these conflicts, we apply the Dempster-Shafer theory of evidence combined with Shannon's entropy formula to fuse this information and arrive at a more accurate and reliable final ranking list. Experiments on two datasets of academic publications from the Computer Science domain attest to the adequacy of the proposed approach over traditional state-of-the-art approaches. We also ran experiments against representative supervised state-of-the-art algorithms. The results revealed that the proposed method achieves performance similar to these supervised techniques, confirming the capabilities of the proposed framework.