Resemblance, Exemplification, and Ontology
According to the quantificational (neo-)Quinean model in meta-ontology, the question of ontology boils down to the question of whether a sortal property is exemplified. I address some complications that arise when we try to build a philosophical reconstruction of the link between individuals and kinds displayed in the exemplification relation, from the point of view of conceptualism about kinds and with this stand in ontology in mind. I distinguish two notions of resemblance, object-to-object and object-to-kind, and show the problems with both of them. Finally, I argue for a better awareness of the implicit "bias" involved in the very notion of "resemblance", without indulging in Quine's veto toward this notion.
Ontology learning for the semantic deep web
Ontologies could play an important role in assisting users in their search for Web pages. This dissertation considers the problem of constructing natural ontologies that support users in their Web search efforts and increase the number of relevant Web pages that are returned. To achieve this goal, this thesis suggests combining the Deep Web information, which consists of dynamically generated Web pages and cannot be indexed by the existing automated Web crawlers, with ontologies, resulting in the Semantic Deep Web. The Deep Web information is exploited in three different ways: extracting attributes from the Deep Web data sources automatically, generating domain ontologies from the Deep Web automatically, and extracting instances from the Deep Web to enhance the domain ontologies. Several algorithms for the above-mentioned tasks are presented. Experimental results suggest that the proposed methods assist users with finding more relevant Web sites. Another contribution of this dissertation includes developing a methodology to evaluate existing general-purpose ontologies using the Web as a corpus. The quality of ontologies (QoO) is quantified by analyzing existing ontologies to get numeric measures of how natural their concepts and their relationships are. This methodology was first applied to several major, popular ontologies, such as WordNet, OpenCyc and the UMLS. Subsequently the domain ontologies developed in this research were evaluated from the naturalness perspective.
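The abstract does not spell out how "naturalness" is computed, but the idea of scoring ontology relationships against the Web as a corpus can be sketched roughly as follows. This is a hypothetical illustration, not the dissertation's method: it scores a pair of related concepts by how often they co-occur in a (here, toy) document collection.

```python
# Hypothetical sketch: scoring how "natural" an ontology relation is by
# checking how often its two concepts co-occur in a text corpus.
from collections import Counter
from itertools import combinations

def cooccurrence_counts(documents):
    """Count, for each unordered pair of terms, how many documents mention both."""
    counts = Counter()
    for doc in documents:
        terms = set(doc.lower().split())
        for pair in combinations(sorted(terms), 2):
            counts[pair] += 1
    return counts

def naturalness(concept_a, concept_b, counts, n_docs):
    """Fraction of documents in which both concepts appear together."""
    key = tuple(sorted((concept_a.lower(), concept_b.lower())))
    return counts[key] / n_docs if n_docs else 0.0

docs = [
    "a hotel offers a room with a price",
    "the hotel room price includes breakfast",
    "a car has an engine and four wheels",
]
counts = cooccurrence_counts(docs)
print(naturalness("hotel", "room", counts, len(docs)))    # co-occur in 2 of 3 docs
print(naturalness("hotel", "engine", counts, len(docs)))  # never co-occur
```

In a Web-as-corpus setting, the document collection would be replaced by search-engine hit counts or a crawled sample, but the scoring principle is the same.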
A Knowledge Multidimensional Representation Model for Automatic Text Analysis and Generation: Applications for Cultural Heritage
Knowledge is information that has been contextualized in a certain domain, where it can be used and applied. Natural Language provides the most direct way to transfer knowledge at different levels of conceptual density. The opportunity offered by the evolution of Natural Language Processing technologies is thus to make the process of knowledge transfer more fluid and universal. Indeed, unfolding domain knowledge is one way to bring to larger audiences contents that would otherwise be restricted to specialists. So far this has been done in an entirely manual way, through the skills of divulgators and popular science writers. Technology now provides a way to make this transfer both less expensive and more widespread. Extracting knowledge and then generating from it suitably communicable text in natural language are the two related subtasks that need to be fulfilled in order to attain the general goal. To this aim, two fields from information technology have achieved the needed maturity and can therefore be effectively combined. On the one hand, Information Extraction and Retrieval (IER) can extract knowledge from texts and map it into a neutral, abstract form, liberating it from the stylistic constraints in which it originated. From there, Natural Language Generation can take charge, regenerating the extracted knowledge, automatically or semi-automatically, into texts targeting new communities.
This doctoral thesis provides a contribution to making substantial this combination through the definition and implementation of a novel multidimensional model for the representation of conceptual knowledge and of a workflow that can produce strongly customized textual descriptions.
By exploiting techniques for the generation of paraphrases and by profiling target users, applications and domains, a target-driven approach is proposed to automatically generate multiple texts from the same information core. An extended case study is described to demonstrate the effectiveness of the proposed model and approach in the Cultural Heritage application domain, so as to compare and position this contribution within the current state of the art and to outline future directions.
A knowledge-based method for generating summaries of spatial movement in geographic areas
In this article we describe a method for automatically generating text summaries of data corresponding to traces of spatial movement in geographical areas. The method can help humans to understand large data streams, such as the amounts of GPS data recorded by a variety of sensors in mobile phones, cars, etc. We describe the knowledge representations we designed for our method and the main components of our method for generating the summaries: a discourse planner, an abstraction module and a text generator. We also present evaluation results that show the ability of our method to generate certain types of geospatial and temporal descriptions.
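The three-stage pipeline named in the abstract (abstraction, discourse planning, text generation) can be sketched minimally as follows. This is not the authors' system: the trace format, the haversine-based abstraction, and the sentence templates are all illustrative assumptions.

```python
# Minimal sketch of a trace-summarization pipeline: an abstraction module
# condenses raw GPS fixes into facts, then a discourse planner orders
# messages and a template-based generator realises them as text.
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def abstract_trace(trace):
    """Abstraction module: reduce raw (time, lat, lon) fixes to summary facts."""
    dist = sum(haversine_km(trace[i][1:], trace[i + 1][1:]) for i in range(len(trace) - 1))
    return {"start": trace[0][0], "end": trace[-1][0], "km": round(dist, 1)}

def plan_and_generate(facts):
    """Discourse planner + text generator: order the messages, fill templates."""
    messages = [
        f"The journey began at {facts['start']} and ended at {facts['end']}.",
        f"In total, about {facts['km']} km were covered.",
    ]
    return " ".join(messages)

trace = [("09:00", 40.4168, -3.7038), ("09:30", 40.4530, -3.6883), ("10:00", 40.4893, -3.6827)]
print(plan_and_generate(abstract_trace(trace)))
```

A real system would detect higher-level events (stops, turns, region crossings) in the abstraction stage and choose among many more message templates, but the division of labour between the three components is the same.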
Quality Taxonomy for Scalable Algorithms of Free Viewpoint Video Objects
The thesis intends to make a contribution to the quality assessment of free viewpoint video objects within the context of video communication systems. The current work analyzes opportunities and obstacles, focusing on users' subjective quality of experience in this special case. Quality estimation of emerging free viewpoint video object technology in video communication has not yet been assessed, and adequate approaches are missing. The challenges are to define the factors that influence quality, to formulate an adequate measure of quality, and to link the quality of experience to the technical realization within an undefined and ever-changing technical realization process. There are two advantages of interlinking the quality of experience with the quality of service: first, it can benefit the technical realization process by allowing adaptability (e.g., based on the systems used by the end users); second, it provides an opportunity to support scalability in a user-centered way, e.g., based on a cost or resource limitation. The thesis outlines the theoretical background and introduces a user-centered quality taxonomy in the form of an interlinking model. A description of the related project Skalalgo3d is included, which offered a framework for application. The outlined results consist of a systematic definition of the factors that influence quality, including a research framework and evaluation activities involving more than 350 participants. The thesis includes the presentation of quality features, defined by evaluations of free viewpoint video object quality, for video communication applications.
Based on these quality features, a model that links these results with the technical creation process, including a formalized quality measure, is presented. Building on this, a flow chart and a slope field are proposed; they visualize the potential relationships and may serve as a starting point for further investigation and for differentiating the relations in the form of functions.
FSD50K: an Open Dataset of Human-Labeled Sound Events
Most existing datasets for sound event recognition (SER) are relatively small and/or domain-specific, with the exception of AudioSet, based on a massive amount of audio tracks from YouTube videos and encompassing over 500 classes of everyday sounds. However, AudioSet is not an open dataset: its release consists of pre-computed audio features (instead of waveforms), which limits the adoption of some SER methods. Downloading the original audio tracks is also problematic, due to constituent YouTube videos gradually disappearing and usage rights issues, which casts doubt over the suitability of this resource for benchmarking systems. To provide an alternative benchmark dataset and thus foster SER research, we introduce FSD50K, an open dataset containing over 51k audio clips totalling over 100h of audio manually labeled using 200 classes drawn from the AudioSet Ontology. The audio clips are licensed under Creative Commons licenses, making the dataset freely distributable (including waveforms). We provide a detailed description of the FSD50K creation process, tailored to the particularities of Freesound data, including the challenges encountered and the solutions adopted. We include a comprehensive dataset characterization along with a discussion of limitations and key factors to allow its audio-informed usage. Finally, we conduct sound event classification experiments to provide baseline systems as well as insight into the main factors to consider when splitting Freesound audio data for SER. Our goal is to develop a dataset to be widely adopted by the community as a new open benchmark for SER research.
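One well-known pitfall when splitting audio data of this kind, hinted at by the abstract's mention of "factors to consider when splitting Freesound audio data", is leakage: clips cut from the same original recording or uploaded by the same user can end up on both sides of a train/test split. The sketch below illustrates the standard precaution of splitting by group rather than by clip; the field names are assumptions, not the actual FSD50K metadata schema.

```python
# Hypothetical illustration: split clips so that no uploader's clips
# appear in both the training and the test set (group-level split).
from collections import defaultdict

def group_split(clips, test_fraction=0.2):
    """Assign whole uploader groups to the test set until the fraction is met."""
    by_uploader = defaultdict(list)
    for clip in clips:
        by_uploader[clip["uploader"]].append(clip["clip_id"])
    target = test_fraction * len(clips)
    test, train = [], []
    for uploader in sorted(by_uploader):  # deterministic order for the sketch
        bucket = test if len(test) < target else train
        bucket.extend(by_uploader[uploader])
    return train, test

clips = [
    {"clip_id": 1, "uploader": "a"}, {"clip_id": 2, "uploader": "a"},
    {"clip_id": 3, "uploader": "b"}, {"clip_id": 4, "uploader": "c"},
    {"clip_id": 5, "uploader": "c"},
]
train, test = group_split(clips)
print(train, test)  # no uploader's clips appear on both sides
```

Production splits would additionally stratify over the 200 labels and randomize group order; this sketch only shows the grouping constraint itself.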
Exploiting Social Networks for Recommendation in Online Image Sharing Systems
This thesis aims to demonstrate the distinct and so far little explored value, for image recommendation, of knowledge derived from social interaction data within large web-scale image sharing systems like Flickr, Picasa Web, Facebook and others. I have shown how such systems can be significantly improved through personalisation that takes into account the social context of users, by modelling their interactions through data mining and by building and evaluating systems that incorporate this information. These improvements allow users to search and browse large online image collections more quickly, and to find results that more accurately match their personal information needs when compared to existing methods.
Traditional information retrieval and recommendation datasets are contrived to provide stable baselines for researchers to compare against but they rarely accurately reflect the media systems users tend to encounter online. The online photo sharing site Flickr provides rich and varied data that can be used by researchers to analyse and understand users’ interactions with images and with each other. I analyse such data by modelling the connections between users as multigraphs and exploiting the resultant topologies to produce features that can be used to train recommender systems based on machine learnt classifiers.
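The idea of modelling user connections as multigraphs and deriving classifier features from them can be sketched as follows. This is an illustrative reconstruction, not the thesis implementation: the interaction types and the particular features are assumptions.

```python
# Sketch: model user-user interactions as a multigraph (parallel edges for
# comments, favourites, contacts, ...) and derive per-pair features that
# could feed a machine-learnt recommender.
from collections import defaultdict

class InteractionMultigraph:
    def __init__(self):
        self.edges = defaultdict(list)  # (u, v) -> list of interaction labels

    def add(self, u, v, kind):
        self.edges[(u, v)].append(kind)

    def pair_features(self, u, v):
        """Feature dict for an ordered user pair."""
        out, back = self.edges[(u, v)], self.edges[(v, u)]
        return {
            "n_interactions": len(out),             # parallel-edge count u -> v
            "n_kinds": len(set(out)),               # how many distinct channels
            "reciprocal": int(bool(out and back)),  # does v interact back?
        }

g = InteractionMultigraph()
g.add("alice", "bob", "comment")
g.add("alice", "bob", "favourite")
g.add("alice", "bob", "comment")
g.add("bob", "alice", "contact")
print(g.pair_features("alice", "bob"))
# {'n_interactions': 3, 'n_kinds': 2, 'reciprocal': 1}
```

The multigraph matters here because collapsing the parallel edges into a single weighted edge would lose exactly the distinction (which channels, how often, whether reciprocated) that the features encode.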
The core contributions of this work include insight into the nature of very large-scale online photo collections and the communities that form around them, as well as the dynamic nature of the interactions users have with their media. I do this through the rigorous evaluation of both a probabilistic tag recommendation system and a machine learnt classifier trained to mimic user decisions regarding image preference. These implementations focus on treating the user as both a unique individual and as a member of potentially many explicit and implicit communities. I also explore the validity of the Flickr ‘Favourite’ feedback label as a proxy for user preference, which is particularly important when considering other analogous media systems to which my findings transfer. My conclusions highlight how vital both social context information and an understanding of user behaviour are for online image sharing systems.
In the field of information retrieval, the diverse nature of users is often forgotten in the hunt for increases in esoteric performance metrics. This thesis places them back at the centre of the problem of multimedia information retrieval and shows how their variety and uniqueness are valuable traits that can be exploited to augment and improve the experience of browsing and searching shared online image collections.
Recurrent Neural Network Language Generation for Dialogue Systems
Language is the principal medium for ideas, while dialogue is the most natural and effective way for humans to interact with and access information from machines. Natural language generation (NLG) is a critical component of spoken dialogue systems, and it has a significant impact on usability and perceived quality. Many commonly used NLG systems employ rules and heuristics, which tend to generate inflexible and stylised responses without the natural variation of human language. The frequent repetition of identical output forms can quickly make dialogue tedious for most real-world users. Additionally, these rules and heuristics are not scalable and hence not trivially extensible to other domains or languages. A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG. Statistical models also provide an opportunity to learn in-domain human colloquialisms and cross-domain model adaptations.
A robust and quasi-supervised NLG model is proposed in this thesis. The model leverages a Recurrent Neural Network (RNN)-based surface realiser and a gating mechanism applied to the input semantics, and is motivated by the Long Short-Term Memory (LSTM) network. The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; the model also integrates sentence planning and surface realisation into a single optimisation problem. This single optimisation not only bypasses costly intermediate linguistic annotations but also generates more natural and human-like responses. Furthermore, a domain adaptation study shows that the proposed model can be readily adapted and extended to new dialogue domains via a proposed recipe.
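The gating mechanism described above can be sketched roughly as follows: a sigmoid "reading gate", computed from the current input and hidden state, progressively decays a dialogue-act vector, so that each slot's information is consumed as it is realised in the output. This is an illustrative reconstruction, not the thesis implementation; all dimensions, weight names and initialisation here are assumptions.

```python
# Simplified numpy sketch of an LSTM-style cell with a dialogue-act (DA)
# reading gate: the gate r multiplies the DA vector d at every step, and
# gated DA information is injected into the cell state.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, h, c, d, p):
    """One recurrent step with a DA reading gate."""
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h)  # input gate
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h)  # forget gate
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h)  # output gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)  # DA reading gate
    d = r * d                               # gate consumes the DA vector
    c = f * c + i * np.tanh(p["Wc"] @ x + p["Uc"] @ h) + np.tanh(p["Wd"] @ d)
    h = o * np.tanh(c)
    return h, c, d

n_in, n_hid, n_da = 8, 16, 5
p = {k: 0.1 * rng.standard_normal(s) for k, s in {
    "Wi": (n_hid, n_in), "Ui": (n_hid, n_hid), "Wf": (n_hid, n_in),
    "Uf": (n_hid, n_hid), "Wo": (n_hid, n_in), "Uo": (n_hid, n_hid),
    "Wr": (n_da, n_in),  "Ur": (n_da, n_hid), "Wc": (n_hid, n_in),
    "Uc": (n_hid, n_hid), "Wd": (n_hid, n_da)}.items()}

h = np.zeros(n_hid)
c = np.zeros(n_hid)
d = np.ones(n_da)  # DA vector: slots still to be mentioned
for _ in range(10):
    h, c, d = step(rng.standard_normal(n_in), h, c, d, p)
print(d)  # each component has decayed strictly below its initial value
```

In a trained model the weights would be learned end-to-end from dialogue act and sentence pairs, and the decay of d would track which slots the generated words have already expressed.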
Continuing the success of end-to-end learning, the second part of the thesis speculates on building an end-to-end dialogue system by framing it as a conditional generation problem. The proposed model encapsulates a belief tracker with a minimal state representation and a generator that takes the dialogue context to produce responses. These features suggest comprehension and fast learning. The proposed model is capable of understanding requests and accomplishing tasks after training on only a few hundred human-human dialogues. A complementary Wizard-of-Oz data collection method is also introduced to facilitate the collection of human-human conversations from online workers. The results demonstrate that the proposed model can talk to human judges naturally, without any difficulty, for a sample application domain. In addition, the results suggest that the introduction of a stochastic latent variable can help the system model intrinsic variation in communicative intention much better. Tsung-Hsien Wen's Ph.D. is supported by Toshiba Research Europe Ltd, Cambridge Research Laboratory.