Measuring Risk Aversion Model-Independently
We propose a new method to elicit individuals' risk preferences. Similar to Holt and Laury
(2002), we use a simple multiple price-list format. However, our method is based on a general
notion of increasing risk, which allows classifying individuals as more or less risk-averse
without assuming a specific utility framework. In a laboratory experiment we compare both
methods. Each classifies individuals almost identically as risk-averse, risk-neutral, or risk-seeking.
However, classifications of individuals as more or less risk-averse differ substantially. Moreover,
our approach yields higher measures of risk aversion, and only with our method are these
measures robust to increasing stakes.
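The switching-point logic behind a multiple price list can be made concrete. The sketch below is an illustrative toy, not the authors' model-independent method: it uses the classic Holt and Laury (2002) payoffs and classifies a subject relative to the row where a risk-neutral agent would switch from the safe to the risky lottery.

```python
# Toy sketch (not the authors' exact design): classify a subject's risk
# attitude from the switching point in a Holt-and-Laury-style price list.
# Each row i offers a "safe" lottery vs. a "risky" lottery; the probability
# of the high payoff rises with i, so a risk-neutral subject switches near
# the row where the expected values cross.

def expected_value(p_high, high, low):
    """Expected value of a lottery paying `high` with prob p_high, else `low`."""
    return p_high * high + (1 - p_high) * low

def risk_neutral_switch_row(safe=(2.00, 1.60), risky=(3.85, 0.10), rows=10):
    """First row where the risky lottery's EV exceeds the safe one's.

    Default payoffs are the Holt and Laury (2002) values in dollars."""
    for i in range(1, rows + 1):
        p = i / rows
        if expected_value(p, *risky) > expected_value(p, *safe):
            return i
    return rows + 1  # never switches

def classify(subject_switch_row, neutral_row):
    """Switching later than a risk-neutral agent => risk-averse."""
    if subject_switch_row > neutral_row:
        return "risk-averse"
    if subject_switch_row < neutral_row:
        return "risk-seeking"
    return "risk-neutral"

neutral = risk_neutral_switch_row()
```

With these payoffs the expected values cross between rows 4 and 5, so a subject who still chooses the safe lottery in rows 5 and beyond is classified as risk-averse.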
Info Navigator: A visualization tool for document searching and browsing
In this paper we investigate the retrieval performance of monophonic and polyphonic queries made on a polyphonic music database. We extend the n-gram approach for full-music indexing of monophonic music data to polyphonic music using both rhythm and pitch information. We define an experimental framework for a comparative and fault-tolerance study of various n-gramming strategies and encoding levels. For monophonic queries, we focus in particular on query-by-humming systems, and for polyphonic queries on query-by-example. Error models addressed in several studies are surveyed for the fault-tolerance study. Our experiments show that different n-gramming strategies and encoding precisions differ widely in their effectiveness. We present the results of our study on a collection of 6366 polyphonic MIDI-encoded music pieces.
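The core of n-gram indexing for music can be sketched briefly. The following is an illustrative simplification for the monophonic case (the paper's encoding levels and polyphonic strategies differ): melodies are reduced to pitch-interval n-grams, so a transposed copy of a tune produces the same index terms.

```python
# Illustrative sketch (details differ from the paper): index monophonic
# melodies by n-grams of pitch intervals, so transposed copies of a tune
# map to the same terms -- the basic idea of full-music n-gram indexing.

def pitch_intervals(midi_pitches):
    """Successive pitch differences; invariant under transposition."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

def ngrams(seq, n):
    """All contiguous n-grams of a sequence, as tuples."""
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

def index_terms(midi_pitches, n=3):
    """Terms for an inverted index built over interval n-grams."""
    return set(ngrams(pitch_intervals(midi_pitches), n))

# A tune and its transposition share all index terms:
tune = [60, 62, 64, 62, 60, 67]        # MIDI pitches: C D E D C G
transposed = [p + 5 for p in tune]     # the same tune, up a fourth
```

Rhythm information can be folded in the same way, e.g. by forming n-grams over ratios of successive note durations.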
Modelling the steady-state spectral energy distribution of the BL Lac object PKS 2155-304 using a self-consistent SSC model
In this paper we present a fully self-consistent SSC model with particle acceleration due to shock and stochastic acceleration (Fermi-I and Fermi-II processes, respectively) to model the quiescent spectral energy distribution (SED) observed from PKS 2155-304. The simultaneous August/September 2008 multiwavelength data of H.E.S.S., Fermi, RXTE, SWIFT and ATOM give new constraints on the curvature of the high-energy peak in the SED. We find that, in our model, a monoenergetic injection of electrons into the model region, which are accelerated by Fermi-I and Fermi-II processes while suffering synchrotron and inverse-Compton losses, finally leads to the observed SED of PKS 2155-304 shown in the H.E.S.S. and Fermi-LAT collaborations (2009). In contrast to other SSC models, our parameters arise from the jet's microphysics, and the spectrum evolves self-consistently from diffusion and acceleration. The Doppler factor can be interpreted in terms of two counterstreaming plasmas, due to the motion of the blob at a bulk Lorentz factor and oppositely moving upstream electrons at moderate Lorentz factors.
Adverse Drug Reaction Classification With Deep Neural Networks
We study the problem of detecting sentences that describe adverse drug reactions (ADRs) and frame it as binary classification. We investigate different neural network (NN) architectures for ADR classification. In particular, we propose two new neural network models: the Convolutional Recurrent Neural Network (CRNN), which concatenates convolutional neural networks with recurrent neural networks, and the Convolutional Neural Network with Attention (CNNA), which adds attention weights to convolutional neural networks. We evaluate the various NN architectures on a Twitter dataset containing informal language and on an Adverse Drug Effects (ADE) dataset constructed by sampling from MEDLINE case reports. Experimental results show that on both datasets all the NN architectures considerably outperform traditional maximum-entropy classifiers trained on n-grams with different weighting strategies. On the Twitter dataset, all the NN architectures perform similarly, but on the ADE dataset the plain CNN performs better than its more complex variants. Nevertheless, CNNA allows the attention weights of words to be visualised when classification decisions are made, and is hence more appropriate for extracting word subsequences that describe ADRs.
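The attention mechanism that makes CNNA interpretable can be illustrated in isolation. The sketch below uses toy per-word scores and features, not the paper's trained model: scores are softmax-normalised into weights, the weighted sum of word features feeds the classifier, and the weights themselves show which words drove the decision.

```python
# Minimal sketch of the attention idea in a CNNA-style classifier (toy
# numbers, not the paper's trained model): per-word scores are softmax-
# normalised into weights; the weighted sum of word features is the
# sentence representation, and the weights can be visualised per word.
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(word_features, word_scores):
    """Attention-weighted average of per-word feature vectors."""
    weights = softmax(word_scores)
    dim = len(word_features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, word_features))
              for d in range(dim)]
    return pooled, weights

# Toy example: 4 words with 2-dim features; the 3rd word scores highest,
# so it dominates the pooled vector and would be highlighted for the user.
features = [[0.1, 0.0], [0.2, 0.1], [0.9, 0.8], [0.0, 0.2]]
scores = [0.1, 0.2, 2.0, 0.1]
pooled, weights = attention_pool(features, scores)
```

In the real model the per-word features come from convolutional filters and the scores from a learned projection; the visualisation step is the same.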
Zur Theorie künstlicher neuronaler Netze (On the Theory of Artificial Neural Networks)
This dissertation contributes to the theory of artificial neural networks in four fields: computer science, with a new learning procedure (stable parameter adaptation); mathematics, with an analysis of the structure of weight space; statistics, with a new estimator for the quality of networks (clustered bootstrap); and physics, with efficient learning and inference algorithms for decimatable Boltzmann machines.
Mapping networks are defined, their chain rule is derived and cast into several justified algorithmic variants; backpropagation networks are defined, the backpropagation algorithm is presented in as general a form as possible, and it is demonstrated how this framework can also be applied to recurrent networks.
The limits of gradient descent are shown, and known alternative methods are critically reviewed. Building on this, a class of new, mutually related optimisation algorithms is developed with efficiency and stability in mind, whose theoretical performance is secured by a proof of first-order convergence. Second-order information can be incorporated into the new method. Empirical comparisons corroborate its efficiency. The limits of optimisation methods are discussed.
Learning in neural networks is then treated as a statistical estimation problem. The quality of the estimate can be computed with known statistical methods. It is shown that, owing to shortcomings of neural learning, these quality estimates are either not robust or too imprecise.
The attempt to filter out these shortcomings leads to a new theoretical view of weight space: it must naturally be understood as a manifold. It turns out that computing the canonical metric on weight space is NP-hard, while an efficient approximation of the metric is shown to be possible. This makes it possible to cluster and visualise learning results in weight space. As a further application of this theory, a robust model-selection procedure is presented and demonstrated on an example. Finally, the problem posed in the previous paragraph can also be solved by a new method.
The physically motivated Boltzmann machine is presented, and it is argued why inference is NP-hard here. This motivates a restriction to the sufficiently interesting class of decimatable Boltzmann machines. A new decimation rule is introduced, and it is shown that no further ones exist. Decimatable Boltzmann machines are studied with the tools of probability theory, and efficient learning algorithms are proposed. The weight-space structure can also be exploited successfully here, as an application demonstrates.
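The clustered-bootstrap idea mentioned above can be sketched in a few lines. This is my own minimal reading of the concept, not the dissertation's algorithm: training runs that converge to similar weights are grouped into clusters, and whole clusters are resampled, so that correlated runs stay together when estimating the variability of a quality score.

```python
# Hedged sketch of a clustered bootstrap (a minimal reading of the idea,
# not the dissertation's procedure): resample whole clusters of training
# runs rather than individual results, so correlated runs stay together.
import random

def clustered_bootstrap(clusters, statistic, n_boot=1000, seed=0):
    """Bootstrap distribution of `statistic` over resampled clusters.

    `clusters` is a list of lists of scores, e.g. one list per local
    optimum that several training runs converged to."""
    rng = random.Random(seed)
    dist = []
    for _ in range(n_boot):
        sample = [rng.choice(clusters) for _ in clusters]  # resample clusters
        flat = [x for cluster in sample for x in cluster]
        dist.append(statistic(flat))
    return dist

def mean(xs):
    return sum(xs) / len(xs)

# Three clusters of validation errors from runs with similar final weights:
runs = [[0.10, 0.11], [0.30, 0.29, 0.31], [0.12]]
boot = clustered_bootstrap(runs, mean)
```

The spread of `boot` then estimates how much the quality measure depends on which optima the runs happened to find, rather than on individual runs.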
Pattern formation in the dipolar Ising model on a two-dimensional honeycomb lattice
We present Monte Carlo simulation results for a two-dimensional Ising model
with ferromagnetic nearest-neighbor couplings and a competing long-range
dipolar interaction on a honeycomb lattice. Both structural and thermodynamic
properties are very similar to the case of a square lattice, with the exception
that structures reflect the sixfold rotational symmetry of the underlying
honeycomb lattice. To deal with the long-range nature of the dipolar
interaction we also present a simple method of evaluating effective interaction
coefficients, which can be regarded as a more straightforward alternative to
the prevalent Ewald summation techniques.
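The general workflow of such a simulation can be sketched compactly. The code below is a schematic stand-in, not the paper's method: it uses a small square toy lattice (not the honeycomb geometry) and an untruncated 1/r^3 tail instead of properly evaluated effective coefficients, but it shows the structure of tabulating pair coefficients once and then running single-spin-flip Metropolis moves against them.

```python
# Schematic Metropolis sketch (square toy lattice, not the paper's
# honeycomb geometry or its effective-coefficient evaluation): pair
# coefficients J_eff(dx, dy) are tabulated once, then single-spin-flip
# moves use them directly via the local field.
import math
import random

L = 6          # toy lattice size (periodic boundaries)
J = 1.0        # ferromagnetic nearest-neighbor coupling
g = 0.2        # dipolar strength, ~ 1/r^3 antiferromagnetic tail

def j_eff(dx, dy):
    """Effective coefficient for a spin pair at displacement (dx, dy)."""
    r2 = dx * dx + dy * dy
    if r2 == 0:
        return 0.0
    if r2 == 1:
        return J - g            # nearest neighbors: exchange minus dipolar
    return -g / r2 ** 1.5       # long-range dipolar tail only

COEFF = {(dx, dy): j_eff(dx, dy)
         for dx in range(-L // 2, L // 2) for dy in range(-L // 2, L // 2)}

def local_field(spins, x, y):
    """Sum of J_eff times the spin at each tabulated displacement."""
    h = 0.0
    for (dx, dy), c in COEFF.items():
        if c:
            h += c * spins[(x + dx) % L][(y + dy) % L]
    return h

def metropolis_sweep(spins, beta, rng):
    """One sweep of single-spin-flip Metropolis updates (E = -sum J_eff s s)."""
    for _ in range(L * L):
        x, y = rng.randrange(L), rng.randrange(L)
        dE = 2.0 * spins[x][y] * local_field(spins, x, y)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[x][y] = -spins[x][y]

rng = random.Random(1)
spins = [[rng.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
for _ in range(20):
    metropolis_sweep(spins, beta=1.0, rng=rng)
mag = abs(sum(sum(row) for row in spins)) / (L * L)
```

The point of precomputed effective coefficients is exactly this structure: the long-range sum is paid once when building the table, not on every spin flip.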
Multimedia resource discovery
This chapter examines the challenges and opportunities of Multimedia Information Retrieval and corresponding search engine applications. Computer technology has changed our access to information tremendously: we used to search authors or titles (which we had to know) in library cards in order to locate relevant books; now we can issue keyword searches within the full text of whole book repositories in order to identify authors, titles and locations of relevant books. What about the corresponding challenge of finding multimedia by fragments, examples and excerpts? Rather than asking for a music piece by artist and title, can we hum its tune to find it? Can doctors submit scans of a patient to identify medically similar images of diagnosed cases in a database? Can your mobile phone take a picture of a statue and, by sending this picture to a service, tell you about its artist and significance?
In an attempt to answer some of these questions we get to know basic concepts of multimedia resource discovery technologies for a number of different query and document types: piggy-back text search, i.e., reducing the multimedia to pseudo text documents; automated annotation of visual components; content-based retrieval where the query is an image; and fingerprinting to match near duplicates.
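One of the techniques listed above, content-based retrieval with an image as the query, can be illustrated end to end in miniature. The sketch is a deliberate simplification (grey-value histograms and a handful of fake "images"; real systems use far richer features): the collection and the query are reduced to normalised histograms, and results are ranked by histogram intersection.

```python
# Toy sketch of query-by-example content-based retrieval: images reduced
# to grey-value histograms, ranked by histogram intersection. Real
# systems use far richer features; this only illustrates the pipeline.

def histogram(pixels, bins=4):
    """Normalised histogram of grey values in [0, 256)."""
    counts = [0] * bins
    for p in pixels:
        counts[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def rank(query_pixels, collection):
    """Return collection ids sorted by similarity to the query image."""
    q = histogram(query_pixels)
    scored = [(intersection(q, histogram(px)), name)
              for name, px in collection.items()]
    return [name for _, name in sorted(scored, reverse=True)]

# Two fake "images" as flat pixel lists; the query resembles the dark one.
collection = {
    "dark":  [10, 20, 30, 40] * 8,
    "light": [200, 220, 240, 250] * 8,
}
order = rank([15, 25, 35, 45] * 8, collection)
```

Relevance feedback, mentioned below, fits naturally on top of such a ranker: the user marks results as relevant, and the query histogram is shifted toward them before re-ranking.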
Some of the research challenges are given by the semantic gap between the simple pixel properties computers can readily index and high-level human concepts; related to this is an inherent technological limitation of automated annotation of images from pixels alone. Other challenges are given by polysemy, i.e., the many meanings and interpretations that are inherent in visual material and the corresponding wide range of a user’s information need.
This chapter demonstrates how these challenges can be tackled by automated processing and machine learning and by utilising the skills of the user, for example through browsing or through a process called relevance feedback, thus putting the user at centre stage. The latter is made easier by “added value” technologies, exemplified here by summaries of complex multimedia objects such as TV news, information visualisation techniques for document clusters, visual search by example, and methods to create browsable structures within the collection.
DYNIQX: A novel meta-search engine for the web
The effect of metadata on collection fusion has not been sufficiently studied. In response, we present a novel meta-search engine called Dyniqx for metadata-based search. Dyniqx integrates search results from document, image, and video search services into a unified ranked list. It exploits the availability of metadata in search services such as PubMed, Google Scholar, Google Image Search, and Google Video Search to fuse search results from heterogeneous search engines. In addition, metadata from these search engines are used to generate dynamic query controls, such as sliders and tick boxes, with which users filter search results. Our preliminary user evaluation shows that Dyniqx can help users complete information-search tasks more efficiently and successfully than three well-known search engines. We also carried out a controlled user evaluation of the integration of six document/image/video search engines (Google Scholar, PubMed, Intute, Google Image, Yahoo Image, and Google Video) in Dyniqx. We designed a questionnaire for evaluating different aspects of Dyniqx in assisting users with search tasks; each user used Dyniqx to perform a number of search tasks before completing the questionnaire. Our evaluation results confirm the effectiveness of Dyniqx's meta-search in assisting user search tasks and provide insights into better designs of the Dyniqx interface.
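The two ingredients of such a meta-search engine, fusing ranked lists and filtering on metadata, can be sketched together. The fusion rule below (reciprocal-rank scoring) is a standard stand-in, not necessarily the one Dyniqx uses, and the field names are invented for illustration.

```python
# Hedged sketch of a Dyniqx-style meta-search step (reciprocal-rank fusion
# as a stand-in rule; field names are hypothetical): each source engine
# contributes a score that decays with rank, and metadata-driven filters
# (the sliders and tick boxes) prune documents before the merged ranking.

def fuse(result_lists, metadata, filters=None):
    """Merge ranked lists from several engines into one ranked list.

    result_lists: {engine: [doc_id, ...]}, best first.
    metadata:     {doc_id: {field: value}} backing the query controls.
    filters:      optional {field: predicate} from sliders/tick boxes."""
    scores = {}
    for ranking in result_lists.values():
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / rank
    docs = scores
    if filters:
        docs = {d: s for d, s in scores.items()
                if all(pred(metadata.get(d, {}).get(f))
                       for f, pred in filters.items())}
    return sorted(docs, key=docs.get, reverse=True)

results = {
    "scholar": ["a", "b", "c"],
    "pubmed":  ["a", "c", "d"],
}
meta = {"a": {"year": 2007}, "b": {"year": 2005},
        "c": {"year": 2008}, "d": {"year": 2006}}
merged = fuse(results, meta)
recent = fuse(results, meta, {"year": lambda y: y is not None and y >= 2006})
```

Document "a" tops the merged list because both engines rank it first, while the year filter drops "b" from `recent`; a slider in the interface would simply change the predicate.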