
    Genes2WordCloud: a quick way to identify biological themes from gene lists and free text

    Background: Word-clouds recently emerged on the web as a solution for quickly summarizing text by maximizing the display of the most relevant terms about a specific topic in the minimum amount of space. As biologists are faced with a daunting amount of new research data, commonly presented in textual formats, word-clouds can be used to summarize and represent biological and/or biomedical content for various applications.

    Results: Genes2WordCloud is a web application that enables users to quickly identify biological themes from gene lists and research-relevant text by constructing and displaying word-clouds. It provides users with several different options and ideas for the sources that can be used to generate a word-cloud. Different options for rendering and coloring the word-clouds give users the flexibility to quickly generate customized word-clouds of their choice.

    Methods: Genes2WordCloud is a word-cloud generator and viewer based on WordCram, implemented using Java, Processing, AJAX, MySQL, and PHP. Text is fetched from several sources and then processed to extract the most relevant terms, with weights computed from word frequencies. Genes2WordCloud is freely available for use online; it is open source software and is available for installation on any web-site, along with supporting documentation, at http://www.maayanlab.net/G2W.

    Conclusions: Genes2WordCloud provides a useful way to summarize and visualize large amounts of textual biological data and to find biological themes from several different sources. The open source availability of the software enables users to implement customized word-clouds on their own web-sites and desktop applications.
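
    The Methods paragraph amounts to frequency-based term weighting over fetched text. Below is a minimal sketch of that step, in Python for brevity (the actual tool is Java/PHP built on WordCram); the stop-word list, tokenizer, and normalization are our assumptions, not the tool's:

```python
import re
from collections import Counter

# Minimal stop-word list; a real deployment would use a much larger one.
STOP_WORDS = {"the", "a", "an", "of", "and", "or", "in", "to", "is", "are",
              "for", "with", "on", "that", "this", "by", "as", "be", "was"}

def term_weights(text, top_n=50):
    """Extract the most frequent terms, with weights, for a word-cloud."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    if not counts:
        return []
    max_count = counts.most_common(1)[0][1]
    # Normalize so the most frequent term has weight 1.0 (drives font size).
    return [(term, count / max_count) for term, count in counts.most_common(top_n)]

if __name__ == "__main__":
    sample = "Gene expression of the tumor suppressor gene TP53 regulates apoptosis."
    for term, weight in term_weights(sample, top_n=5):
        print(f"{term}\t{weight:.2f}")
```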

    Reinforcement Learning for Racecar Control

    This thesis investigates the use of reinforcement learning to learn to drive a racecar in the simulated environment of the Robot Automobile Racing Simulator. Real-life race driving is known to be difficult for humans, and expert human drivers use complex sequences of actions. There are a large number of variables, some of which change stochastically and all of which may affect the outcome. This makes driving a promising domain for testing and developing Machine Learning techniques that have the potential to be robust enough to work in the real world, so the principles of the algorithms from this work may be applicable to a range of problems. The investigation starts by finding a suitable data structure to represent the information learnt, which is tested using supervised learning. Reinforcement learning is added and roughly tuned, and the supervised learning is then removed. A simple tabular representation is found to be satisfactory; it avoids the difficulties of more complex methods and allows the investigation to concentrate on the essentials of learning. Various reward sources are tested, and a combination of three is found to produce the best performance. Exploration of the problem space is investigated: results show that exploration is essential, but that controlling how much is done is also important. The learning episodes turn out to need to be very long, so the task is treated as continuous, using discounting to limit the size of the stored values. Eligibility traces are used with success to make the learning more efficient. The tabular representation is made more compact by hashing and more accurate by using smaller buckets; this slows the learning but produces better driving. The improvement given by a rough form of generalisation indicates that replacing the tabular method with a function approximator is warranted. These results show that reinforcement learning can work within the Robot Automobile Racing Simulator, and they lay the foundations for building a more efficient and competitive agent.
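
    The ingredients the abstract lists (a tabular value function over hashed state buckets, a continuing task with discounting, and eligibility traces) are combined by standard algorithms such as Sarsa(λ). The sketch below illustrates that update; it is not the thesis's code, and the state discretization, action set, parameters, and environment interface are assumptions:

```python
import random
from collections import defaultdict

# Hedged sketch of tabular Sarsa(lambda) for a continuing driving task.
ALPHA, GAMMA, LAMBDA, EPSILON = 0.1, 0.99, 0.9, 0.05
ACTIONS = ["left", "straight", "right"]

Q = defaultdict(float)      # Q[(state, action)] -> value
trace = defaultdict(float)  # eligibility traces

def discretize(speed, track_pos):
    """Hash continuous sensor readings into a compact tabular key."""
    return (int(speed // 10), int(track_pos * 20))

def choose_action(state):
    """Epsilon-greedy exploration over the tabular policy."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_lambda_step(state, action, reward, next_state, next_action):
    """One continuing-task update; discounting bounds the stored values."""
    delta = reward + GAMMA * Q[(next_state, next_action)] - Q[(state, action)]
    trace[(state, action)] += 1.0           # accumulating trace
    for key in list(trace):
        Q[key] += ALPHA * delta * trace[key]
        trace[key] *= GAMMA * LAMBDA        # decay all traces
        if trace[key] < 1e-6:
            del trace[key]                  # keep the table compact
```

    The caller would drive this from the simulator's sensing/acting loop, feeding a reward combined from several sources as the abstract describes.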

    Device-independent dimension test in a multiparty Bell experiment

    A device-independent dimension test for a Bell experiment aims to estimate the underlying Hilbert space dimension required to produce given measurement statistics, without any other assumptions about the quantum apparatus. Previous work mostly deals with the two-party version of this problem. In this paper, we propose a very general and robust approach to testing the dimension of any subsystem in a multiparty Bell experiment. Our dimension test stems from the study of a new multiparty scenario, which we call prepare-and-distribute. This is like the prepare-and-measure scenario, but the quantum state is sent to multiple, non-communicating parties. Through specific examples, we show that our test results can be tight. Furthermore, we compare the performance of our test to results based on known bipartite tests and observe a remarkable advantage, which indicates that our test is of a truly multiparty nature. We conclude by pointing out that, with some partial information about the quantum states involved in the experiment, it is possible to learn other interesting properties beyond dimension.
    Comment: 10 pages, 2 figures

    An Information-Theoretic Measure of Uncertainty due to Quantum and Thermal Fluctuations

    We study an information-theoretic measure of uncertainty for quantum systems. It is the Shannon information $I$ of the phase space probability distribution $\langle z | \rho | z \rangle$, where $|z\rangle$ are coherent states and $\rho$ is the density matrix. The uncertainty principle is expressed in this measure as $I \ge 1$. For a harmonic oscillator in a thermal state, $I$ coincides with the von Neumann entropy, $-\mathrm{Tr}(\rho \ln \rho)$, in the high-temperature regime, but unlike entropy, it is non-zero at zero temperature. It therefore supplies a non-trivial measure of uncertainty due to both quantum and thermal fluctuations. We study $I$ as a function of time for a class of non-equilibrium quantum systems consisting of a distinguished system coupled to a heat bath. We derive an evolution equation for $I$. For the harmonic oscillator, in the Fokker-Planck regime, we show that $I$ increases monotonically. For more general Hamiltonians, $I$ settles down to monotonic increase in the long run, but may suffer an initial decrease for certain initial states that undergo "reassembly" (the opposite of quantum spreading). Our main result is to prove, for linear systems, that $I$ at each moment of time has a lower bound $I_t^{\min}$ over all possible initial states. This bound is a generalization of the uncertainty principle to include thermal fluctuations in non-equilibrium systems, and represents the least amount of uncertainty the system must suffer after evolution in the presence of an environment for time $t$.
    Comment: 36 pages (revised uncorrupted version), Report IC 92-93/2
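
    For concreteness, the measure described here is the entropy of the coherent-state (Husimi) distribution, often called the Wehrl entropy. The worked equations below are added for illustration rather than taken from the paper; they check the abstract's claims on a thermal harmonic-oscillator state with mean occupation number $\bar{n}$:

```latex
% Definition: Shannon information of the coherent-state (Husimi)
% distribution, i.e. the Wehrl entropy.
\[
  I = -\int \frac{d^2 z}{\pi}\,
      \langle z|\rho|z\rangle \ln \langle z|\rho|z\rangle .
\]
% Illustrative check (our addition) for a thermal oscillator state with
% mean occupation \bar{n}, where
% \langle z|\rho|z\rangle = e^{-|z|^2/(\bar{n}+1)}/(\bar{n}+1):
\[
  I_{\mathrm{thermal}} = 1 + \ln(\bar{n}+1),
  \qquad
  S_{\mathrm{vN}} = (\bar{n}+1)\ln(\bar{n}+1) - \bar{n}\ln\bar{n} .
\]
% Both grow as \ln\bar{n} at high temperature, so I tracks the von
% Neumann entropy there; at zero temperature (\bar{n}=0) the entropy
% vanishes while I = 1, saturating the bound I \ge 1.
```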

    Multi Agent Modelling: Evolution and Skull Thickness in Hominids

    Within human evolution, the period of Homo Erectus is particularly interesting: in this period, our ancestors carried thicker skulls than the species both before and after them. There are competing theories as to the reasons for this thickening and its reversal. One is the theory that Homo Erectus males fought for females by clubbing each other on the head. Another holds that because Homo Erectus did not cook its food, it needed strong jaw muscles attached to ridges on either side of the skull; these ridges prohibited brain and skull growth but required the skull to be thick. The re-thinning of the skull, on the other hand, might be due to a thick skull providing poor cooling for the brain, or to the fact that as hominids started using tools to cut their food and fire to cook it, they no longer required the strong jaw muscles; this trait was then selected against, since the brain had a tendency to grow and the ridges and a thick skull were preventing this. In this paper we simulate both the fighting and the diet as mechanisms by which the hominid skull grew thicker. We also add other properties, such as cooperation, selfishness, and vision, to our agents and analyze their changes over generations.
    Keywords: Evolution, Skull Thickness, Hominids, Multi-Agent Modeling, Genetic Algorithm
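
    As a concrete illustration of this kind of model, the following Python sketch evolves agents with skull-thickness, cooperation, and vision genes under a simple genetic algorithm. The fitness terms and parameters are invented for illustration and are not the paper's actual model:

```python
import random

# Hedged sketch of a multi-agent genetic-algorithm model of skull
# thickness; traits, fitness terms, and parameters are assumptions.
POP_SIZE, GENERATIONS, MUTATION_RATE = 100, 200, 0.05

def random_agent():
    return {"skull": random.random(),        # skull thickness, 0..1
            "cooperation": random.random(),  # vs. selfishness
            "vision": random.random()}

def fitness(agent, cooking_available):
    score = agent["skull"]          # head-clubbing contests favor thick skulls
    if cooking_available:
        # Cooked food removes the need for heavy jaw ridges; a thick
        # skull now costs brain growth and cooling instead.
        score -= 2.0 * agent["skull"]
    score += 0.5 * agent["cooperation"] + 0.5 * agent["vision"]
    return score

def evolve(cooking_available):
    pop = [random_agent() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda a: fitness(a, cooking_available), reverse=True)
        parents = pop[: POP_SIZE // 2]           # truncation selection
        children = []
        for _ in range(POP_SIZE - len(parents)):
            mom, dad = random.sample(parents, 2)
            child = {k: random.choice((mom[k], dad[k])) for k in mom}
            for k in child:                      # point mutation
                if random.random() < MUTATION_RATE:
                    child[k] = min(1.0, max(0.0, child[k] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children
    return sum(a["skull"] for a in pop) / len(pop)

if __name__ == "__main__":
    print("mean skull thickness, no cooking:", evolve(False))
    print("mean skull thickness, with cooking:", evolve(True))
```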

    Comments on entanglement entropy

    A new interpretation of entanglement entropy is proposed: the entanglement entropy of a pure state, with respect to a division of a Hilbert space into two subspaces 1 and 2, is the amount of information that can be transmitted through 1 and 2 from a system interacting with 1 to another system interacting with 2. The transmission medium is the quantum entanglement between 1 and 2. In support of this interpretation, suggestive arguments are given from variational principles in entanglement thermodynamics and from quantum teleportation. It is shown that a quantum state having maximal entanglement entropy plays an important role in quantum teleportation; hence, the entanglement entropy is, in some sense, an index of the efficiency of quantum teleportation. Finally, implications for the information loss problem and Hawking radiation are discussed.
    Comment: LaTeX, 24 pages; proofs of some equations are added in appendices. Accepted for publication in Physical Review
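
    For reference, the standard definitions behind this discussion (textbook material, not taken from the paper itself):

```latex
% For a pure state |\psi> on H_1 (x) H_2, the reduced density matrix
% of subsystem 1 and the entanglement entropy of the 1|2 division are:
\[
  \rho_1 = \mathrm{Tr}_2\, |\psi\rangle\langle\psi| ,
  \qquad
  S_1 = -\mathrm{Tr}_1\!\left(\rho_1 \ln \rho_1\right) .
\]
% For a pure state S_1 = S_2, and S_1 attains its maximum, \ln d,
% exactly when \rho_1 is maximally mixed, \rho_1 = \mathbb{1}/d with
% d = \dim \mathcal{H}_1. These maximally entangled states are the
% ideal channels for quantum teleportation, which is the sense in
% which S_1 indexes teleportation efficiency.
```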

    How to Find More Supernovae with Less Work: Object Classification Techniques for Difference Imaging

    We present the results of applying new object classification techniques to difference images in the context of the Nearby Supernova Factory supernova search. Most current supernova searches subtract reference images from new images, identify objects in these difference images, and apply simple threshold cuts on parameters such as statistical significance, shape, and motion to reject objects such as cosmic rays, asteroids, and subtraction artifacts. Although most static objects subtract cleanly, even a very low false positive detection rate can lead to hundreds of non-supernova candidates which must be vetted by human inspection before triggering additional follow-up. In comparison to simple threshold cuts, more sophisticated methods such as Boosted Decision Trees, Random Forests, and Support Vector Machines provide dramatically better object discrimination. At the Nearby Supernova Factory, we reduced the number of non-supernova candidates by a factor of 10 while increasing our supernova identification efficiency. Methods such as these will be crucial for maintaining a reasonable false positive rate in the automated transient alert pipelines of upcoming projects such as PanSTARRS and LSST.
    Comment: 25 pages; 6 figures; submitted to Ap
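
    As an illustration of swapping threshold cuts for one of the classifiers named above, here is a hedged Random Forest sketch using scikit-learn; the feature set, training labels, and scoring workflow are invented for illustration and are not the Nearby Supernova Factory pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-candidate features measured on the difference image:
# [statistical significance, shape (FWHM ratio), motion (pixels/epoch)]
X_train = np.array([
    [12.0, 1.0, 0.0],   # real supernova
    [ 8.5, 1.1, 0.1],   # real supernova
    [ 4.0, 3.5, 0.0],   # subtraction artifact (bad shape)
    [25.0, 0.2, 0.0],   # cosmic ray (too sharp)
    [ 9.0, 1.0, 2.4],   # asteroid (moving)
])
y_train = np.array([1, 1, 0, 0, 0])  # 1 = supernova, 0 = junk

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Score new candidates; only high-scoring ones go on to human vetting.
X_new = np.array([[10.0, 1.05, 0.05], [6.0, 2.8, 0.0]])
scores = clf.predict_proba(X_new)[:, 1]
for features, score in zip(X_new, scores):
    print(features, "supernova score:", round(float(score), 2))
```

    In practice the training set would come from previously vetted candidates, and the score threshold would be tuned to trade false positives against supernova identification efficiency.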