
    What would Jaws do? The tyranny of film and the relationship between gaze and higher-level narrative film comprehension

    What is the relationship between film viewers’ eye movements and their film comprehension? Typical Hollywood movies induce strong attentional synchrony—most viewers look at the same things at the same time. Thus, we asked whether film viewers’ eye movements would differ based on their understanding—the mental model hypothesis—or whether any such differences would be overwhelmed by viewers’ attentional synchrony—the tyranny of film hypothesis. To investigate this question, we manipulated the presence/absence of prior film context and measured resulting differences in film comprehension and eye movements. Viewers watched a 12-second James Bond movie clip, ending just as a critical predictive inference should be drawn that Bond’s nemesis, “Jaws,” would fall from the sky onto a circus tent. Viewers in the No-context condition saw only the 12-second clip, while those in the Context condition also saw the preceding 2.5 minutes of the movie before the critical 12-second portion. Importantly, the Context condition viewers were more likely to draw the critical inference and were more likely to perceive coherence across the entire six-shot sequence (as shown by event segmentation), indicating greater comprehension. Viewers’ eye movements showed strong attentional synchrony in both conditions relative to a chance-level baseline, with only small differences between conditions. Specifically, the Context condition viewers showed slightly, but significantly, greater attentional synchrony and lower cognitive load (as shown by fixation probability) during the critical first circus tent shot. Thus, overall, the results were more consistent with the tyranny of film hypothesis than the mental model hypothesis. These results suggest the need for a theory that encompasses processes from the perception to the comprehension of film.

    Discovery of a z = 0.65 post-starburst BAL quasar in the DES supernova fields

    We present the discovery of a z = 0.65 low-ionization broad absorption line (LoBAL) quasar in a post-starburst galaxy in data from the Dark Energy Survey (DES) and spectroscopy from the Australian Dark Energy Survey (OzDES). LoBAL quasars are a minority of all BALs, and rarer still is that this object also exhibits broad Fe II (an FeLoBAL) and Balmer absorption. This is the first BAL quasar that has signatures of recently truncated star formation, which we estimate ended about 40 Myr ago. The characteristic signatures of an FeLoBAL require high column densities, which could be explained by the emergence of a young quasar from an early, dust-enshrouded phase, or by clouds compressed by a blast wave. The age of the starburst component is comparable to estimates of the lifetime of quasars, so if we assume the quasar activity is related to the truncation of the star formation, this object is better explained by the blast wave scenario.

    Searching for dark matter annihilation in recently discovered Milky Way satellites with Fermi-LAT

    We search for excess γ-ray emission coincident with the positions of confirmed and candidate Milky Way satellite galaxies using six years of data from the Fermi Large Area Telescope (LAT). Our sample of 45 stellar systems includes 28 kinematically confirmed dark-matter-dominated dwarf spheroidal galaxies (dSphs) and 17 recently discovered systems that have photometric characteristics consistent with the population of known dSphs. For each of these targets, the relative predicted γ-ray flux due to dark matter annihilation is taken from kinematic analysis if available, and estimated from a distance-based scaling relation otherwise, assuming that the stellar systems are DM-dominated dSphs. LAT data coincident with four of the newly discovered targets show a slight preference (each ~2σ local) for γ-ray emission in excess of the background. However, the ensemble of derived γ-ray flux upper limits for individual targets is consistent with the expectation from analyzing random blank-sky regions, and a combined analysis of the population of stellar systems yields no globally significant excess (global significance < 1σ). The derived limits on the dark matter annihilation cross section strengthen by a factor of ~2 for m_DM,bb̄ > 1 TeV and m_DM,τ+τ− > 70 GeV, and weaken by a factor of ~1.5 at lower masses relative to previously observed limits.

    Redshift distributions of galaxies in the Dark Energy Survey Science Verification shear catalogue and implications for weak lensing

    We present photometric redshift estimates for galaxies used in the weak lensing analysis of the Dark Energy Survey Science Verification (DES SV) data. Four model- or machine learning-based photometric redshift methods—ANNZ2, BPZ calibrated against BCC-Ufig simulations, SKYNET, and TPZ—are analyzed. For training, calibration, and testing of these methods, we construct a catalogue of spectroscopically confirmed galaxies matched against DES SV data. The performance of the methods is evaluated against the matched spectroscopic catalogue, focusing on metrics relevant for weak lensing analyses, with additional validation against COSMOS photo-z’s. From the galaxies in the DES SV shear catalogue, which have mean redshift 0.72 ± 0.01 over the range 0.3 < z < 1.3, we construct three tomographic bins with means of z = {0.45, 0.67, 1.00}. These bins each have systematic uncertainties δz ≲ 0.05 in the mean of the fiducial SKYNET photo-z n(z). We propagate the errors in the redshift distributions through to their impact on cosmological parameters estimated with cosmic shear, and find that they cause shifts in the value of σ8 of approximately 3%. This shift is within the one sigma statistical errors on σ8 for the DES SV shear catalogue. We further study the potential impact of systematic differences on the critical surface density, Σcrit, finding levels of bias safely less than the statistical power of DES SV data. We recommend a final Gaussian prior for the photo-z bias in the mean of n(z) of width 0.05 for each of the three tomographic bins, and show that this is a sufficient bias model for the corresponding cosmology analysis.

    Estimating the global conservation status of more than 15,000 Amazonian tree species

    Estimates of extinction risk for Amazonian plant and animal species are rare and not often incorporated into land-use policy and conservation planning. We overlay spatial distribution models with historical and projected deforestation to show that at least 36% and up to 57% of all Amazonian tree species are likely to qualify as globally threatened under International Union for Conservation of Nature (IUCN) Red List criteria. If confirmed, these results would increase the number of threatened plant species on Earth by 22%. We show that the trends observed in Amazonia apply to trees throughout the tropics, and we predict that most of the world’s >40,000 tropical tree species now qualify as globally threatened. A gap analysis suggests that existing Amazonian protected areas and indigenous territories will protect viable populations of most threatened species if these areas suffer no further degradation, highlighting the key roles that protected areas, indigenous peoples, and improved governance can play in preventing large-scale extinctions in the tropics in this century.

    Whole-genome sequencing for prediction of Mycobacterium tuberculosis drug susceptibility and resistance: a retrospective cohort study

    BACKGROUND: Diagnosing drug resistance remains an obstacle to the elimination of tuberculosis. Phenotypic drug-susceptibility testing is slow and expensive, and commercial genotypic assays screen only common resistance-determining mutations. We used whole-genome sequencing to characterise common and rare mutations predicting drug resistance, or consistency with susceptibility, for all first-line and second-line drugs for tuberculosis. METHODS: Between Sept 1, 2010, and Dec 1, 2013, we sequenced a training set of 2099 Mycobacterium tuberculosis genomes. For 23 candidate genes identified from the drug-resistance scientific literature, we algorithmically characterised genetic mutations as not conferring resistance (benign), resistance determinants, or uncharacterised. We then assessed the ability of these characterisations to predict phenotypic drug-susceptibility testing for an independent validation set of 1552 genomes. To account for residual phenotypic resistance, we sought mutations outside the candidate genes that were under selection pressure similar to that on the characterised resistance determinants. FINDINGS: We characterised 120 training-set mutations as resistance determining, and 772 as benign. With these mutations, we could predict 89·2% of the validation-set phenotypes with a mean 92·3% sensitivity (95% CI 90·7–93·7) and 98·4% specificity (98·1–98·7). 10·8% of validation-set phenotypes could not be predicted because uncharacterised mutations were present. In an in-silico comparison, the characterised resistance determinants had higher sensitivity than the mutations covered by three line-probe assays (85·1% vs 81·6%). No additional resistance determinants were identified among mutations under selection pressure in non-candidate genes.
    INTERPRETATION: A broad catalogue of genetic mutations enables data from whole-genome sequencing to be used clinically to predict drug resistance or drug susceptibility, or to identify drug phenotypes that cannot yet be genetically predicted. This approach could be integrated into routine diagnostic workflows, phasing out phenotypic drug-susceptibility testing while reporting drug resistance early. FUNDING: Wellcome Trust, National Institute of Health Research, Medical Research Council, and the European Union.
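    Sensitivity and specificity figures like those reported above are derived from a confusion matrix comparing genotypic predictions against phenotypic drug-susceptibility results. A minimal sketch of that arithmetic, using hypothetical counts chosen only for illustration (not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts.

    tp/fn: resistant phenotypes correctly/incorrectly called by genotype.
    tn/fp: susceptible phenotypes correctly/incorrectly called by genotype.
    """
    sensitivity = tp / (tp + fn)  # fraction of resistant phenotypes detected
    specificity = tn / (tn + fp)  # fraction of susceptible phenotypes correctly cleared
    return sensitivity, specificity

# Hypothetical counts for illustration only
sens, spec = sensitivity_specificity(tp=923, fn=77, tn=984, fp=16)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

    Note that, as in the study, phenotypes carrying only uncharacterised mutations would be excluded from these counts and reported separately as "not predictable" rather than folded into either rate.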

    Volume I. Introduction to DUNE

    The preponderance of matter over antimatter in the early universe, the dynamics of the supernovae that produced the heavy elements necessary for life, and whether protons eventually decay—these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our universe, its current state, and its eventual fate. The Deep Underground Neutrino Experiment (DUNE) is an international world-class experiment dedicated to addressing these questions as it searches for leptonic charge-parity symmetry violation, stands ready to capture supernova neutrino bursts, and seeks to observe nucleon decay as a signature of a grand unified theory underlying the standard model. The DUNE far detector technical design report (TDR) describes the DUNE physics program and the technical designs of the single- and dual-phase DUNE liquid argon TPC far detector modules. This TDR is intended to justify the technical choices for the far detector that flow down from the high-level physics goals through requirements at all levels of the Project. Volume I contains an executive summary that introduces the DUNE science program, the far detector and the strategy for its modular designs, and the organization and management of the Project. The remainder of Volume I provides more detail on the science program that drives the choice of detector technologies and on the technologies themselves. It also introduces the designs for the DUNE near detector and the DUNE computing model, for which DUNE is planning design reports. Volume II of this TDR describes DUNE's physics program in detail. Volume III describes the technical coordination required for the far detector design, construction, installation, and integration, and its organizational structure. Volume IV describes the single-phase far detector technology. A planned Volume V will describe the dual-phase technology.

    Animal-borne telemetry: An integral component of the ocean observing toolkit

    Get PDF
    Animal telemetry is a powerful tool for observing marine animals and the physical environments that they inhabit, from coastal and continental shelf ecosystems to polar seas and open oceans. Satellite-linked biologgers and networks of acoustic receivers allow animals to be reliably monitored over scales of tens of meters to thousands of kilometers, giving insight into their habitat use, home range size, the phenology of migratory patterns, and the biotic and abiotic factors that drive their distributions. Furthermore, physical environmental variables can be collected using animals as autonomous sampling platforms, increasing the spatial and temporal coverage of global oceanographic observation systems. The use of animal telemetry, therefore, has the capacity to provide measures for a suite of essential ocean variables (EOVs) for improved monitoring of Earth's oceans. Here we outline the design features of animal telemetry systems, describe current applications and their benefits and challenges, and discuss future directions. We describe new analytical techniques that improve our ability not only to quantify animal movements but also to provide a powerful framework for comparative studies across taxa. We discuss the application of animal telemetry and its capacity to collect biotic and abiotic data, how the data collected can be incorporated into ocean observing systems, and the role these data can play in improved ocean management.

    Narrative comprehension guides eye movements in the absence of motion

    Viewers’ attentional selection while looking at scenes is affected by both top-down and bottom-up factors. However, when watching film, viewers typically attend to the movie similarly irrespective of top-down factors—a phenomenon we call the Tyranny of Film. A key difference between still pictures and film is that film contains motion, which is a strong attractor of attention and highly predictive of gaze during film viewing. The goal of the present study was to test whether the Tyranny of Film is driven by motion. To do this, we created a slideshow presentation of the opening scene of Touch of Evil. Context condition participants watched the full slideshow. No-context condition participants did not see the opening portion of the scene, which showed someone placing a time-bomb into the trunk of a car. In prior research, we showed that despite producing very different understandings of the clip, this manipulation did not affect viewers’ attention (i.e., the Tyranny of Film), as both Context and No-context participants were equally likely to fixate on the car with the bomb when the scene was presented as a film. The current study found that when the scene was shown as a slideshow, the context manipulation produced differences in attentional selection (i.e., it attenuated attentional synchrony). We discuss these results in the context of the Scene Perception & Event Comprehension Theory (SPECT), which specifies the relationship between event comprehension and attentional selection in the context of visual narratives.