
    Pluto: a Monte Carlo simulation tool for hadronic physics

    Pluto is a Monte Carlo event generator designed for hadronic interactions from the pion production threshold to intermediate energies of a few GeV per nucleon, as well as for studies of heavy-ion reactions. The package is based entirely on ROOT, with no need for additional packages, and uses ROOT's embedded C++ interpreter to control the event production. Generating events from a single reaction chain and storing the resulting particle objects can be done with a few lines of a ROOT macro. Complete control of the package can also be taken over by the steering macro, and user-defined models may be added without recompiling the framework. Multi-reaction cocktails can be set up as well, using either mass-dependent or user-defined static branching ratios. The included physics uses resonance production with mass-dependent Breit-Wigner sampling. The calculation of partial and total widths for resonances producing unstable particles is performed recursively in a coupled-channel approach. Here, particular attention is paid to the electromagnetic decays, motivated by the physics program of HADES. The thermal model supports two-component thermal distributions, longitudinal broadening, radial blast, direct and elliptic flow, and impact-parameter-sampled multiplicities. The interface allows the user to attach angular distribution models (e.g. for the primary meson emission) as well as descriptions of multi-particle correlations using decay-chain templates. The exchange of mass sampling or momentum generation models is also possible; the former allows for consistent coupled-channel calculations, needed for a correct description of hadronic interactions. For elementary reactions, angular distribution models for selected channels are already part of the framework, based on parameterizations of existing data. This report gives an overview of the design of the package, the included models, and the user interface.
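
    The few-line macro interface described above can be illustrated with a short sketch. The snippet below drives Pluto from PyROOT rather than the embedded C++ interpreter; the reaction string, beam energy, and output file name are illustrative assumptions, and the PReaction call follows Pluto's documented macro examples rather than this report:

        # Hedged sketch: single-reaction-chain event generation with Pluto via
        # PyROOT. Assumes libPluto is built and visible to ROOT; the reaction
        # "p p -> p p pi0 [g g]" at 3.5 GeV and the file name are illustrative.
        import ROOT

        ROOT.gSystem.Load("libPluto")  # load the Pluto classes into ROOT

        # One reaction chain: beam energy (GeV), beam, target, products;
        # the resulting particle objects are written to "pi0_example.root".
        reaction = ROOT.PReaction("3.5", "p", "p", "p p pi0 [g g]", "pi0_example")
        reaction.Loop(10000)           # generate and store 10,000 events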

    Illusory feature slowing: Evidence for perceptual models of global facial change

    Upright static faces are widely thought to recruit holistic representations, whereby individual features are integrated into nondecomposable wholes for recognition and interpretation. In contrast, little is known about the perceptual integration of dynamic features when viewing moving faces. People are frequently exposed to correlated eye and mouth movements, such as the characteristic changes that accompany facial emotion, yawning, sneezing, and laughter. However, it is unclear whether the visual system is sensitive to these dynamic regularities, encoding facial behavior relative to a set of dynamic global prototypes, or whether it simply forms piecemeal descriptions of feature states over time. To address this question, we sought evidence of perceptual interactions between dynamic facial features. Crucially, we found illusory slowing of feature motion in the presence of another moving feature, but it was limited to upright faces and particular relative-phase relationships. Perceptual interactions between dynamic features suggest that local changes are integrated into models of global facial change.

    Simulating Cardiac Fluid Dynamics in the Human Heart

    Cardiac fluid dynamics fundamentally involves interactions between complex blood flows and the structural deformations of the muscular heart walls and the thin, flexible valve leaflets. There has been longstanding scientific, engineering, and medical interest in creating mathematical models of the heart that capture, explain, and predict these fluid-structure interactions. However, existing computational models that account for interactions among the blood, the actively contracting myocardium, and the cardiac valves are limited in their ability to predict valve performance, resolve fine-scale flow features, or use realistic descriptions of tissue biomechanics. Here we introduce and benchmark a comprehensive mathematical model of cardiac fluid dynamics in the human heart. A unique feature of our model is that it incorporates biomechanically detailed descriptions of all major cardiac structures that are calibrated using tensile tests of human tissue specimens to reflect the heart's microstructure. Further, it is the first fluid-structure interaction model of the heart that provides anatomically and physiologically detailed representations of all four cardiac valves. We demonstrate that this integrative model generates physiologic dynamics, including realistic pressure-volume loops that automatically capture isovolumetric contraction and relaxation, and predicts fine-scale flow features. None of these outputs are prescribed; instead, they emerge from interactions within our comprehensive description of cardiac physiology. Such models can serve as tools for predicting the impacts of medical devices or clinical interventions. They also can serve as platforms for mechanistic studies of cardiac pathophysiology and dysfunction, including congenital defects, cardiomyopathies, and heart failure, that are difficult or impossible to perform in patients.

    Learning a Policy for Opportunistic Active Learning

    Active learning identifies data points to label that are expected to be the most useful for improving a supervised model. Opportunistic active learning incorporates active learning into interactive tasks that constrain the queries possible during interactions. Prior work has shown that opportunistic active learning can be used to improve the grounding of natural language descriptions in an interactive object retrieval task. In this work, we use reinforcement learning for such an object retrieval task, to learn a policy that effectively trades off task completion with model improvement that would benefit future tasks. (Comment: EMNLP 2018 Camera Ready.)
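
    As one concrete reading of the trade-off the policy learns, the hedged sketch below writes the per-step reward as a balance between task completion and estimated model improvement. All names, weights, and the reward decomposition itself are illustrative assumptions, not the paper's implementation:

        # Hypothetical reward for an interactive object-retrieval episode in
        # which the agent may either guess the target object or spend a turn
        # on an active-learning query that improves the grounding model.
        from dataclasses import dataclass

        @dataclass
        class StepOutcome:
            guessed: bool        # episode ended with a guess
            guess_correct: bool  # the guess matched the described object
            queried: bool        # a label query was issued this step
            model_gain: float    # estimated grounding-model improvement

        def reward(outcome: StepOutcome,
                   success_bonus: float = 1.0,
                   query_cost: float = 0.1,
                   gain_weight: float = 0.5) -> float:
            """Trade off immediate task success against long-term model value."""
            r = 0.0
            if outcome.guessed:
                r += success_bonus if outcome.guess_correct else -success_bonus
            if outcome.queried:
                r += gain_weight * outcome.model_gain - query_cost
            return r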

    Videoprompter: an ensemble of foundational models for zero-shot video understanding

    Vision-language models (VLMs) classify a query video by calculating a similarity score between its visual features and text-based class label representations. Recently, large language models (LLMs) have been used to enrich the text-based class labels by enhancing the descriptiveness of the class names. However, these improvements are restricted to the text-based classifier only, and the query visual features are not considered. In this paper, we propose a framework that combines pre-trained discriminative VLMs with pre-trained generative video-to-text and text-to-text models. We introduce two key modifications to the standard zero-shot setting. First, we propose language-guided visual feature enhancement and employ a video-to-text model to convert the query video to its descriptive form. The resulting descriptions contain vital visual cues of the query video, such as what objects are present and their spatio-temporal interactions. These descriptive cues provide additional semantic knowledge to VLMs to enhance their zero-shot performance. Second, we propose video-specific prompts to LLMs to generate more meaningful descriptions that enrich the class label representations. Specifically, we introduce prompting techniques to create a Tree Hierarchy of Categories for class names, offering higher-level action context as additional visual cues. We demonstrate the effectiveness of our approach in video understanding across three different zero-shot settings: 1) video action recognition, 2) video-to-text and text-to-video retrieval, and 3) time-sensitive video tasks. Consistent improvements across multiple benchmarks and with various VLMs demonstrate the effectiveness of our proposed framework. Our code will be made publicly available.
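
    The two modifications described above can be summarized in a short sketch. All model handles below (video_to_text, llm, encode_video, encode_text) are hypothetical stand-ins for whichever pre-trained generative and discriminative models are plugged in; only the overall flow follows the abstract:

        # Hedged sketch of the zero-shot pipeline: (1) enhance the query with a
        # generated description of the video, (2) enrich each class label via an
        # LLM prompt, then score classes by cosine similarity of embeddings.
        import numpy as np

        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def classify(video, class_names, video_to_text, llm, encode_video, encode_text):
            # 1) Language-guided visual feature enhancement: describe the query
            #    video and fold the description into its representation.
            description = video_to_text(video)
            query = encode_video(video) + encode_text(description)
            # 2) Video-specific prompts: expand each class name into a richer
            #    description before encoding it as the text-side classifier.
            scores = {}
            for name in class_names:
                enriched = llm(f"Describe the visual characteristics of '{name}'.")
                scores[name] = cosine(query, encode_text(enriched))
            return max(scores, key=scores.get)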