    Traffic Crash Experience of a Cohort of Young Queenslanders in the Last Decade of the Twentieth Century

    This paper aims to estimate the age-specific and cumulative risk, by age, that teenaged and young adult Queenslanders are involved in a traffic crash, and in particular injured in such a crash. Crashes involving only property damage have been largely ignored, since they are notoriously subject to under-reporting by those concerned, even when the circumstances and amount of damage make them legally reportable. Two approaches have been used: the first follows the crash fortunes, to age 25 years, of a cohort of school students enrolled in 1988 and 1989 through the end of the year 2000; the second estimates crash rates cross-sectionally from 1992 to 2000 for Queensland residents of the same ages as the bulk of the school cohort in those years.
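    The relationship between age-specific and cumulative risk described in this abstract can be sketched as follows; the annual risk values below are hypothetical placeholders, not figures from the study.

    ```python
    def cumulative_risk(age_specific_risks):
        """Combine independent age-specific crash risks into the cumulative
        probability of at least one crash by the final age."""
        no_crash = 1.0
        for r in age_specific_risks:
            no_crash *= (1.0 - r)  # probability of avoiding a crash at this age
        return 1.0 - no_crash

    # Hypothetical annual crash-involvement risks for ages 17-24 (illustrative only).
    risks = [0.08, 0.07, 0.06, 0.05, 0.05, 0.04, 0.04, 0.03]
    print(round(cumulative_risk(risks), 3))  # → 0.351
    ```

    The cumulative figure always exceeds any single age-specific rate, which is why a cohort followed to age 25 can show substantial total crash involvement even when each year's risk looks modest.
    
    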

    Promoting Strategies to Overcome Low Health Literacy and Improve Patient Understanding in Outpatient Setting

    Over 36% of US adults have low health literacy. This contributes to poorer health outcomes and increased costs for individuals and health care systems. Many strategies can be used to overcome the barrier of low health literacy and improve patient understanding in clinical encounters. As health care providers have been shown to underestimate patients' needs for information and to overestimate their own ability to communicate effectively with patients, these strategies should be used universally. We prepared a presentation on health literacy, covering its epidemiology, risk factors, and implications, along with strategies to overcome low health literacy and improve patient understanding. We focused most heavily on Teach-Back, a strategy for assessing patient understanding. We presented this to a group of residents and attendings at the EMMC Center for Family Medicine and Residency, and prepared pre-presentation and post-presentation surveys to evaluate the effect of the presentation.

    Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic

    This paper presents an implemented system for recognizing the occurrence of events described by simple spatial-motion verbs in short image sequences. The semantics of these verbs is specified with event-logic expressions that describe changes in the state of force-dynamic relations between the participants of the event. An efficient finite representation is introduced for the infinite sets of intervals that occur when describing liquid and semi-liquid events. Additionally, an efficient procedure using this representation is presented for inferring occurrences of compound events, described with event-logic expressions, from occurrences of primitive events. Using force dynamics and event logic to specify the lexical semantics of events allows the system to be more robust than prior systems based on motion profiles.
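    A minimal sketch of the idea of recognizing a compound event as a change in force-dynamic relations; the relation names, frame format, and event definition below are illustrative simplifications, not the paper's actual formalism.

    ```python
    def holds(frames, predicate):
        """Frame indices at which a primitive force-dynamic predicate holds."""
        return {i for i, state in enumerate(frames) if predicate(state)}

    def starts_holding(frames, predicate):
        """Frames where the predicate begins to hold, i.e. a state change."""
        held = holds(frames, predicate)
        return {i for i in held if i - 1 not in held}

    # Each frame records which object supports the ball (hypothetical data).
    frames = [{"supports_ball": "table"}] * 3 + [{"supports_ball": "hand"}] * 2

    # "Pick up" is recognized at the frame where support transfers to the hand.
    pickup_frames = starts_holding(frames, lambda s: s["supports_ball"] == "hand")
    print(sorted(pickup_frames))  # → [3]
    ```

    The point of the force-dynamic formulation is visible even in this toy: the event is defined by a change in a support relation, not by the trajectory the ball happens to follow.
    
    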

    Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video

    We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video sequences. First, we introduce a simple, propositional, temporal, event-description language called AMA that is sufficiently expressive to represent many events yet sufficiently restrictive to support learning. We then give algorithms, along with lower and upper complexity bounds, for the subsumption and generalization problems for AMA formulas. We present a positive-examples-only, specific-to-general learning method based on these algorithms. We also present a polynomial-time-computable "syntactic" subsumption test that implies semantic subsumption without being equivalent to it. A generalization algorithm based on syntactic subsumption can be used in place of semantic generalization to improve the asymptotic complexity of the resulting learning algorithm. Finally, we apply this algorithm to the task of learning relational event definitions from video and show that it yields definitions that are competitive with hand-coded ones.
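    The specific-to-general step can be sketched with a simplified, hypothetical analogue: generalize positive examples by keeping only the propositions common to all of them, the conjunctive core of a least-general generalization. AMA itself also handles temporal structure, which this sketch omits entirely.

    ```python
    def generalize(examples):
        """Least-general conjunctive description covering all positive examples."""
        common = set(examples[0])
        for ex in examples[1:]:
            common &= set(ex)  # drop propositions not shared by every example
        return common

    def subsumes(general, specific):
        """A description subsumes an example if all its propositions hold there."""
        return general <= set(specific)

    # Hypothetical propositional snapshots of two positive "pick up" examples.
    positives = [
        {"hand_touches_block", "block_moves_up", "hand_closed"},
        {"hand_touches_block", "block_moves_up", "hand_open"},
    ]
    learned = generalize(positives)
    print(sorted(learned))  # → ['block_moves_up', 'hand_touches_block']
    print(all(subsumes(learned, p) for p in positives))  # → True
    ```

    As in the paper's framework, the learned description is guaranteed to subsume every positive example, and generalization only ever removes conditions, never invents them.
    
    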

    Seeing What You're Told: Sentence-Guided Activity Recognition In Video

    We present a system that demonstrates how the compositional structure of events, in concert with the compositional structure of language, can interplay with the underlying focusing mechanisms in video action recognition, thereby providing a medium not only for top-down and bottom-up integration but also for multi-modal integration between vision and language. We show how the roles played by participants (nouns), their characteristics (adjectives), the actions performed (verbs), the manner of such actions (adverbs), and the changing spatial relations between participants (prepositions), in the form of whole sentential descriptions mediated by a grammar, guide the activity-recognition process. Further, the utility and expressiveness of our framework is demonstrated by performing three separate tasks in the domain of multi-activity videos: sentence-guided focus of attention, generation of sentential descriptions of video, and query-based video search, simply by leveraging the framework in different ways.
    Comment: To appear in CVPR 201
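    The sentence-guided focus-of-attention task can be illustrated with a toy sketch: a sentence's role structure narrows which detected object tracks the recognizer considers. The parse and track formats here are hypothetical simplifications; the actual system mediates full sentences through a grammar.

    ```python
    def parse_sentence(sentence):
        """Toy subject-verb-object parse (the real system uses a grammar)."""
        words = sentence.lower().split()
        return {"agent": words[0], "verb": words[1], "patient": words[2]}

    def focus(tracks, roles):
        """Keep only tracks whose labels match the sentence's participants."""
        wanted = {roles["agent"], roles["patient"]}
        return [t for t in tracks if t["label"] in wanted]

    # Hypothetical detected object tracks in a multi-activity video.
    tracks = [
        {"label": "person", "id": 0},
        {"label": "ball", "id": 1},
        {"label": "chair", "id": 2},
    ]
    roles = parse_sentence("person carried ball")
    print([t["id"] for t in focus(tracks, roles)])  # → [0, 1]
    ```

    Even this toy shows the top-down direction of integration: the sentence prunes the hypothesis space before any motion analysis is attempted.
    
    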