467 research outputs found

    Finding a Place in the History of Feminist Television: Sexuality in HBO's Girls

    This essay analyzes the HBO series Girls (2012-) from a feminist media studies perspective. Through an in-depth analysis of the history of feminist television, this paper claims that Girls takes a pro-sex feminist stance on issues of sexuality and identity and therefore advances the timeline of depictions of feminism in prime-time television. A discussion of the socio-political debates among feminists during the Women’s Liberation Movement, known as the “Sex Wars,” serves to anchor the series to a specific feminist discourse. Ultimately, Girls uses its coming-of-age and sex-comedy narrative to discuss the uncertainty that comes with exploring one’s sexual identity during early adulthood. The series’ treatment of conflicting sexual identities, discussions of reproductive justice, and themes of complex female friendship further connects it to programs from past decades that were, just as Girls is now, feminist landmarks of their time.

    Enhancement of the SynCardia Total Artificial Heart for Pediatric Use

    Pediatric patients with disorders and diseases of the heart have limited options with regard to implantable devices. Many of these implants are ventricular assist devices, which are not always suitable for a patient. Total artificial hearts (TAHs) have supported many adult patients until transplantation, and we believe that they could do the same for pediatric patients. SynCardia makes the only Food and Drug Administration (FDA)-approved TAH devices, so we decided to modify the design of the SynCardia TAH for use in pediatric patients without compromising the function of the current device. This paper reports on the process of the design modification to the TAH. The project resulted in detailed drawings of our proposed device, a risk mitigation summary, and a partial verification report.

    High Level Learning Using the Temporal Features of Human Demonstrated Sequential Tasks

    Modelling human-led demonstrations of high-level sequential tasks is fundamental to a number of practical inference applications, including vision-based policy learning and activity recognition. Demonstrations of these tasks are captured as videos with long durations and similar spatial contents. Learning from this data is challenging since inference cannot be conducted solely on spatial feature presence and must instead consider how spatial features play out across time. To be successful, these temporal representations must generalize to variations in the duration of activities and be able to capture relationships between events expressed across the scale of an entire video. Contemporary deep learning architectures that represent time (convolution-based and recurrent neural networks) do not address these concerns. Representations learned by these models describe temporal features in terms of fixed durations such as minutes, seconds, and frames. They are also developed sequentially and must use unreasonably large models to capture temporal features expressed at scale. Probabilistic temporal models have been successful in representing the temporal information of videos in a duration-invariant manner that is robust to scale; however, this has only been accomplished through the use of user-defined spatial features. Such abstractions make unrealistic assumptions about the content being expressed in these videos and the quality of the perception model, and they also limit the potential applications of trained models. To that end, I present D-ITR-L, a temporal wrapper that extends the spatial features extracted from a typical CNN architecture and transforms them into temporal features. D-ITR-L-derived temporal features are duration invariant and can identify temporal relationships between events at the scale of a full video. Validation of this claim is conducted through various vision-based policy learning and action recognition settings. Additionally, these studies show that challenging visual domains such as human-led demonstrations of high-level sequential tasks can be effectively represented when using a D-ITR-L-based model.
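    The abstract's central claim, that temporal features become duration invariant when they encode relationships between events rather than absolute times, can be illustrated with a minimal, hypothetical sketch. This is not D-ITR-L itself: the function name, the activation threshold, and the simple first-activation ordering relation are all assumptions made here for illustration.

    ```python
    import numpy as np

    def temporal_relation_features(frame_feats, thresh=0.5):
        """Illustrative duration-invariant temporal encoding (not D-ITR-L).

        frame_feats: (T, D) array of per-frame spatial feature activations.
        Returns a (D, D) matrix R where R[i, j] = 1.0 if feature i first
        activates before feature j does, else 0.0. Because R depends only
        on event ordering, it is unchanged by stretching the video in time.
        """
        active = frame_feats > thresh  # (T, D) boolean event tracks
        # First frame index at which each feature activates (inf if never)
        first = np.where(active.any(axis=0), active.argmax(axis=0), np.inf)
        # Pairwise "before" relation over first-activation times
        return (first[:, None] < first[None, :]).astype(float)
    ```

    Stretching a demonstration from 4 frames to 10 while preserving the order of events leaves this representation unchanged, which is the kind of duration invariance the abstract describes.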

    8/28 - 12/8

    Senior Project submitted to The Division of Arts of Bard College

    Enhancing Dental Aligners with Direct 3D Printing Manufacturing

    This project proposes replacing thermoforming with directly 3D-printed aligners: molding appliances for dental and alveolar movement. The current process requires 3D-printing models and then thermoforming, trimming, and polishing plastic to fabricate the end product. The goal is a product that could be 3D-printed directly, with stress-retention, crack-resistance, and stain-resistance properties, delivered in thicknesses between 0.030 and 0.040 inches.

    Exploratory Learning Using Consistency Problems: Activity Type Matters

    Studies have shown that exploration before instruction can improve learning. Students (N = 197) from the psychology participant pool were taught the concept and procedure of standard deviation in one of four conditions. Students were given both direct instruction and a problem to solve in one of two orders: instruction-first or exploration-first. During the problem-solving activity, students were asked to determine the consistency of a set of numbers. The dataset was set up either as a rich dataset or to highlight contrasting cases. Students then completed a posttest. We compared mean posttest scores and found that exploration before instruction led to better understanding when using contrasting cases, but not when using a rich dataset. Exploring before instruction is beneficial when students are helped to discern the key features of the problems, as with contrasting cases.
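    The "consistency" judgment students were asked to make is exactly what standard deviation measures. As a quick illustration (the numbers below are invented for illustration and are not the study's materials), two sets with the same mean can differ sharply in consistency:

    ```python
    import statistics

    # Two invented "contrasting cases": identical means, different spread.
    consistent = [9, 10, 10, 11]       # tightly clustered around 10
    inconsistent = [2, 8, 12, 18]      # widely spread around 10

    assert statistics.mean(consistent) == statistics.mean(inconsistent) == 10

    # Population standard deviation: sqrt of the mean squared deviation
    sd_consistent = statistics.pstdev(consistent)      # ~0.71
    sd_inconsistent = statistics.pstdev(inconsistent)  # ~5.83

    # The more consistent set has the smaller standard deviation.
    assert sd_consistent < sd_inconsistent
    ```

    Contrasting cases like these make the key feature, spread around the mean, easy to discern, which is the mechanism the study credits for the benefit of exploring first.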