
    Gene-by-Environment Interactions on Alcohol Use Among Asian American College Freshmen

    OBJECTIVE: Among northeast Asians, the variant aldehyde dehydrogenase allele, ALDH2*2 (rs671, A/G, minor/major), has been inversely associated with alcohol dependence. The strength of the associations between ALDH2*2 and drinking behaviors depends on the developmental stage, the phenotype studied, and other moderating variables. This study examined ALDH2 gene status as a moderator of the associations of parental drinking, peer drinking, and acculturation with alcohol use among 222 Chinese American and Korean American college freshmen. METHOD: Negative binomial regressions were used to test the main and interactive effects of ALDH2 with contextual factors on alcohol frequency (drinking days) and quantity (drinks per drinking day) in the past 3 months. RESULTS: ALDH2*2 was associated with more subjective flushing symptoms and a longer duration of flushing but was unrelated to both alcohol frequency and quantity. Peer drinking was positively associated with both alcohol frequency and quantity, but neither association was moderated by ALDH2. We observed a nonsignificant trend for the interaction between parental drinking and ALDH2 on alcohol frequency, such that parental drinking was positively associated with alcohol frequency only among participants with ALDH2*2. We found a significant interaction between acculturation and ALDH2 on alcohol frequency, such that acculturation was positively associated with alcohol frequency only among those with ALDH2*2. Exploratory analyses stratified by Asian ethnic subgroup indicated that this interaction was driven primarily by the Korean subsample. CONCLUSIONS: Parental drinking and acculturation may facilitate more frequent drinking among those who have more intense reactions to alcohol (i.e., those with ALDH2*2) during the transition from high school to college.
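    The gene-by-environment test described above can be sketched as a negative binomial regression with an interaction term. The sketch below uses simulated data and illustrative variable names (`aldh2_2`, `acculturation`, `drinking_days`); it is not the study's actual model or data.

    ```python
    # A minimal sketch of a negative binomial G x E regression, assuming
    # simulated data; variable names and effect sizes are illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 222  # sample size matching the study
    df = pd.DataFrame({
        "aldh2_2": rng.integers(0, 2, n),        # ALDH2*2 carrier status (0/1)
        "acculturation": rng.normal(0, 1, n),    # standardized acculturation score
    })
    # Simulate drinking days so acculturation raises frequency only among carriers
    mu = np.exp(0.5 + 0.4 * df["aldh2_2"] * df["acculturation"])
    df["drinking_days"] = rng.poisson(mu)

    # Negative binomial GLM with main effects and the G x E interaction
    model = smf.glm(
        "drinking_days ~ aldh2_2 * acculturation",
        data=df,
        family=sm.families.NegativeBinomial(alpha=1.0),
    ).fit()
    print(model.summary().tables[1])
    ```

    The coefficient on the `aldh2_2:acculturation` term corresponds to the moderation effect the abstract reports: a positive, significant interaction would indicate that acculturation predicts drinking frequency only among ALDH2*2 carriers.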

    A Spatiotemporal Synthesis of High-Resolution Salinity Data with Aquaculture Applications

    Technological advancement and the desire to better monitor shallow habitats in the Chesapeake Bay, Maryland, United States, led to the initiation of several high-resolution monitoring programs such as ConMon (short for "Continuous Monitoring"), which measures oxygen, salinity, and chlorophyll-a at a 15-minute frequency. These monitoring efforts have yielded an enormous volume of data and insight into the condition of the tidal waters of the Bay. However, this information remains underutilized for documenting the fine-scale variability of water quality, which is critical for identifying links between water quality and ecological responses, partly because of the challenges of integrating monitoring data collected at different frequencies and locations. In a project to understand the environmental suitability of aquaculture sites and the potential future overlap between aquaculture and submerged aquatic vegetation, we developed a spatiotemporal synthesis of ConMon data with data from long-term, fixed-station seasonal monitoring. Here, we present our generalized additive model-based approach to predict salinity at high frequency (15 minutes) and fine spatial resolution (~100 meters) in the Maryland portion of the Bay, its major tributaries, and the shallow tidal creeks that exchange with the tributaries. Validation against de novo monitoring data showed a root mean square error of 1 PSU (practical salinity unit). The resulting data provide insights into the environmental suitability of aquaculture, specifically the sensitivity of the Eastern oyster (Crassostrea virginica) to low-salinity stress. The spatiotemporal synthesis approach has potential applications for integrated monitoring and could be linked with high-resolution water quality models for shallow habitats.
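    The approach above rests on a generalized additive model with validation by root mean square error. The sketch below fits a GAM-style smooth to simulated high-frequency salinity at a single hypothetical station using statsmodels' `GLMGam`; the data, spline settings, and formula are illustrative assumptions, not the authors' model.

    ```python
    # A minimal GAM sketch: smooth salinity over time at one station,
    # then report in-sample RMSE. All data here are simulated.
    import numpy as np
    import pandas as pd
    from statsmodels.gam.api import GLMGam, BSplines

    rng = np.random.default_rng(1)
    t = np.linspace(0, 90, 500)   # ~3 months of observations (days)
    # Simulated salinity: seasonal swing plus observational noise
    salinity = 12 + 3 * np.sin(2 * np.pi * t / 90) + rng.normal(0, 0.5, t.size)
    df = pd.DataFrame({"t": t, "salinity": salinity})

    # Cubic B-spline smooth over time; df controls smoothness
    bs = BSplines(df[["t"]], df=[10], degree=[3])
    gam = GLMGam.from_formula("salinity ~ 1", data=df, smoother=bs).fit()

    pred = gam.fittedvalues
    rmse = float(np.sqrt(np.mean((pred - df["salinity"]) ** 2)))
    print(f"in-sample RMSE: {rmse:.2f} PSU")
    ```

    The real synthesis would add spatial smooths and cross-station structure; the point of the sketch is the workflow of fitting smooth terms and checking predictive error in PSU, analogous to the 1 PSU validation the abstract reports.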

    ForceSight: Text-Guided Mobile Manipulation with Visual-Force Goals

    We present ForceSight, a system for text-guided mobile manipulation that predicts visual-force goals using a deep neural network. Given a single RGBD image combined with a text prompt, ForceSight determines a target end-effector pose in the camera frame (kinematic goal) and the associated forces (force goal). Together, these two components form a visual-force goal. Prior work has demonstrated that deep models outputting human-interpretable kinematic goals can enable dexterous manipulation by real robots. Forces are critical to manipulation, yet have typically been relegated to lower-level execution in these systems. When deployed on a mobile manipulator equipped with an eye-in-hand RGBD camera, ForceSight performed tasks such as precision grasps, drawer opening, and object handovers with an 81% success rate in unseen environments with object instances that differed significantly from the training data. In a separate experiment, relying exclusively on visual servoing and ignoring force goals dropped the success rate from 90% to 45%, demonstrating that force goals can significantly enhance performance. The appendix, videos, code, and trained models are available at https://force-sight.github.io/.
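    A visual-force goal pairs a kinematic target with a force target, and execution succeeds only when both are met. The sketch below is an illustrative data structure and goal-satisfaction check; the field names, tolerances, and values are assumptions for exposition, not ForceSight's actual API.

    ```python
    # A minimal sketch of a visual-force goal and a combined kinematic/force
    # check; all names and thresholds here are illustrative assumptions.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class VisualForceGoal:
        position: np.ndarray      # target end-effector xyz in camera frame (m)
        orientation: np.ndarray   # target orientation quaternion (w, x, y, z)
        force: np.ndarray         # target contact force vector (N)

    def goal_reached(goal, pose_xyz, measured_force,
                     pos_tol=0.01, force_tol=2.0):
        """True when the end effector satisfies both the kinematic goal
        and the force goal, within illustrative tolerances."""
        pos_ok = np.linalg.norm(goal.position - pose_xyz) < pos_tol
        force_ok = np.linalg.norm(goal.force - measured_force) < force_tol
        return pos_ok and force_ok

    goal = VisualForceGoal(
        position=np.array([0.10, 0.00, 0.40]),
        orientation=np.array([1.0, 0.0, 0.0, 0.0]),
        force=np.array([0.0, 0.0, 5.0]),   # e.g. press down with ~5 N
    )
    # Within 5 mm of the target pose and 1 N of the target force
    print(goal_reached(goal,
                       np.array([0.105, 0.00, 0.40]),
                       np.array([0.0, 0.0, 4.0])))  # → True
    ```

    Checking force alongside pose is what the ablation in the abstract isolates: a controller that tracks only the kinematic goal would declare success in the `pos_ok` branch alone, which corresponds to the visual-servoing-only condition whose success rate dropped from 90% to 45%.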