
    The role of GRK2 in hypertension and regulation of GPR30

    In the hypertensive state, the expression level of G protein-coupled receptor kinase 2 (GRK2) is elevated, whereas the expression of GPR30, a recently discovered GPCR, is greatly impaired. The current study focuses on investigating the roles of these two proteins in regulating G protein signaling under normal and hypertensive states. Angiotensin II and vasopressin were used to examine the effects of GRK2 on Gq-coupled GPCR signaling. ERK phosphorylation was proportionally enhanced with GRK2 over-expression. On the other hand, using arborization and wrinkle assays, I have shown that GRK2 acts as a negative regulator of Gs signaling in VSMCs. Aortic ring segments were used to examine the vascular reactivity mediated by GPR30. In WKY rats, the GPR30 agonists aldosterone and G1 attenuated phenylephrine-mediated vasoconstriction, while the GPR30 antagonist G15 was able to block the effects of aldosterone but not G1. A wound assay was utilized to estimate the effects of GPR30 activation on endothelial cell migration and proliferation. The G1 effect on wound healing was also seen to be GPR30-independent and EC-specific. Overall, these investigations suggest that altering GRK2 expression can regulate both Gq and Gs signaling in VSMCs. GPR30 plays a crucial role in vascular reactivity and growth-regulatory mechanisms. However, GRK2 and GPR30 do not seem to co-localize or interact in the cell.

    Multiplayer General Lotto game

    In this paper, we explore the multiplayer General Lotto game over a single battlefield, a notable variant of the Colonel Blotto game. In this version, each player employs a probability distribution for resource allocation, ensuring that the expected expenditure does not surpass their budget. We first establish the existence of a Nash equilibrium for a modified version of this game, in which there is a common threshold that no player's bid can exceed. We then extend our findings to demonstrate the existence of a Nash equilibrium in the original game, which does not incorporate this threshold. Moreover, we provide detailed characterizations of the Nash equilibrium for both the original game and its modified version. In the Nash equilibrium of the unmodified game, we observe that the upper endpoints of the supports of the players' equilibrium strategies coincide, and the minimum value of a player's support above zero inversely correlates with their budget. In particular, we present closed-form solutions for the Nash equilibrium with a threshold for two players.
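    The two-player closed form referred to above is a classical result for General Lotto games (see, e.g., Hart's analysis): with budgets a ≥ b, the stronger player bids Uniform[0, 2a], while the weaker player bids 0 with probability 1 − b/a and Uniform[0, 2a] otherwise, so each player's expected expenditure equals their budget and the weaker player wins with probability b/(2a). A minimal Monte Carlo sketch of that equilibrium (function names are illustrative, not from the paper):

```python
import random

def sample_lotto_bid(budget, rival_budget, rng):
    """Sample one bid from the known two-player General Lotto equilibrium.

    With budgets a >= b, the stronger player bids Uniform[0, 2a]; the
    weaker player bids 0 with probability 1 - b/a and Uniform[0, 2a]
    otherwise.  Expected expenditure equals the budget in both cases.
    """
    a = max(budget, rival_budget)
    if budget >= rival_budget:
        return rng.uniform(0, 2 * a)
    if rng.random() < 1 - budget / a:
        return 0.0
    return rng.uniform(0, 2 * a)

rng = random.Random(0)
a, b, n = 2.0, 1.0, 200_000
spend_a = spend_b = wins_b = 0.0
for _ in range(n):
    x = sample_lotto_bid(a, b, rng)
    y = sample_lotto_bid(b, a, rng)
    spend_a += x
    spend_b += y
    wins_b += y > x

# Average spends should track the budgets (a, b); the weaker player's
# empirical win rate should approach b / (2a).
print(spend_a / n, spend_b / n, wins_b / n)
```

    The simulation checks the budget (expected-expenditure) constraint and the weaker player's win probability against the closed form.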

    Towards Consistent Video Editing with Text-to-Image Diffusion Models

    Existing works have advanced Text-to-Image (TTI) diffusion models for video editing in a one-shot learning manner. Despite their low data and computation requirements, these methods may produce results whose consistency with the text prompt and across the temporal sequence is unsatisfactory, limiting their applications in the real world. In this paper, we propose to address the above issues with a novel EI^2 model towards Enhancing vIdeo Editing consIstency of TTI-based frameworks. Specifically, we analyze and find that the inconsistency problem is caused by modules newly added to TTI models for learning temporal information. These modules lead to covariate shift in the feature space, which harms the editing capability. Thus, we design EI^2 to tackle the above drawbacks with two modules: the Shift-restricted Temporal Attention Module (STAM) and the Fine-coarse Frame Attention Module (FFAM). First, through theoretical analysis, we demonstrate that covariate shift is highly related to Layer Normalization, so STAM replaces it with an Instance Centering layer to preserve the distribution of temporal features. In addition, STAM employs an attention layer with normalized mapping to transform temporal features while constraining the variance shift. As the second part, we combine STAM with the novel FFAM, which efficiently leverages fine-coarse spatial information of all frames to further enhance temporal consistency. Extensive experiments demonstrate the superiority of the proposed EI^2 model for text-driven video editing.
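    Based only on the description above, Instance Centering can be sketched as mean subtraction without the variance rescaling that Layer Normalization performs; this is an assumption about the layer's core operation, not the authors' code:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Standard LayerNorm over the last axis: recenters AND rescales,
    # destroying the original per-feature variance.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_centering(x):
    # Instance Centering as described in the abstract: subtract the
    # per-instance mean but leave the feature variance intact.
    return x - x.mean(axis=-1, keepdims=True)

x = np.random.default_rng(0).normal(3.0, 2.0, size=(4, 16))
centered = instance_centering(x)
normed = layer_norm(x)
```

    After `instance_centering`, each instance's variance is unchanged, whereas `layer_norm` forces it to (approximately) 1, which is the distribution-preserving distinction the abstract attributes to STAM.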

    CSCI 49378: Lecture 4: Distributed File Systems

    Lecture for the course: CSCI 49378: Intro to Distributed Systems and Cloud Computing - Distributed File Systems (Week Four) delivered at Hunter College in Spring 2020 by Bonan Liu as part of the Tech-in-Residence Corps program

    CSCI 49378: Lecture 8: Cloud Systems and Infrastructures II

    Lecture for the course: CSCI 49378: Intro to Distributed Systems and Cloud Computing - Cloud Systems and Infrastructures II (Week Eight) delivered at Hunter College in Spring 2020 by Bonan Liu as part of the Tech-in-Residence Corps program

    CSCI 49378: Lecture 6: Cloud Computing Concepts

    Lecture for the course: CSCI 49378: Intro to Distributed Systems and Cloud Computing - Cloud Computing Concepts (Week Six) delivered at Hunter College in Spring 2020 by Bonan Liu as part of the Tech-in-Residence Corps program

    DropKey

    In this paper, we focus on analyzing and improving the dropout technique for the self-attention layers of the Vision Transformer, which is important yet surprisingly ignored by prior works. In particular, we investigate three core questions. First, what should be dropped in self-attention layers? Different from dropping attention weights as in the literature, we propose to move the dropout operation forward, ahead of the attention matrix calculation, and set the Key as the dropout unit, yielding a novel dropout-before-softmax scheme. We theoretically verify that this scheme helps keep both the regularization and probability features of attention weights, alleviating overfitting to specific patterns and enhancing the model's ability to capture vital information globally. Second, how should the drop ratio be scheduled across consecutive layers? In contrast to exploiting a constant drop ratio for all layers, we present a new schedule that gradually decreases the drop ratio along the stack of self-attention layers. We experimentally validate that the proposed schedule can avoid overfitting to low-level features and the loss of high-level semantics, thus improving the robustness and stability of model training. Third, is a structured dropout operation needed, as in CNNs? We attempt a patch-based, block-wise version of the dropout operation and find that this trick, useful for CNNs, is not essential for ViTs. Given the exploration of the above three questions, we present the novel DropKey method, which regards the Key as the drop unit and exploits a decreasing schedule for the drop ratio, improving ViTs in a general way. Comprehensive experiments demonstrate the effectiveness of DropKey for various ViT architectures, e.g., T2T and VOLO, as well as for various vision tasks, e.g., image classification, object detection, human-object interaction detection and human body shape recovery. Comment: Accepted by CVPR202
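    A minimal sketch of the dropout-before-softmax idea described above (an illustration of the scheme, not the authors' implementation): instead of zeroing attention weights after softmax, a random subset of Keys is masked with a large negative score before softmax, so each query still distributes a full unit of probability mass over the kept keys.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dropkey_attention(logits, drop_ratio, rng):
    """Dropout-before-softmax: randomly mask Keys in the logit matrix.

    Masked positions receive a large negative score, so after softmax
    each query row is still a proper probability distribution over the
    remaining keys (unlike post-softmax dropout, which breaks the
    rows' normalization).
    """
    keep = rng.random(logits.shape) >= drop_ratio   # Bernoulli key mask
    masked = np.where(keep, logits, -1e9)           # drop BEFORE softmax
    return softmax(masked)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 8))                    # 4 queries, 8 keys
attn = dropkey_attention(logits, drop_ratio=0.3, rng=rng)
print(np.allclose(attn.sum(-1), 1.0))               # prints True
```

    The decreasing schedule from the abstract would simply pass a smaller `drop_ratio` to deeper layers.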

    CSCI 49378: Lecture 3: Synchronization, Consistency and Replication

    Lecture for the course: CSCI 49378: Intro to Distributed Systems and Cloud Computing - Synchronization, Consistency and Replication (Week Three) delivered at Hunter College in Spring 2020 by Bonan Liu as part of the Tech-in-Residence Corps program

    CSCI 49378: Lecture 5: Distributed Web-based Applications

    Lecture for the course: CSCI 49378: Intro to Distributed Systems and Cloud Computing - Distributed Web-based Applications (Week Five) delivered at Hunter College in Spring 2020 by Bonan Liu as part of the Tech-in-Residence Corps program

    CSCI 49378: Introduction to Distributed Systems and Cloud Computing: Syllabus

    Syllabus for CSCI 49378: Introduction to Distributed Systems and Cloud Computing (Spring 2020)