
    The Role of S-Nitrosylation in Valosin-Containing Protein-Mediated Cardioprotection

    Aims: Valosin-containing protein (VCP) has recently been identified as a novel mediator of mitochondrial respiration and cell survival in the heart, in which increased inducible nitric oxide synthase (iNOS) expression and activity are considered an essential mechanistic link in the cardioprotection conferred by VCP. iNOS is one of the three isoforms of nitric oxide synthase (NOS) that generate nitric oxide (NO) from L-arginine; NO can then react with cysteine residues in proteins to form protein S-nitrosothiols (SNOs). This study investigated whether VCP directly mediates protein S-nitrosylation in the heart through the iNOS/NO/SNO pathway. We hypothesized that VCP plays a crucial role in mediating mitochondrial protein S-nitrosylation through an iNOS-dependent mechanism in the heart. To test this hypothesis, we utilized four distinct transgenic (TG) mouse models: cardiac-specific VCP TG mice, bigenic iNOS knockout (KO) mice with VCP overexpression (VCP TG/iNOS KO−/−), cardiac-specific dominant-negative (DN) VCP TG mice, and cardiac-specific VCP KO mice. Methods and results: To investigate the potential impact of VCP on both overall and specific protein S-nitrosylation in mouse heart tissues, we utilized a biotin switch assay combined with streptavidin purification. Our results showed that VCP overexpression increased S-nitrosylation of both VCP and glyceraldehyde 3-phosphate dehydrogenase (GAPDH) in the heart, an effect that was diminished by genetic iNOS deletion. Conversely, functional inhibition of VCP decreased the S-nitrosylation levels of VCP and mitochondrial respiration complex I but did not affect the S-nitrosylation level of GAPDH in the heart. Conclusion: Taken together, these data provide compelling evidence that VCP could serve as a novel mediator of cardiac protein S-nitrosylation through an iNOS-dependent mechanism.

    GRAIN BOUNDARY PREMELTING AND ACTIVATED SINTERING IN BINARY REFRACTORY ALLOYS

    Quasi-liquid intergranular films (IGFs), which have been widely observed in ceramic systems, can persist into the sub-solidus region, whereby an analogy to grain boundary (GB) premelting can be made. In this work, a GB premelting/prewetting model for metallic systems was first built based on Benedictus' model and computational thermodynamics, predicting that GB disordering can start at 60-85% of the bulk solidus temperature in selected systems. This model quantitatively explains the long-standing mystery of subsolidus activated sintering in W-Pd, W-Ni, W-Co, W-Fe and W-Cu, and it has broad applications for understanding GB-controlled transport kinetics and physical properties. Furthermore, this study demonstrates the necessity of developing GB phase diagrams as a tool for materials design. Subsequently, GB wetting and prewetting in Ni-doped Mo were systematically evaluated by characterizing well-quenched specimens and by thermodynamic modeling. In contrast to prior reports, the δ-NiMo phase does not wet Mo GBs in the solid state. In the solid-liquid two-phase region, the Ni-rich liquid wets Mo GBs completely. Furthermore, high-resolution transmission electron microscopy demonstrates that nanometer-thick quasi-liquid IGFs persist at GBs into the single-phase region where the bulk liquid phase is no longer stable; this is interpreted as a case of GB prewetting. An analytical thermodynamic model is developed and validated, and this model can be extended to other systems. The analytical model was further refined based upon Benedictus' model, with a correction in determining the interaction contribution to the interfacial energy. A calculation-based GB phase diagram for the Ni-Mo binary system was created and validated by comparison with GB diffusivities determined through a series of controlled sintering experiments.
The dependence of GB diffusivity on doping level and temperature was examined and compared with the model-predicted GB phase diagram; the two were found to be consistent. This study revealed the existence of quasi-liquid IGFs in Ni-Mo and re-confirmed the hypothesis we previously proposed through work on the Ni-W system. It also further demonstrated the value of GB phase diagrams as a new tool to guide materials processing and design, such as the selection of sintering aids and heat treatments.
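The premelting picture lends itself to a compact numerical sketch. The following is a toy calculation, not the refined Benedictus-based model of this work, and the parameter values are illustrative assumptions: the excess GB energy with a film of thickness h is taken as sigma(h) = 2*gamma_sl + dG*h + (gamma_gb - 2*gamma_sl)*exp(-h/xi), and minimizing over h gives the equilibrium quasi-liquid film thickness.

```python
import math

def igf_thickness(gamma_gb, gamma_sl, dG, xi):
    """Equilibrium quasi-liquid IGF thickness h* (nm) minimizing
    sigma(h) = 2*gamma_sl + dG*h + dgamma*exp(-h/xi), where
    dgamma = gamma_gb - 2*gamma_sl is the energy gained by replacing
    the dry GB with two solid-liquid interfaces, dG (J/m^2 per nm) is
    the volumetric free-energy penalty of the undercooled film, and
    xi (nm) is the structural decay length. Setting d(sigma)/dh = 0
    gives h* = xi * ln(dgamma / (dG * xi)); a film is stable only
    when dgamma > dG * xi."""
    dgamma = gamma_gb - 2.0 * gamma_sl
    if dgamma <= dG * xi:
        return 0.0  # dry boundary: no premelted film is stable
    return xi * math.log(dgamma / (dG * xi))

# Illustrative (assumed) numbers: gamma_gb = 1.5 J/m^2,
# gamma_sl = 0.5 J/m^2, xi = 0.5 nm, dG = 0.1 J/m^2 per nm.
h = igf_thickness(1.5, 0.5, 0.1, 0.5)
```

With these assumed numbers the model yields a nanometer-scale film, consistent in spirit with the HRTEM observations; as the solidus is approached (dG -> 0) the predicted thickness diverges, the hallmark of premelting.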

    Whether and Where to Code in the Wireless Relay Channel

    The throughput benefits of random linear network codes have been studied extensively for wireline and wireless erasure networks. It is often assumed that all nodes within a network perform coding operations. In energy-constrained systems, however, coding subgraphs should be chosen to control the number of coding nodes while maintaining throughput. In this paper, we explore the strategic use of network coding in the wireless packet erasure relay channel according to both throughput and energy metrics. In the relay channel, a single source communicates to a single sink with the aid of a half-duplex relay. A fluid flow model is used to describe the case where both the source and the relay are coding, and Markov chain models are proposed to describe packet evolution when only the source or only the relay is coding. In addition to transmission energy, we take into account coding and reception energies. We show that coding at the relay alone while operating in a rateless fashion is neither throughput nor energy efficient. Given a set of system parameters, our analysis determines the optimal amount of time the relay should participate in the transmission, and where coding should be performed.
    Comment: 11 pages, 12 figures, to be published in the IEEE JSAC Special Issue on Theories and Methods for Advanced Wireless Relay
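As a minimal illustration of the Markov-chain style of modeling used for the erasure relay channel, consider a toy single-packet example (ours, not the paper's full packet-evolution chains): the source broadcasts until the relay or the sink receives the packet, after which the half-duplex relay takes over. First-step analysis gives the expected delivery time.

```python
def expected_slots(e_sd, e_sr, e_rd):
    """Expected slots to deliver one packet over the erasure relay
    channel. States: S0 = neither relay nor sink has the packet,
    S1 = only the relay has it. Erasure probabilities: e_sd
    (source->sink), e_sr (source->relay), e_rd (relay->sink)."""
    E1 = 1.0 / (1.0 - e_rd)          # geometric retries, relay -> sink
    p_relay = e_sd * (1.0 - e_sr)    # sink misses it, relay hears it
    p_stay = e_sd * e_sr             # both miss it: remain in S0
    # First-step analysis: E0 = 1 + p_relay*E1 + p_stay*E0
    return (1.0 + p_relay * E1) / (1.0 - p_stay)
```

With a perfect relay path and a 50% source-sink erasure rate, `expected_slots(0.5, 0.0, 0.0)` gives 1.5 slots, versus 2.0 slots with no usable relay (the e_sr = 1 case), quantifying in miniature the throughput role of the relay.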

    Localized Dimension Growth in Random Network Coding: A Convolutional Approach

    We propose an efficient Adaptive Random Convolutional Network Coding (ARCNC) algorithm to address the issue of field size in random network coding. ARCNC operates as a convolutional code, with the coefficients of local encoding kernels chosen randomly over a small finite field. The lengths of the local encoding kernels increase with time until the global encoding kernel matrices at all relevant sink nodes have full rank. Instead of estimating the necessary field size a priori, ARCNC operates in a small finite field and adapts to unknown network topologies by locally incrementing the dimensionality of the convolutional code. Because convolutional codes of different constraint lengths can coexist in different portions of the network, reductions in decoding delay and memory overheads can be achieved with ARCNC. We show through analysis that this method performs no worse than random linear network codes in general networks, and can provide significant gains in terms of average decoding delay in combination networks.
    Comment: 7 pages, 1 figure, submitted to IEEE ISIT 201
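The adaptive idea, collecting coded packets until the global encoding kernel has full rank rather than sizing the field up front, can be sketched with a block-code analogue over GF(2). This is a simplification of ARCNC's convolutional construction, and the function name is ours:

```python
import random

def receptions_until_full_rank(n, rng):
    """Receive random GF(2) encoding vectors for n source packets and
    count receptions until the collected vectors span rank n. Each
    vector is an n-bit mask; Gaussian elimination is performed
    incrementally via a pivot table keyed by highest set bit."""
    pivot, rank, count = {}, 0, 0
    while rank < n:
        vec = rng.getrandbits(n)  # random local encoding vector
        count += 1
        while vec:
            hb = vec.bit_length() - 1
            if hb not in pivot:
                pivot[hb] = vec   # innovative: extends the span
                rank += 1
                break
            vec ^= pivot[hb]      # reduce against existing pivot
    return count
```

Over GF(2) a received combination fails to be innovative with probability up to 1/2, so a few receptions beyond n are typical; a larger field or, as in ARCNC, a growing constraint length drives this overhead down.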

    A Proposal for Network Coding with the IEEE 802.15.6 Standard

    We examine the Medium Access Control sublayer of the IEEE 802.15.6 Wireless Body Area Network (WBAN) standard and propose minor modifications to the standard so that random linear network coding can be included to help improve the energy efficiency and throughput of WBANs compatible with the standard. Both generation-based and sliding-window approaches are possible, and a group-block acknowledgment scheme can be implemented by modifying block acknowledgment control type frames. Discussions of the potential energy and throughput advantages of network coding are provided.
    Semiconductor Research Corporation. Interconnect Focus Center (Subcontract RA306-S1
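A generation-based encoder of the kind envisioned can be sketched as follows. This is a hypothetical illustration over GF(2); the actual proposal uses random linear network coding carried inside 802.15.6 MAC frame formats, which are not reproduced here.

```python
import random

def encode_generation(packets, n_coded, rng):
    """Emit n_coded random GF(2) combinations of one generation of
    packets (ints standing in for payload bitstrings). Each coded
    packet carries its generation-local coefficient vector so the
    receiving hub can decode once it collects a full-rank set."""
    coded = []
    for _ in range(n_coded):
        coeff = rng.getrandbits(len(packets))
        payload = 0
        for i, p in enumerate(packets):
            if (coeff >> i) & 1:
                payload ^= p        # XOR = addition over GF(2)
        coded.append((coeff, payload))
    return coded
```

A group-block acknowledgment then only needs to report, per generation, how many innovative packets (the accumulated rank) have arrived, rather than acknowledging individual frames, which is where the energy saving over per-frame acknowledgment comes from.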

    Network Coding for Multi-Resolution Multicast

    Multi-resolution codes enable multicast at different rates to different receivers, a setup that is often desirable for graphics or video streaming. We propose a simple, distributed, two-stage message passing algorithm to generate network codes for single-source multicast of multi-resolution codes. The goal of this "pushback algorithm" is to maximize the total rate achieved by all receivers, while guaranteeing decodability of the base layer at each receiver. By conducting pushback and code generation stages, this algorithm takes advantage of inter-layer as well as intra-layer coding. Numerical simulations show that in terms of total rate achieved, the pushback algorithm outperforms routing and intra-layer coding schemes, even with codeword sizes as small as 10 bits. In addition, the performance gap widens as the number of receivers and the number of nodes in the network increase. We also observe that naive inter-layer coding schemes may perform worse than intra-layer schemes under certain network conditions.
    Comment: 9 pages, 16 figures, submitted to IEEE INFOCOM 201
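The rate-allocation objective can be illustrated on a toy tree network. This sketches only the objective, not the two-stage pushback algorithm itself, and the topology and names are assumed for illustration: each receiver can decode at most as many layers as the bottleneck capacity, in layers, on its path from the source, and the base layer must reach every receiver.

```python
def achievable_layers(paths, cap):
    """paths: receiver -> list of edge names from source to receiver.
    cap: edge name -> capacity in layers. Each receiver's rate is
    capped by the bottleneck edge on its path."""
    return {r: min(cap[e] for e in edges) for r, edges in paths.items()}

# Assumed toy topology: source s, intermediate node a, receivers r1, r2.
paths = {"r1": ["s-a", "a-r1"], "r2": ["s-a", "a-r2"]}
cap = {"s-a": 3, "a-r1": 1, "a-r2": 2}
rates = achievable_layers(paths, cap)
total_rate = sum(rates.values())               # objective to maximize
base_ok = all(v >= 1 for v in rates.values())  # base-layer guarantee
```

Here r1's weak edge limits it to the base layer while r2 decodes two layers; the pushback and code-generation stages in the paper are what realize such allocations with inter- and intra-layer network codes on general graphs.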

    On-Chip Interconnects of RFICs


    Systematic Network Coding with the Aid of a Full-Duplex Relay

    A characterization of systematic network coding over multi-hop wireless networks is key towards understanding the trade-off between complexity and delay performance of networks that preserve the systematic structure. This paper studies the case of a relay channel, where the source's objective is to deliver a given number of data packets to a receiver with the aid of a relay. The source broadcasts to both the receiver and the relay using one frequency, while the relay uses another frequency for transmissions to the receiver, allowing for full-duplex operation of the relay. We analyze the decoding complexity and delay performance of two types of relays: one that preserves the systematic structure of the code from the source, and another that does not. A systematic relay forwards uncoded packets upon reception, but transmits coded packets to the receiver after receiving the first coded packet from the source. On the other hand, a non-systematic relay always transmits linear combinations of previously received packets. We compare the performance of these two alternatives by analytically characterizing the expected transmission completion time as well as the number of uncoded packets forwarded by the relay. Our numerical results show that, for a poor channel between the source and the receiver, preserving the systematic structure at the relay (i) allows a significant increase in the number of uncoded packets received by the receiver, thus reducing the decoding complexity, and (ii) preserves close-to-optimal delay performance.
    Comment: 6 pages, 5 figures, submitted to IEEE Globeco
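Under a simple independence assumption (a toy model of ours, not the paper's exact analysis), the expected number of packets the receiver obtains uncoded during the systematic phase, with the full-duplex systematic relay forwarding each uncoded packet it hears on its own frequency, is:

```python
def expected_uncoded(N, e_sd, e_sr, e_rd):
    """Expected count of the N data packets reaching the receiver
    uncoded. A packet arrives uncoded either directly from the source
    (prob 1 - e_sd) or, failing that, via the relay, which must first
    hear it (prob 1 - e_sr) and then deliver it (prob 1 - e_rd).
    All erasure events are assumed independent."""
    p_via_relay = (1.0 - e_sr) * (1.0 - e_rd)
    p_uncoded = 1.0 - e_sd * (1.0 - p_via_relay)
    return N * p_uncoded
```

For a poor source-receiver channel (e_sd large) with good relay links, the relay term dominates and most packets still arrive uncoded, which is the reduced-decoding-complexity benefit the abstract describes.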