
    Experimental investigation of combined heat and power capacitive deionization

    Capacitive deionization (CDI) is a novel separation-based technology for desalinating brackish water. Under an applied voltage difference, salt ions are removed from the water and temporarily stored in porous carbon electrodes; once the adsorption capacity is reached, the ions are released to regenerate the electrodes by reducing or reversing the electrical polarization. CDI is a promising alternative to state-of-the-art desalination methods owing to its low energy consumption and cost. Desalination performance can be improved by adding ion-exchange membranes to form a membrane capacitive deionization (MCDI) cell. The primary aim of this thesis is to evaluate CDI from thermodynamic and experimental perspectives and to investigate operating conditions that may improve desalination performance and energy efficiency. Building on theoretical modeling and experimental investigations, a heat-combined MCDI system that actively harvests waste heat is explored as a means of improving deionization cycle efficiency. In addition, thermal and salinity effects on CDI systems are addressed through electrochemical characterization and salt-removal performance. (M.S. thesis)
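
    The "thermodynamic perspective" invoked above is commonly anchored to the ideal-solution minimum work of separation. As a hedged textbook sketch (not a result from this thesis): splitting a feed of salt concentration c_f into a diluate (concentration c_d, volume V_d) and a brine (c_b, V_b) requires at least

        W_{min} = i R T \left[ V_d c_d \ln(c_d/c_f) + V_b c_b \ln(c_b/c_f) \right]

    where i is the van 't Hoff factor (close to 2 for fully dissociated NaCl), R is the gas constant, and T is the absolute temperature. Any real (M)CDI cycle consumes more than W_min, and the ratio of W_min to the actual work input is one common definition of the thermodynamic cycle efficiency such studies evaluate.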

    rPPG-Toolbox: Deep Remote PPG Toolbox

    Camera-based physiological measurement is a fast-growing field of computer vision. Remote photoplethysmography (rPPG) utilizes imaging devices (e.g., cameras) to measure the peripheral blood volume pulse (BVP) via photoplethysmography, and enables cardiac measurement via webcams and smartphones. However, the task is non-trivial, with important pre-processing, modeling, and post-processing steps required to obtain state-of-the-art results. Replication of results and benchmarking of new models are critical for scientific progress; however, as with many other applications of deep learning, reliable codebases are not easy to find or use. We present a comprehensive toolbox, rPPG-Toolbox, that contains unsupervised and supervised rPPG models with support for public benchmark datasets, data augmentation, and systematic evaluation: https://github.com/ubicomplab/rPPG-Toolbox
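
    The pre/post-processing pipeline the abstract refers to can be illustrated with a minimal unsupervised baseline: spatially average the green channel of a face region, band-pass to plausible pulse frequencies, and read the heart rate off the dominant spectral peak. This is a generic sketch, not the toolbox's API; all function and variable names here are illustrative.

        # Minimal unsupervised rPPG baseline sketch (illustrative, not the
        # toolbox's API): green-channel averaging + band-pass + spectral peak.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def estimate_heart_rate(frames: np.ndarray, fps: float) -> float:
            """frames: (T, H, W, 3) face-ROI video; returns heart rate in bpm."""
            # Spatially average the green channel -> raw 1-D pulse signal.
            g = frames[..., 1].astype(np.float64).mean(axis=(1, 2))
            g = (g - g.mean()) / (g.std() + 1e-8)
            # Band-pass to plausible heart rates: 0.7-4.0 Hz (42-240 bpm).
            b, a = butter(2, [0.7, 4.0], btype="bandpass", fs=fps)
            pulse = filtfilt(b, a, g)
            # Dominant in-band spectral frequency -> beats per minute.
            freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fps)
            power = np.abs(np.fft.rfft(pulse)) ** 2
            band = (freqs >= 0.7) & (freqs <= 4.0)
            return float(freqs[band][np.argmax(power[band])] * 60.0)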

    Influence of the Arctic Oscillation on the Vertical Distribution of Wintertime Ozone in the Stratosphere and Upper Troposphere over Northern Hemisphere

    The influence of the Arctic Oscillation (AO) on the vertical distribution of stratospheric ozone in the Northern Hemisphere in winter is analyzed using observations and an offline chemical transport model. Positive ozone anomalies are found at low latitudes (0–30°N) and there are three negative anomaly centers in the northern mid- and high latitudes during positive AO phases. The negative anomalies are located in the Arctic middle stratosphere (~30 hPa, 70–90°N), Arctic upper troposphere/lower stratosphere (UTLS, 150–300 hPa, 70–90°N), and mid-latitude UTLS (70–300 hPa, 30–60°N). Further analysis shows that anomalous dynamical transport related to AO variability primarily controls these ozone changes. During positive AO events, positive ozone anomalies between 0–30°N at 50–150 hPa are related to the weakened meridional transport of the Brewer–Dobson circulation (BDC) and enhanced eddy transport. The negative ozone anomalies in the Arctic middle stratosphere are also caused by the weakened BDC, while the negative ozone anomalies in the Arctic UTLS are caused by the increased tropopause height, weakened BDC vertical transport, weaker exchange between the mid-latitudes and the Arctic, and enhanced ozone depletion via heterogeneous chemistry. The negative ozone anomalies in the mid-latitude UTLS are due mainly to enhanced eddy transport from the mid-latitudes to regions equatorward of 30°N, while the transport of ozone-poor air from the Arctic to the mid-latitudes makes a minor contribution. Interpreting AO-related variability of stratospheric ozone, especially in the UTLS, would be helpful for the prediction of tropospheric ozone variability caused by the AO.
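
    The anomaly attribution described above rests on relating gridded ozone anomalies to the AO index. A minimal sketch of that kind of regression analysis (array shapes, names, and the regression approach are assumptions for illustration, not the authors' code):

        # Regress monthly ozone anomalies at every (level, latitude) grid
        # point onto a standardized AO index; the slope field gives the
        # ozone change per one-sigma swing of the AO.
        import numpy as np

        def ao_regression(ozone_anom: np.ndarray, ao_index: np.ndarray) -> np.ndarray:
            """ozone_anom: (T, nlev, nlat) anomalies; ao_index: (T,).
            Returns (nlev, nlat) regression slopes."""
            ao = (ao_index - ao_index.mean()) / ao_index.std()
            y = ozone_anom - ozone_anom.mean(axis=0)
            # slope = sum_t(ao_t * y_t) / sum_t(ao_t^2), and sum_t(ao_t^2) = T
            # here because ao is standardized with the population std.
            return np.tensordot(ao, y, axes=(0, 0)) / len(ao)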

    MimicPlay: Long-Horizon Imitation Learning by Watching Human Play

    Imitation learning from human demonstrations is a promising paradigm for teaching robots manipulation skills in the real world. However, learning complex long-horizon tasks often requires an unattainable amount of demonstrations. To reduce the high data requirement, we resort to human play data - video sequences of people freely interacting with the environment using their hands. Even with different morphologies, we hypothesize that human play data contain rich and salient information about physical interactions that can readily facilitate robot policy learning. Motivated by this, we introduce a hierarchical learning framework named MimicPlay that learns latent plans from human play data to guide low-level visuomotor control trained on a small number of teleoperated demonstrations. With systematic evaluations of 14 long-horizon manipulation tasks in the real world, we show that MimicPlay outperforms state-of-the-art imitation learning methods in task success rate, generalization ability, and robustness to disturbances. Code and videos are available at https://mimic-play.github.io. Comment: 7th Conference on Robot Learning (CoRL 2023), oral presentation.
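
    The hierarchy the abstract describes splits into two trainable pieces: a high-level planner that turns human-play video into a latent plan, and a low-level visuomotor policy conditioned on that plan plus robot state. The following is a schematic sketch of that split; module names, dimensions, and architecture choices are assumptions, not the authors' implementation.

        # Schematic two-level design: planner (from play video features) and
        # plan-conditioned low-level policy (trained on teleoperated demos).
        import torch
        import torch.nn as nn

        class LatentPlanner(nn.Module):
            """High-level: human-play video features -> latent plan sequence."""
            def __init__(self, feat_dim=512, plan_dim=64):
                super().__init__()
                self.encoder = nn.GRU(feat_dim, 256, batch_first=True)
                self.head = nn.Linear(256, plan_dim)

            def forward(self, video_feats):       # (B, T, feat_dim)
                h, _ = self.encoder(video_feats)
                return self.head(h)               # (B, T, plan_dim) latent plan

        class VisuomotorPolicy(nn.Module):
            """Low-level: latent plan + proprioceptive state -> action."""
            def __init__(self, plan_dim=64, state_dim=16, action_dim=7):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(plan_dim + state_dim, 256), nn.ReLU(),
                    nn.Linear(256, action_dim))

            def forward(self, plan_t, state_t):   # (B, plan_dim), (B, state_dim)
                return self.net(torch.cat([plan_t, state_t], dim=-1))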

    A high triglyceride glucose index is associated with early renal impairment in the hypertensive patients

    Objective: Serum β2-microglobulin (β2-MG) and serum cystatin C (CysC) are sensitive and reliable indicators of early renal impairment. The triglyceride glucose (TyG) index is an emerging indicator of insulin resistance and is associated with an increased risk of hypertension. We aimed to analyze the relationship between the TyG index and early renal impairment in hypertensive patients. Methods: A retrospective analysis was performed on 881 hypertensive patients treated at Qinghai Provincial People's Hospital from March 2018 to March 2021; their clinical data and corresponding laboratory values were recorded, and the TyG index was calculated. By TyG tertile, patients were divided into low-TyG (L-TyG; TyG ≤ 8.50, n=306), medium-TyG (M-TyG; 8.51 ≤ TyG ≤ 8.94, n=281), and high-TyG (H-TyG; TyG > 8.95, n=294) groups. Then, according to serum β2-MG and CysC levels, they were divided into a normal renal function group (β2-MG ≤ 2.4 mg/L, n=700; CysC ≤ 1.25 mg/L, n=721) and a renal function injury group (β2-MG > 2.4 mg/L, n=181; CysC > 1.25 mg/L, n=160). Multivariate linear regression was used to identify factors influencing serum β2-MG and CysC, multivariate logistic regression was used to analyze the relationship between the TyG index and early renal impairment, and receiver operating characteristic (ROC) curves were used to assess the value of the TyG index in predicting early renal impairment in hypertensive patients. Results: As the TyG index increased, serum β2-MG and CysC levels also gradually increased. Multivariate linear regression showed that the TyG index influenced serum β2-MG (B=0.060, P=0.007) and serum CysC (B=0.096, P<0.001): for every 1-standard-deviation increase in the TyG index, serum β2-MG and CysC increased by 0.06 mg/L and 0.096 mg/L, respectively. Compared with the normal group, the TyG level was higher in the renal impairment group defined by β2-MG > 2.4 mg/L (8.91 ± 0.65 vs 8.64 ± 0.60, P<0.001). Multivariate logistic regression revealed that for every 1-standard-deviation increase in the TyG index, the risk of early renal impairment in hypertensive patients increased 1.53-fold (OR=1.53, 95% CI 1.006-2.303). ROC curves showed that the TyG index was not superior to TG in predicting early renal impairment; the AUC values were 0.623 and 0.617, respectively. When CysC > 1.25 mg/L defined the renal damage group, the TyG level was again higher than in the normal group (8.94 ± 0.67 vs 8.64 ± 0.60, P<0.001), and multivariate logistic regression showed that for every 1-standard-deviation increase in the TyG index, the risk of early renal impairment increased 2.82-fold (OR=2.82, 95% CI 1.863-4.262). ROC curves again showed that the TyG index was not superior to TG; the AUC values were 0.629 and 0.626, respectively. Conclusion: The TyG index influences serum β2-MG and CysC levels. Elevated TyG index levels are closely associated with the occurrence and development of early renal impairment in hypertensive patients, but the index should be used cautiously for predicting early renal impairment.
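
    For reference, the TyG index is conventionally computed as ln(fasting triglycerides [mg/dL] × fasting plasma glucose [mg/dL] / 2); the abstract does not restate the formula, so this sketch assumes that standard definition, with the tertile cutoffs taken from the study:

        import math

        def tyg_index(tg_mg_dl: float, glucose_mg_dl: float) -> float:
            # Standard definition: ln(TG [mg/dL] * fasting glucose [mg/dL] / 2)
            return math.log(tg_mg_dl * glucose_mg_dl / 2.0)

        def tyg_group(tyg: float) -> str:
            # Tertile cutoffs as reported in the study above
            if tyg <= 8.50:
                return "L-TyG"
            if tyg <= 8.94:
                return "M-TyG"
            return "H-TyG"

        # Example: TG = 150 mg/dL, fasting glucose = 100 mg/dL
        # tyg_index(150, 100) = ln(7500) ~ 8.92  ->  "M-TyG"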

    Simulations of BEAVRS Benchmark Cycle 2 Depletion with MCS/CTF Coupling System

    The quarter-core simulation of the BEAVRS Cycle 2 depletion benchmark has been conducted using the MCS/CTF coupling system. MCS/CTF is a cycle-wise Picard-iteration-based inner-coupling code system, which embeds the sub-channel thermal/hydraulic (T/H) code CTF as the T/H solver in the Monte Carlo neutron transport code MCS. This coupled code system was previously applied to the BEAVRS benchmark Cycle 1 full-core simulation. The Cycle 2 depletion has been performed with T/H feedback, starting from spent-fuel material compositions pre-generated by the Cycle 1 depletion simulation using the refueling capability of the MCS code. Meanwhile, the MCS internal one-dimensional T/H solver (MCS/TH1D) has also been applied as a reference. This paper presents an analysis of the critical boron concentration and the axially integrated assembly-wise detector signals, compared with measured data under the real operating conditions. Moreover, the MCS/CTF results for neutronics and T/H parameters are compared with MCS/TH1D to quantify their differences, demonstrating the practical applicability of MCS to the BEAVRS two-cycle depletion simulations.
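
    The Picard-iteration coupling named above is a fixed-point alternation between the two solvers: each iterate passes the power distribution to the T/H solve and feeds the resulting temperature/density fields back into the neutronics solve until the power stops changing. A self-contained sketch of the pattern, with toy stand-ins for the physics (the real MCS/CTF interfaces are not reproduced here):

        # Generic Picard coupling loop: alternate T/H and neutronics solves
        # until the power distribution converges between iterations.
        import numpy as np

        def picard_coupling(solve_th, solve_neutronics, p0, tol=1e-4, max_iter=50):
            power = p0
            for _ in range(max_iter):
                th = solve_th(power)              # power -> temperature field
                new_power = solve_neutronics(th)  # temperatures -> power (feedback)
                if np.max(np.abs(new_power - power)) < tol * np.max(np.abs(power)):
                    return new_power, th
                power = new_power
            return power, th

        # Toy stand-ins with Doppler-like negative feedback so the loop converges:
        solve_th = lambda p: 550.0 + 50.0 * p / p.mean()              # node temps (K)
        solve_neutronics = lambda t: np.maximum(1.0 - 2e-4 * (t - 600.0), 0.1)
        power, temps = picard_coupling(solve_th, solve_neutronics,
                                       np.linspace(0.8, 1.2, 10))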

    Large AI Models in Health Informatics: Applications, Challenges, and the Future

    Large AI models, or foundation models, have recently emerged at massive scale in both parameters and data, with magnitudes reaching into the billions. Once pretrained, large AI models demonstrate impressive performance in various downstream tasks. A prime example is ChatGPT, whose capability has captured people's imagination about the far-reaching influence that large AI models can have and their potential to transform different domains of our lives. In health informatics, the advent of large AI models has brought new paradigms for the design of methodologies. The scale of multi-modal data in the biomedical and health domain has been ever-expanding, especially since the community embraced the era of deep learning, which provides the grounds to develop, validate, and advance large AI models for breakthroughs in health-related areas. This article presents a comprehensive review of large AI models, from background to their applications. We identify seven key sectors in which large AI models are applicable and might have substantial influence, including 1) bioinformatics; 2) medical diagnosis; 3) medical imaging; 4) medical informatics; 5) medical education; 6) public health; and 7) medical robotics. We examine their challenges, followed by a critical discussion about potential future directions and pitfalls of large AI models in transforming the field of health informatics. Comment: This article has been accepted for publication in the IEEE Journal of Biomedical and Health Informatics.

    Nanomechanical Resonators: Toward Atomic Scale

    The quest for realizing and manipulating ever smaller man-made movable structures and dynamical machines has spurred tremendous endeavors, led to important discoveries, and inspired researchers to venture to new grounds. Scientific feats and technological milestones of miniaturization of mechanical structures have been widely accomplished by advances in machining and sculpting ever-shrinking features out of bulk materials such as silicon. With the flourishing multidisciplinary field of low-dimensional nanomaterials, including one-dimensional (1D) nanowires/nanotubes and two-dimensional (2D) atomic layers such as graphene/phosphorene, growing interest and sustained effort have been devoted to creating mechanical devices toward the ultimate limit of miniaturization, genuinely down to the molecular or even atomic scale. These ultrasmall movable structures, particularly nanomechanical resonators that exploit the vibratory motion in these 1D and 2D nano-to-atomic-scale structures, offer exceptional device-level attributes, such as ultralow mass, ultrawide frequency tuning range, broad dynamic range, and ultralow power consumption, thus holding strong promise for both fundamental studies and engineering applications. In this Review, we offer a comprehensive overview and summary of this vibrant field, present the state-of-the-art devices and evaluate their specifications and performance, outline important achievements, and postulate future directions for studying these minuscule yet intriguing molecular-scale machines.
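
    The "ultralow mass" advantage the review highlights follows from two standard lumped-model relations (textbook results, not specific to any device discussed here):

        f_0 = \frac{1}{2\pi}\sqrt{\frac{k_\mathrm{eff}}{m_\mathrm{eff}}},
        \qquad
        \delta m \approx -2\, m_\mathrm{eff}\, \frac{\delta f}{f_0}

    For a fixed, resolvable fractional frequency shift δf/f_0, the smallest detectable added mass scales directly with the effective mass m_eff, which is why pushing resonators toward the molecular and atomic scale pushes mass sensing toward single atoms.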