
    The Isotropic and Anisotropic Structure of Antarctica from Seismic Inversion

    This dissertation utilizes multiple earthquake tomography techniques to investigate the seismic structure of the crust and mantle beneath Antarctica. The isotropic structure of the Antarctic crust and upper mantle is now constrained by several seismic studies, but until now few studies have investigated the anisotropic structure. In this study, I therefore focus on the anisotropic structure, which is crucial to understanding current and historic deformation and flow patterns, as well as the orogenic processes of Antarctica. I use data from seismic stations deployed on the Antarctic continent during the past 20 years, as well as from other stations in the Southern Hemisphere. The first project focuses on radial anisotropy, inverting Rayleigh and Love wave phase and group velocities from ambient noise cross-correlation to develop a new radially anisotropic velocity model for West and Central Antarctica with improved shallow crustal resolution. Group and phase velocity maps for Rayleigh and Love waves are estimated and inverted for shear wave velocity structure using a Monte Carlo method. The shallow structure is better resolved by including the Love wave data, allowing me to construct the first continental-scale sediment thickness map for Antarctica. The radial anisotropy results indicate a history of substantial deformation in both the crust and mantle of West Antarctica, as well as potential vertical compositional heterogeneity in the crust.

    To better understand the anisotropic structure, the second project focuses on azimuthal anisotropy. Rayleigh wave data from ambient noise cross-correlation are first analyzed using anisotropic phase velocity tomography at periods of 8-55 s. These results are then inverted for two azimuthally anisotropic layers, one in the shallow crust and the other in the uppermost mantle. Azimuthal anisotropy is widespread in the shallow crust of West Antarctica and is caused by the lattice-preferred orientation of crustal minerals rather than by the shape-preferred orientation caused by cracks and faults. The uppermost-mantle azimuthal anisotropy is similar to teleseismic shear wave splitting measurements in much of West Antarctica, showing that the lithosphere and asthenosphere have undergone similar deformation. However, other regions, particularly in East Antarctica, show differences between the uppermost-mantle azimuthal anisotropy from this study and shear wave splitting observations, which sample a much larger depth range, suggesting that the shallow lithospheric mantle has a different anisotropy orientation from the mantle below.

    The adjoint tomographic inversion method, using the spectral element solver SPECFEM3D, has previously been used to produce a high-resolution isotropic tomographic model for Antarctica and the nearby ocean basins. In the third project, I significantly improve this model by taking advantage of the fact that waveform differences between two nearby seismic stations recording the same distant earthquake must be caused by structure near the stations rather than along the entire wave path. I use double-difference measurements of the earthquake data, along with double-difference kernels produced using adjoint methods, to better resolve the structure beneath the Antarctic continent. The radial anisotropy structure, in particular, is significantly improved and shows strong positive radial anisotropy beneath the Southern Transantarctic Mountains and the Ellsworth Mountains, most likely due to lattice-preferred orientation of mantle minerals by horizontal deformation. The results also indicate a transition from positive radial anisotropy in the uppermost mantle to low-amplitude and possibly negative anisotropy at 150-250 km depth in the Antarctic mantle, a pattern also observed beneath other major continents.
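The double-difference idea used in the third project can be illustrated with a short sketch. The snippet below is a minimal, hypothetical version of the measurement step only (cross-correlating observed and synthetic waveform pairs from two nearby stations); the function names, sign convention, and preprocessing are assumptions, not the dissertation's actual SPECFEM3D/adjoint workflow.

```python
import numpy as np

def relative_shift(a, b, dt):
    """Time shift (s) between equally sampled waveforms a and b, taken from the
    cross-correlation maximum (sign convention is illustrative)."""
    n = len(a)
    corr = np.correlate(a, b, mode="full")        # lags from -(n-1) to +(n-1) samples
    return (np.argmax(corr) - (n - 1)) * dt

def double_difference(obs_i, obs_j, syn_i, syn_j, dt):
    """Observed minus synthetic differential delay for a station pair (i, j).

    Because both stations record the same distant earthquake, this quantity is
    mostly sensitive to structure near the stations, not the whole wave path.
    """
    return relative_shift(obs_i, obs_j, dt) - relative_shift(syn_i, syn_j, dt)
```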

    The SJTU System for Short-duration Speaker Verification Challenge 2021

    This paper presents the SJTU system for both the text-dependent and text-independent tasks of the Short-duration Speaker Verification (SdSV) Challenge 2021. In this challenge, we explored different strong embedding extractors to extract robust speaker embeddings. For the text-independent task, language-dependent adaptive s-norm is explored to improve system performance under the cross-lingual verification condition. For the text-dependent task, we mainly focus on in-domain fine-tuning strategies based on a model pre-trained on large-scale out-of-domain data. To improve the discrimination between different speakers uttering the same phrase, we propose several novel phrase-aware fine-tuning strategies and a phrase-aware neural PLDA. With these strategies, system performance is further improved. Finally, we fused the scores of the different systems; our fusion systems achieved 0.0473 on Task 1 (rank 3) and 0.0581 on Task 2 (rank 8) on the primary evaluation metric. Comment: Published at Interspeech 2021
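The adaptive s-norm mentioned for the text-independent task can be sketched as follows. This is a generic adaptive score-normalization (AS-norm) outline, not the paper's exact configuration: the cohort size, the way the language-matched cohort is selected, and all names are illustrative assumptions.

```python
import numpy as np

def adaptive_snorm(score, enroll_cohort_scores, test_cohort_scores, top_k=200):
    """Adaptive symmetric score normalization for one trial.

    score                : raw trial score s(enroll, test)
    enroll_cohort_scores : scores of the enrollment utterance against a cohort
    test_cohort_scores   : scores of the test utterance against the same cohort
    top_k                : number of highest-scoring cohort entries used for the
                           normalization statistics (illustrative value)
    """
    e_top = np.sort(enroll_cohort_scores)[-top_k:]
    t_top = np.sort(test_cohort_scores)[-top_k:]
    z = (score - e_top.mean()) / e_top.std()
    t = (score - t_top.mean()) / t_top.std()
    return 0.5 * (z + t)

# "Language-dependent" variant (an assumption based on the abstract): restrict the
# cohort to utterances in the same language as the trial before computing the
# statistics above.
```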

    Analysis of the early response to chemotherapy in lung cancer using apparent diffusion coefficient single-slice histogram

    Purpose: To evaluate the application of apparent diffusion coefficient (ADC) values derived from diffusion-weighted imaging (DWI), analyzed with a single-slice histogram approach, for studying chemotherapy response in lung cancer.
    Methods: A total of 22 chemotherapy patients with advanced lung cancer from the Nanjing Drum Tower Hospital (Nanjing, China) were included in the study. We obtained DWI before and during chemotherapy, performed single-slice histogram analysis of ADC values, and assessed responses after 3 months of chemotherapy. Differences in ADC histogram parameters were compared between the responder and non-responder groups.
    Results: After therapy, 13 patients were classified as responders and 9 as non-responders. Baseline values of the peak ADC (ADCpeak) and lowest ADC (ADClowest) did not differ significantly between responders and non-responders. After chemotherapy, the 13 responders showed significant increases in ADClowest and ADCpeak compared with pre-treatment values (p < 0.001). In the 9 non-responders, ADClowest increased significantly (p < 0.05), whereas ADCpeak did not. The change in ADCpeak was significantly larger in the responder group than in the non-responder group (p = 0.024). The change in ADClowest after treatment was also larger in the responder group, though not significantly.
    Conclusion: ADC values derived from single-slice histogram analysis may provide a useful and clinically feasible method for monitoring early chemotherapy response in patients with lung cancer.
    Keywords: Lung cancer, Chemotherapy, Apparent diffusion coefficient values, Diffusion-weighted imaging, Single-slice histogram analysis
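As a rough illustration of how single-slice histogram parameters such as ADClowest and ADCpeak could be extracted from one ADC map slice, the sketch below uses a tumor ROI mask and takes the histogram mode as the peak value. The function, the bin count, and the definition of ADCpeak as the most frequent bin are assumptions for illustration, not the study's measurement protocol.

```python
import numpy as np

def adc_histogram_params(adc_slice, roi_mask, bins=64):
    """Simple single-slice ADC histogram summary for a tumor ROI.

    adc_slice : 2-D array of ADC values (e.g., in 10^-3 mm^2/s) for one slice
    roi_mask  : boolean array of the same shape marking the tumor ROI
    Returns the lowest ADC value and the histogram peak (mode) within the ROI.
    """
    vals = adc_slice[roi_mask]
    counts, edges = np.histogram(vals, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    adc_lowest = vals.min()
    adc_peak = centers[np.argmax(counts)]   # ADC value of the most frequent bin
    return adc_lowest, adc_peak
```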

    A Bias Correction Method in Meta-analysis of Randomized Clinical Trials with no Adjustments for Zero-inflated Outcomes

    Many clinical endpoint measures, such as the number of standard drinks consumed per week or the number of days a patient stays in the hospital, are count data with excessive zeros. However, the zero-inflated nature of such outcomes is often ignored in analyses, which leads to biased estimates and, consequently, a biased estimate of the overall intervention effect in a meta-analysis. The current study proposes a novel statistical approach, the Zero-inflation Bias Correction (ZIBC) method, which accounts for the bias introduced when a Poisson regression model is used despite a high rate of zeros in the outcome distribution of a randomized clinical trial. The correction uses summary information from individual studies to adjust intervention effect estimates as if they had been estimated with zero-inflated Poisson regression models. Simulation studies and real data analyses show that the ZIBC method performs well in correcting zero-inflation bias in many situations. The method provides a methodological solution for improving the accuracy of meta-analysis results, which is important for evidence-based medicine.
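The bias the ZIBC method targets can be seen in a small simulation: when outcomes are zero-inflated, a naive Poisson analysis estimates the marginal-mean ratio rather than the count-part rate ratio. The sketch below is only a hypothetical illustration of that discrepancy; it does not reproduce the ZIBC correction itself, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

def zip_sample(pi_zero, lam, size):
    """Zero-inflated Poisson draws: structural zero with prob pi_zero, else Poisson(lam)."""
    structural_zero = rng.random(size) < pi_zero
    return np.where(structural_zero, 0, rng.poisson(lam, size))

# True count-part effect: treatment halves the Poisson rate (log rate ratio = log 0.5),
# while the zero-inflation probabilities differ between arms (illustrative values).
y_ctrl  = zip_sample(pi_zero=0.30, lam=4.0, size=n)
y_treat = zip_sample(pi_zero=0.45, lam=2.0, size=n)

# For a two-arm comparison, the Poisson-regression MLE of the group effect equals the
# log ratio of the arm means, so ignoring zero-inflation recovers the marginal-mean
# ratio rather than the count-part rate ratio.
naive_log_rr = np.log(y_treat.mean() / y_ctrl.mean())
true_count_part_log_rr = np.log(2.0 / 4.0)

print(f"naive Poisson log rate ratio : {naive_log_rr:.3f}")
print(f"count-part log rate ratio    : {true_count_part_log_rr:.3f}")
```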

    Do impulsivity and biological sex moderate associations between alcohol-related sexual willingness and behavior among young adults?

    This study examined three-way interactions between baseline levels of willingness to engage in alcohol-related sexual behaviors, facets of impulsivity (i.e., urgency, lack of premeditation, and sensation seeking), and biological sex in predicting alcohol-related sexual behaviors 6 months later. Participants were a sample of high-risk 18-25 year olds (N = 321, mean age 22.44) from a larger randomized controlled trial whose eligibility criteria included engaging in unprotected sexual behavior after drinking alcohol within the past month at baseline. Results indicated that females reporting high levels of urgency and willingness were the most likely to engage in alcohol-related sex and to use a condom/dental dam after drinking. Males reporting low urgency and high sensation seeking and willingness engaged in more alcohol-related sex than females. Interventions to decrease alcohol-related sexual behavior by reducing willingness could incorporate sex-specific and impulsivity-related content, particularly related to urgency.
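For readers unfamiliar with how a three-way moderation analysis of this kind is typically specified, a minimal sketch follows. The variable names, the data file, and the use of an ordinary least squares model are all assumptions for illustration; the abstract does not state which model the authors fit (a count model may be more appropriate for a behavior-frequency outcome).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data file and column names (not from the study).
df = pd.read_csv("alcohol_sex_followup.csv")

# Three-way interaction: willingness x urgency x biological sex predicting the
# 6-month outcome, adjusting for the baseline level of the same behavior.
model = smf.ols(
    "alc_sex_6mo ~ willingness * urgency * sex + alc_sex_baseline",
    data=df,
).fit()
print(model.summary())
```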

    RiskOracle: A Minute-level Citywide Traffic Accident Forecasting Framework

    Real-time traffic accident forecasting is increasingly important for public safety and urban management (e.g., real-time safe route planning and emergency response deployment). Previous works on accident forecasting are often performed at the hour level, utilizing existing neural networks with static region-wise correlations taken into account. However, forecasting becomes more challenging as the granularity of the forecasting step increases, owing to the highly dynamic nature of the road network and the inherent rareness of accident records in a single training sample, which leads to biased results and a zero-inflation issue. In this work, we propose a novel framework, RiskOracle, to improve the prediction granularity to the minute level. Specifically, we first transform the zero-risk values in the labels to better fit the training network. Then, we propose the Differential Time-varying Graph neural network (DTGN) to capture immediate changes in traffic status and dynamic inter-subregion correlations. Furthermore, we adopt multi-task and region-selection schemes to highlight the most likely accident subregions citywide, bridging the gap between biased risk values and the sporadic accident distribution. Extensive experiments on two real-world datasets demonstrate the effectiveness and scalability of our RiskOracle framework. Comment: 8 pages, 4 figures. Conference paper accepted by AAAI 2020
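Two of the ingredients named in the abstract, differential (minute-over-minute) traffic status and dynamic inter-subregion correlations, can be sketched in a few lines. This is a schematic illustration only, not the actual DTGN architecture; the similarity-based re-weighting and all array shapes are assumptions.

```python
import numpy as np

def differential_status(traffic):
    """traffic: (T, N, F) minute-level features for N subregions -> first differences."""
    return np.diff(traffic, axis=0)                  # (T-1, N, F)

def dynamic_adjacency(static_adj, status_t):
    """Re-weight a static road-network adjacency by current feature similarity."""
    sim = status_t @ status_t.T                      # (N, N) similarity at time t
    adj = static_adj * np.maximum(sim, 0.0)
    row_sum = adj.sum(axis=1, keepdims=True) + 1e-8
    return adj / row_sum                             # row-normalized

def propagate(adj_t, status_t):
    """One-hop aggregation of subregion features over the dynamic graph."""
    return adj_t @ status_t
```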

    A Simulation Study of the Performance of Statistical Models for Count Outcomes with Excessive Zeros

    Background: Outcome measures that are count variables with excessive zeros are common in health behaviors research. There is a lack of empirical data about the relative performance of prevailing statistical models when outcomes are zero-inflated, particularly compared with recently developed approaches.
    Methods: The current simulation study examined five commonly used analytical approaches for count outcomes, including two linear models (with outcomes on the raw and log-transformed scales, respectively) and three count-distribution-based models (i.e., the Poisson, negative binomial, and zero-inflated Poisson (ZIP) models). We also considered the marginalized zero-inflated Poisson (MZIP) model, a novel alternative that estimates the effects on the overall mean while adjusting for zero-inflation. Extensive simulations were conducted to evaluate their statistical power and Type I error rate across various data conditions.
    Results: Under zero-inflation, the Poisson model failed to control the Type I error rate, resulting in more false positive results than expected. When the intervention effects on the zero (vs. non-zero) and count parts were in the same direction, the MZIP model had the highest statistical power, followed by the linear model with outcomes on the raw scale, the negative binomial model, and the ZIP model. The performance of the linear model with a log-transformed outcome variable was unsatisfactory. When an intervention effect existed on only one of the zero (vs. non-zero) part and the count part, the ZIP model had the highest statistical power.
    Conclusions: The MZIP model demonstrated better statistical properties for detecting true intervention effects and controlling false positive results for zero-inflated count outcomes. The MZIP model may serve as an appealing analytical approach for evaluating overall intervention effects in studies with count outcomes marked by excessive zeros.
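One cell of such a simulation, the Type I error of a naive Poisson analysis under zero-inflation with no true effect, can be reproduced in miniature as below. The settings (sample size, zero-inflation probability, number of replications) are illustrative and not the paper's design.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_per_arm, n_reps, alpha = 100, 500, 0.05

def zip_draw(pi_zero, lam, size):
    """Zero-inflated Poisson draws for one simulated sample."""
    return np.where(rng.random(size) < pi_zero, 0, rng.poisson(lam, size))

rejections = 0
for _ in range(n_reps):
    group = np.repeat([0, 1], n_per_arm)
    y = zip_draw(pi_zero=0.4, lam=3.0, size=2 * n_per_arm)   # same distribution in both arms
    X = sm.add_constant(group)
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    rejections += fit.pvalues[1] < alpha                      # Wald test on the group effect

# Overdispersion from zero-inflation shrinks the Poisson standard errors,
# so the empirical rate typically exceeds the nominal 0.05 level.
print(f"empirical Type I error of naive Poisson: {rejections / n_reps:.3f}")
```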

    Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation

    When using machine learning (ML) to aid decision-making, it is critical to ensure that an algorithmic decision is fair, i.e., that it does not discriminate against specific individuals or groups, particularly those from underprivileged populations. Existing group fairness methods require equal group-wise measures, which, however, fail to consider systematic between-group differences. Confounding factors, which are non-sensitive variables that manifest systematic differences, can significantly affect fairness evaluation. To mitigate this problem, we argue that a fairness measurement should be based on the comparison between counterparts (i.e., individuals who are similar to each other with respect to the task of interest) from different groups, whose group identities cannot be distinguished algorithmically from the confounding factors. We have developed a propensity-score-based method for identifying counterparts, which prevents fairness evaluation from comparing "oranges" with "apples". In addition, we propose a counterpart-based statistical fairness index, termed Counterpart Fairness (CFair), to assess the fairness of ML models. Empirical studies on the Medical Information Mart for Intensive Care (MIMIC)-IV database were conducted to validate the effectiveness of CFair. We publish our code at https://github.com/zhengyjo/CFair. Comment: 18 pages, 5 figures, 5 tables
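A bare-bones version of propensity-score-based counterpart identification might look like the following. This is a generic nearest-neighbor propensity-score matching sketch under assumed inputs, not the paper's exact procedure or the code released at the linked repository.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_counterparts(X_confounders, group, caliper=0.05):
    """Pair each member of group 1 with the nearest propensity-score neighbor in group 0.

    X_confounders : (n, d) non-sensitive confounding variables
    group         : (n,) binary group membership
    caliper       : maximum allowed propensity-score distance for a valid pair
    """
    ps = LogisticRegression(max_iter=1000).fit(X_confounders, group).predict_proba(X_confounders)[:, 1]
    idx0, idx1 = np.where(group == 0)[0], np.where(group == 1)[0]
    pairs = []
    for i in idx1:
        j = idx0[np.argmin(np.abs(ps[idx0] - ps[i]))]   # nearest neighbor in group 0
        if abs(ps[j] - ps[i]) <= caliper:
            pairs.append((i, j))
    return pairs  # counterpart pairs on which a fairness metric can then be compared
```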

    ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization

    Due to its simplicity and outstanding ability to generalize, stochastic gradient descent (SGD) is still the most widely used optimization method despite its slow convergence. Meanwhile, adaptive methods have attracted rising attention from the optimization and machine learning communities, both for their leverage of life-long information and for their profound and fundamental mathematical theory. Taking the best of both worlds is the most exciting and challenging question in the field of optimization for machine learning. Along this line, we revisit existing adaptive gradient methods from a novel perspective, refreshing the understanding of second moments. This new perspective empowers us to attach the properties of second moments to the first-moment iteration and to propose a novel first-moment optimizer, the Angle-Calibrated Moment method (ACMo). Our theoretical results show that ACMo achieves the same convergence rate as mainstream adaptive methods. Furthermore, extensive experiments on CV and NLP tasks demonstrate that ACMo converges comparably to state-of-the-art Adam-type optimizers and achieves better generalization performance in most cases. Comment: 25 pages, 4 figures
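To make the first-moment vs. second-moment distinction in the abstract concrete, the sketch below shows the two standard baseline updates it refers to: SGD with momentum (first moment only) and an Adam-style step (first and second moments). It is explicitly not the ACMo update rule, which the abstract does not specify.

```python
import numpy as np

def sgd_momentum_step(w, g, m, lr=0.1, beta=0.9):
    """First moment only: accumulate a momentum buffer and step along it."""
    m = beta * m + g
    return w - lr * m, m

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """First and second moments: the second moment rescales the step size.

    t is the 1-based step count used for bias correction.
    """
    m = b1 * m + (1 - b1) * g        # first moment
    v = b2 * v + (1 - b2) * g**2     # second moment
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```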