
    A new xinjiangchelyid turtle from the Middle Jurassic of Xinjiang, China and the evolution of the basipterygoid process in Mesozoic turtles

    Background: Most turtles from the Middle and Late Jurassic of Asia are referred to the newly defined clade Xinjiangchelyidae, a group of mostly shell-based, generalized, small to mid-sized aquatic forms that are widely considered to represent the stem lineage of Cryptodira. Xinjiangchelyids provide us with great insights into the plesiomorphic anatomy of crown-cryptodires, the most diverse group of living turtles, and they are particularly relevant for understanding the origin and early divergence of the primary clades of extant turtles. Results: Exceptionally complete new xinjiangchelyid material from the ?Qigu Formation of the Turpan Basin (Xinjiang Autonomous Province, China) provides new insights into the anatomy of this group and is assigned to Xinjiangchelys wusu n. sp. A phylogenetic analysis places Xinjiangchelys wusu n. sp. in a monophyletic polytomy with other xinjiangchelyids, including Xinjiangchelys junggarensis, X. radiplicatoides, X. levensis and X. latiens. However, the analysis supports the unorthodox, though tentative, placement of xinjiangchelyids and sinemydids outside of crown-group Testudines. A particularly interesting new observation is that the skull of this xinjiangchelyid retains such primitive features as a reduced interpterygoid vacuity and basipterygoid processes. Conclusions: The homology of basipterygoid processes is confidently demonstrated based on a comprehensive review of the basicranial anatomy of Mesozoic turtles, and a new nomenclatural system is introduced for the carotid canal system of turtles. The loss of the basipterygoid process and the bony enclosure of the carotid circulation system occurred a number of times independently during turtle evolution, suggesting that the reinforcement of the basicranial region was essential for developing a rigid skull, thus paralleling the evolution of other amniote groups with massive skulls. © 2013 Rabi et al.; licensee BioMed Central Ltd.

    Dimensional Changes in the Skulls of Ancient Children with Age in Xinjiang, China

    Many scholars have conducted research on the growth patterns of children’s skulls in terms of skull size, head circumference, cranial cavity volume, and so forth. This study compared and analyzed 20 skull measurement indexes of different ages from 38 children’s skulls (aged 2–15) and 87 adult female skulls (aged 20–40) at the Zaghunluq cemetery in Xinjiang, China, in an attempt to figure out how the size of ancient children’s skulls changed with age. Analysis of variance (ANOVA) showed that there were significant differences between the six age groups (2 years, 3–5 years, 6–8 years, 9–11 years, 12–15 years, and adults) in terms of metrical cranial traits, cranial area, and cranial cavity volume. The study indicated that the skull kept growing from ages 3 to 5, 12 to 15, and 15 to adulthood, implying that the skull sizes of ancient children in Xinjiang continued to increase with age. In addition, the study revealed that children aged 12 to 15 had skulls that were significantly smaller than those of adults. This finding showed that the skulls of ancient children in Xinjiang were not fully developed at the age of 15. It is also important to note that differences existed between age groups in both the developmental traits of the cranium and the rate at which the skull changed.
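
    The comparison described here reduces to a one-way ANOVA of each cranial measurement across the six age groups. The sketch below shows what such a test could look like in Python; the input file and column names (skull_measurements.csv, age_group, cranial_length_mm) are hypothetical placeholders rather than the study's actual data.

```python
# Minimal sketch of a one-way ANOVA across age groups, as described in the abstract.
# The file name and column names are hypothetical, not taken from the study.
import pandas as pd
from scipy import stats

df = pd.read_csv("skull_measurements.csv")  # hypothetical measurement table

groups = ["2", "3-5", "6-8", "9-11", "12-15", "adult"]
samples = [df.loc[df["age_group"] == g, "cranial_length_mm"].dropna() for g in groups]

# One-way ANOVA: does the mean of this measurement differ across the six age groups?
f_stat, p_value = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```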

    Enhanced Chart Understanding in Vision and Language Task via Cross-modal Pre-training on Plot Table Pairs

    Building cross-modal intelligence that can understand charts and communicate the salient information hidden behind them is an appealing challenge in the vision and language (V+L) community. The capability to uncover the underlying table data of chart figures is critical to automatic chart understanding. We introduce ChartT5, a V+L model that learns how to interpret table information from chart images via cross-modal pre-training on plot table pairs. Specifically, we propose two novel pre-training objectives, Masked Header Prediction (MHP) and Masked Value Prediction (MVP), to equip the model with different skills for interpreting table information. We have conducted extensive experiments on chart question answering and chart summarization to verify the effectiveness of the proposed pre-training strategies. In particular, on the ChartQA benchmark, our ChartT5 outperforms the state-of-the-art non-pretraining methods by over 8%. Comment: Accepted by Findings of ACL 2023.
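
    Both objectives are masked-prediction tasks over the chart's flattened data table: MHP masks header cells and MVP masks value cells, and the model must recover them from the chart image plus the remaining table context. The sketch below illustrates only the masking step, under assumed conventions (the <mask> sentinel, the flattening order, and the 30% masking rate are assumptions, not ChartT5's actual implementation).

```python
import random
from typing import List, Tuple

MASK = "<mask>"  # assumed sentinel token; the real model's vocabulary may differ

def mask_table(headers: List[str], rows: List[List[str]],
               mode: str = "value", p: float = 0.3, seed: int = 0) -> Tuple[List[str], List[str]]:
    """Flatten a plot's underlying table and mask header or value cells.

    mode="header" sketches Masked Header Prediction (MHP);
    mode="value"  sketches Masked Value Prediction (MVP).
    Returns (masked token sequence, masked-out target cells).
    """
    rng = random.Random(seed)
    tokens, targets = [], []
    for cell in headers:
        if mode == "header" and rng.random() < p:
            tokens.append(MASK)
            targets.append(cell)
        else:
            tokens.append(cell)
    for row in rows:
        for cell in row:
            if mode == "value" and rng.random() < p:
                tokens.append(MASK)
                targets.append(cell)
            else:
                tokens.append(cell)
    return tokens, targets

# Example: mask numeric values of a tiny chart table for MVP-style pre-training.
tokens, targets = mask_table(["Year", "Sales"], [["2019", "120"], ["2020", "150"]])
```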

    Risk of cardiovascular disease in Chinese patients with rheumatoid arthritis: a cross sectional study based on hospital medical records in 10 years

    Objective: Though the risk of cardiovascular disease (CVD) in rheumatoid arthritis (RA) has been established in Western populations, little is known about the risk in Chinese people with RA. Our objective was to estimate the risk of CVD in Chinese people with RA using hospital medical records data. Methods: The inpatient medical record database (2005-2015) of Sichuan Provincial People’s Hospital was examined. All individuals with a primary diagnosis of RA were included as cases, and those with osteoarthritis (OA) were included as controls, forming the unmatched dataset. RA cases and OA controls were then matched by sex and age at a 1:1 ratio, forming the matched dataset. The morbidity of CVD (including ischemic heart disease (IHD), congestive heart failure (CHF), etc.), stroke and atherosclerosis was extracted from the database, as were the demographic data and comorbidities related to CVD. Multiple logistic regression analysis was used to estimate the risk of CVD in RA adjusted for demographics and comorbidities using the unmatched dataset. Sensitivity analyses were conducted by 1) considering interaction terms between RA and comorbidities, and 2) using multivariable conditional logistic regression on the matched dataset. Results: The unmatched dataset comprised 1824 RA cases and 1995 OA controls, and the matched dataset comprised 1022 pairs of sex- and age-matched RA and OA patients. RA was associated with increased odds of prevalent CVD compared with OA; the adjusted ORs (95% CIs) for CVD, stroke, IHD, CHF, and atherosclerosis were 1.86 (1.42-2.43), 1.11 (0.71-1.74), 1.47 (0.97-2.24), 2.09 (1.03-4.22), and 2.49 (1.97-3.13), respectively, and the OR for IHD was 2.26 (1.29-3.96) when further adjusted for the interaction term. The matched dataset analysis found similar results. Conclusions: Chinese people with RA were approximately 2 times more likely to have CVD, IHD, CHF and atherosclerosis than those with OA. The findings justify the need for further longitudinal studies to establish the causal relationship between RA and CVD and to estimate the precise risk in this population.
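
    The adjusted odds ratios above come from multiple logistic regression of each cardiovascular outcome on an RA-versus-OA indicator plus demographic and comorbidity covariates. The sketch below shows what such a model could look like with statsmodels; the data file and covariate names (age, sex, hypertension, diabetes) are hypothetical stand-ins, not the study's actual variable set.

```python
# Sketch of the unmatched-dataset analysis: logistic regression of a CVD outcome
# on an RA indicator (1 = RA, 0 = OA), adjusted for demographics and comorbidities.
# The input file and column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("inpatient_records.csv")  # hypothetical extract of the hospital database

model = smf.logit("cvd ~ ra + age + sex + hypertension + diabetes", data=df).fit()

# Adjusted odds ratio and 95% CI for RA vs OA, analogous to the reported 1.86 (1.42-2.43).
or_ra = np.exp(model.params["ra"])
ci_low, ci_high = np.exp(model.conf_int().loc["ra"])
print(f"OR = {or_ra:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```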

    Non-Sequential Graph Script Induction via Multimedia Grounding

    Online resources such as WikiHow compile a wide range of scripts for performing everyday tasks, which can assist models in learning to reason about procedures. However, the scripts are always presented in a linear manner, which does not reflect the flexibility displayed by people executing tasks in real life. For example, in the CrossTask Dataset, 64.5% of consecutive step pairs are also observed in the reverse order, suggesting their ordering is not fixed. In addition, each step has an average of 2.56 frequent next steps, demonstrating "branching". In this paper, we propose the new challenging task of non-sequential graph script induction, aiming to capture optional and interchangeable steps in procedural planning. To automate the induction of such graph scripts for given tasks, we propose to take advantage of loosely aligned videos of people performing the tasks. In particular, we design a multimodal framework to ground procedural videos to WikiHow textual steps and thus transform each video into an observed step path on the latent ground-truth graph script. This key transformation enables us to train a script knowledge model capable of both generating explicit graph scripts for learnt tasks and predicting future steps given a partial step sequence. Our best model outperforms the strongest pure text/vision baselines by 17.52% absolute gains on F1@3 for next step prediction and 13.8% absolute gains on Acc@1 for partial sequence completion. Human evaluation shows our model outperforming the WikiHow linear baseline by 48.76% absolute gains in capturing sequential and non-sequential step relationships.
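
    Once each video is grounded to WikiHow steps, it becomes an observed step path, and the non-sequential structure (reversed step pairs, branching next steps) can be summarized by aggregating transitions across paths. The sketch below is a simple counting illustration of that aggregation over hypothetical inputs; it is not the paper's learned script-knowledge model.

```python
from collections import Counter, defaultdict
from typing import Dict, List

def build_graph_script(paths: List[List[str]]) -> Dict[str, Counter]:
    """Aggregate observed step paths (one per grounded video) into a directed
    graph of step transitions with edge counts. Purely an illustrative counting
    baseline, not the learned model described in the abstract."""
    graph: Dict[str, Counter] = defaultdict(Counter)
    for path in paths:
        for a, b in zip(path, path[1:]):
            graph[a][b] += 1
    return graph

def branching_factor(graph: Dict[str, Counter], min_count: int = 1) -> float:
    """Average number of distinct next steps per step, i.e. the "branching"."""
    outs = [sum(1 for c in nxt.values() if c >= min_count) for nxt in graph.values()]
    return sum(outs) / len(outs) if outs else 0.0

# Toy step paths from three hypothetical videos of the same task.
paths = [["boil water", "add pasta", "add salt"],
         ["add salt", "boil water", "add pasta"],
         ["boil water", "add salt", "add pasta"]]
print(branching_factor(build_graph_script(paths)))
```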