
    Error Analysis of the Cholesky QR-Based Block Orthogonalization Process for the One-Sided Block Jacobi SVD Algorithm

    The one-sided block Jacobi method (OSBJ) has attracted attention as a fast and accurate algorithm for the singular value decomposition (SVD). The computational kernel of OSBJ is orthogonalization of a column block pair, which amounts to computing the SVD of this block pair. Hari proposes three methods for this partial SVD, and we found through numerical experiments that the variant named "V2", which is based on the Cholesky QR method, is the fastest and achieves satisfactory accuracy. While this is good news from a practical viewpoint, it seems strange given the well-known instability of the Cholesky QR method. In this paper, we perform a detailed error analysis of the V2 variant and explain why and when it can be used to compute the partial SVD accurately. Thus, our results provide theoretical support for using the V2 variant safely in the OSBJ method.
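    For readers unfamiliar with the Cholesky QR method underlying the V2 variant, the following is a minimal Python/NumPy sketch of Cholesky QR orthogonalization of a tall-skinny block. It is illustrative only and is not the authors' V2 implementation; the function name and the random test block are assumptions for the example.

```python
import numpy as np

def cholesky_qr(X):
    """Orthogonalize the columns of a tall-skinny matrix X via Cholesky QR.

    Forms the Gram matrix G = X^T X, takes its Cholesky factor R
    (G = R^T R), and computes Q = X R^{-1}, so that X = Q R with Q
    having (approximately) orthonormal columns. This is fast but can
    lose orthogonality when X is ill-conditioned, which is the
    instability the paper's error analysis addresses.
    """
    G = X.T @ X                       # small n x n Gram matrix
    R = np.linalg.cholesky(G).T       # upper-triangular Cholesky factor
    Q = np.linalg.solve(R.T, X.T).T   # Q = X R^{-1} via a triangular system
    return Q, R

# Example: orthogonalize a random 1000 x 8 block
X = np.random.randn(1000, 8)
Q, R = cholesky_qr(X)
print(np.linalg.norm(Q.T @ Q - np.eye(8)))  # deviation from orthogonality
```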

    Continuous data assimilation of large eddy simulation by lattice Boltzmann method and local ensemble transform Kalman filter (LBM-LETKF)

    We investigate the applicability of data assimilation (DA) to large eddy simulations (LESs) based on the lattice Boltzmann method (LBM). We carry out an observing system simulation experiment of two-dimensional (2D) forced isotropic turbulence and examine the DA accuracy of nudging and of the local ensemble transform Kalman filter (LETKF) with spatially sparse and noisy observation data of the flow fields. The advantage of the LETKF is that it does not require computing a spatial interpolation and/or an inverse problem between the macroscopic variables (the density and the pressure) and the velocity distribution function of the LBM, while the nudging introduces additional models for them. Numerical experiments with 256×256 grids and 10% observation noise in the velocity showed that the root mean square error of the velocity in the LETKF with 8×8 observation points (~0.1% of the total grids) and 64 ensemble members becomes smaller than the observation noise, while the nudging requires an order of magnitude more observation points to achieve the same accuracy. Another advantage of the LETKF is that it preserves the amplitude of the energy spectrum well, while only the phase error grows as the observations become sparser. We also see that a lack of observation data in the LETKF produces a spurious energy injection in high-wavenumber regimes, leading to numerical instability. Such numerical instability is known as the catastrophic filter divergence problem, which can be suppressed by increasing the number of ensemble members. These results show that the LETKF enables robust and accurate DA for the 2D LBM with sparse and noisy observation data.
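    As a rough illustration of the analysis step behind the LETKF mentioned above (which, in the LETKF proper, is applied independently to each local patch of grid points using nearby observations), the following Python/NumPy sketch implements a basic ensemble transform Kalman filter update with an assumed diagonal observation error covariance. It is a generic textbook-style sketch, not the authors' LBM-LETKF code; the function name, the linear observation operator H, and the inflation parameter are assumptions for the example.

```python
import numpy as np

def etkf_analysis(X, y, H, r_obs, inflation=1.0):
    """One ensemble transform Kalman filter analysis step.

    X      : (n_state, n_ens) forecast ensemble
    y      : (n_obs,) observation vector
    H      : (n_obs, n_state) linear observation operator (assumed here)
    r_obs  : observation error variance, i.e. R = r_obs * I (assumed diagonal)
    Returns the analysis ensemble, shape (n_state, n_ens).
    """
    n_ens = X.shape[1]
    x_mean = X.mean(axis=1, keepdims=True)
    Xp = (X - x_mean) * np.sqrt(inflation)   # inflated forecast perturbations
    Yp = H @ Xp                              # perturbations in observation space
    d = y - (H @ x_mean).ravel()             # innovation (obs minus forecast mean)

    C = Yp.T / r_obs                         # Y^T R^{-1}
    Pa = np.linalg.inv((n_ens - 1) * np.eye(n_ens) + C @ Yp)
    w_mean = Pa @ C @ d                      # weights for the mean update
    # symmetric square root of (n_ens - 1) * Pa for the perturbation update
    evals, evecs = np.linalg.eigh((n_ens - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

    x_a_mean = x_mean + Xp @ w_mean[:, None]
    return x_a_mean + Xp @ W
```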

    Initial Cardiopulmonary Response to Exercise in Chronic Obstructive Pulmonary Diseases (COPD)

    The present study was undertaken to assess the cardiopulmonary response during the initial period of exercise at a low workload in 8 patients with COPD and 10 normal subjects. In the patients with COPD, VO2/VE and VCO2/VE were significantly lower than in the normal controls, and more markedly so during the initial period of exercise. SaO2 and SvO2 decreased dramatically in the initial period of exercise in the COPD patients compared with the normal subjects. In contrast to the normal subjects, pulmonary artery mean pressure (PAMP) increased substantially during the initial period of exercise in the patients with COPD. These findings imply that blood gas changes on exercise can be explained by differences in the relative increases of VO2, VCO2, VE and cardiac output. Our study also suggests that measurement of VO2/VE, VCO2/VE, SvO2 and PAMP on exercise at a low workload, especially during the initial period, may be useful for evaluating the cardiopulmonary response in COPD patients.

    A study of a karate trial teaching class in a teacher training course − based on students’ formative assessment

    [EN] The purpose of this study was to examine the effectiveness of a karate trial teaching class in an initial teacher training course through students' formative assessment. It involved case studies of trial teaching classes of karate and of two other activities, taught by students in an initial teacher training course. The results were assessed using the Students' Formative Assessment of Physical Education (P.E.) Classes scale. Results showed significant differences between groups in "New discovery" (p<.05) and a trend toward statistical significance in "Skill growth", "Fun Exercise" and "Learning friendly" (p<.10) between the karate classes and the classes using the other teaching materials. This implies that karate might have different acute effects on students' learning process in the context of school-level physical education.

    CORONARY ARTERY MORPHOLOGY AND REACTIVITY TO ACUTE HYPOXIA IN CHRONIC PULMONARY DISEASE

    In patients with chronic pulmonary disease (CPD), myocardial infarction is rare. To elucidate why this is so, we investigated the morphological changes and the reactivity of the coronary artery to acute hypoxia in patients with CPD. Sixty patients with CPD and 28 normal subjects were studied. Measurements of pulmonary hemodynamics and coronary angiography were undertaken before and after inhalation of 13% O2 for 15 minutes. The size of the coronary arteries was measured using a densitometric method, and a coronary narrowing score was calculated according to the WHO criteria. The size of the left anterior descending artery of patients with low %VC and hypoxia was larger than that of the normal subjects. In patients with CPD, the coronary narrowing score was low and the atherosclerotic change was minimal. The reactivity of the coronary arteries to acute hypoxia was reduced in patients with CPD compared with normal subjects.

    Measurement of femoral axial offset

    Purpose: to examine the accuracy and reproducibility of the femoral axial offset measured from the retrocondylar plane by computed tomography (CT). Bone specimens of the femur of 15 males and 15 females were analyzed. CT imaging was performed and coordinate data were collected (center of the femoral head, center of an ellipse around the greater trochanter, center of an ellipse around the base of the femoral neck, posterior edge of the greater trochanter, and both posterior condyles). The angle between the line connecting the center of the femoral head and the center of an ellipse around the greater trochanter and the line connecting both posterior condyles was set as anteversion 1. The angle between the line connecting the center of the femoral head and the center of an ellipse around the base of the femoral neck and the line connecting both posterior condyles was set as anteversion 2. The femoral axial offset was measured from the retrocondylar plane. Measurements were performed three times on the same subject, and intrarater reliability (ICC) was determined. In addition, interrater reliability (ICC) was determined by comparing data from three raters. The mean value for anteversion 1 was 20.1° for males and 22.7° for females. The values for anteversion 2 were 16.0° and 19.9° for males and females, respectively. Offset was 34.0 and 33.4 mm in males and females, respectively. Intrarater ICC and interrater ICC exceeded 0.81 for both methods, suggesting that the method of measurement was reliable. Accuracy and reproducibility of the measurement of femoral axial offset from the retrocondylar plane were high.
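    To make the kind of angle measurement described above concrete, the following Python/NumPy sketch computes an anteversion-style angle from 3D landmark coordinates. It is only an illustration of measuring the angle between a head-neck axis and the posterior condylar line in an axial projection; the coordinate convention (z along the femoral shaft), the landmark arguments, and the function name are assumptions, not the authors' exact measurement protocol.

```python
import numpy as np

def anteversion_angle(head_center, neck_ref, condyle_med, condyle_lat):
    """Angle in degrees between the head-neck axis and the posterior
    condylar line, measured in the axial (transverse) plane.

    All inputs are 3D landmark coordinates (x, y, z); assuming z runs
    along the femoral shaft, the axial projection simply drops z.
    """
    neck_axis = np.asarray(head_center, float)[:2] - np.asarray(neck_ref, float)[:2]
    condylar = np.asarray(condyle_lat, float)[:2] - np.asarray(condyle_med, float)[:2]
    cosang = np.dot(neck_axis, condylar) / (
        np.linalg.norm(neck_axis) * np.linalg.norm(condylar))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical landmark coordinates in millimetres
print(anteversion_angle((10.0, 35.0, 400.0),   # femoral head center
                        (5.0, 20.0, 380.0),    # neck/trochanter ellipse center
                        (0.0, 0.0, 0.0),       # medial posterior condyle
                        (45.0, 2.0, 0.0)))     # lateral posterior condyle
```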

    Periarticular Osteophytes as an Appendicular Joint Stress Marker (JSM): Analysis in a Contemporary Japanese Skeletal Collection

    Objective: The aim of this study was to investigate the possibility that periarticular osteophytes play a role as an appendicular joint stress marker (JSM) that reflects the biomechanical stresses on individuals and populations. Methods: A total of 366 contemporary Japanese skeletons (231 males, 135 females) were examined closely to evaluate the periarticular osteophytes of six major joints (the shoulder, elbow, wrist, hip, knee, and ankle), and osteophyte scores (OS) were determined using an original grading system. These scores were aggregated and analyzed statistically from several viewpoints. Results: The OS for each joint were correlated logarithmically with the age-at-death of the individuals. For the 70 individuals in whom both sides of all six joints could be evaluated without missing values, age-standardized OS were calculated. A right-side dominance was recognized in the joints of the upper extremity (the shoulder and wrist joints), and the bilateral correlations were large in the three joints of the lower extremity. For the shoulder joint and the hip joint, several distinctions suggested that systemic factors had a relatively large influence. All six joints could thus be classified by the relative extent of systemic and local factors in osteophyte formation. Moreover, when the age-standardized OS of all the joints were summed, some individuals had significantly high total scores and others had significantly low total scores; that is, individuals varied greatly in their systemic predisposition to osteophyte formation. Conclusions: This study demonstrated the significance of periarticular osteophytes; the OS evaluation system could be used to detect differences among joints and individuals. Periarticular osteophytes could be applied as an appendicular joint stress marker (JSM); by applying the OS evaluation system to skeletal populations, intra-skeletal and inter-skeletal variations in biomechanical stresses throughout life could be clarified.
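    The age standardization referred to above can be illustrated with a short Python/NumPy sketch: fit the logarithmic relation between OS and age-at-death, then express each individual's score as a scaled residual from the age-expected value. The study does not specify its exact standardization procedure, so this is an assumed, common approach, with hypothetical function and variable names.

```python
import numpy as np

def age_standardized_scores(age, os_raw):
    """Illustrative age standardization of osteophyte scores (OS).

    Fits OS ~ a + b * ln(age) by least squares and returns residuals
    divided by their standard deviation, i.e. how far each individual's
    OS deviates from the value expected for their age-at-death.
    """
    age = np.asarray(age, float)
    os_raw = np.asarray(os_raw, float)
    A = np.column_stack([np.ones_like(age), np.log(age)])  # design matrix
    coef, *_ = np.linalg.lstsq(A, os_raw, rcond=None)      # fit a, b
    resid = os_raw - A @ coef                               # observed - expected
    return resid / resid.std(ddof=1)

# Hypothetical data: ages at death and raw joint scores
ages = np.array([35, 48, 52, 60, 67, 71, 78, 84])
scores = np.array([1.0, 1.5, 2.0, 2.0, 2.5, 3.0, 3.5, 3.0])
print(age_standardized_scores(ages, scores))
```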

    Brain Dp140 alters glutamatergic transmission and social behaviour in the mdx52 mouse model of Duchenne muscular dystrophy

    Duchenne muscular dystrophy (DMD) is a muscle disorder caused by DMD mutations and is characterized by neurobehavioural comorbidities due to dystrophin deficiency in the brain. The lack of Dp140, a dystrophin short isoform, is clinically associated with intellectual disability and autism spectrum disorders (ASDs), but its postnatal functional role is not well understood. To investigate synaptic function in the presence or absence of brain Dp140, we utilized two DMD mouse models, mdx23 and mdx52 mice, in which Dp140 is preserved or lacking, respectively. ASD-like behaviours were observed in pups and 8-week-old mdx52 mice lacking Dp140. The paired-pulse ratio of excitatory postsynaptic currents, glutamatergic vesicle number in basolateral amygdala neurons, and glutamatergic transmission in medial prefrontal cortex-basolateral amygdala projections were significantly reduced in mdx52 mice compared to those in wild-type and mdx23 mice. The ASD-like behaviour and electrophysiological findings in mdx52 mice were ameliorated by restoration of Dp140 following intra-cerebroventricular injection of an antisense oligonucleotide drug inducing exon 53 skipping or intra-basolateral amygdala administration of a Dp140 mRNA-based drug. Our results implicate Dp140 in ASD-like behaviour via altered glutamatergic transmission in the basolateral amygdala of mdx52 mice.

    White Paper from Workshop on Large-scale Parallel Numerical Computing Technology (LSPANC 2020): HPC and Computer Arithmetic toward Minimal-Precision Computing

    In numerical computations, the precision of floating-point operations is a key factor in determining performance (speed and energy efficiency) as well as reliability (accuracy and reproducibility). However, precision generally affects the two in opposite ways. Therefore, the ultimate concept for maximizing both at the same time is minimal-precision computing through precision tuning, which adjusts the optimal precision for each operation and each datum. Several studies have already been conducted on precision tuning (e.g., Precimonious and Verrou), but their scope is limited to precision tuning alone. Hence, in 2019 we started the Minimal-Precision Computing project to propose a broader concept of a minimal-precision computing system with precision tuning, involving both the hardware and software stacks. Specifically, our system combines (1) a precision-tuning method based on Discrete Stochastic Arithmetic (DSA), (2) arbitrary-precision arithmetic libraries, (3) fast and accurate numerical libraries, and (4) Field-Programmable Gate Arrays (FPGAs) with High-Level Synthesis (HLS). In this white paper, we aim to provide an overview of various technologies related to minimal- and mixed-precision computing, to outline the future direction of the project, and to discuss current challenges together with our project members and guest speakers at the LSPANC 2020 workshop; https://www.r-ccs.riken.jp/labs/lpnctrt/lspanc2020jan/
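    To convey the basic idea of precision tuning discussed above, the following Python/NumPy sketch picks the narrowest IEEE floating-point format whose result agrees with a high-precision reference within a tolerance. It is only a toy stand-in: real precision tuning (e.g. with Discrete Stochastic Arithmetic as in the project) estimates the number of significant digits per operation, and the function name, tolerance, and test data here are assumptions for the example.

```python
import numpy as np

def minimal_precision_sum(x, rel_tol=1e-6):
    """Return the narrowest IEEE format whose summation of x agrees
    with a float64 reference to within a relative tolerance, together
    with the value computed in that format."""
    reference = np.sum(np.asarray(x, dtype=np.float64))
    for dtype in (np.float16, np.float32, np.float64):
        result = float(np.sum(np.asarray(x, dtype=dtype)))
        if abs(result - reference) <= rel_tol * abs(reference):
            return dtype, result
    return np.float64, float(reference)

# Hypothetical workload: summing 10,000 random values
data = np.random.rand(10000)
chosen_dtype, value = minimal_precision_sum(data, rel_tol=1e-4)
print(chosen_dtype, value)
```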