
    Accuracy of the mean-field theory in describing ground-state properties of light nuclei

    The relativistic mean-field model, augmented with three types of center-of-mass corrections and two types of rotational corrections, is employed to investigate the ground-state properties of helium, beryllium, and carbon isotopes. The efficacy of the mean-field approach in describing the binding energies, quadrupole deformations, root-mean-square charge radii, root-mean-square matter radii, and neutron skins of these light nuclei is evaluated. By averaging the binding energies obtained from six selected effective interactions, a mass-dependent behavior of the mean-field approximation is elucidated. The findings for the radii reveal that, unlike in heavy nuclei, the exchange terms of the center-of-mass correction play an indispensable role in accurately describing the radii of light nuclei. The mean-field approximation, when augmented with center-of-mass and rotational corrections, effectively reproduces the energies and radii of light nuclei. However, due to the absence of many-body correlations between valence neutrons, the mean-field approximation falls short in describing the deformations and shell evolutions of the helium and beryllium isotopic chains. Comment: 14 pages, 6 figures
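    For context, the microscopic center-of-mass correction alluded to above is commonly written as the expectation value of the total-momentum operator; the split below into a one-body (direct) and a two-body (exchange) part is the standard textbook form, offered only as a hedged sketch and not as the specific prescriptions compared in this work.

    \[
    E_{\mathrm{c.m.}} = -\frac{\langle \hat{\mathbf{P}}^{2} \rangle}{2mA},
    \qquad
    \langle \hat{\mathbf{P}}^{2} \rangle
    = \sum_{i} \langle \hat{\mathbf{p}}_{i}^{2} \rangle
    + \sum_{i \neq j} \langle \hat{\mathbf{p}}_{i} \cdot \hat{\mathbf{p}}_{j} \rangle ,
    \]

    where \(\hat{\mathbf{P}} = \sum_{i} \hat{\mathbf{p}}_{i}\) is the total momentum, \(m\) the nucleon mass, and \(A\) the mass number. Evaluated in a Slater determinant, the two-body sum produces the exchange contributions whose role for the radii of light nuclei is emphasized above.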

    Development of Vibration and Sensitivity Analysis Capability Using the Theory of Structural Variations

    In the author's previous work, entitled General Theorems of Topological Variations of Elastic Structures and the Method of Topological Variation (1985), some interesting properties of skeletal structures were discovered. These properties have been described as five theorems and synthesized into a theory, called the theory of structural variations (TSV). Based upon this theory, an innovative analysis tool, called the structural variation method (SVM), was derived for the static analysis of skeletal structures (one-dimensional finite element systems). The objective of this dissertation research is to extend TSV and SVM from one-dimensional finite element systems to multi-dimensional ones, and from statics to vibration and sensitivity analysis. In the process, four new interesting and useful properties of finite element systems are also revealed. One of them is stated as the Gradient Orthogonality Theorem of Basic Displacements, based upon which a set of explicit formulations is derived for the design sensitivities of displacements, internal forces, stresses, and even the inverse of the global stiffness matrix of a statically loaded structure. The other three new properties are described as the Evaluation Theorem of Principal Z-Deformations, the Monotonousness Theorem of Principal Z-Deformations, and the Equivalence Theorem of Basic Displacement Vectors and Eigenvectors, based upon which a new approach, called the Z-deformation method, is developed for the vibration analysis of finite element systems. This method is superior to the commonly used inverse power iteration method when adjacent eigenvalues are close. Explicit formulations for eigenpair sensitivities are also derived in accordance with the Z-deformation method. The distinct feature of TSV and SVM is that the analysis results for a loaded structure can be obtained without any matrix assembly or inversion operations. This feature gives TSV and SVM an edge over traditional finite element analysis in many engineering applications where repeated analysis is required, such as structural optimization, reliability analysis, elastic-plastic analysis, vibration, contact problems, and crack propagation in solids.
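    As background for the displacement-sensitivity formulations mentioned above, the classical direct-differentiation relation for a statically loaded finite element system \(K u = f\) is sketched below; this is the standard textbook result, given only for orientation, and is not the TSV/SVM-specific formulation developed in the dissertation.

    \[
    K \frac{\partial u}{\partial p} = \frac{\partial f}{\partial p} - \frac{\partial K}{\partial p}\, u
    \quad\Longrightarrow\quad
    \frac{\partial u}{\partial p} = K^{-1}\!\left( \frac{\partial f}{\partial p} - \frac{\partial K}{\partial p}\, u \right),
    \]

    where \(p\) is a design variable. The point of contrast is that this route requires the assembled stiffness matrix \(K\) and its factorization, whereas TSV/SVM aims to deliver such sensitivities without matrix assembly or inversion.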

    Neural Discrete Representation Learning

    Learning useful representations without supervision remains a key challenge in machine learning. In this paper, we propose a simple yet powerful generative model that learns such discrete representations. Our model, the Vector Quantised-Variational AutoEncoder (VQ-VAE), differs from VAEs in two key ways: the encoder network outputs discrete, rather than continuous, codes; and the prior is learnt rather than static. In order to learn a discrete latent representation, we incorporate ideas from vector quantisation (VQ). Using the VQ method allows the model to circumvent issues of "posterior collapse" -- where the latents are ignored when they are paired with a powerful autoregressive decoder -- typically observed in the VAE framework. Pairing these representations with an autoregressive prior, the model can generate high-quality images, videos, and speech, as well as performing high-quality speaker conversion and unsupervised learning of phonemes, providing further evidence of the utility of the learnt representations.
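    As a rough illustration of the vector-quantisation step described above, the sketch below performs a nearest-neighbour codebook lookup with the straight-through gradient estimator and a commitment loss; the tensor shapes, the choice of PyTorch, and the loss weighting beta are assumptions for illustration, not the authors' reference implementation.

    ```python
    import torch
    import torch.nn.functional as F

    def vector_quantize(z_e, codebook, beta=0.25):
        """Quantise encoder outputs z_e (N, D) against a codebook (K, D).

        A minimal sketch of VQ with a straight-through gradient; not the
        reference VQ-VAE code.
        """
        # Distance from each encoding to each codebook entry, then pick the nearest.
        dists = torch.cdist(z_e, codebook)            # (N, K)
        indices = dists.argmin(dim=1)                 # nearest code index per input
        z_q = codebook[indices]                       # (N, D) quantised latents

        # Codebook loss moves embeddings toward encoder outputs; the commitment
        # term (weighted by beta) keeps encoder outputs close to their codes.
        vq_loss = F.mse_loss(z_q, z_e.detach()) + beta * F.mse_loss(z_e, z_q.detach())

        # Straight-through estimator: forward pass uses z_q, gradients flow to z_e.
        z_q_st = z_e + (z_q - z_e).detach()
        return z_q_st, vq_loss, indices
    ```

    The discrete indices returned here are what an autoregressive prior would subsequently be fit over, which is how the learnt codes are paired with a learnt prior as described in the abstract.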

    Tikhonov regularized second-order plus first-order primal-dual dynamical systems with asymptotically vanishing damping for linear equality constrained convex optimization problems

    In this paper, in the setting of Hilbert spaces, we consider a Tikhonov regularized second-order plus first-order primal-dual dynamical system with asymptotically vanishing damping for a linear equality constrained convex optimization problem. The convergence properties of the proposed dynamical system depend heavily on the choice of the Tikhonov regularization parameter. When the Tikhonov regularization parameter decreases rapidly to zero, we establish fast convergence rates of the primal-dual gap, the objective function error, the feasibility measure, and the gradient norm of the objective function along the trajectory generated by the system. When the Tikhonov regularization parameter tends slowly to zero, we prove that the primal trajectory of the Tikhonov regularized dynamical system converges strongly to the minimal-norm solution of the linear equality constrained convex optimization problem. Numerical experiments are performed to illustrate the efficiency of our approach.
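    To fix ideas, one common schematic form of such a dynamical system for minimizing \(f(x)\) subject to \(Ax = b\) is written below; the specific damping coefficient \(\alpha/t\), the Tikhonov term \(\varepsilon(t)x\), and the form of the dual dynamics are illustrative assumptions and need not coincide with the exact system analysed in the paper.

    \[
    \ddot{x}(t) + \frac{\alpha}{t}\,\dot{x}(t)
    = -\nabla f\bigl(x(t)\bigr) - A^{\top}\lambda(t) - \varepsilon(t)\,x(t),
    \qquad
    \dot{\lambda}(t) = \beta(t)\,\bigl(A x(t) - b\bigr),
    \]

    so the primal variable obeys second-order (inertial) dynamics with asymptotically vanishing damping and a Tikhonov term with \(\varepsilon(t) \to 0\), while the multiplier obeys first-order dynamics. The dichotomy described above, fast rates versus strong convergence to the minimal-norm solution, is then governed by how quickly \(\varepsilon(t)\) vanishes.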

    Long-term Blood Pressure Prediction with Deep Recurrent Neural Networks

    Existing methods for arterial blood pressure (BP) estimation directly map the input physiological signals to output BP values without explicitly modeling the underlying temporal dependencies in BP dynamics. As a result, these models suffer from accuracy decay over long time spans and thus require frequent calibration. In this work, we address this issue by formulating BP estimation as a sequence prediction problem in which both the input and the target are temporal sequences. We propose a novel deep recurrent neural network (RNN) consisting of multilayered Long Short-Term Memory (LSTM) networks, which are incorporated with (1) a bidirectional structure to access larger-scale context information of the input sequence, and (2) residual connections to allow gradients in the deep RNN to propagate more effectively. The proposed deep RNN model was tested on a static BP dataset, where it achieved root mean square errors (RMSE) of 3.90 and 2.66 mmHg for systolic BP (SBP) and diastolic BP (DBP) prediction, respectively, surpassing the accuracy of traditional BP prediction models. On a multi-day BP dataset, the deep RNN achieved RMSEs of 3.84, 5.25, 5.80, and 5.81 mmHg for SBP prediction on the 1st day, the 2nd day, the 4th day, and the 6th month after the 1st day, respectively, and 1.80, 4.78, 5.0, and 5.21 mmHg for the corresponding DBP predictions, outperforming all previous models by a notable margin. The experimental results suggest that modeling the temporal dependencies in BP dynamics significantly improves long-term BP prediction accuracy. Comment: To appear in IEEE BHI 201
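    For a concrete picture of the architecture described above, the sketch below stacks bidirectional LSTM layers with residual (skip) connections for sequence-to-sequence regression; the layer widths, number of layers, input projection, and per-step (SBP, DBP) output head are hypothetical choices for illustration, not the authors' exact configuration.

    ```python
    import torch
    import torch.nn as nn

    class ResidualBiLSTM(nn.Module):
        """Stacked bidirectional LSTMs with residual connections for
        sequence-to-sequence BP regression (an illustrative sketch only)."""

        def __init__(self, in_dim, hidden=64, layers=3, out_dim=2):
            super().__init__()
            self.proj = nn.Linear(in_dim, 2 * hidden)   # match the bi-LSTM output width
            self.lstms = nn.ModuleList([
                nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
                for _ in range(layers)
            ])
            self.head = nn.Linear(2 * hidden, out_dim)  # e.g. (SBP, DBP) per time step

        def forward(self, x):                 # x: (batch, time, in_dim)
            h = self.proj(x)
            for lstm in self.lstms:
                out, _ = lstm(h)
                h = h + out                   # residual connection around each layer
            return self.head(h)               # (batch, time, out_dim)

    # Example shape check with made-up dimensions:
    # model = ResidualBiLSTM(in_dim=4)
    # y = model(torch.randn(8, 200, 4))       # -> (8, 200, 2)
    ```

    The residual additions let gradients bypass each recurrent layer, which is the stated motivation for including them in a deep RNN.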