
    Two-sample inference for sparse functional data

    We propose a novel test procedure for comparing mean functions across two groups within the reproducing kernel Hilbert space (RKHS) framework. Our proposed method is adept at handling sparsely and irregularly sampled functional data when observation times are random for each subject. Conventional approaches, which are built upon functional principal components analysis, usually assume a homogeneous covariance structure across groups. Nonetheless, justifying this assumption in real-world scenarios can be challenging. To eliminate the need for a homogeneous covariance structure, we first develop the functional Bahadur representation for the mean estimator under the RKHS framework; this representation naturally leads to the desired pointwise limiting distributions. Moreover, we establish weak convergence for the mean estimator, allowing us to construct a test statistic for the mean difference. Our method is easily implementable and outperforms some conventional tests in controlling type I errors across various settings. We demonstrate the finite sample performance of our approach through extensive simulations and two real-world applications.
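
    The abstract does not include code; as a rough illustration only, the Python sketch below smooths each group's pooled sparse observations with kernel ridge regression (an RKHS-penalized estimator with an assumed Gaussian kernel) and forms an L2-type statistic for the mean difference. The kernel, penalty, synthetic data, and omission of any null-distribution calibration are all assumptions, not the authors' procedure.

        import numpy as np

        def gauss_kernel(s, t, h=0.1):
            # assumed Gaussian reproducing kernel on [0, 1]
            return np.exp(-(s[:, None] - t[None, :]) ** 2 / (2 * h ** 2))

        def rkhs_mean(times, values, grid, lam=1e-2, h=0.1):
            # kernel ridge (RKHS-penalized) estimate of a group mean function
            # from the pooled sparse, irregularly sampled observations
            n = len(times)
            K = gauss_kernel(times, times, h)
            alpha = np.linalg.solve(K + lam * n * np.eye(n), values)
            return gauss_kernel(grid, times, h) @ alpha

        rng = np.random.default_rng(0)                        # hypothetical data
        t1, t2 = rng.uniform(size=200), rng.uniform(size=180)
        y1 = np.sin(2 * np.pi * t1) + 0.3 * rng.standard_normal(200)
        y2 = np.sin(2 * np.pi * t2) + 0.3 * rng.standard_normal(180)

        grid = np.linspace(0, 1, 201)
        diff = rkhs_mean(t1, y1, grid) - rkhs_mean(t2, y2, grid)
        stat = np.trapz(diff ** 2, grid)    # L2-type statistic for the mean difference
        print(f"test statistic: {stat:.4f}")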

    A machine learning based approach to predict the stress intensity factors in 2D linear elastic fracture mechanics

    Within the framework of linear elastic fracture mechanics, stress intensity factors (SIFs) are the most widely applied crack-tip characterizing parameters. To obtain SIFs, approximate formulae are widely used, because exact analytical solutions are available only for very simple geometrical and loading configurations [1]. However, even these approximate solutions are limited to rather simple geometries and loading conditions. In this work, an accurate and efficient SIF prediction model based on a physics-informed neural network (PINN) [4] is developed, in which we incorporate the equilibrium equations and constitutive relations into the PINN. In order to capture the singular behavior of the stress and displacement fields around the crack tip, we extend the standard PINN structure by adding two more trainable parameters.
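
    The abstract only outlines the approach, so the PyTorch sketch below is a guess at its structure: a displacement network whose physics loss enforces a plane-stress constitutive relation and static equilibrium via automatic differentiation, plus two extra trainable scalars standing in for the mode-I/II stress intensity factors. The architecture, material constants, and role of the extra parameters are assumptions, and the crack-tip enrichment in which they would actually appear is omitted.

        import torch
        import torch.nn as nn

        class SifPinn(nn.Module):
            def __init__(self):
                super().__init__()
                # plain MLP mapping (x, y) to the displacements (u_x, u_y)
                self.net = nn.Sequential(
                    nn.Linear(2, 50), nn.Tanh(),
                    nn.Linear(50, 50), nn.Tanh(),
                    nn.Linear(50, 2))
                # two extra trainable scalars; here assumed to be K_I and K_II
                self.K_I = nn.Parameter(torch.tensor(1.0))
                self.K_II = nn.Parameter(torch.tensor(0.0))

            def forward(self, xy):
                return self.net(xy)

        def equilibrium_residual(model, xy, E=210e3, nu=0.3):
            # plane-stress constitutive relation and static equilibrium
            # div(sigma) = 0, enforced pointwise through autograd
            xy = xy.requires_grad_(True)
            u = model(xy)
            du = [torch.autograd.grad(u[:, i].sum(), xy, create_graph=True)[0]
                  for i in range(2)]
            eps_xx, eps_yy = du[0][:, 0], du[1][:, 1]
            eps_xy = 0.5 * (du[0][:, 1] + du[1][:, 0])
            c = E / (1 - nu ** 2)
            sig_xx = c * (eps_xx + nu * eps_yy)
            sig_yy = c * (eps_yy + nu * eps_xx)
            sig_xy = c * (1 - nu) * eps_xy
            d_sxx = torch.autograd.grad(sig_xx.sum(), xy, create_graph=True)[0]
            d_syy = torch.autograd.grad(sig_yy.sum(), xy, create_graph=True)[0]
            d_sxy = torch.autograd.grad(sig_xy.sum(), xy, create_graph=True)[0]
            r1 = d_sxx[:, 0] + d_sxy[:, 1]
            r2 = d_sxy[:, 0] + d_syy[:, 1]
            return (r1 ** 2 + r2 ** 2).mean()

        model = SifPinn()
        pts = torch.rand(256, 2)                   # hypothetical collocation points
        print(equilibrium_residual(model, pts))    # one term of the physics loss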

    Structure of HIV-1 Capsid Assemblies by Cryo-electron Microscopy and Iterative Helical Real-space Reconstruction

    Cryo-electron microscopy (cryo-EM), combined with image processing, is an increasingly powerful tool for structure determination of macromolecular protein complexes and assemblies. In fact, single particle electron microscopy [1] and two-dimensional (2D) electron crystallography [2] have become relatively routine methodologies, and a large number of structures have been solved using these methods. At the same time, image processing and three-dimensional (3D) reconstruction of helical objects have developed rapidly, especially the iterative helical real-space reconstruction (IHRSR) method [3], which uses single particle analysis tools in conjunction with helical symmetry. Many biological entities function in filamentous or helical forms, including actin filaments [4], microtubules [5], amyloid fibers [6], tobacco mosaic virus [7], and bacterial flagella [8]. Because a 3D density map of a helical entity can be obtained from a single projection image, rather than the many images required for 3D reconstruction of a non-helical object, structural analysis of such flexible and disordered helical assemblies is now attainable with the IHRSR method.
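
    As a small illustration of the helical symmetry that IHRSR exploits (not of the reconstruction algorithm itself), the Python snippet below repeatedly applies a rise/twist operator to a point set, which is essentially the symmetrization step performed in every IHRSR cycle. The rise and twist values are purely illustrative.

        import numpy as np

        def helical_symmetrize(points, rise, twist_deg, n_copies):
            # apply the helical operator (rotate by `twist_deg` about z,
            # translate by `rise` along z) to generate symmetry-related copies
            copies = []
            for k in range(n_copies):
                a = np.deg2rad(k * twist_deg)
                R = np.array([[np.cos(a), -np.sin(a), 0.0],
                              [np.sin(a),  np.cos(a), 0.0],
                              [0.0,        0.0,       1.0]])
                copies.append(points @ R.T + np.array([0.0, 0.0, k * rise]))
            return np.vstack(copies)

        subunit = np.array([[30.0, 0.0, 0.0]])     # hypothetical subunit position
        helix = helical_symmetrize(subunit, rise=5.0, twist_deg=-60.0, n_copies=50)
        print(helix.shape)                         # (50, 3) symmetry-related positions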

    Boundary integrated neural networks (BINNs) for 2D elastostatic and piezoelectric problems: Theory and MATLAB code

    In this paper, we make the first attempt to apply boundary integrated neural networks (BINNs) to the numerical solution of two-dimensional (2D) elastostatic and piezoelectric problems. BINNs combine artificial neural networks with the well-established boundary integral equations (BIEs) to effectively solve partial differential equations (PDEs). The BIEs are utilized to map all the unknowns onto the boundary, after which these unknowns are approximated by artificial neural networks and resolved via a training process. In contrast to traditional neural network-based methods, BINNs offer several distinct advantages. First, by embedding the BIEs into the learning procedure, BINNs only need to discretize the boundary of the solution domain, which can lead to a faster and more stable learning process (only the boundary conditions need to be fitted during training). Second, the differential operator of the PDEs is replaced by an integral operator, which eliminates the need for additional differentiation of the neural networks (high-order derivatives of neural networks may lead to instability in learning). Third, the loss function of the BINNs contains only the residuals of the BIEs, as all the boundary conditions are inherently incorporated within the formulation. Therefore, there is no need for weighting functions, which are commonly used in traditional methods to balance the gradients among different objective functions. Moreover, BINNs can tackle PDEs in unbounded domains, since the integral representation remains valid for both bounded and unbounded domains. Extensive numerical experiments show that BINNs are much easier to train and usually yield more accurate solutions than traditional neural network-based methods.
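
    As a structural illustration only, the PyTorch sketch below sets up a BINN-style loss for the 2D Laplace equation on the unit disk, a far simpler stand-in for the paper's elastostatic and piezoelectric BIEs: the network approximates the unknown boundary flux, and the loss is the residual of the collocated boundary integral equation, so only boundary points appear. The geometry, kernels, crude quadrature (singular self-terms are simply dropped), and network are assumptions, not the paper's formulation or its MATLAB code.

        import numpy as np
        import torch
        import torch.nn as nn

        N = 128
        theta = torch.linspace(0, 2 * np.pi, N + 1)[:-1] + np.pi / N     # midpoint nodes
        x = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)     # boundary points
        n = x.clone()                          # outward normals on the unit circle
        h = 2 * np.pi / N                      # element length
        u = x[:, 0]                            # prescribed Dirichlet data u = x

        # single-layer kernel G and double-layer kernel H = dG/dn_y of the BIE
        d = x[:, None, :] - x[None, :, :]
        r = d.norm(dim=2) + torch.eye(N)       # keep the diagonal finite
        G = -torch.log(r) / (2 * np.pi)
        H = (d * n[None, :, :]).sum(2) / (2 * np.pi * r ** 2)
        mask = 1.0 - torch.eye(N)              # crude: drop the singular self-terms
        G, H = G * mask, H * mask

        net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                            nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)

        for step in range(2000):
            q = net(x).squeeze(1)              # NN approximation of the boundary flux
            # collocated BIE residual: 0.5*u(x_i) - sum_j [G_ij q_j - H_ij u_j] * h
            residual = 0.5 * u - (G @ q - H @ u) * h
            loss = (residual ** 2).mean()
            opt.zero_grad(); loss.backward(); opt.step()

        # rough check against the exact flux cos(theta) for u = x on the unit disk
        print(float((net(x).squeeze(1) - torch.cos(theta)).abs().mean()))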