
    Near-wall velocity of suspended particles in microchannel flow

    This contribution investigates the characteristic reduction of the particle velocity with respect to the velocity profile of a pure liquid (water) in a pressure-driven flow (PDF). It is shown by simulations and experiments that particles are slowed down once their local perturbation "cloud" of the velocity field hits the wall. We show that this effect scales with the ratio of the distance of the sphere's surface from the wall, delta, to the radius of the sphere, a, i.e. delta/a.
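
    As a minimal illustration of the scaling variable referred to above (the symbols and numbers below are illustrative, not taken from the paper), the gap between the sphere surface and the wall, delta = h - a, is normalized by the sphere radius a:

```python
def gap_ratio(h_center, a):
    """Return delta/a for a sphere of radius a whose center sits h_center from the wall."""
    delta = h_center - a          # surface-to-wall distance
    return delta / a

# Example: a 5-micron-radius sphere whose center is 8 microns from the wall
print(gap_ratio(h_center=8e-6, a=5e-6))   # delta/a = 0.6
```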

    Linear scaling calculation of maximally-localized Wannier functions with atomic basis set

    We have developed a linear scaling algorithm for calculating maximally-localized Wannier functions (MLWFs) using an atomic orbital basis. An O(N) ground state calculation is carried out to obtain the density matrix (DM). Through a projection of the DM onto atomic orbitals and a subsequent O(N) orthogonalization, we obtain initial orthogonal localized orbitals. These orbitals can then be maximally localized in linear scaling by simple Jacobi sweeps. Our O(N) method is validated by applying it to a water molecule and wurtzite ZnO. The linear scaling behavior of the new method is demonstrated by computing the MLWFs of boron nitride nanotubes. Comment: J. Chem. Phys., in press (2006).
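
    A toy dense sketch of the initial-guess step described above, assuming the density matrix and trial atomic orbitals are plain NumPy arrays; a symmetric (Lowdin) orthogonalization stands in for the O(N) orthogonalization step, so this is only an illustration of "project, then orthogonalize", not the authors' linear-scaling implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_occ = 20, 5

# Fake idempotent density matrix built from random occupied orbitals
C = np.linalg.qr(rng.standard_normal((n_basis, n_occ)))[0]
DM = C @ C.T

# Project the DM onto trial atomic orbitals (here: the first n_occ basis functions)
trial = np.eye(n_basis)[:, :n_occ]
P = DM @ trial

# Symmetric (Lowdin) orthogonalization of the projected orbitals
S = P.T @ P
eigval, eigvec = np.linalg.eigh(S)
P_orth = P @ eigvec @ np.diag(eigval**-0.5) @ eigvec.T

print(np.allclose(P_orth.T @ P_orth, np.eye(n_occ)))  # True: orthonormal starting orbitals
```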

    On the radiation force fields of fractional-order acoustic vortices

    Here we report the creation and observation of acoustic vortices of fractional order. Whilst integer orders are known to produce axisymmetric acoustic fields, fractional orders are shown to break this symmetry and produce a vast array of unexplored field patterns, typically exhibiting multiple closely spaced phase singularities. Here, fractional acoustic vortices are created by emitting ultrasonic waves from an annular array of sources using multiple ramps of phase delay around its circumference. Acoustic radiation force patterns, including multiple concentration points, short straight lines, triangles, squares and discontinuous circles, are simulated and experimentally observed. The fractional acoustic vortex leading to two closely spaced phase singularities is used to trap, and by controlling the order, reversibly manipulate two microparticles to a proximity of 0.3 acoustic wavelengths.
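
    A small sketch of the driving phases for such an annular array (the element count and order value below are arbitrary choices for illustration): each element at azimuthal angle theta_n is driven with a phase delay m*theta_n, and a non-integer m gives the fractional-order vortex.

```python
import numpy as np

n_elements = 16
m = 1.5                                   # fractional topological charge (illustrative)
theta = 2 * np.pi * np.arange(n_elements) / n_elements
phase_delay = np.mod(m * theta, 2 * np.pi)  # phase ramp around the ring, wrapped to [0, 2*pi)

print(np.round(phase_delay, 3))
```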

    Tension–Torsion Fracture Experiments – Part II: Simulations with the Extended Gurson Model and a Ductile Fracture Criterion Based on Plastic Strain

    An extension of the Gurson model that incorporates damage development in shear is used to simulate the tension-torsion test fracture data presented in Faleskog and Barsoum (2012) (Part I) for two steels, Weldox 420 and 960. Two parameters characterize damage in the constitutive model: the effective void volume fraction and a shear damage coefficient. For each of the steels, the initial effective void volume fraction is calibrated against data for fracture of notched round tensile bars, and the shear damage coefficient is calibrated against fracture in shear. The calibrated constitutive model reproduces the full range of data in the tension-torsion tests, thereby providing a convincing demonstration of the effectiveness of the extended Gurson model. The model reinforces the experiments by highlighting that, for ductile alloys, the effective plastic strain at fracture cannot be based solely on stress triaxiality. For nominally isotropic alloys, a ductile fracture criterion is proposed for engineering purposes that depends on stress triaxiality and a second stress invariant that discriminates between axisymmetric stressing and shear-dominated stressing.
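
    As a hedged illustration of the two stress measures such a criterion depends on (this helper is not the paper's calibrated model), the stress triaxiality and a normalized third-invariant (Lode-type) parameter that separates axisymmetric from shear-dominated states can be computed from a Cauchy stress tensor as follows:

```python
import numpy as np

def stress_measures(sigma):
    """Return (triaxiality, lode) for a 3x3 Cauchy stress tensor."""
    sigma = np.asarray(sigma, dtype=float)
    sigma_m = np.trace(sigma) / 3.0                  # mean (hydrostatic) stress
    s = sigma - sigma_m * np.eye(3)                  # deviatoric stress
    J2 = 0.5 * np.sum(s * s)
    J3 = np.linalg.det(s)
    sigma_eq = np.sqrt(3.0 * J2)                     # von Mises equivalent stress
    triaxiality = sigma_m / sigma_eq
    lode = 27.0 * J3 / (2.0 * sigma_eq**3)           # 1 for axisymmetric tension, 0 for pure shear
    return triaxiality, lode

print(stress_measures(np.diag([300.0, 0.0, 0.0])))             # uniaxial tension
print(stress_measures([[0, 150, 0], [150, 0, 0], [0, 0, 0]]))  # pure shear
```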

    Sparse Quantized Spectral Clustering

    Given a large data matrix, sparsifying, quantizing, and/or performing other entry-wise nonlinear operations can have numerous benefits, ranging from speeding up iterative algorithms for core numerical linear algebra problems to providing nonlinear filters to design state-of-the-art neural network models. Here, we exploit tools from random matrix theory to make precise statements about how the eigenspectrum of a matrix changes under such nonlinear transformations. In particular, we show that very little change occurs in the informative eigenstructure even under drastic sparsification/quantization, and consequently that very little downstream performance loss occurs with very aggressively sparsified or quantized spectral clustering. We illustrate how these results depend on the nonlinearity, we characterize a phase transition beyond which spectral clustering becomes possible, and we show when such nonlinear transformations can introduce spurious non-informative eigenvectors.
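
    A toy end-to-end illustration of the pipeline discussed above (the kernel, threshold, and data below are assumptions, not the paper's setup): quantize and sparsify a Gram matrix entrywise, then cluster using its leading eigenvectors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Two well-separated clusters and a normalized Gram (similarity) matrix
X, y = make_blobs(n_samples=300, centers=2, cluster_std=1.0, random_state=0)
X = X / np.linalg.norm(X, axis=1, keepdims=True)
K = X @ X.T

# Aggressive entrywise nonlinearity: quantize to {-1, 0, +1} and sparsify by thresholding
K_q = np.sign(K) * (np.abs(K) > 0.5)

# Spectral clustering on the quantized/sparsified matrix
eigval, eigvec = np.linalg.eigh(K_q)
embedding = eigvec[:, -2:]                    # top two eigenvectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)

agreement = max(np.mean(labels == y), np.mean(labels != y))
print(f"clustering agreement: {agreement:.2f}")
```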

    Precise expressions for random projections: Low-rank approximation and randomized Newton

    It is often desirable to reduce the dimensionality of a large dataset by projecting it onto a low-dimensional subspace. Matrix sketching has emerged as a powerful technique for performing such dimensionality reduction very efficiently. Even though there is an extensive literature on the worst-case performance of sketching, existing guarantees are typically very different from what is observed in practice. We exploit recent developments in the spectral analysis of random matrices to develop novel techniques that provide provably accurate expressions for the expected value of random projection matrices obtained via sketching. These expressions can be used to characterize the performance of dimensionality reduction in a variety of common machine learning tasks, ranging from low-rank approximation to iterative stochastic optimization. Our results apply to several popular sketching methods, including Gaussian and Rademacher sketches, and they enable precise analysis of these methods in terms of spectral properties of the data. Empirical results show that the expressions we derive reflect the practical performance of these sketching methods, down to lower-order effects and even constant factors. Comment: Minor corrections and clarifications of the previous version, including additional discussion in Appendix A.
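
    A minimal sketch of the kind of sketched low-rank approximation that such expressions characterize (sizes and the spectrum below are arbitrary; this only constructs the object and compares errors, it does not reproduce the expected-error expressions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 200, 100, 10

# Data matrix with an exponentially decaying spectrum
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * 0.8 ** np.arange(n)) @ V.T

S = rng.standard_normal((n, k)) / np.sqrt(k)   # Gaussian sketching matrix
Q, _ = np.linalg.qr(A @ S)                     # orthonormal basis for the sketched range of A
A_k = Q @ (Q.T @ A)                            # rank-k approximation obtained from the sketch

sv = np.linalg.svd(A, compute_uv=False)
print("sketched error :", np.linalg.norm(A - A_k))
print("optimal rank-k :", np.sqrt(np.sum(sv[k:] ** 2)))
```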

    Direct extraction of the Eliashberg function for electron-phonon coupling: A case study of Be(1010)

    We propose a systematic procedure to directly extract the Eliashberg function for electron-phonon coupling from high-resolution angle-resolved photoemission data. The procedure is successfully applied to the Be(1010) surface, providing new insights into electron-phonon coupling at this surface. The method is shown to be robust against imperfections in experimental data and suitable for wider applications. Comment: 4 pages, 4 figures. More details concerning the procedure are included.
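
    The zero-temperature relation behind such an extraction can be sketched as follows (the model Eliashberg function below is invented purely for illustration, and impurity and temperature effects are ignored): the phonon contribution to the imaginary part of the self-energy is the cumulative integral of alpha^2 F, so recovering alpha^2 F from a measured self-energy amounts to a regularized differentiation, which is the kind of inversion the proposed procedure systematizes against noisy data.

```python
import numpy as np

omega = np.linspace(0, 0.1, 500)                       # energy grid (eV)
a2F = 0.5 * np.exp(-((omega - 0.06) / 0.01) ** 2)      # toy Eliashberg function

# ImSigma(omega) at T = 0: pi times the cumulative integral of alpha^2 F
im_sigma = np.pi * np.cumsum(a2F) * (omega[1] - omega[0])

# "Extraction": differentiate the (here noise-free) self-energy
a2F_recovered = np.gradient(im_sigma, omega) / np.pi
print(np.allclose(a2F_recovered, a2F, atol=0.05))       # approximate recovery
```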

    A study on the impact of pre-trained model on Just-In-Time defect prediction

    Previous researchers conducting Just-In-Time (JIT) defect prediction tasks have primarily focused on the performance of individual pre-trained models, without exploring the relationship between different pre-trained models as backbones. In this study, we build six models: RoBERTaJIT, CodeBERTJIT, BARTJIT, PLBARTJIT, GPT2JIT, and CodeGPTJIT, each with a distinct pre-trained model as its backbone. We systematically explore the differences and connections between these models. Specifically, we investigate the performance of the models when using Commit code and Commit message as inputs, as well as the relationship between training efficiency and model distribution among these six models. Additionally, we conduct an ablation experiment to explore the sensitivity of each model to its inputs. Furthermore, we investigate how the models perform in zero-shot and few-shot scenarios. Our findings indicate that each model built on a different backbone shows improvements, and that when the backbones' pre-training models are similar, the training resources required are much closer. We also observe that Commit code plays a significant role in defect detection, and that different pre-trained models demonstrate better defect detection ability with a balanced dataset under few-shot scenarios. These results provide new insights for optimizing JIT defect prediction tasks using pre-trained models and highlight the factors that require more attention when constructing such models. Additionally, CodeGPTJIT and GPT2JIT achieved better performance than DeepJIT and CC2Vec on the two datasets, respectively, with 2,000 training samples. These findings emphasize the effectiveness of transformer-based pre-trained models in JIT defect prediction tasks, especially in scenarios with limited training data.
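
    A generic, hedged sketch of this model family (a plain sequence-classification head on a pre-trained backbone; the checkpoint name, input formatting, and hyperparameters below are illustrative assumptions, not the authors' exact setup):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Any pre-trained backbone can be plugged in here (RoBERTa, BART, GPT-2, CodeGPT, ...)
backbone = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModelForSequenceClassification.from_pretrained(backbone, num_labels=2)

commit_message = "fix: guard against null pointer in parser"   # invented example commit
commit_code = "if (node == NULL) { return -1; }"

# Commit message and commit code fed as a single paired input
inputs = tokenizer(commit_message, commit_code,
                   truncation=True, padding=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))   # probabilities for [clean, defect-inducing]
```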