
    Quantified movement test of core muscles for Athletes

    The purpose of this study was to compare core muscle ability between normal subjects and collegiate athletes using an assessment consisting of seven movement tests. Nineteen participants were voluntarily recruited and divided into normal subjects (N=9, age 20.2±0.7 y, weight 63.7±11.7 kg, height 170.9±6.7 cm) and collegiate athletes (N=10, age 19.9±1.0 y, weight 72.4±7.8 kg, height 172.5±4.5 cm). The results show significant differences between the two groups in the path lengths of the plank, bird dog with right-hand raise, bird dog with left-hand raise, right side plank, right bridge, and left bridge, and in the path areas of the right and left bridge (Table 1). Athletes exhibited shorter path lengths and smaller path areas in all of these measures.
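
    The two outcome measures, path length and path area, are standard trajectory statistics. A minimal sketch of how they are typically computed is below; the sampling rate, the 2D sway-trace input, and the convex-hull definition of area are assumptions for illustration, not details taken from the abstract.

    import numpy as np
    from scipy.spatial import ConvexHull

    def path_length(xy: np.ndarray) -> float:
        """Total distance travelled: sum of segment lengths between samples."""
        return float(np.linalg.norm(np.diff(xy, axis=0), axis=1).sum())

    def path_area(xy: np.ndarray) -> float:
        """Area of the convex hull of the trajectory (one common sway-area proxy)."""
        hull = ConvexHull(xy)
        return float(hull.volume)  # in 2D, ConvexHull.volume is the enclosed area

    # Example: a noisy circular sway trace sampled at 100 Hz for 10 s (invented data).
    t = np.linspace(0, 10, 1000)
    xy = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * np.random.randn(1000, 2)
    print(f"path length = {path_length(xy):.2f}, area = {path_area(xy):.2f}")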

    LightViT: Towards Light-Weight Convolution-Free Vision Transformers

    Vision transformers (ViTs) are usually considered less light-weight than convolutional neural networks (CNNs) due to their lack of inductive bias. Recent works thus resort to convolutions as a plug-and-play module and embed them in various ViT counterparts. In this paper, we argue that convolutional kernels perform information aggregation to connect all tokens, but that this explicit aggregation would actually be unnecessary in light-weight ViTs if it could function in a more homogeneous way. Inspired by this, we present LightViT, a new family of light-weight ViTs that achieves a better accuracy-efficiency balance on pure transformer blocks without convolution. Concretely, we introduce a global yet efficient aggregation scheme into both the self-attention and the feed-forward network (FFN) of ViTs, where additional learnable tokens are introduced to capture global dependencies, and bi-dimensional channel and spatial attentions are imposed over token embeddings. Experiments show that our model achieves significant improvements on image classification, object detection, and semantic segmentation tasks. For example, our LightViT-T achieves 78.7% accuracy on ImageNet with only 0.7G FLOPs, outperforming PVTv2-B0 by 8.2% while running 11% faster on GPU. Code is available at https://github.com/hunto/LightViT. Comment: 13 pages, 7 figures, 9 tables.
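
    To make the "learnable global tokens" idea concrete, here is a hedged sketch: a few extra tokens are prepended to the patch sequence so that plain self-attention aggregates global context without convolutions. This is not the authors' exact LightViT block (see their repository for that); the dimensions and the single nn.MultiheadAttention layer are assumptions.

    import torch
    import torch.nn as nn

    class GlobalTokenAttention(nn.Module):
        def __init__(self, dim: int = 192, num_heads: int = 3, num_global: int = 8):
            super().__init__()
            self.global_tokens = nn.Parameter(torch.zeros(1, num_global, dim))
            nn.init.trunc_normal_(self.global_tokens, std=0.02)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.num_global = num_global

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, num_patches, dim)
            g = self.global_tokens.expand(x.shape[0], -1, -1)
            z = torch.cat([g, x], dim=1)           # globals + patches attend jointly
            z, _ = self.attn(z, z, z)
            return z[:, self.num_global:]          # return the updated patch tokens

    x = torch.randn(2, 196, 192)                   # e.g. 14x14 patches of width 192
    print(GlobalTokenAttention()(x).shape)         # torch.Size([2, 196, 192])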

    A sequential linear programming (SLP) approach for uncertainty analysis-based data-driven computational mechanics

    In this article, an efficient sequential linear programming (SLP) algorithm for uncertainty analysis-based data-driven computational mechanics (UA-DDCM) is presented. By assuming that the uncertain constitutive relationship embedded in the prescribed data set can be characterized through a convex combination of the local data points, the upper and lower bounds of the structural responses pertaining to the given data set, which are more valuable for making decisions in engineering design, can be found very efficiently by solving a sequence of linear programming problems. Numerical examples demonstrate the effectiveness of the proposed approach on sparse data sets and its robustness with respect to noise and outliers in the data.
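
    A toy illustration of the convex-combination idea: bound a response (stress) at a queried strain over all convex combinations of measured (strain, stress) points, one linear program per bound. This is a single-point toy, not the paper's full structural SLP loop, and the data values below are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    data = np.array([                       # (strain, stress) samples
        [0.000, 0.0], [0.001, 1.9], [0.002, 4.2],
        [0.003, 5.8], [0.004, 8.1],
    ])
    eps_query = 0.0025
    A_eq = np.vstack([data[:, 0], np.ones(len(data))])  # match strain; weights sum to 1
    b_eq = np.array([eps_query, 1.0])
    c = data[:, 1]                                      # objective: combined stress

    lo = linprog(c,  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))  # lower bound
    hi = linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))  # upper bound (maximize)
    print(f"stress at strain {eps_query}: [{lo.fun:.2f}, {-hi.fun:.2f}]")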

    3-Methyl-1-(3-nitrophenyl)-5-phenyl-4,5-dihydro-1H-pyrazole

    In the title compound, C16H15N3O2, the nearly planar pyrazoline ring [maximum deviation 0.156 (2) Å] is almost coplanar with the 3-nitrophenyl group and approximately perpendicular to the phenyl ring, making dihedral angles of 3.80 (8) and 80.58 (10)°, respectively. Weak intermolecular C—H⋯O hydrogen bonding is present in the crystal structure.

    5-(2-Furyl)-3-methyl-1-(3-nitrophenyl)-4,5-dihydro-1H-pyrazole

    In the title compound, C14H13N3O3, the pyrazoline ring assumes an envelope conformation with the furanyl-bearing C atom at the flap position. The dihedral angle between the furan and nitrobenzene rings is 84.40 (9)°. Weak intermolecular C—H⋯O hydrogen bonding is present in the crystal structure.

    LocalMamba: Visual State Space Model with Windowed Selective Scan

    Recent advancements in state space models, notably Mamba, have demonstrated significant progress in modeling long sequences for tasks like language understanding. Yet, their application in vision tasks has not markedly surpassed the performance of traditional Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). This paper posits that the key to enhancing Vision Mamba (ViM) lies in optimizing scan directions for sequence modeling. Traditional ViM approaches, which flatten spatial tokens, overlook the preservation of local 2D dependencies, thereby elongating the distance between adjacent tokens. We introduce a novel local scanning strategy that divides images into distinct windows, effectively capturing local dependencies while maintaining a global perspective. Additionally, acknowledging the varying preferences for scan patterns across different network layers, we propose a dynamic method to independently search for the optimal scan choices for each layer, substantially improving performance. Extensive experiments across both plain and hierarchical models underscore our approach's superiority in effectively capturing image representations. For example, our model significantly outperforms Vim-Ti by 3.1% on ImageNet with the same 1.5G FLOPs. Code is available at: https://github.com/hunto/LocalMamba.
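
    A minimal sketch of the windowed scan order described above: instead of flattening the whole H×W grid row by row, tokens are visited window by window so spatially adjacent tokens stay adjacent in the 1D sequence. The 2×2 window size and the row-major orders inside and across windows are assumptions; the paper additionally searches scan directions per layer.

    import torch

    def windowed_scan_order(H: int, W: int, w: int) -> torch.Tensor:
        """Permutation that reorders row-major flattened tokens into a
        window-by-window scan (H and W must be divisible by the window size w)."""
        idx = torch.arange(H * W).view(H, W)
        # (H/w, W/w, w, w): windows first, then positions within each window
        idx = idx.view(H // w, w, W // w, w).permute(0, 2, 1, 3)
        return idx.reshape(-1)

    order = windowed_scan_order(4, 4, w=2)
    print(order.tolist())
    # [0, 1, 4, 5, 2, 3, 6, 7, 8, 9, 12, 13, 10, 11, 14, 15]
    # tokens = tokens[:, order]  would reorder a (B, H*W, C) token sequence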

    Structural basis of water-mediated cis Watson–Crick/Hoogsteen base-pair formation in non-CpG methylation

    Non-CpG methylation is associated with several cellular processes, especially neuronal development and cancer, while its effect on DNA structure remains unclear. We have determined the crystal structures of DNA duplexes containing -CGCCG- regions, as CCG repeat motifs, that comprise a non-CpG site with or without cytosine methylation. Crystal structure analyses revealed that the mC:G base pair can simultaneously adopt two alternative conformations arising from non-CpG methylation: a unique water-mediated cis Watson–Crick/Hoogsteen geometry, (w)cWH, and the Watson–Crick (WC) geometry, with partial occupancies of 0.1 and 0.9, respectively. NMR studies showed that in solution, the alternative conformation of the methylated mC:G base pair at the non-CpG step exhibits cWH characteristics with a syn-guanosine conformation. DNA duplexes complexed with the DNA-binding drug echinomycin show increased occupancy of the (w)cWH geometry in the methylated base pair (from 0.1 to 0.3). Our structural results demonstrate that cytosine methylation at a non-CpG step leads to an anti→syn transition of its complementary guanosine residue toward the (w)cWH geometry as a partial population alongside WC, in both drug-bound and naked mC:G base pairs. This particular geometry is specific to non-CpG methylated dinucleotide sites in B-form DNA. Overall, the current study provides new insights into DNA conformation during epigenetic regulation.

    SimMatchV2: Semi-Supervised Learning with Graph Consistency

    Semi-supervised image classification is one of the most fundamental problems in computer vision, as it significantly reduces the need for human labeling. In this paper, we introduce a new semi-supervised learning algorithm, SimMatchV2, which formulates various consistency regularizations between labeled and unlabeled data from a graph perspective. In SimMatchV2, we regard each augmented view of a sample as a node, which consists of a label and its corresponding representation. Nodes are connected by edges, which are weighted by the similarity of the node representations. Inspired by message passing and node classification in graph theory, we propose four types of consistency: 1) node-node consistency, 2) node-edge consistency, 3) edge-edge consistency, and 4) edge-node consistency. We also find that a simple feature normalization can reduce the gap in feature norms between different augmented views, significantly improving the performance of SimMatchV2. SimMatchV2 has been validated on multiple semi-supervised learning benchmarks. Notably, with ResNet-50 as the backbone and 300 epochs of training, SimMatchV2 achieves 71.9% and 76.2% Top-1 accuracy with 1% and 10% labeled examples on ImageNet, which significantly outperforms previous methods and achieves state-of-the-art performance. Code and pre-trained models are available at https://github.com/mingkai-zheng/SimMatchV2.
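
    A hedged sketch of two of the four graph consistencies named above (node-node and edge-edge), plus the feature normalization the authors highlight. The loss forms and the temperature are illustrative assumptions, not the paper's exact formulation (see the linked repository for that).

    import torch
    import torch.nn.functional as F

    def consistency_losses(f_weak, f_strong, tau: float = 0.1):
        # Feature normalization: puts both augmented views on the unit sphere,
        # removing the feature-norm gap mentioned in the abstract.
        zw = F.normalize(f_weak, dim=1)
        zs = F.normalize(f_strong, dim=1)

        # Node-node consistency: matched nodes (same sample, two views)
        # should have similar representations.
        node_node = (1 - (zw * zs).sum(dim=1)).mean()

        # Edge-edge consistency: the similarity graphs built over each view
        # should agree; edges are pairwise similarities between nodes.
        ew = F.softmax(zw @ zw.t() / tau, dim=1)
        es = F.softmax(zs @ zs.t() / tau, dim=1)
        edge_edge = F.kl_div(es.log(), ew, reduction="batchmean")
        return node_node, edge_edge

    f_weak, f_strong = torch.randn(32, 128), torch.randn(32, 128)
    print([t.item() for t in consistency_losses(f_weak, f_strong)])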