Irregular Convolutional Neural Networks
Convolutional kernels are basic and vital components of deep Convolutional
Neural Networks (CNN). In this paper, we equip convolutional kernels with shape
attributes to generate the deep Irregular Convolutional Neural Networks (ICNN).
Compared to traditional CNNs, which apply regular, fixed-shape convolutional
kernels, our approach trains irregular kernel shapes to better fit the
geometric variations of input features. In other words, shapes are learnable
parameters in addition to weights. The kernel shapes and weights are learned
simultaneously during end-to-end training with the standard back-propagation
algorithm. Experiments for semantic segmentation are implemented to validate
the effectiveness of our proposed ICNN. Comment: 7 pages, 5 figures, 3 tables
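As a hedged sketch of the idea (not the paper's implementation): one way to make kernel shape a learnable parameter is to attach a fractional 2-D offset to every kernel element and sample the input bilinearly at the offset positions, so that gradients can flow to the offsets as well as the weights during back-propagation. The function names and the single-location formulation below are our own assumptions.

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinearly sample a 2-D array at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def irregular_conv_at(img, cy, cx, offsets, weights):
    """Response of one irregular kernel centred at (cy, cx).

    offsets: (K, 2) fractional displacements -- the learnable 'shape';
    weights: (K,)   kernel weights.
    In the full model both would be updated by back-propagation;
    bilinear sampling keeps the response differentiable in the offsets.
    """
    return sum(w * bilinear(img, cy + dy, cx + dx)
               for (dy, dx), w in zip(offsets, weights))
```

With the offsets fixed on the integer 3×3 grid this reduces to an ordinary square convolution; letting them drift to fractional positions is what gives the kernel its irregular shape.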
End-to-End Quantum-like Language Models with Application to Question Answering
Language Modeling (LM) is a fundamental research topic in a range of areas. Recently, inspired by quantum theory, a novel Quantum Language Model (QLM) has been proposed for Information Retrieval (IR). In this paper, we aim to broaden the theoretical and practical basis of QLM. We develop a Neural Network based Quantum-like Language Model (NNQLM) and apply it to Question Answering. Specifically, based on word embeddings, we design a new density matrix, which represents a sentence (e.g., a question or an answer) and encodes a mixture of semantic subspaces. Such a density matrix, together with a joint representation of the question and the answer, can be integrated into neural network architectures (e.g., 2-dimensional convolutional neural networks). Experiments on the TREC-QA and WIKIQA datasets have verified the effectiveness of our proposed models
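A minimal sketch of the density-matrix construction, assuming the standard quantum-style recipe (a mixture of outer products of unit-normalised word vectors); the exact weighting used by NNQLM may differ:

```python
import numpy as np

def density_matrix(embeddings, probs=None):
    """Sentence density matrix: a probability-weighted mixture of outer
    products of unit-normalised word embedding vectors. The result is
    symmetric, positive semi-definite, and has unit trace -- the defining
    properties of a quantum mixed state."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit word vectors
    if probs is None:
        probs = np.full(len(E), 1.0 / len(E))          # uniform mixture
    return sum(p * np.outer(v, v) for p, v in zip(probs, E))
```

Because each |v><v| has trace 1 and the mixture weights sum to 1, the trace of the result is exactly 1 regardless of the embedding dimension, which is what lets the matrix be treated as a probability object inside the network.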
Learning Efficient Convolutional Networks through Irregular Convolutional Kernels
As deep neural networks are increasingly used in applications suited for
low-power devices, a fundamental dilemma becomes apparent: the trend is to grow
models to absorb ever-increasing data, which makes them memory-intensive;
however, low-power devices are designed with very limited memory and cannot
store large models. Parameter pruning is therefore critical for deploying deep
models on low-power devices. Existing efforts mainly focus on designing highly
efficient structures or pruning redundant connections for networks. They are
usually sensitive to the task or rely on dedicated and expensive hashing
storage strategies. In this work, we introduce a novel approach to obtaining a
lightweight model by reconstructing the structure of convolutional kernels and
storing them efficiently. Our approach transforms a traditional square
convolutional kernel into line segments, and automatically learns a proper
strategy for arranging these line segments to model diverse features. The
experimental results indicate that our approach can massively reduce the
number of parameters (pruned 69% on DenseNet-40) and calculations (pruned 59%
on DenseNet-40) while maintaining acceptable performance (less than 2%
accuracy loss)
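Back-of-envelope parameter accounting for the square-kernel-to-line-segment idea (our own illustration; the learned strategy for placing segments, and the reported 69%/59% reductions, involve more than this simple count):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard convolution layer with c_out filters of
    shape c_in x k x k (biases ignored)."""
    return c_out * c_in * k * k

def segment_params(c_in, c_out, k, n_segments=2):
    """Weights when each k x k square kernel is replaced by n_segments
    line segments of length k (e.g. one horizontal and one vertical)."""
    return c_out * c_in * n_segments * k

# Hypothetical 64-in / 64-out 3x3 layer:
square = conv_params(64, 64, 3)       # 64 * 64 * 9 = 36864 weights
segments = segment_params(64, 64, 3)  # 64 * 64 * 6 = 24576 weights
reduction = 1 - segments / square     # one third fewer weights per layer
```

For a k×k kernel the per-kernel count drops from k² to 2k, so the saving grows with kernel size; the storage side of the paper additionally exploits the fact that a line segment is indexable by a start point and a direction.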
The Influence of Training Habits on The Lower Kinematics of Junior School Freshmen (Girls)
This study aims to determine the influence of training habits on the lower limb (hip, knee, ankle) kinematics of junior school girls, to compare these parameters with those of adults in order to characterize lower limb kinematics in the juvenile stage, and to provide a basis for physical training at this stage. Thirty junior school girls aged 13 to 14 years participated in this study, of whom 15 had exercise habits and 15 did not. A Vicon kinematic analysis system (Oxford Metrics Ltd., Oxford, UK) with a capture frequency of 200 Hz was used to collect three-dimensional kinematics of the hip, knee and ankle joints. The study found that the exercise group's step length and pace were 7.1% and 6.4% higher, respectively, than the non-exercise group's, but their step frequency was 7.7% lower. In terms of joint angles, compared with participants without exercise habits, participants with exercise habits showed a decreased ankle dorsiflexion angle, an increased ankle plantarflexion angle with a pronounced peak, and an increased eversion angle with a decreased inversion angle, all closer to the kinematic parameters of adult women. During the push-off period, there was a marked increase in the non-exercise group's ankle eversion angle, which may be one reason for the "out-toeing" gait phenomenon in juveniles. The kinematic parameters of the participants with exercise habits were closer to those of adults, indicating that exercise habits have positive effects on joint stability: joint forces can be better controlled, walking stability improves, and injury is avoided
(2,4-Difluorophenyl)[1-(1H-1,2,4-triazol-1-yl)cyclopropyl]methanone
The asymmetric unit of the title compound, C12H9F2N3O, contains two independent molecules (A and B) in which the benzene and cyclopropane rings form dihedral angles of 33.0 (1) and 29.7 (1)°, respectively. In the crystal, weak intermolecular C—H⋯O hydrogen bonds link alternating A and B molecules into chains along [010]
Heterogeneous Vancomycin-Intermediate Staphylococcus aureus Uses the VraSR Regulatory System to Modulate Autophagy for Increased Intracellular Survival in Macrophage-Like Cell Line RAW264.7
The VraSR two-component system is a vancomycin resistance-associated sensor/regulator that is upregulated in vancomycin-intermediate Staphylococcus aureus (VISA) and heterogeneous VISA (hVISA) strains. VISA/hVISA show reduced susceptibility to vancomycin and an increased ability to evade host immune responses, resulting in enhanced clinical persistence. However, the underlying mechanism remains unclear. Recent studies have reported that S. aureus strains have developed strategies to survive within the host cell by exploiting autophagy processes. In this study, we confirmed that clinical isolates with high vraR expression showed increased survival in murine macrophage-like RAW264.7 cells. We constructed the isogenic vraSR deletion strain Mu3ΔvraSR and the vraSR-complemented strain Mu3ΔvraSR-C to ascertain whether S. aureus uses the VraSR system to modulate autophagy and thereby increase intracellular survival in RAW264.7 cells. Overall, the survival of Mu3ΔvraSR in RAW264.7 cells was reduced at all infection time points compared with that of the Mu3 wild-type strain. Mu3ΔvraSR-infected RAW264.7 cells also showed decreased transcription of the autophagy-related genes Becn1 and Atg5, decreased LC3-II turnover and increased p62 degradation, and fewer visible punctate LC3 structures. In addition, we found that inhibition of autophagic flux significantly increased the survival of Mu3ΔvraSR in RAW264.7 cells. Together, these results demonstrate that S. aureus uses the VraSR system to modulate host-cell autophagy processes to increase its own survival within macrophages. Our study provides novel insights into the impact of VraSR on bacterial infection and will help to further elucidate the relationship between bacteria and the host immune response. Moreover, understanding the autophagic pathway in vraSR-associated immunity has potentially important implications for preventing or treating VISA/hVISA infection
Reducing the gap between streaming and non-streaming Transducer-based ASR by adaptive two-stage knowledge distillation
Transducer is one of the mainstream frameworks for streaming speech
recognition. There is a performance gap between the streaming and non-streaming
transducer models due to limited context. To reduce this gap, an effective way
is to ensure that their hidden and output distributions are consistent, which
can be achieved by hierarchical knowledge distillation. However, it is
difficult to ensure the distribution consistency simultaneously because the
learning of the output distribution depends on the hidden one. In this paper,
we propose an adaptive two-stage knowledge distillation method consisting of
hidden layer learning and output layer learning. In the former stage, we learn
hidden representation with full context by applying mean square error loss
function. In the latter stage, we design a power transformation based adaptive
smoothness method to learn a stable output distribution. Our method achieves a
19% relative reduction in word error rate and a faster response for the first
token compared with the original streaming model on the LibriSpeech corpus
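The two stages can be sketched as loss functions (a hedged illustration, assuming MSE for the hidden stage and a KL term over a power-transformed teacher distribution for the output stage; the adaptive schedule for the exponent, here called gamma, is a detail of the paper not reproduced):

```python
import numpy as np

def hidden_mse(student_h, teacher_h):
    """Stage 1: pull the streaming model's hidden representation toward
    the full-context (non-streaming) teacher's with mean squared error."""
    return float(np.mean((np.asarray(student_h) - np.asarray(teacher_h)) ** 2))

def power_smooth(probs, gamma):
    """Power transformation p_i**gamma / sum_j p_j**gamma.
    gamma < 1 flattens (smooths) the teacher distribution;
    gamma = 1 leaves it unchanged."""
    p = np.asarray(probs, dtype=float) ** gamma
    return p / p.sum()

def kl_div(p, q, eps=1e-12):
    """Stage 2: KL divergence between the smoothed teacher distribution
    p and the student's output distribution q."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))
```

Smoothing the teacher before the KL term is what keeps the student's output distribution stable when the teacher is over-confident; at gamma = 1 the stage reduces to ordinary distillation.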