The Cost-effective Application of Solar Energy
Solar power is an environmentally friendly, clean form of energy. A solar power system consists of solar panels, a converter, a battery, and an inverter. The main material in solar panels is silicon (Si), one of the most abundant elements on our planet. The solar power system in this research includes three 15-watt solar panels, a 12 VDC converter, a battery, and a 1000-watt inverter. Energy is collected through the solar panels and stored in the battery. The inverter converts 12 VDC into 120 VAC, which can power a variety of experiments and applications. The purpose of this study is to find a system that applies solar power to our daily use cost-effectively.
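As a rough illustration of the cost-effectiveness question, the sketch below estimates the daily usable energy of this system from the component ratings quoted above. It assumes Python; the peak-sun-hours figure and the efficiency factors are illustrative assumptions, not measurements from the study.

```python
# Back-of-the-envelope sizing for the 3 x 15 W panel / 12 V battery / 1000 W
# inverter system described above. Peak-sun-hours and efficiencies are
# illustrative assumptions, not values from the study.

PANEL_WATTS = 15            # rated output per panel
NUM_PANELS = 3
PEAK_SUN_HOURS = 5.0        # assumed average hours of full-rated sun per day
CHARGE_EFFICIENCY = 0.85    # assumed battery charge/discharge efficiency
INVERTER_EFFICIENCY = 0.90  # assumed 12 VDC -> 120 VAC conversion efficiency

def daily_usable_wh():
    """Estimate usable 120 VAC watt-hours available per day."""
    harvested = PANEL_WATTS * NUM_PANELS * PEAK_SUN_HOURS  # Wh into the battery
    return harvested * CHARGE_EFFICIENCY * INVERTER_EFFICIENCY

def runtime_hours(load_watts):
    """How long a given AC load could run on one day's harvest."""
    return daily_usable_wh() / load_watts

if __name__ == "__main__":
    print(f"Usable energy per day: {daily_usable_wh():.0f} Wh")
    print(f"Runtime for a 60 W load: {runtime_hours(60):.1f} h")
```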
Intelligent optical performance monitor using multi-task learning based artificial neural network
An intelligent optical performance monitor using a multi-task learning based
artificial neural network (MTL-ANN) is designed for simultaneous OSNR
monitoring and modulation format identification (MFI). Signals' amplitude
histograms (AHs) after the constant modulus algorithm are selected as the
input features for the MTL-ANN. The experimental results of 20-Gbaud NRZ-OOK, PAM4 and
PAM8 signals demonstrate that MTL-ANN could achieve OSNR monitoring and MFI
simultaneously with higher accuracy and stability compared with single-task
learning based ANNs (STL-ANNs). The results show an MFI accuracy of 100% and
OSNR monitoring root-mean-square error of 0.63 dB for the three modulation
formats under consideration. Furthermore, the number of neurons needed for the
single MTL-ANN is almost half that of the STL-ANNs, which enables reduced-complexity
optical performance monitoring devices for real-time performance monitoring.
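The shared-trunk, two-head layout that lets one network perform OSNR regression and MFI classification at the same time can be sketched as follows. This is a minimal PyTorch illustration under assumed layer sizes, histogram bin count, and loss weighting; it is not the paper's exact configuration, and the AH feature extraction is omitted.

```python
import torch
import torch.nn as nn

class MTLANN(nn.Module):
    """Shared trunk with two task heads: OSNR regression and MFI classification.

    Input: an amplitude-histogram (AH) feature vector. Bin count and hidden
    sizes are illustrative assumptions, not the paper's configuration.
    """

    def __init__(self, num_bins=100, hidden=64, num_formats=3):
        super().__init__()
        self.trunk = nn.Sequential(               # shared representation
            nn.Linear(num_bins, hidden), nn.ReLU(),
        )
        self.osnr_head = nn.Linear(hidden, 1)            # OSNR in dB (regression)
        self.mfi_head = nn.Linear(hidden, num_formats)   # format logits

    def forward(self, ah):
        h = self.trunk(ah)
        return self.osnr_head(h).squeeze(-1), self.mfi_head(h)

# Joint training step: sum of the two task losses (equal weighting assumed).
model = MTLANN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

ah = torch.rand(32, 100)               # dummy batch of amplitude histograms
osnr_true = torch.rand(32) * 20 + 10   # dummy OSNR labels in dB
fmt_true = torch.randint(0, 3, (32,))  # dummy format labels (OOK/PAM4/PAM8)

osnr_pred, fmt_logits = model(ah)
loss = mse(osnr_pred, osnr_true) + ce(fmt_logits, fmt_true)
opt.zero_grad(); loss.backward(); opt.step()
```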
Reduced-Order Projective Synchronization of Hyper-Chaotic Lü System and Chen System
By selecting a non-zero constant as the scaling factor, we design a reduced-order projective synchronization scheme for synchronizing the fourth-order hyper-chaotic Lü system and the third-order chaotic Chen system. To this end, a nonlinear synchronization controller is constructed. Finally, numerical simulations are given to illustrate the feasibility and effectiveness of the proposed synchronization scheme.
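A minimal numerical sketch of reduced-order projective synchronization is given below, assuming Python/SciPy. The drive is a four-dimensional hyper-chaotic Lü system and the response a three-dimensional Chen system, with the goal y → σ·x for the first three drive states. The active controller and all parameter values here are generic illustrative choices, not the nonlinear controller constructed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative reduced-order projective synchronization. Drive: 4D
# hyper-chaotic Lu system; response: 3D Chen system; target: y = sigma*x[:3].
# The active controller below is a generic textbook choice, NOT the paper's
# controller; the parameter values are standard chaotic regimes (assumed).

A, B, C, R = 36.0, 3.0, 20.0, 1.3  # hyper-chaotic Lu parameters (assumed)
a, b, c = 35.0, 3.0, 28.0          # Chen parameters (assumed)
sigma, k = 2.0, 5.0                # scaling factor and feedback gain

def lu4(x):
    x1, x2, x3, x4 = x
    return np.array([A*(x2 - x1) + x4, -x1*x3 + C*x2, x1*x2 - B*x3, x1*x3 + R*x4])

def chen(y):
    y1, y2, y3 = y
    return np.array([a*(y2 - y1), (c - a)*y1 - y1*y3 + c*y2, y1*y2 - b*y3])

def rhs(t, s):
    x, y = s[:4], s[4:]
    e = y - sigma * x[:3]                       # projective sync error
    u = sigma*lu4(x)[:3] - chen(y) - k*e        # active control => e' = -k*e
    return np.concatenate([lu4(x), chen(y) + u])

sol = solve_ivp(rhs, (0, 10), [1, 2, 3, 4, -3, 5, 7])
err = sol.y[4:] - sigma * sol.y[:3]
print("final |e| =", np.linalg.norm(err[:, -1]))  # decays toward 0
```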
The Magnetic Properties of 1111-type Diluted Magnetic Semiconductor (La1-xBax)(Zn1-xMnx)AsO in the Low Doping Regime
We investigated the magnetic properties of (La1-xBax)(Zn1-xMnx)AsO with x varying from 0.005 to 0.05 at an external magnetic field of 1000 Oe. For doping levels of x ≤ 0.01, the system remains paramagnetic down to the lowest measurable temperature of 2 K. Only when the doping level increases to x = 0.02 does the ferromagnetic ordering appear. Our analysis indicates that antiferromagnetic exchange interactions dominate for x ≤ 0.01, as shown by the negative Weiss temperature fitted from the magnetization data. The Weiss temperature becomes positive, i.e., ferromagnetic coupling starts to dominate, for x ≥ 0.02. The Mn-Mn spin interaction parameter is estimated to be on the order of 10 K for both x = 0.01 (antiferromagnetic ordered state) and x = 0.02 (ferromagnetic ordered state). Our results unequivocally demonstrate the competition between ferromagnetic and antiferromagnetic exchange interactions in carrier-mediated ferromagnetic systems.
Comment: 9 pages, 3 figures
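The sign convention used above (negative Weiss temperature for dominant antiferromagnetic coupling, positive for ferromagnetic) comes from a Curie-Weiss fit of the susceptibility, χ(T) = C/(T − θ). A minimal fitting sketch on synthetic data follows, assuming Python/SciPy; the numbers are illustrative, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Curie-Weiss law chi(T) = C / (T - theta): a negative fitted theta signals
# dominant antiferromagnetic coupling, a positive theta ferromagnetic
# coupling. The data below are synthetic, for illustration only.

def curie_weiss(T, C, theta):
    return C / (T - theta)

T = np.linspace(50, 300, 60)               # temperatures in K
chi = curie_weiss(T, 0.12, -15.0)          # fake AFM-like data, theta < 0
chi += np.random.default_rng(0).normal(0, 1e-5, T.size)  # measurement noise

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi, p0=(0.1, 0.0))
print(f"Weiss temperature = {theta_fit:.1f} K "
      f"({'FM' if theta_fit > 0 else 'AFM'} coupling dominates)")
```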
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively
Large-scale pre-trained language models have achieved impressive results on a
wide range of downstream tasks recently. However, fine-tuning an extremely
large-scale pre-trained language model on limited target datasets is often
plagued by overfitting and representation degradation. In this paper, we
propose a Dynamic Parameter Selection (DPS) algorithm for the large-scale
pre-trained models during fine-tuning, which adaptively selects a more
promising subnetwork to perform staging updates based on gradients of
back-propagation. Experiments on the GLUE benchmark show that DPS outperforms
previous fine-tuning methods in terms of overall performance and stability, and
consistently achieves better results across various pre-trained language models.
In addition, DPS brings large improvements in out-of-domain transfer
experiments and low-resource scenarios, which shows that it can maintain
stable general contextual features and reduce representation collapse.
We release our code at https://github.com/ZhangHaojie077/DPS
Comment: NeurIPS 202
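The core idea, updating only a gradient-selected subnetwork at each fine-tuning step, can be sketched as below in PyTorch. The top-k-by-gradient-norm selection rule here is a simplified stand-in, not the actual DPS algorithm; see the repository above for the real implementation.

```python
import torch

def masked_update_step(model, loss, optimizer, keep_ratio=0.3):
    """Update only the fraction of parameter tensors with the largest
    gradient norms this step; skip the rest. A toy stand-in for
    gradient-based subnetwork selection, not the actual DPS algorithm."""
    optimizer.zero_grad()
    loss.backward()
    params = [p for p in model.parameters() if p.grad is not None]
    scores = torch.stack([p.grad.norm() for p in params])
    k = max(1, int(keep_ratio * len(params)))
    keep = set(torch.topk(scores, k).indices.tolist())
    for i, p in enumerate(params):
        if i not in keep:
            p.grad = None  # optimizers skip params with no gradient
    optimizer.step()

# Usage on a toy model (stands in for a pre-trained language model):
model = torch.nn.Linear(10, 2)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = torch.nn.functional.cross_entropy(model(x), y)
masked_update_step(model, loss, opt)
```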