Comparison of value-added models for school ranking and classification: a Monte Carlo study
A “Value-Added” definition of school effectiveness calls for evaluating schools based on their unique contribution to individual student academic growth. Estimates of value-added school effectiveness are usually used for ranking and classifying schools. The current simulation study examined and compared the validity of school effectiveness estimates from four statistical models for school ranking and classification. The simulation was conducted under two sample-size conditions and situations typical of school effectiveness research. The Conditional Cross-Classified Model (CCCM) was used to simulate data. The findings indicated that the gain score model adjusting for students’ test scores at the end of kindergarten (i.e., prior to entering elementary school) (Gain_kindergarten) could validly rank and classify schools. The other models, including the gain score model adjusting for students’ test scores at the end of Grade 4 (i.e., one year before school effectiveness was estimated in Grade 5) (Gain_grade4), the Unconditional Cross-Classified Model (UCCM), and the Layered Mixed Effect Model (LMEM), could not validly rank or classify schools. The failure of the UCCM indicated that ignoring covariates distorts school rankings and classifications if no other analytical remedies are applied. The failure of the LMEM indicated that estimating correlations among repeated measures cannot alleviate the damage caused by omitted covariates. The failure of the Gain_grade4 model cautions against adjusting using the previous year’s test scores. The success of the Gain_kindergarten model indicated that, under some circumstances, valid school rankings and classifications can be achieved with only two time points of data.
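The core of the evaluation design above is scoring how validly each model ranks and classifies schools against known, simulated effectiveness. The toy Python sketch below illustrates that scoring step only; the data-generating process and bias magnitudes are assumptions for illustration and do not reproduce the paper’s CCCM setup.

```python
# Illustrative sketch of scoring ranking/classification validity in a
# simulation study. The "estimates" below are toy stand-ins: one model
# is assumed nearly unbiased, the other leaks a confound (assumed
# magnitudes, not the paper's CCCM data-generating process).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_schools = 200
true_effect = rng.normal(size=n_schools)        # simulated "true" value-added

confound = rng.normal(size=n_schools)           # omitted covariate at school level
est_gain_kindergarten = true_effect + 0.2 * rng.normal(size=n_schools)
est_gain_grade4 = true_effect + 1.0 * confound + 0.2 * rng.normal(size=n_schools)

def validity(true, est, q=0.25):
    """Spearman rank validity and top-quartile classification agreement."""
    rank_r = spearmanr(true, est).correlation
    cut_t, cut_e = np.quantile(true, 1 - q), np.quantile(est, 1 - q)
    agree = np.mean((true >= cut_t) == (est >= cut_e))
    return rank_r, agree

for name, est in [("Gain_kindergarten", est_gain_kindergarten),
                  ("Gain_grade4", est_gain_grade4)]:
    r, a = validity(true_effect, est)
    print(f"{name}: rank r = {r:.2f}, top-25% agreement = {a:.2f}")
```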
Light-Reinforced Key Intermediate for Anticoking To Boost Highly Durable Methane Dry Reforming over Single Atom Ni Active Sites on CeO2.
Dry reforming of methane (DRM) has been investigated for more than a century; the paramount stumbling blocks to its industrial application are the inevitable sintering of catalysts and excessive carbon emissions at high temperatures. The low-temperature DRM process, however, still suffers from poor reactivity and severe catalyst deactivation by coking. Herein, we propose that highly durable DRM can be achieved at low temperatures by combining tailored active sites with light irradiation. Active sites with Ni-O coordination (NiSA/CeO2) and Ni-Ni coordination (NiNP/CeO2) were constructed on CeO2 to obtain two targeted reaction paths that produce the key anticoking intermediate (CH3O*) during DRM. In particular, operando diffuse reflectance infrared Fourier transform spectroscopy coupled with steady-state isotopic transient kinetic analysis (operando DRIFTS-SSITKA) was used to track the anticoking paths during the DRM process. The path from CH3* to CH3O* over NiSA/CeO2 was found to be the key anticoking path, and this targeted path was reinforced by light irradiation. Hence, the NiSA/CeO2 catalyst exhibits excellent stability with negligible carbon deposition for 230 h under thermo-photo catalytic DRM at a low temperature of 472 °C, while NiNP/CeO2 shows apparent coke deposition after 0.5 h in solely thermally driven DRM. These findings provide critical insights into simultaneously achieving a low-temperature and anticoking DRM process by distinguishing and directionally regulating the key intermediate species.
DRA-Net: Medical image segmentation based on adaptive feature extraction and region-level information fusion
Medical image segmentation is a key task in computer-aided diagnosis. In recent years, convolutional neural networks (CNNs) have achieved notable results in medical image segmentation. However, a convolution extracts features from a fixed-size region at a time, which leads to the loss of some key features. The recently popular Transformer has global modeling capability but pays insufficient attention to local information and cannot accurately segment the edge details of the target area. Given these issues, we propose the dynamic regional attention network (DRA-Net). Unlike the methods above, it first measures feature similarity and concentrates attention on dynamic regions, so the network can adaptively select different modeling scopes for feature extraction, reducing information loss. Regional feature interaction is then carried out to better learn local edge details. We also design ordered shift multilayer perceptron (MLP) blocks to enhance communication among different regions, further strengthening the network’s ability to learn local edge details. Experimental results indicate that our network produces more accurate segmentations than other CNN- and Transformer-based networks.
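As a rough illustration of the dynamic-region idea, the PyTorch sketch below groups tokens by a learned similarity score and runs self-attention within each group; the actual DRA-Net block, its region-selection rule, and the ordered shift MLP are not reproduced here, so treat every detail as an assumption.

```python
# Minimal sketch of "dynamic regional attention": tokens are grouped by
# feature similarity (here, by sorting on a learned scalar score) and
# self-attention runs within each group. Illustrative only; not the
# published DRA-Net block.
import torch
import torch.nn as nn

class DynamicRegionAttention(nn.Module):
    def __init__(self, dim, region_size=16, heads=4):
        super().__init__()
        self.score = nn.Linear(dim, 1)          # similarity score used to form regions
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.region_size = region_size

    def forward(self, x):                        # x: (B, N, C), N divisible by region_size
        B, N, C = x.shape
        order = self.score(x).squeeze(-1).argsort(dim=1)   # gather similar tokens together
        inv = order.argsort(dim=1)                         # permutation to undo the sort
        xs = torch.gather(x, 1, order.unsqueeze(-1).expand(-1, -1, C))
        r = xs.view(B * (N // self.region_size), self.region_size, C)
        out, _ = self.attn(r, r, r)                        # attention within each region
        out = out.view(B, N, C)
        return torch.gather(out, 1, inv.unsqueeze(-1).expand(-1, -1, C))

x = torch.randn(2, 64, 32)
print(DynamicRegionAttention(32)(x).shape)  # torch.Size([2, 64, 32])
```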
The Analytical and Quick Computation Method of Disturbing Gravity in Global and Local Ocean Area
To derive disturbing gravity from satellite altimetry data, analytical formulas for computing disturbing gravity from the geoid and from vertical deflections are derived. These formulas can be used to obtain global ocean disturbing gravity from altimetry data. Building on existing work, two improved quick computation methods, for the global and local areas respectively, are also obtained based on the one-dimensional FFT algorithm. The quick computation methods produce the same results as the analytical computation while improving computation speed by a factor of about 20; they avoid aliasing and edge effects and are flexible in application. Geoid and vertical deflection data at 2.5′ resolution derived from the EGM2008 model are used to compute global and local ocean disturbing gravity. The results show that the difference between the two methods is about 0.8×10⁻⁵ m/s², so the disturbing gravity derived from the geoid and from vertical deflections is consistent. In practice, disturbing gravity derived from vertical deflections still has some advantages.
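For intuition, a planar-approximation version of the geoid-to-disturbing-gravity step can be written with a 2-D FFT using the spectral relation δg(k) = γ·|k|·N(k); the paper’s formulas are spherical and use a 1-D FFT scheme, and the grid spacing and synthetic geoid below are assumptions.

```python
# Planar-approximation sketch: disturbing gravity from geoid height N
# via delta_g(k) = gamma * |k| * N(k). Illustrative stand-in for the
# paper's spherical 1-D FFT method; grid and geoid are synthetic.
import numpy as np

gamma = 9.81                 # normal gravity, m/s^2
dx = dy = 4000.0             # grid spacing, m (~2.5' is ~4.6 km at the equator)
ny, nx = 256, 256

# Synthetic geoid undulation (m): a smooth 80-km-wide bump.
y, x = np.mgrid[0:ny, 0:nx]
N = 1.5 * np.exp(-(((x - nx/2)*dx)**2 + ((y - ny/2)*dy)**2) / (2 * 80e3**2))

kx = 2*np.pi*np.fft.fftfreq(nx, dx)
ky = 2*np.pi*np.fft.fftfreq(ny, dy)
k = np.hypot(*np.meshgrid(kx, ky))        # radial wavenumber |k|, rad/m

delta_g = np.fft.ifft2(gamma * k * np.fft.fft2(N)).real   # m/s^2
print(f"max disturbing gravity: {delta_g.max()*1e5:.1f} mGal")
```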
Improving Performance in Person Reidentification Using Adaptive Multiple Loss Baseline
Currently, deep learning is the mainstream approach to person reidentification. With the rapid development of neural networks in recent years, a number of network frameworks have emerged for this task, so exploring a simple and efficient baseline algorithm is becoming more important. In fact, the performance of the same module varies greatly across positions in a network architecture. After exploring how modules can play their maximum role in the network and studying existing algorithms, we designed an adaptive multiple loss baseline (AML) with a simple structure but strong performance. In this network, we use an adaptive mining sample loss (AMS) and other modules, which can mine more information from input samples at the same time. Based on triplet loss, the AMS loss optimizes the distances between an input sample and its positive and negative samples while preserving structural information within the sample. In experiments, we conducted several groups of tests that confirmed the high performance of the AML baseline. It performs strongly on three commonly used datasets; on CUHK-03, its two evaluation indicators are 25.7% and 26.8% higher than those of BagTricks.
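The AMS loss is described as building on triplet loss with sample mining; its exact formulation is not given in this abstract. The sketch below shows only the batch-hard triplet baseline that such losses typically extend, with assumed batch and embedding sizes.

```python
# Batch-hard triplet loss sketch (the common baseline that mining-based
# losses like AMS extend); the AMS-specific adaptive terms are not
# reproduced here.
import torch
import torch.nn.functional as F

def batch_hard_triplet(emb, labels, margin=0.3):
    """emb: (B, D) embeddings, labels: (B,) identity labels."""
    dist = torch.cdist(emb, emb)                   # pairwise Euclidean distances
    same = labels[:, None] == labels[None, :]
    pos = dist.masked_fill(~same, float('-inf')).amax(dim=1)  # hardest positive
    neg = dist.masked_fill(same, float('inf')).amin(dim=1)    # hardest negative
    return F.relu(pos - neg + margin).mean()

emb = F.normalize(torch.randn(32, 128), dim=1)     # toy embeddings
labels = torch.randint(0, 8, (32,))                # toy identity labels
print(batch_hard_triplet(emb, labels))
```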
Predicting bathymetry based on vertical gravity gradient anomaly and analyses for various influential factors
The prediction of bathymetry has advanced significantly with the development of satellite altimetry, although the majority of its input data originate from marine gravity anomalies. In this study, based on the expression for the vertical gravity gradient (VGG) of a rectangular prism, governing equations for determining sea depths are established to invert bathymetry. The governing equation is solved by linearization through an iterative process, and numerical simulations verify the algorithm and its stability. We also study processing methods for different interference errors. The regularization method improves the stability of the inversion process against errors. A piecewise bilinear interpolation function roughly replaces the low-frequency error, and numerical simulations show that accuracy improves by 41.2% after this treatment. For variable ocean crust density, numerical simulations verify that the root-mean-square (RMS) error of prediction is approximately 5 m at a sea depth of 6 km if the density is chosen as the average one. Finally, bathymetry in two test regions in the South China Sea is predicted and compared with ship sounding data; the RMS errors of the predictions are 71.1 m and 91.4 m, respectively.
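A first-order spectral sketch can convey the inversion-with-regularization idea: in the planar linear approximation the VGG response is Tzz(k) = 2πGΔρ·k·e^(−kd)·H(k), and damping stabilizes the deconvolution. This is a stand-in for, not a reproduction of, the paper’s prism-based governing equations; the density contrast, mean depth, and damping factor are assumed values.

```python
# Spectral sketch of regularized bathymetry inversion from VGG
# anomalies, using the linearized planar kernel rather than the paper's
# rectangular-prism equations. All parameter values are assumptions.
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
drho = 1700.0        # crust-seawater density contrast, kg/m^3 (assumed)
d = 4000.0           # mean sea depth, m (assumed)
dx = 2000.0          # grid spacing, m
n = 256

# Synthetic topography h (m above the mean seafloor): a seamount.
y, x = np.mgrid[0:n, 0:n]
h = 1500.0 * np.exp(-(((x - n/2)**2 + (y - n/2)**2) * dx**2) / (2 * 20e3**2))

k = 2*np.pi*np.hypot(*np.meshgrid(np.fft.fftfreq(n, dx), np.fft.fftfreq(n, dx)))

# Linearized forward model: Tzz(k) = 2*pi*G*drho * k * exp(-k*d) * H(k).
A = 2*np.pi*G*drho * k * np.exp(-k*d)
tzz = np.fft.ifft2(A * np.fft.fft2(h)).real

# Regularized inversion: damped deconvolution against noise blow-up.
lam = 1e-2 * A.max()**2
h_hat = np.fft.ifft2(A * np.fft.fft2(tzz) / (A**2 + lam)).real
print(f"RMS recovery error: {np.sqrt(np.mean((h_hat - h)**2)):.1f} m")
```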
An Iterative Algorithm for Predicting Seafloor Topography from Gravity Anomalies
As high-resolution global coverage cannot easily be achieved by direct bathymetry, using gravity data is an alternative way to predict seafloor topography. The commonly used prediction algorithms are mainly based on the approximately linear relationship between topography and the gravity anomaly; in practice, the data must also be processed according to empirical methods, which introduces uncertainty into the predicted topography. In this paper, we establish analytical observation equations relating the gravity anomaly to topography and, after linearizing the equations, obtain a corresponding iterative solution based on the least-squares method. Furthermore, a regularization method and a piecewise bilinear interpolation function are introduced into the observation equations to effectively suppress the high-frequency effect of the boundary sea region and the low-frequency effect of the far sea region. Finally, the seafloor topography beneath a sea region (117.25°–118.25°E, 13.85°–14.85°N) in the South China Sea is predicted as an actual application, with gravity anomaly data at 1′ × 1′ resolution taken from the DTU17 model. Compared with ship sounding data from the National Geophysical Data Center (NGDC), the root-mean-square (RMS) error and relative error of the prediction are 127.4 m and approximately 3.4%, respectively.
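The iterative structure described above (linearize, solve in the least-squares sense with regularization, update, repeat) can be sketched as follows, with the Parker series truncated at second order standing in for the paper’s analytical observation equations; all parameter values are assumptions.

```python
# Sketch of the iterate-linearize-correct loop: the forward model is the
# Parker series to 2nd order (a stand-in for the paper's observation
# equations), and each step applies a damped linear correction.
import numpy as np

G, drho, d, dx, n = 6.674e-11, 1700.0, 4000.0, 2000.0, 256   # assumed values
y, x = np.mgrid[0:n, 0:n]
h_true = 1500.0 * np.exp(-(((x - n/2)**2 + (y - n/2)**2) * dx**2) / (2 * 15e3**2))

k = 2*np.pi*np.hypot(*np.meshgrid(np.fft.fftfreq(n, dx), np.fft.fftfreq(n, dx)))
A = 2*np.pi*G*drho*np.exp(-k*d)            # linear (first-order) Parker kernel

def forward(h):
    # Parker series truncated at second order: A * (F[h] + k*F[h^2]/2)
    return np.fft.ifft2(A*(np.fft.fft2(h) + k*np.fft.fft2(h**2)/2)).real

g_obs = forward(h_true)                     # synthetic "observed" gravity anomaly
lam = 1e-2*A.max()**2                       # Tikhonov-style damping
h = np.zeros_like(h_true)
for _ in range(10):
    r = g_obs - forward(h)                  # data residual
    h += np.fft.ifft2(A*np.fft.fft2(r)/(A**2 + lam)).real  # damped correction
print(f"final RMS error: {np.sqrt(np.mean((h - h_true)**2)):.1f} m")
```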
Comparative Study on Predicting Topography from Gravity Anomaly and Gravity Gradient Anomaly
Owing to the dependence of existing algorithms on ship sounding measurements and geophysical parameters, the accuracy and coverage of predicted topography still need improvement. Previous studies have mostly predicted topography from gravity or gravity gradient data alone; integrated research combining or comparing the two is relatively lacking. In this study, we develop observation equations to predict topography based on vertical gravity (VG) anomalies (also called gravity anomalies) and vertical gravity gradient (VGG) anomalies generated by a rectangular prism. The sources of interference are divided into medium- to high-frequency errors and low-frequency errors, and the new methods reduce these errors through regularization and error equations. We also use numerical simulations to test the efficiency of the algorithm and the error-reduction method. Statistics show that VGG anomalies are more sensitive to topographic fluctuations, whereas the linear correlation between VG anomalies and topography is stronger. Additionally, we use the EIGEN-6C4 model of VG and VGG anomalies to predict topography in shallow- and deep-sea areas, with maximum depths of 2 km and 5 km, respectively. In the shallow and deep-sea areas, the root mean square (RMS) errors of VGG anomaly prediction are 93.8 m and 233.8 m, with accuracies improved by 7.3% and 2.3%, respectively, compared with VG anomaly prediction. Furthermore, we use cubic spline interpolation to fuse ship soundings and improve the accuracy of the final topography. Overall, we develop a novel analytical algorithm by constructing an observation equation system applicable to both VG and VGG anomalies, which provides new insights and directions for refining topography prediction.
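A kernel-level comparison makes the sensitivity claim concrete: in the planar linear approximation the VG kernel e^(−kd) is low-pass, while the VGG kernel k·e^(−kd) peaks near wavelength 2πd, so VGG preserves relatively more short-wavelength topographic signal. The mean depth below is an assumed value.

```python
# Normalized VG vs. VGG kernel responses across topographic wavelengths:
# VG decays monotonically toward short wavelengths (low-pass), while
# VGG is band-pass, peaking near wavelength 2*pi*d.
import numpy as np

d = 4000.0                                        # mean sea depth, m (assumed)
wl = np.array([10e3, 25e3, 50e3, 100e3, 200e3])   # wavelengths, m
k = 2*np.pi/wl
vg, vgg = np.exp(-k*d), k*np.exp(-k*d)            # kernels per unit 2*pi*G*drho
for w, a, b in zip(wl/1e3, vg/vg.max(), vgg/vgg.max()):
    print(f"{w:6.0f} km   VG: {a:5.3f}   VGG: {b:5.3f}")
```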