46 research outputs found

    Ada-DQA: Adaptive Diverse Quality-aware Feature Acquisition for Video Quality Assessment

    Full text link
    Video quality assessment (VQA) has attracted growing attention in recent years, yet the great expense of annotating large-scale VQA datasets remains the main obstacle for current deep-learning methods. To overcome the constraint of insufficient training data, this paper first considers the full diversity of the video distribution (i.e., content, distortion, motion) and employs diverse pretrained models (differing in architecture, pretext task, and pre-training dataset) to benefit quality representation. An Adaptive Diverse Quality-aware feature Acquisition (Ada-DQA) framework is proposed to capture the desired quality-related features generated by these frozen pretrained models. Through its Quality-aware Acquisition Module (QAM), the framework extracts the features most essential and relevant to quality. Finally, the learned quality representation is used as supplementary supervision, alongside the labeled quality scores, to guide the training of a relatively lightweight VQA model in a knowledge-distillation manner, which greatly reduces the computational cost at inference. Experimental results on three mainstream no-reference VQA benchmarks show the superior performance of Ada-DQA compared with current state-of-the-art approaches, without using extra VQA training data. Comment: 10 pages, 5 figures, to appear in ACM MM 202
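    The abstract does not give Ada-DQA's internal details, so the sketch below only illustrates the general idea it describes: several frozen pretrained backbones provide features, a hypothetical quality-aware gating module aggregates them, and a lightweight student is trained against both the labeled quality score and the aggregated representation. All module names, dimensions, and loss weights are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module names, dimensions, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QualityAwareAcquisition(nn.Module):
    """Hypothetical QAM-style module: learns to weight features from frozen teachers."""
    def __init__(self, feat_dims, out_dim=256):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, out_dim) for d in feat_dims])
        self.gate = nn.Linear(out_dim * len(feat_dims), len(feat_dims))

    def forward(self, teacher_feats):                 # list of (B, d_i) tensors
        projected = [p(f) for p, f in zip(self.proj, teacher_feats)]
        weights = torch.softmax(self.gate(torch.cat(projected, dim=-1)), dim=-1)
        # Weighted sum forms the distilled quality representation.
        return sum(w.unsqueeze(-1) * f for w, f in zip(weights.unbind(-1), projected))

def distillation_loss(student_feat, student_score, teacher_repr, mos, alpha=0.5):
    """Combine labeled-score supervision with feature-level distillation."""
    label_loss = F.mse_loss(student_score.squeeze(-1), mos)
    distill_loss = F.mse_loss(student_feat, teacher_repr.detach())
    return label_loss + alpha * distill_loss
```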

    Enhanced mechanical properties in β-Ti alloy aged from recrystallized ultrafine β grains

    Get PDF
    Ultrafine β grain structures with recrystallized morphologies were fabricated by severe plastic deformation and subsequent annealing in a Ti-10Mo-8V-1Fe-3.5Al alloy. A minimum mean β grain size of 480 nm was obtained, the first time such a recrystallized structure has been achieved in Ti alloys. The precipitation behavior of α during subsequent aging changed significantly as the recrystallized β grain size decreased. Both the tensile strength and the total ductility of the aged Ti alloy were increased by β grain refinement. A tensile strength of 1.6 GPa and a total elongation of 9.1% were achieved in the aged specimen with a prior β grain size of 480 nm, which is attributed to its finer and more homogeneous precipitated microstructure, a mixture of nanoscale thin-plate α and globular α without side α plates along the β grain boundaries.

    Achieving large super-elasticity through changing relative easiness of deformation modes in Ti-Nb-Mo alloy by ultra-grain refinement

    Get PDF
    Large super-elasticity approaching the theoretically expected value was achieved in a Ti-13.3Nb-4.6Mo alloy with an ultrafine-grained β phase. In-situ synchrotron X-ray diffraction analysis revealed that the dominant yielding mechanism changed from dislocation slip to martensitic transformation as the β grain size was reduced to the sub-micrometer range. The different grain-size dependences of the critical stresses for initiating dislocation slip and martensitic transformation, reflected in this transition of yielding behavior, are considered the main reason for the large super-elasticity of the ultrafine-grained specimen. The present study clarifies that ultra-grain refinement down to the sub-micrometer scale makes dislocation slip more difficult than martensitic transformation, leading to excellent super-elasticity close to the theoretical limit in the β-Ti alloy.

    Exploring the column elimination optimization in LIF-STDP networks

    No full text
    Spiking neural networks using Leaky Integrate-and-Fire (LIF) neurons and Spike-Timing-Dependent Plasticity (STDP) learning are commonly used as more biologically plausible networks. Compared to DNNs and RNNs, LIF-STDP networks are models closer to the biological cortex. LIF-STDP neurons communicate with each other through spikes, and they learn through the correlation between pre- and post-synaptic spikes. Simulating such networks usually requires high-performance supercomputers, almost all of which are based on the von Neumann architecture that separates storage and computation. In von Neumann solutions, memory access is the bottleneck even for highly optimized Application-Specific Integrated Circuits (ASICs). In this thesis, we propose an optimization method that reduces the memory-access cost by avoiding a dual-access pattern. In LIF-STDP networks, the weights are usually stored as a two-dimensional matrix, where pre- and post-synaptic spikes trigger row and column accesses, respectively. This dual-access pattern is very costly for DRAM. We eliminate the column access by introducing a post-synaptic buffer and an approximation function: post-synaptic spikes are recorded in the buffer and processed together with the row updates when pre-synaptic spikes arrive. This column update elimination introduces errors due to the limited buffer size. In our error analysis, experiments show that the probability of introducing intolerable errors can be bounded to a very small value with a suitable buffer size and approximation function. We also present a performance analysis of the Column Update Elimination (CUE) optimization. The error analysis of the column update elimination method is the main contribution of this work.
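    Since the thesis body is not reproduced here, the following is only a minimal sketch of the column-update-elimination idea as summarized above: post-synaptic spikes are buffered instead of triggering column accesses, and buffered events are replayed against a single weight-matrix row when a pre-synaptic spike arrives. The buffer layout, STDP constants, and overflow behavior are assumptions, not the thesis's actual approximation function.

```python
# Minimal CUE-style sketch; constants and overflow handling are illustrative assumptions.
from collections import deque
import numpy as np

class CueSynapses:
    def __init__(self, n_pre, n_post, buffer_size=64, a_plus=0.01, tau=20.0):
        self.w = np.zeros((n_pre, n_post))     # row index = pre-neuron, column = post-neuron
        self.post_buffer = deque(maxlen=buffer_size)   # deferred (post_id, spike_time) events
        self.last_pre = np.full(n_pre, -np.inf)
        self.a_plus, self.tau = a_plus, tau

    def on_post_spike(self, j, t):
        # No column access: just record the event. Old entries are dropped when the
        # buffer overflows, which is the source of the bounded approximation error.
        self.post_buffer.append((j, t))

    def on_pre_spike(self, i, t):
        # Single contiguous row access: replay buffered post-spikes against row i.
        for j, t_post in self.post_buffer:
            if t_post > self.last_pre[i]:
                # Post-spike after the previous pre-spike -> potentiation (toy STDP rule).
                self.w[i, j] += self.a_plus * np.exp(-(t - t_post) / self.tau)
        self.last_pre[i] = t
```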

    Targeting Senescent Cells to Improve Wound Healing Using a p21-Cre Mouse Model

    No full text
    Our research studies how clearing highly p21-expressing senescent cells may improve wound healing, using the p21-Cre mouse model. Senescent cells are linked to many age-related pathologies and chronic diseases, with p21 expression being a hallmark of certain senescent cell populations. Our lab created a novel mouse model, the p21-Cre model, which uses the Cre recombinase and loxP system to eliminate senescent cells in the presence of tamoxifen. Using this model, we showed that clearing p21-expressing senescent cells improves wound healing in lean mice and alters cellular markers related to senescence and inflammation. We hope to eventually develop our findings into clinically applicable strategies for improving wound healing, especially for chronic, non-healing wounds that occur with conditions such as Type II diabetes and obesity.

    Bathymetry predicting using the altimetry gravity anomalies in South China Sea

    No full text
    In the South China Sea (112°E–119°E, 12°N–20°N), 81,159 ship soundings published by the NGDC (National Geophysical Data Center) and the altimetry gravity anomalies published by SIO (Scripps Institution of Oceanography) were used to predict bathymetry with the GGM (gravity-geologic method) and the SAS (Smith and Sandwell) method, respectively. The remaining 40,576 ship soundings were used to estimate the precision of the predicted bathymetry models. The results show that the standard deviation of the differences between the GGM model and the ship soundings was 59.75 m, with a relative accuracy of 1.86%; for the SAS model the corresponding values were 60.07 m and 1.87%. The power spectral densities of the ETOPO1, SIO, GGM, and SAS models were also compared and analyzed. Finally, we present an integrated bathymetry model obtained by weighted averaging, with the weighting factors determined by the precisions of the ETOPO1, SIO, GGM, and SAS models, respectively. Keywords: Gravity-geologic method, Smith and Sandwell method, Bathymetry, Gravity anomaly, Power spectral density analysis
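    The abstract says only that the combination weights are determined by the models' precisions; a common choice consistent with that description is inverse-variance weighting, sketched below. The grid sizes and the ETOPO1/SIO standard deviations in the example are placeholders, not values from the paper.

```python
# Hedged sketch of precision-weighted model combination (inverse-variance weights assumed).
import numpy as np

def combine_bathymetry(models, std_devs):
    """Weighted average of co-registered depth grids, weights proportional to 1 / sigma^2."""
    weights = np.array([1.0 / s**2 for s in std_devs])
    weights /= weights.sum()
    stacked = np.stack(models)                      # shape: (n_models, ny, nx)
    return np.tensordot(weights, stacked, axes=1)   # weighted sum over the model axis

# Example with dummy grids; 59.75 m and 60.07 m are the GGM/SAS values reported above,
# the other two standard deviations are illustrative assumptions.
ny, nx = 4, 5
grids = [np.random.randn(ny, nx) * 100 - 3000 for _ in range(4)]   # ETOPO1, SIO, GGM, SAS
merged = combine_bathymetry(grids, std_devs=[80.0, 70.0, 59.75, 60.07])
```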

    Downward continuation of airborne gravimetry data based on Poisson integral iteration method

    No full text
    The research and application of airborne gravimetry technology has become one of the most active topics in gravity field studies in recent years. Downward continuation is one of the key steps in airborne gravimetry data processing, and the quality of the continuation results directly influences the further application of the survey data. A Poisson integral iteration method is proposed in this paper, and modified Poisson integral discretization formulae are also introduced for the downward continuation of airborne gravimetry data. For the test area in this paper, compared with the traditional Poisson integral discretization formula, the continuation result of the modified formulae is improved by 10.8 mGal, and the precision of the Poisson integral iteration method is of the same order as that of the modified formulae. The Poisson integral iteration method can therefore effectively reduce the discretization error of the Poisson integral formula, and the research results in this paper can be applied directly in the data processing of our country's airborne scalar and vector gravimetry.
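    The paper's modified discretization formulae are not reproduced in the abstract, so the sketch below only illustrates the generic iteration it refers to: the ground-level field is repeatedly corrected by the misfit between the observed airborne data and the upward continuation of the current estimate, using a plainly discretized planar Poisson kernel. Grid spacing, flight height, and iteration count are assumptions.

```python
# Generic downward continuation by fixed-point iteration with a planar Poisson kernel.
# This is a sketch under stated assumptions, not the paper's modified formulae.
import numpy as np

def upward_continue(g_ground, dx, dy, h):
    """Discretized planar Poisson integral: continue a ground-level grid up to height h."""
    ny, nx = g_ground.shape
    y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")
    g_up = np.empty_like(g_ground)
    for i in range(ny):
        for j in range(nx):
            r3 = ((x - x[i, j])**2 + (y - y[i, j])**2 + h**2) ** 1.5
            g_up[i, j] = h / (2 * np.pi) * np.sum(g_ground / r3) * dx * dy
    return g_up

def downward_continue(g_air, dx, dy, h, n_iter=20):
    """Iterate: correct the ground estimate by the misfit of its upward continuation."""
    g_ground = g_air.copy()                          # initial guess: the airborne data itself
    for _ in range(n_iter):
        g_ground += g_air - upward_continue(g_ground, dx, dy, h)
    return g_ground
```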

    Genetically controlling VACUOLAR PHOSPHATE TRANSPORTER 1 contributes to low-phosphorus seeds in Arabidopsis

    No full text
    Phosphorus (P) is an indispensable nutrient for seed germination, but seeds typically store far more P than is needed. High-P seeds of feed crops cause environmental and nutritional problems, because phytic acid (PA), the major storage form of P in seeds, cannot be digested by monogastric animals. Therefore, reducing the P level in seeds has become an imperative task in agriculture. Our study suggests that VPT1 and VPT3, two vacuolar phosphate transporters responsible for vacuolar Pi sequestration, are downregulated in leaves during the flowering stage, which leads to less Pi accumulating in leaves, more Pi allocated to reproductive organs, and thus high-P seeds. To reduce the total P content in seeds, we genetically regulated VPT1 during the flowering stage and found that overexpression of VPT1 in leaves could reduce the P content in seeds without affecting production or seed vigor. Our findings therefore provide a potential strategy to lower seed P levels and prevent pollution from nutrient over-accumulation.