
    The Role of Processing Fluency in Source Memory and Metamemory

    Processing fluency influences various judgements in memory and cognition, such as fluency-based familiarity in tests of item recognition memory. However, less is known about the interplay between fluency and source information in recognition memory and metamemory phenomena. The present thesis investigated the relationship between perceptual fluency and the accuracy of source memory decisions (Experiments 1-3b), as well as the contribution of perceptual fluency to the font size effect (i.e., the tendency to rate larger-font words as easier to remember than smaller-font words, despite font size having no effect on retention performance) in judgements of learning (JOLs; Experiments 4-6). Fluency was indexed via identification response times (RTs) derived from adapted versions of the continuous identification (CID) task, in which stimuli gradually clarified through progressive demasking. Identification RTs were faster on trials with correct retrieval of source information than on trials for which the source could not be accurately retrieved, and JOLs were indirectly increased by the faster identification RTs associated with a larger font size. These findings suggest that fluency is related to both source memory and metamemory judgements.

    High MET gene copy number predicted poor prognosis in primary intestinal diffuse large B-cell lymphoma

    BACKGROUND: MET is a proto-oncogene whose copy number (CN) alterations have been reported in some cancers, but not yet in primary intestinal diffuse large B-cell lymphoma (PI-DLBL). METHODS: In this retrospective study, we performed histology and chart reviews, immunohistochemistry, and quantitative polymerase chain reaction for MET CN alterations on 28 surgically resected PI-DLBLs. RESULTS: There were 12 men and 16 women, with a median age of 70 years and a mean follow-up of 32 months. The median MET CN was 2.20 (range, 1.04 to 3.35). CN gain was observed in 11 cases, including 5 with CN greater than 3. Nine patients (32%) had diploid CN, and eight (29%) had CN loss. Patients with CN gain or diploid CN showed significantly worse prognosis (P = 0.046) than those with CN loss. Furthermore, MET CN greater than 3 was associated with an adverse outcome (P = 0.003). Intestinal perforation at presentation was the sole clinicopathological factor associated with a poor prognosis (P = 0.004), and perforation was correlated with CN greater than 3 (P = 0.002). CONCLUSIONS: Our finding of MET CN gain as a poor prognostic factor in PI-DLBL patients might serve as the rationale for targeting the MET signaling pathway in the treatment of these patients.

    A Simulation Study on von Karman Vortex Shedding with Navier-Stokes and Shallow-Water Models

    This study investigates the advantages of employing numerical models based on the shallow-water equations for simulating von Karman vortex shedding, together with a comparative analysis against the Navier-Stokes equations to assess their effectiveness. In addition to the Reynolds number (Re), the Froude number (Fr), which depends on water depth, plays an important role in shallow-water modeling of the von Karman vortex. In this study, simulations of 2D von Karman vortex shedding are performed using the Navier-Stokes model and the shallow-water model, employing the least-squares finite-element method for spatial discretization and the θ-method for time integration. The computed vortex characteristics, including the recirculation zone behind the cylinder, vortex size, and shedding frequency, are presented. In the Navier-Stokes modeling, the computed results indicate that the spatial size of the vortices decreases and the Strouhal number increases as Re increases. In the shallow-water modeling, under the same Re condition, the vortex size increases and the Strouhal number decreases as Fr increases.
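    The two dimensionless numbers that organize the comparison above are standard definitions; a minimal sketch of how they are computed (function names are illustrative, not from the paper):

```python
def strouhal_number(shedding_freq_hz, cylinder_diameter_m, inflow_velocity_m_s):
    """Dimensionless shedding frequency: St = f * D / U."""
    return shedding_freq_hz * cylinder_diameter_m / inflow_velocity_m_s

def froude_number(inflow_velocity_m_s, water_depth_m, g=9.81):
    """Fr = U / sqrt(g * h); controls the shallow-water (depth) effects on shedding."""
    return inflow_velocity_m_s / (g * water_depth_m) ** 0.5
```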

    Advances of Robust Subspace Face Recognition

    Face recognition has been widely applied in fast video surveillance, security systems, and smart home services in our daily lives. Over the past years, subspace projection methods, such as principal component analysis (PCA) and linear discriminant analysis (LDA), have been well-known algorithms for face recognition. More recently, linear regression classification (LRC) has become one of the most popular approaches based on subspace projection optimization. However, many problems remain unsolved under severe conditions across different environments and applications. In this chapter, the practical problems of partial occlusion, illumination variation, expression differences, pose variation, and low resolution are addressed and solved by several improved subspace projection methods, including robust linear regression classification (RLRC), ridge regression (RR), improved principal component regression (IPCR), unitary regression classification (URC), linear discriminant regression classification (LDRC), generalized linear regression classification (GLRC), and trimmed linear regression (TLR). Experimental results show that these methods perform well and possess high robustness against partial occlusion, illumination variation, expression differences, pose variation, and low resolution.
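    To make the subspace-projection idea behind LRC concrete, here is a sketch of the standard algorithm (not code from the chapter): each class's gallery images, stacked as columns, span a class-specific subspace, and a probe image is assigned to the class whose subspace reconstructs it with the smallest residual.

```python
import numpy as np

def lrc_classify(probe, class_galleries):
    """Linear regression classification: represent the probe as a linear
    combination of each class's gallery columns and return the class label
    with the smallest reconstruction residual."""
    best_label, best_residual = None, np.inf
    for label, X in class_galleries.items():
        # X: (d, n_i) matrix whose columns are vectorized training faces of one class
        beta, *_ = np.linalg.lstsq(X, probe, rcond=None)
        residual = np.linalg.norm(probe - X @ beta)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

    The robust variants listed above (RLRC, TLR, etc.) keep this decision rule but change how the regression coefficients are estimated, e.g. by down-weighting occluded pixels.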

    GRASP: Grammar- and Syntax-based Pattern-Finder for Collocation and Phrase Learning


    Efficient Quantization-aware Training with Adaptive Coreset Selection

    The expanding model size and computation of deep neural networks (DNNs) have increased the demand for efficient model deployment methods. Quantization-aware training (QAT) is a representative model compression method that leverages redundancy in weights and activations. However, most existing QAT methods require end-to-end training on the entire dataset, which suffers from long training time and high energy costs. Coreset selection, which aims to improve data efficiency by exploiting the redundancy of training data, has also been widely used for efficient training. In this work, we propose a new angle: using coreset selection to improve the training efficiency of quantization-aware training. Based on the characteristics of QAT, we propose two metrics, the error vector score and the disagreement score, to quantify the importance of each sample during training. Guided by these two importance metrics, we propose a quantization-aware adaptive coreset selection (ACS) method to select the data for the current training epoch. We evaluate our method on various networks (ResNet-18, MobileNetV2), datasets (CIFAR-100, ImageNet-1K), and under different quantization settings. Compared with previous coreset selection methods, our method significantly improves QAT performance at different dataset fractions. Our method achieves an accuracy of 68.39% for 4-bit quantized ResNet-18 on the ImageNet-1K dataset with only a 10% subset, an absolute gain of 4.24% over the baseline. Code: https://github.com/HuangOwen/QAT-AC
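    The selection loop described above can be sketched as follows. The exact metric definitions used by ACS are given in the paper and linked code; this sketch shows one plausible reading, with all function names hypothetical: the error vector score measures how far the quantized model's prediction is from the label, the disagreement score measures how much it diverges from the full-precision model, and the coreset keeps the highest-scoring fraction of samples.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def error_vector_score(quant_logits, label, num_classes):
    """Norm of (quantized-model softmax prediction - one-hot label)."""
    return np.linalg.norm(softmax(quant_logits) - np.eye(num_classes)[label])

def disagreement_score(fp_logits, quant_logits):
    """Divergence of the quantized model's prediction from the
    full-precision model's, here as a softmax-difference norm."""
    return np.linalg.norm(softmax(fp_logits) - softmax(quant_logits))

def select_coreset(scores, fraction):
    """Keep the indices of the top-`fraction` samples by importance score."""
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[::-1][:k]
```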