
    Learning Meta Model for Zero- and Few-shot Face Anti-spoofing

    Face anti-spoofing is crucial to the security of face recognition systems. Most previous methods formulate face anti-spoofing as a supervised learning problem to detect various predefined presentation attacks, which requires large-scale training data to cover as many attacks as possible. However, the trained model easily overfits to a few common attacks and remains vulnerable to unseen attacks. To overcome this challenge, the detector should: 1) learn discriminative features that can generalize to unseen spoofing types from predefined presentation attacks; 2) quickly adapt to new spoofing types by learning from both the predefined attacks and a few examples of the new spoofing types. Therefore, we define face anti-spoofing as a zero- and few-shot learning problem. In this paper, we propose a novel Adaptive Inner-update Meta Face Anti-Spoofing (AIM-FAS) method to tackle this problem through meta-learning. Specifically, AIM-FAS trains a meta-learner focusing on the task of detecting unseen spoofing types by learning from predefined living and spoofing faces and a few examples of new attacks. To assess the proposed approach, we propose several benchmarks for zero- and few-shot FAS. Experiments show that AIM-FAS outperforms existing methods on both the presented benchmarks and existing zero-shot FAS protocols. Comment: Accepted by AAAI 2020
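The few-shot adaptation idea behind this abstract can be sketched as a meta-learned initialization plus inner-loop gradient steps on a handful of examples of a new attack type. The snippet below is a hypothetical toy, not AIM-FAS itself: it uses a one-parameter linear scorer so the inner-update mechanics are visible.

```python
# Toy sketch of few-shot inner-loop adaptation (names and data are illustrative,
# not from the paper): a meta-initialized parameter is refined on a small
# "support set" from a new spoof type, then evaluated on held-out queries.

def mse_loss(w, data):
    """Mean squared error of the score w*x against label y."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def inner_update(w, support, lr=0.1, steps=3):
    """Few-shot adaptation: gradient steps on the support set only."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in support) / len(support)
        w -= lr * grad
    return w

# "New attack" task: a few labelled examples (support) and held-out queries.
support = [(1.0, 2.0), (2.0, 4.0)]
query = [(3.0, 6.0), (4.0, 8.0)]

w_meta = 0.0                      # meta-learned initialization (assumed)
before = mse_loss(w_meta, query)
w_adapted = inner_update(w_meta, support)
after = mse_loss(w_adapted, query)
print(before > after)             # adaptation reduces query loss
```

In the real method the "inner update" is adaptive and operates on deep features; here it is a fixed-step gradient descent purely to illustrate the zero-/few-shot setup.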

    Textural and biochemical changes of scallop Patinopecten yessoensis adductor muscle during low-temperature long-time (LTLT) processing

    In this study, the effects of low-temperature long-time (LTLT) processing at 55°C on the quality of Patinopecten yessoensis adductor muscle (PYAM) were investigated. The texture of processed PYAM was characterized by textural profile analysis (TPA), and significant increases in cook loss, hardness, and shear force with time during LTLT processing were observed. The degradation of structural proteins was analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE); fragments with molecular weights of 208 kDa (myosin heavy chain, MHC), 97 kDa (paramyosin), and 35–40 kDa were among the main products. Chemical characterization revealed elevated activity of cathepsin L and caspase-3, as well as oxidation of proteins and lipids. Electron spin resonance spin trapping indicated reactive oxygen species (ROS) production in the PYAM during LTLT processing. Based on these results, it is proposed that the sequence of events in PYAM during LTLT processing is ROS production → endogenous enzyme (caspase-3 and cathepsin L) activation → protein degradation → quality changes (texture and color). These findings further our understanding of the LTLT processing of PYAM, which should lead to better quality control for PYAM products.

    ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory

    Large language models (LLMs) with memory are computationally universal. However, mainstream LLMs do not take full advantage of memory, and their memory designs are heavily influenced by biological brains. Because of their approximate nature and proneness to the accumulation of errors, conventional neural memory mechanisms cannot support the complex reasoning LLMs are asked to perform. In this paper, we seek inspiration from modern computer architectures to augment LLMs with symbolic memory for complex multi-hop reasoning. Such a symbolic memory framework is instantiated as an LLM and a set of SQL databases, where the LLM generates SQL instructions to manipulate the SQL databases. We validate the effectiveness of the proposed memory framework on a synthetic dataset requiring complex reasoning. The project website is available at https://chatdatabase.github.io/
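The core loop described here can be sketched with Python's built-in `sqlite3` module. In the sketch below, the LLM's role (turning dialogue into SQL) is stubbed out with hard-coded statements; the point is that the database holds exact, non-approximate state that later multi-hop queries can read back.

```python
import sqlite3

# Minimal sketch of an SQL database as symbolic memory. The table schema and
# statements are hypothetical stand-ins for what an LLM would generate.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (customer TEXT, item TEXT, qty INTEGER)")

# "Memory write" steps a model might emit while reading a conversation.
for stmt, args in [
    ("INSERT INTO orders VALUES (?, ?, ?)", ("alice", "widget", 3)),
    ("INSERT INTO orders VALUES (?, ?, ?)", ("alice", "gadget", 2)),
]:
    db.execute(stmt, args)

# "Memory read" for a multi-hop question: how many items has alice ordered?
(total,) = db.execute(
    "SELECT SUM(qty) FROM orders WHERE customer = ?", ("alice",)
).fetchone()
print(total)  # 5
```

Unlike a neural memory, the aggregate here is computed exactly by the database engine, so errors do not accumulate across reasoning steps.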

    Deep Reinforcement Learning with Multitask Episodic Memory Based on Task-Conditioned Hypernetwork

    Deep reinforcement learning algorithms are usually impeded by sample inefficiency, depending heavily on multiple interactions with the environment to acquire accurate decision-making capabilities. In contrast, humans rely on their hippocampus to retrieve relevant information from past experiences of related tasks, which guides their decision-making when learning a new task, rather than depending exclusively on environmental interactions. Nevertheless, designing a hippocampus-like module that incorporates an agent's past experiences into established reinforcement learning algorithms presents two challenges. The first is selecting the most relevant past experiences for the current task, and the second is integrating such experiences into the decision network. To address these challenges, we propose a novel method that utilizes a retrieval network based on a task-conditioned hypernetwork, which adapts the retrieval network's parameters depending on the task. At the same time, a dynamic modification mechanism enhances the collaboration between the retrieval and decision networks. We evaluate the proposed method on the MiniGrid environment. The experimental results demonstrate that our proposed method significantly outperforms strong baselines.
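The task-conditioned retrieval idea can be sketched in a few lines: a "hypernetwork" maps a task embedding to the parameters of a retrieval scorer, so the same episodic memory store is ranked differently for different tasks. Everything below (the fixed linear hypernetwork, the memory keys) is a hypothetical toy, not the paper's architecture.

```python
# Toy task-conditioned retrieval. The hypernetwork here is a trivial fixed
# linear map; in the paper it would be a learned network producing the
# retrieval network's parameters.

def hypernetwork(task_emb):
    """Produce scorer weights from the task embedding (assumed linear map)."""
    return [2.0 * t for t in task_emb]

def retrieve(task_emb, memories):
    """Return the stored episode whose key best matches the task-conditioned scorer."""
    w = hypernetwork(task_emb)
    score = lambda key: sum(wi * ki for wi, ki in zip(w, key))
    return max(memories, key=lambda m: score(m["key"]))

memories = [
    {"key": [1.0, 0.0], "experience": "navigate-left"},
    {"key": [0.0, 1.0], "experience": "navigate-right"},
]

print(retrieve([0.9, 0.1], memories)["experience"])  # navigate-left
print(retrieve([0.1, 0.9], memories)["experience"])  # navigate-right
```

Because the scorer's weights are a function of the task embedding, changing the task changes which past experience is retrieved, without retraining the memory store itself.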

    The impact of empirical Marshall vein ethanol infusion as a first-choice intraoperative strategy on the long-term outcomes in patients with persistent atrial fibrillation undergoing mitral isthmus ablation

    Background: Marshall vein ethanol infusion (MVEI) as an adjunct to conventional catheter ablation (CA) has been proven efficacious in patients with persistent atrial fibrillation (PeAF). However, whether empirical MVEI could serve as the first-line strategy in mitral isthmus (MI) ablation has seldom been investigated. Here, we aim to compare the efficacy, safety, and long-term outcomes between provisional and empirical MVEI in PeAF patients undergoing the index MI ablation procedure. Methods: We enrolled 133 patients with PeAF in either the provisional group (n = 38; MVEI was performed when conventional endocardial and/or epicardial ablation was inadequate to achieve bidirectional MI block) or the empirical group (n = 95; MVEI was performed empirically before MI CA). Results: All baseline characteristics were comparable. Fewer spontaneous or inducible atrial tachycardias (ATs) were encountered in the empirical group (P < 0.001). More epicardial ablations were applied (26.3% vs. 9.5%, P = 0.016) and a higher incidence of CA-facilitated restoration of sinus rhythm was recorded (86.8% vs. 11.7%, P < 0.001) in the provisional group. Although more fluoroscopy time (6.4 [4.2, 9.3] vs. 9.5 [5.9, 11.6] min, P = 0.019) and radiation exposure (69.0 [25.3, 160.2] vs. 122.0 [62.5, 234.1] mGy, P = 0.010) were documented in the empirical group with comparable procedure time, less time was consumed to achieve bidirectional MI block during endocardial ablation in the empirical group (455.9 ± 192.2 vs. 366.5 ± 161.3 s, P = 0.038). Incidences of procedure-related complications were similar between the two groups. During a 16.5 ± 4.4-month follow-up, the empirical group showed a significantly higher rate of freedom from AT recurrence (95.8% vs. 81.6%, log-rank P = 0.003), while the rate of freedom from AF or atrial tachyarrhythmias (combining AF and AT) was similar. Both univariate (HR 0.19, 95% CI 0.05–0.64, P = 0.008) and multivariate (HR 0.25, 95% CI 0.07–0.92, P = 0.037) Cox regression analyses indicated that empirical MVEI was independently associated with lower long-term AT recurrence. Conclusion: Among patients with PeAF who underwent the index MI ablation procedure, empirical MVEI reduced endocardial MI ablation time and provided greater long-term freedom from AT recurrence.
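The reported multivariate result (HR 0.25, 95% CI 0.07–0.92, P = 0.037) can be sanity-checked with standard log-hazard-ratio arithmetic: a Wald 95% CI on the log scale is log(HR) ± 1.96·SE, so the standard error, z-statistic, and two-sided p-value can be recovered from the interval alone. This is an approximate consistency check, not a re-analysis of the data.

```python
import math

# Recover SE, z and p from the reported multivariate hazard ratio and its CI.
hr, lo, hi = 0.25, 0.07, 0.92

# Wald 95% CI on the log scale: log(HR) +/- 1.96*SE  =>  SE from the CI width.
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
z = math.log(hr) / se
# Two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2.
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(round(se, 3), round(z, 2), round(p, 3))
```

The recovered p-value lands close to the reported 0.037 (small differences are expected from rounding of the published HR and CI).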

    Architectures for Multinode Superconducting Quantum Computers

    Many proposals to scale quantum technology rely on modular or distributed designs in which individual quantum processors, called nodes, are linked together to form one large multinode quantum computer (MNQC). One scalable method to construct an MNQC is to use superconducting quantum systems with optical interconnects. However, a limiting factor of these machines will be internode gates, which may be two to three orders of magnitude noisier and slower than local operations. Surmounting the limitations of internode gates will require a range of techniques, including improvements in entanglement generation, the use of entanglement distillation, and optimized software and compilers. It remains unclear how improvements to these components interact to affect overall system performance, what performance is required from each, or even how to quantify the performance of each. In this paper, we employ a 'co-design' inspired approach to quantify overall MNQC performance in terms of hardware models of internode links, entanglement distillation, and local architecture. In the case of superconducting MNQCs with microwave-to-optical links, we uncover a tradeoff between entanglement generation and distillation that threatens to degrade performance. We show how to navigate this tradeoff, lay out how compilers should optimize between local and internode gates, and discuss when noisy quantum links have an advantage over purely classical links. Using these results, we introduce a roadmap for the realization of early MNQCs, which illustrates potential improvements to the hardware and software of MNQCs and outlines criteria for evaluating the landscape, from progress in entanglement generation and quantum memory to dedicated algorithms such as distributed quantum phase estimation. While we focus on superconducting devices with optical interconnects, our approach is general across MNQC implementations. Comment: 23 pages, white paper
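The generation/distillation tradeoff has a simple toy form: each round of pairwise distillation raises fidelity but consumes two input pairs, halving the entanglement rate. The sketch below assumes a two-to-one distillation step with the success formula F' = F²/(F² + (1−F)²), which holds for a bit-flip-type error model, not for the general Werner states or protocols the paper considers.

```python
# Hedged toy model of the fidelity-vs-rate tradeoff in entanglement
# distillation. All numbers and the error model are illustrative assumptions.

def distill(f, rounds):
    """Fidelity and relative rate after `rounds` of pairwise distillation."""
    rate = 1.0
    for _ in range(rounds):
        f = f * f / (f * f + (1 - f) ** 2)  # bit-flip-model success formula
        rate /= 2                           # two noisy pairs in, one out
    return f, rate

f0 = 0.80  # assumed raw internode link fidelity
for r in range(3):
    f, rate = distill(f0, r)
    print(r, round(f, 4), rate)
```

Even in this toy, two rounds push fidelity above 0.99 at the cost of a 4x rate reduction, which is the kind of tradeoff a compiler must weigh against simply generating more raw pairs.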

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres. This approach produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, assesses an instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.

    The evolution of Spanish total factor productivity since the global financial crisis

    Total factor productivity (TFP) is considered the key determinant of long-term, sustainable economic growth. The dismal evolution of TFP characterized the Spanish economy from the foundation of the Eurozone until the outbreak of the Global Financial Crisis [see García-Santana et al. (2016)]. This article provides an anatomy of the recent evolution of Spanish TFP using both aggregate and micro-level data available up to 2016. Three conclusions emerge from our findings: i) while TFP growth remained subdued during the crisis, a TFP revival has taken place over the last years; ii) this pattern is mostly driven by the rise and fall of the capital-to-labor ratio (capital deepening), while the role of labor productivity is more muted; and iii) an across-the-board increase in firms' capital-to-labor ratios accounts for most of the TFP decline during the first years of the crisis, while the subsequent TFP revival is explained by the reallocation of resources towards firms with low capital deepening and high TFP.
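The decomposition behind points ii) and iii) is standard growth accounting: with Y = A·K^α·L^(1−α), TFP growth is the Solow residual, output growth net of factor contributions. The numbers below are purely illustrative, not from the article's data.

```python
# Solow-residual growth accounting (illustrative numbers, assumed capital
# share alpha = 0.35).

alpha = 0.35

def tfp_growth(g_y, g_k, g_l):
    """Solow residual: g_A = g_Y - alpha*g_K - (1 - alpha)*g_L."""
    return g_y - alpha * g_k - (1 - alpha) * g_l

# Crisis-style year: output and employment both fall, capital barely moves,
# so the capital-to-labor ratio rises while measured TFP declines.
g = tfp_growth(g_y=-0.04, g_k=0.00, g_l=-0.03)
print(round(g, 4))
```

With these assumed rates, capital deepening (g_K − g_L = +3%) coincides with negative measured TFP growth, the mechanical pattern the article documents for the early crisis years.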

    Study of Low-Frequency Sound Absorption Based on Negative Stiffness Membrane Structure

    The system stiffness of a negative stiffness membrane structure is widely investigated in metamaterial research, and some special performances have been achieved. In acoustics, however, low-frequency absorption remains a major challenge. In this work, a negative stiffness membrane structure is established, together with its theoretical calculation model and experimental verification of sound absorption. Moreover, the nonlinear stiffness changes of the thin film under different deformation conditions and different spacings between two permanent magnets are systematically analyzed, yielding the theoretical analytical equation for the stiffness of the negative stiffness thin-film structure system. Combined with finite element simulation analysis, the stiffness variation rule and influencing factors of the negative stiffness membrane system are discussed. Specifically, the impact of the mass radius, mass thickness, and film thickness on the magnetic force and system stiffness is analyzed. The test results show that a properly added magnetic attraction structure shifts the absorption peak to a lower frequency region. This work provides useful insights for the further development of low-frequency sound absorption theory and of testing prototypes with a negative stiffness membrane structure.
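The peak shift can be illustrated with a heavily simplified lumped model: the membrane-mass resonator has positive stiffness, while a pair of attracting magnets contributes negative stiffness whose magnitude grows as their spacing shrinks. Everything here is a hypothetical assumption, including the dipole-like attraction law F ≈ c/d⁴ and all parameter values; it is not the paper's analytical equation.

```python
import math

# Toy lumped model: effective stiffness = membrane stiffness + magnetic
# (negative) stiffness. With F = c/d**4, the magnetic stiffness is
# dF/dd = -4c/d**5, so closer magnets reduce the effective stiffness.

def effective_stiffness(k_m, c, d):
    return k_m - 4 * c / d ** 5

def resonance_hz(k_eff, mass):
    """Resonance frequency of the mass-spring resonator, f = sqrt(k/m)/(2*pi)."""
    return math.sqrt(k_eff / mass) / (2 * math.pi)

k_m, c, mass = 500.0, 5e-13, 1e-3   # N/m, magnet constant, kg (all assumed)

for d in (2e-3, 1.8e-3, 1.6e-3):    # closer magnets -> lower absorption peak
    k_eff = effective_stiffness(k_m, c, d)
    print(round(d * 1e3, 1), "mm:", round(resonance_hz(k_eff, mass), 1), "Hz")
```

The model only holds while the effective stiffness stays positive; past that point the structure snaps through, which is where the nonlinear analysis in the paper takes over.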

    Monocular Visual-Inertial Odometry with an Unbiased Linear System Model and Robust Feature Tracking Front-End

    The research field of visual-inertial odometry has entered a mature stage in recent years. However, non-negligible problems still exist: tradeoffs have to be made between high accuracy and low computation, and notation confusion persists in quaternion descriptions of rotation; although not fatal, this may result in unnecessary difficulties of understanding for researchers. In this paper, we develop a visual-inertial odometry that balances precision and computation. The proposed algorithm is a filter-based solution that utilizes the framework of the noted multi-state constraint Kalman filter (MSCKF). To dispel notation confusion, we derive the error-state transition equation from scratch, using the more intuitive Hamilton notation of quaternions, and arrive at a fully linear closed-form formulation that is readily implemented. As the filter-based back-end is vulnerable to feature-matching outliers, a descriptor-assisted optical flow tracking front-end was developed to cope with the issue. This modification requires only negligible additional computation. In addition, an initialization procedure is implemented that automatically selects static data to initialize the filter state. The proposed methods were evaluated on a public real-world dataset and compared with state-of-the-art solutions. The experimental results show that the proposed solution is comparable in precision and demonstrates higher computational efficiency than the state-of-the-art.
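The notation issue mentioned above in a nutshell: under the Hamilton convention, quaternions are ordered (w, x, y, z) with ij = k, and rotations compose via the product below; the competing JPL convention flips the sign pattern, which is a classic source of derivation errors. This sketch shows only the convention, not the paper's error-state filter.

```python
import math

def hamilton_mul(q1, q2):
    """Hamilton quaternion product, (w, x, y, z) ordering, ij = k."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )

# Two successive 90-degree rotations about z compose to 180 degrees about z.
half = math.sqrt(0.5)
q90z = (half, 0.0, 0.0, half)
q180z = hamilton_mul(q90z, q90z)
print(tuple(round(v, 6) for v in q180z))  # (0.0, 0.0, 0.0, 1.0)
```

In MSCKF-style filters the small attitude error is then linearized as the error quaternion δq ≈ (1, δθ/2), and pinning down which convention that expansion uses is exactly why a from-scratch derivation helps.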