
    AutoEntity: automated entity detection from massive text corpora

    Entity detection is one of the fundamental tasks in Natural Language Processing and Information Retrieval. Most existing methods rely on human-annotated data and hand-crafted linguistic features, which makes them hard to apply to emerging domains. In this paper, we propose a novel automated entity detection framework, called AutoEntity, that performs automated phrase mining to create entity mention candidates and enforces lexico-syntactic rules to select entity mentions from the candidates. Our experiments on real-world datasets in different domains and multiple languages demonstrate the effectiveness and robustness of the proposed method.
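    The abstract describes a two-stage pipeline: mine candidate phrases, then keep only candidates that satisfy lexico-syntactic rules. The toy Python below is a minimal sketch of that flow; the span enumerator, the single POS-pattern rule, and the pre-tagged example sentence are illustrative stand-ins, not the paper's actual phrase-mining or rule components.

```python
import re

def mine_candidates(tagged_tokens, max_len=4):
    """Enumerate contiguous token spans as phrase candidates (stand-in for phrase mining)."""
    spans = []
    for i in range(len(tagged_tokens)):
        for j in range(i + 1, min(i + 1 + max_len, len(tagged_tokens) + 1)):
            spans.append(tagged_tokens[i:j])
    return spans

def passes_rules(span):
    """Toy lexico-syntactic rule: optional adjectives followed by one or more nouns."""
    tags = " ".join(tag for _, tag in span) + " "
    return re.match(r"^(JJ )*(NN[PS]? )+$", tags) is not None

# Example input: a pre-tagged sentence as (word, POS) pairs; tagging itself is out of scope here.
sentence = [("AutoEntity", "NNP"), ("detects", "VBZ"), ("novel", "JJ"),
            ("entity", "NN"), ("mentions", "NNS"), ("automatically", "RB")]

mentions = [" ".join(w for w, _ in span)
            for span in mine_candidates(sentence) if passes_rules(span)]
print(mentions)  # e.g. ['AutoEntity', 'novel entity', 'novel entity mentions', 'entity', ...]
```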

    Analysis and design of a magnetically levitated planar motor with novel multilayer windings

    This paper proposes a novel permanent magnet planar motor with moving multilayer orthogonal overlapping windings. This novel motor topology can achieve a five-degrees-of-freedom drive using two sets of x-direction windings and two sets of y-direction windings in a coreless configuration. The orthogonal multilayer construction guarantees a high utilization of the magnetic field and realizes decoupling between the x-direction thrust and the y-direction thrust. The topology and operating principle of the planar motor are introduced in this paper. The analytical modeling of the motor is established based on the equivalent current method, and the expressions of forces are derived. The force characteristics of the two-layer and three-layer winding topologies are compared, and design guidelines for the planar motor are proposed. The analytical and 3-D finite-element model results are validated against the experimental results of a tested prototype.
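    The abstract refers to force expressions derived with the equivalent current method, in which the permanent magnets are represented by equivalent current distributions and the thrust follows from the Lorentz force on the winding currents in the resulting field. The generic form below is only an illustration of that starting point; the paper's specific winding geometry, field model, and derived expressions are not reproduced here.

```latex
% Generic Lorentz force and torque on a current-carrying coil in the magnet field;
% illustration of the starting point only, not the paper's derived expressions.
\mathbf{F} = \int_{V_{\mathrm{coil}}} \mathbf{J}(\mathbf{r}) \times \mathbf{B}(\mathbf{r})\,\mathrm{d}V,
\qquad
\mathbf{T} = \int_{V_{\mathrm{coil}}} \mathbf{r} \times \bigl(\mathbf{J}(\mathbf{r}) \times \mathbf{B}(\mathbf{r})\bigr)\,\mathrm{d}V.
```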

    DiffusionMat: Alpha Matting as Sequential Refinement Learning

    In this paper, we introduce DiffusionMat, a novel image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes. Diverging from conventional methods that utilize trimaps merely as loose guidance for alpha matte prediction, our approach treats image matting as a sequential refinement learning process. This process begins with the addition of noise to trimaps and iteratively denoises them using a pre-trained diffusion model, which incrementally guides the prediction towards a clean alpha matte. The key innovation of our framework is a correction module that adjusts the output at each denoising step, ensuring that the final result is consistent with the input image's structures. We also introduce Alpha Reliability Propagation, a novel technique designed to maximize the utility of available guidance by selectively enhancing the trimap regions with confident alpha information, thus simplifying the correction task. To train the correction module, we devise specialized loss functions that target the accuracy of the alpha matte's edges and the consistency of its opaque and transparent regions. We evaluate our model across several image matting benchmarks, and the results indicate that DiffusionMat consistently outperforms existing methods. Project page at https://cnnlstm.github.io/DiffusionMa
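    The core loop the abstract describes is: noise the trimap, then iteratively denoise it with a pre-trained diffusion model while a correction module aligns each step with the input image. The Python below is a minimal structural sketch of that loop; `denoise_step` and `correct` are simple numpy stand-ins, not the authors' networks, and the confidence mask is assumed to come directly from the known foreground/background regions of the trimap.

```python
import numpy as np

def denoise_step(alpha_t, image, t):
    """Stand-in for one reverse-diffusion step of a pre-trained model."""
    target = image.mean(axis=-1)            # placeholder "clean" matte prediction
    return alpha_t + (target - alpha_t) / (t + 1)

def correct(alpha_t, trimap, confident_mask):
    """Stand-in correction module: re-impose confident trimap regions at each step."""
    return np.where(confident_mask, trimap, alpha_t)

def sequential_refinement(image, trimap, T=10, rng=np.random.default_rng(0)):
    confident = (trimap == 0.0) | (trimap == 1.0)              # known FG/BG pixels
    alpha = trimap + rng.normal(scale=0.5, size=trimap.shape)  # noised trimap
    for t in reversed(range(T)):
        alpha = denoise_step(alpha, image, t)
        alpha = correct(alpha, trimap, confident)              # per-step correction
    return np.clip(alpha, 0.0, 1.0)

# Toy usage with random data standing in for a real image/trimap pair.
img = np.random.default_rng(1).random((8, 8, 3))
tri = np.full((8, 8), 0.5); tri[:2] = 1.0; tri[-2:] = 0.0
matte = sequential_refinement(img, tri)
print(matte.shape)
```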

    Presence of virus neutralizing antibodies in cerebral spinal fluid correlates with non-lethal rabies in dogs.

    BACKGROUND: Rabies is traditionally considered a uniformly fatal disease after the onset of clinical manifestations. However, increasing evidence indicates that non-lethal infection, as well as recovery from flaccid paralysis and encephalitis, occurs in laboratory animals as well as in humans. METHODOLOGY/PRINCIPAL FINDINGS: Non-lethal rabies infection in dogs experimentally infected with wild-type dog rabies virus (RABV, wt DRV-Mexico) correlates with the presence of high levels of virus neutralizing antibodies (VNA) in the cerebral spinal fluid (CSF) and mild immune cell accumulation in the central nervous system (CNS). By contrast, dogs that succumbed to rabies showed little or no VNA in the serum or the CSF and severe inflammation in the CNS. Dogs vaccinated with a rabies vaccine showed no clinical signs of rabies and survived challenge with a lethal dose of wild-type DRV. VNA was detected in the serum, but not in the CSF, of immunized dogs. Thus, the presence of VNA is critical for inhibiting virus spread within the CNS and eventually clearing the virus from the CNS. CONCLUSIONS/SIGNIFICANCE: Non-lethal infection with wt RABV correlates with the presence of VNA in the CNS. Therefore, production of VNA within the CNS, or invasion of VNA from the periphery into the CNS via a compromised blood-brain barrier, is important for clearing the virus infection from the CNS, thereby preventing an otherwise lethal rabies virus infection.

    Co-design Hardware and Algorithm for Vector Search

    Vector search has emerged as the foundation for large-scale information retrieval and machine learning systems, with search engines like Google and Bing processing tens of thousands of queries per second on petabyte-scale document datasets by evaluating vector similarities between encoded query texts and web documents. As performance demands for vector search systems surge, accelerated hardware offers a promising solution in the post-Moore's Law era. We introduce FANNS, an end-to-end and scalable vector search framework on FPGAs. Given a user-provided recall requirement on a dataset and a hardware resource budget, FANNS automatically co-designs the hardware and algorithm, subsequently generating the corresponding accelerator. The framework also supports scale-out by incorporating a hardware TCP/IP stack in the accelerator. FANNS attains up to 23.0× and 37.2× speedup compared to FPGA and CPU baselines, respectively, and demonstrates superior scalability to GPUs, achieving 5.5× and 7.6× speedup in median and 95th percentile (P95) latency within an eight-accelerator configuration. The remarkable performance of FANNS lays a robust groundwork for future FPGA integration in data centers and AI supercomputers.
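    The co-design step the abstract describes takes a recall requirement and a hardware resource budget and selects an (algorithm parameter, accelerator configuration) pair before generating the accelerator. The Python below is a minimal sketch of that selection idea only; the IVF-style parameters, the resource/recall/throughput estimates, and the candidate space are illustrative placeholders, not FANNS's actual models or search procedure.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    nlist: int          # index partition count (assumed algorithm knob)
    nprobe: int         # query-time search breadth (assumed algorithm knob)
    pe_count: int       # processing elements instantiated on the FPGA (assumed)
    est_recall: float   # placeholder output of a recall model
    est_qps: float      # placeholder output of a performance model
    est_luts: int       # placeholder resource usage estimate

def co_design(candidates, recall_req, lut_budget):
    """Keep candidates meeting the recall requirement and budget; return the fastest."""
    feasible = [c for c in candidates
                if c.est_recall >= recall_req and c.est_luts <= lut_budget]
    return max(feasible, key=lambda c: c.est_qps, default=None)

# Toy candidate space; a real flow would sweep many more points and then
# generate the accelerator for the chosen configuration.
space = [
    Candidate(nlist=1024, nprobe=8,  pe_count=16, est_recall=0.92, est_qps=52e3, est_luts=410_000),
    Candidate(nlist=4096, nprobe=16, pe_count=32, est_recall=0.96, est_qps=38e3, est_luts=690_000),
    Candidate(nlist=4096, nprobe=32, pe_count=32, est_recall=0.98, est_qps=21e3, est_luts=700_000),
]
best = co_design(space, recall_req=0.95, lut_budget=700_000)
print(best)
```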