197 research outputs found

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis on their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.
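One of the basic operations such surveys cover is maximum-intensity projection (MIP), a simple volume-rendering algorithm. The sketch below is an illustrative pure-Python version over a toy voxel grid; the volume size and intensities are invented for demonstration, not taken from the paper.

```python
# Illustrative maximum-intensity projection (MIP), one of the basic
# rendering operations in three-dimensional medical imaging.
# The 4x4x4 volume and its intensities are made up for demonstration.
N = 4
volume = [[[0.0] * N for _ in range(N)] for _ in range(N)]
volume[1][2][3] = 0.9   # a bright voxel (e.g. a contrast-filled vessel)
volume[2][2][0] = 0.5

def mip_z(vol):
    """Project along z: each output pixel keeps the brightest voxel
    encountered along its viewing ray."""
    n = len(vol)
    return [[max(vol[x][y][z] for z in range(n)) for y in range(n)]
            for x in range(n)]

image = mip_z(volume)
print(image[1][2], image[2][2])  # 0.9 0.5
```

Real systems project along arbitrary view directions and interpolate between voxels; the axis-aligned version above only shows the core keep-the-maximum idea.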

    Analysis of total urinary catecholamines by liquid chromatography: methodology, routine experience and clinical interpretations of results

    A simple routine method is described for the simultaneous assay of total urinary adrenaline, noradrenaline and dopamine. The catecholamines are pre-purified on a small ion-exchange column, separated by reversed-phase ion-pair liquid chromatography, and quantitated by electrochemical detection. The method was routinely applied to 422 urine specimens. Elevated values were found in four urine specimens obtained from patients with histologically proven phaeochromocytomas. Virtually no interference by endogenous or exogenous compounds was found. Values for urinary catecholamines determined by fluorimetric analysis agreed with those obtained by high-pressure liquid chromatography with electrochemical detection. Within-day CVs for the compounds ranged from 5.2 to 11.9%, and between-day CVs from 3.3 to 6.6%. The normal range (95% confidence level) was 20-230 micrograms/24 h for noradrenaline and 1-35 micrograms/24 h for adrenaline.
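The precision figures quoted above are coefficients of variation (CV = standard deviation divided by mean). As a hedged illustration, the sketch below computes a within-day CV from hypothetical replicate measurements; the values are invented for demonstration and are not the study's data.

```python
import statistics

# Hypothetical replicate noradrenaline measurements (micrograms/24 h)
# from one urine pool assayed repeatedly within a day; values invented.
replicates = [102, 98, 95, 110, 101, 97, 104, 99, 106, 100]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)    # sample standard deviation
cv_percent = 100 * sd / mean         # within-day coefficient of variation

print(f"mean = {mean:.1f}, within-day CV = {cv_percent:.1f}%")
```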

    Ad Hoc Multi-Input Functional Encryption

    Consider sources that supply sensitive data to an aggregator. Standard encryption only hides the data from eavesdroppers, but using specialized encryption one can hope to hide the data (to the extent possible) from the aggregator itself. For flexibility and security, we envision schemes that allow sources to supply encrypted data, such that at any point a dynamically chosen subset of sources can allow an agreed-upon joint function of their data to be computed by the aggregator. A primitive called multi-input functional encryption (MIFE), due to Goldwasser et al. (EUROCRYPT 2014), comes close, but has two main limitations: (1) it requires trust in a third party, who is able to decrypt all the data; and (2) it requires the function arity to be fixed at setup time and to be equal to the number of parties. To drop these limitations, we introduce a new notion of ad hoc MIFE. In our setting, each source generates its own public key and issues individual, function-specific secret keys to an aggregator. For successful decryption, an aggregator must obtain a separate key from each source whose ciphertext is being computed upon. The aggregator could obtain multiple such secret keys from a user, corresponding to functions of varying arity. For this primitive, we obtain the following results: (1) we show that standard MIFE for general functions can be bootstrapped to ad hoc MIFE for free, i.e. without making any additional assumption; and (2) we provide a direct construction of ad hoc MIFE for the inner-product functionality based on the Learning with Errors (LWE) assumption, which yields the first construction of this natural primitive based on a standard assumption. At a technical level, our results are obtained by combining standard MIFE schemes and two-round secure multiparty computation (MPC) protocols in novel ways, highlighting an interesting interplay between MIFE and two-round MPC.
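As a structural illustration only (there is no real cryptography here), the toy sketch below mimics the workflow described above: each source generates its own key material with no trusted party, each issues a function-specific key to the aggregator, and the aggregator can evaluate the function only once it holds a key from every source whose ciphertext it uses. The class names, placeholder "ciphertexts", and the 2-ary sum function are all hypothetical.

```python
# Toy structural sketch of the ad hoc MIFE workflow. "Encryption" and
# "keys" are stand-in objects with no security properties whatsoever.

class Source:
    def __init__(self, name):
        self.name = name
        self.public_key = f"pk({name})"   # self-generated, no trusted party

    def encrypt(self, value):
        return {"pk": self.public_key, "ct": value}   # placeholder ciphertext

    def issue_key(self, function_name):
        # A function-specific secret key issued to the aggregator.
        return {"pk": self.public_key, "fn": function_name}

def aggregate(function_name, fn, ciphertexts, keys):
    """Evaluation succeeds only with one matching key per ciphertext."""
    key_index = {(k["pk"], k["fn"]) for k in keys}
    if any((c["pk"], function_name) not in key_index for c in ciphertexts):
        raise PermissionError("missing a function key from some source")
    return fn(*(c["ct"] for c in ciphertexts))

alice, bob = Source("alice"), Source("bob")
cts = [alice.encrypt(3), bob.encrypt(4)]
keys = [alice.issue_key("sum"), bob.issue_key("sum")]
print(aggregate("sum", lambda x, y: x + y, cts, keys))  # 7
```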

    Antibiotic susceptibility pattern and biofilm formation in coagulase negative staphylococci

    This item has no abstract.

    Molecular Characterization and Antimicrobial Susceptibility of Staphylococcus aureus Isolates from Clinical Infection and Asymptomatic Carriers in Southwest Nigeria

    A few reports from Africa suggest that resistance patterns, virulence factors and genotypes differ between Staphylococcus aureus from nasal carriage and clinical infection. We therefore compared antimicrobial resistance, selected virulence factors and genotypes of S. aureus from nasal carriage and clinical infection in Southwest Nigeria. Non-duplicate S. aureus isolates were obtained from infection (n = 217) and asymptomatic carriers (n = 73) during a cross-sectional study in Lagos and Ogun States, Nigeria, from 2010 to 2011. Susceptibility testing was performed using Vitek automated systems. Selected virulence factors were detected by PCR. The population structure was assessed using spa typing. The spa clonal complexes (spa-CC) were deduced using the Based Upon Repeat Pattern (BURP) algorithm. Resistance was higher for aminoglycosides in clinical isolates, while resistance to quinolones and tetracycline was more prevalent in carrier isolates. The Panton-Valentine leukocidin (PVL) was more frequently detected in isolates from infection compared to carriage (80.2 vs 53.4%; p < 0.001, chi-squared test). Seven methicillin-resistant S. aureus isolates were associated with spa types t002, t008, t064, t194, t8439, t8440 and t8441. The predominant spa types among the methicillin-susceptible S. aureus isolates were t084 (65.5%), t2304 (4.4%) and t8435 (4.1%). spa-CC 084 was predominant among isolates from infection (80.3%, n = 167) and was significantly associated with PVL (OR = 7.1, 95% CI: 3.9–13.2, p < 0.001, chi-squared test). In conclusion, PVL-positive isolates were more frequently detected among isolates from infection compared to carriage and are associated with spa-CC 084.
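Association statistics of the kind reported above (an odds ratio with a 95% confidence interval) are computed from a 2x2 contingency table. The sketch below uses Woolf's log-based interval with illustrative counts; the numbers are invented for demonstration and are not the study's data.

```python
import math

# Hypothetical 2x2 table relating spa-CC membership to PVL carriage.
#                 PVL+   PVL-
a, b = 150, 17    # clonal complex of interest
c, d = 60, 48     # other clonal complexes

odds_ratio = (a * d) / (b * c)

# Woolf 95% confidence interval on the log-odds scale.
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```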

    Training Curricula for Open Domain Answer Re-Ranking

    In precision-oriented tasks like answer ranking, it is more important to rank many relevant answers highly than to retrieve all relevant answers. It follows that a good ranking strategy would be to learn how to identify the easiest correct answers first (i.e., assign a high ranking score to answers that have characteristics that usually indicate relevance, and a low ranking score to those with characteristics that do not), before incorporating more complex logic to handle difficult cases (e.g., semantic matching or reasoning). In this work, we apply this idea to the training of neural answer rankers using curriculum learning. We propose several heuristics to estimate the difficulty of a given training sample. We show that the proposed heuristics can be used to build a training curriculum that down-weights difficult samples early in the training process. As the training process progresses, our approach gradually shifts to weighting all samples equally, regardless of difficulty. We present a comprehensive evaluation of our proposed idea on three answer ranking datasets. Results show that our approach leads to superior performance of two leading neural ranking architectures, namely BERT and ConvKNRM, using both pointwise and pairwise losses. When applied to a BERT-based ranker, our method yields up to a 4% improvement in MRR and a 9% improvement in P@1 (compared to the model trained without a curriculum). This results in models that can achieve comparable performance to more expensive state-of-the-art techniques.
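One simple way to realize the weighting scheme described above, down-weighting difficult samples early and converging to uniform weights, is linear interpolation over training progress. The function below is an illustrative sketch under that assumption, not the paper's exact heuristic or schedule.

```python
def curriculum_weight(difficulty, progress):
    """Loss weight for one training sample.

    difficulty: heuristic difficulty estimate in [0, 1] (1 = hardest).
    progress:   fraction of the curriculum completed, in [0, 1].

    Early in training (progress near 0), hard samples are down-weighted;
    as progress reaches 1, every sample converges to weight 1.
    """
    easy_weight = 1.0 - difficulty   # down-weight difficult samples
    return (1.0 - progress) * easy_weight + progress * 1.0

print(round(curriculum_weight(0.9, 0.0), 2))  # 0.1  hard sample, start
print(round(curriculum_weight(0.9, 1.0), 2))  # 1.0  hard sample, end
print(round(curriculum_weight(0.1, 0.0), 2))  # 0.9  easy sample, start
```

The per-sample weight would multiply the pointwise or pairwise loss during training, so the gradient contribution of hard samples grows as the curriculum phases out.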

    Expansion via Prediction of Importance with Contextualization

    The identification of relevance with little textual context is a primary challenge in passage retrieval. We address this problem with a representation-based ranking approach that: (1) explicitly models the importance of each term using a contextualized language model; (2) performs passage expansion by propagating the importance to similar terms; and (3) grounds the representations in the lexicon, making them interpretable. Passage representations can be pre-computed at index time to reduce query-time latency. We call our approach EPIC (Expansion via Prediction of Importance with Contextualization). We show that EPIC significantly outperforms prior importance-modeling and document expansion approaches. We also observe that the performance is additive with the current leading first-stage retrieval methods, further narrowing the gap between inexpensive and cost-prohibitive passage ranking approaches. Specifically, EPIC achieves an MRR@10 of 0.304 on the MS-MARCO passage ranking dataset with 78ms average query latency on commodity hardware. We also find that the latency is further reduced to 68ms by pruning document representations, with virtually no difference in effectiveness.
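Lexicon-grounded representations of this kind can be scored with a dot product over the shared vocabulary, which is what makes them interpretable: each term's contribution to the score can be read off directly. The sketch below uses made-up importance weights purely for illustration; in the actual approach the weights come from a contextualized language model, and expansion adds weighted terms not present in the passage text.

```python
# Minimal sketch of representation-based scoring over term-importance
# vectors. All weights below are invented for demonstration.

doc_repr = {                       # precomputed at index time
    "neural": 1.2,
    "ranking": 0.9,
    "network": 0.7,                # hypothetical expansion term
}
query_repr = {"neural": 1.0, "ranking": 0.8}

def score(query, doc):
    """Dot product over the shared vocabulary."""
    return sum(w * doc.get(term, 0.0) for term, w in query.items())

print(round(score(query_repr, doc_repr), 2))  # 1.92
```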

    Efficient Document Re-Ranking for Transformers by Precomputing Term Representations

    Deep pretrained transformer networks are effective at various ranking tasks, such as question answering and ad-hoc document ranking. However, their computational expense makes them cost-prohibitive in practice. Our proposed approach, called PreTTR (Precomputing Transformer Term Representations), considerably reduces the query-time latency of deep transformer networks (up to a 42x speedup on web document ranking), making these networks more practical to use in a real-time ranking scenario. Specifically, we precompute part of the document term representations at indexing time (without a query), and merge them with the query representation at query time to compute the final ranking score. Due to the large size of the token representations, we also propose an effective approach to reduce the storage requirement by training a compression layer to match attention scores. Our compression technique reduces the required storage by up to 95%, and it can be applied without a substantial degradation in ranking performance.
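The precompute-then-merge idea can be shown structurally: run the expensive encoder over document terms once at index time, store the result, and at query time encode only the query before combining the two. Everything below is a stand-in sketch; `encode_terms` and its length-based "features" are invented placeholders for transformer hidden states, and the scorer is a toy.

```python
# Structural sketch of precomputing document term representations.
# Plain lists of floats stand in for transformer hidden states.

def encode_terms(tokens):
    """Stand-in for the expensive lower transformer layers, run over
    tokens alone (no query). The feature here is hypothetical."""
    return {t: [float(len(t))] for t in tokens}

# Index time: encode each document once and store the result.
doc_index = {
    "d1": encode_terms(["deep", "transformer", "ranking"]),
}

# Query time: encode only the query, then merge with the stored
# document representations to produce a score (toy dot product).
def score(query_tokens, doc_id):
    q = encode_terms(query_tokens)
    d = doc_index[doc_id]
    return sum(qv[0] * d[t][0] for t, qv in q.items() if t in d)

print(score(["transformer", "ranking"], "d1"))  # 170.0
```

In the real system the merge feeds the upper transformer layers rather than a dot product; the sketch only shows where the index-time/query-time split sits.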