Isolation of cocaine and benzoylecgonine from urine samples by liquid-liquid and solid-phase extraction, with confirmation by high-performance liquid chromatography (HPLC)
Illicit cocaine use has risen sharply in recent years, making the development of reliable, fast, and efficient detection methods indispensable. In this work, a reverse-phase HPLC technique with UV detection was developed to identify and quantify cocaine and benzoylecgonine, with satisfactory results on the quality-control parameters. A recovery study was carried out for the isolation of cocaine and benzoylecgonine from urine using liquid-liquid extraction and solid-phase extraction with three types of commercial columns (Bond Elut Certify, Extrelut 3 and Supelclean LC-18). The fractions obtained by liquid-liquid extraction were heavily contaminated, with low recovery percentages (45% and 28% for cocaine and benzoylecgonine, respectively). Among the solid-phase extractions, the Supelclean LC-18 (87-102%) and Extrelut 3 (70-102%) columns proved very efficient for cocaine, while the Extrelut 3 (86-101%) and Supelclean LC-18 (73-89%) columns were the most efficient for isolating benzoylecgonine. The Bond Elut Certify columns were inefficient for isolating cocaine (53-79%), with an even lower recovery for its metabolite (2-21%).
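The recovery percentages reported in this abstract follow the standard definition of analyte recovery. A minimal sketch (the concentration values below are hypothetical illustrations, not data from the study):

```python
def percent_recovery(measured_ng_ml, spiked_ng_ml):
    """Analyte recovery as a percentage of the spiked concentration."""
    return 100.0 * measured_ng_ml / spiked_ng_ml

# Hypothetical example: recovering 870 ng/mL from a 1000 ng/mL cocaine spike
# would fall inside the 87-102 % range reported for Supelclean LC-18 columns.
print(percent_recovery(870, 1000))  # → 87.0
```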
Image Compositing for Segmentation of Surgical Tools Without Manual Annotations
Producing manual, pixel-accurate, image segmentation labels is tedious and time-consuming. This is often a rate-limiting factor when large amounts of labeled images are required, such as for training deep convolutional networks for instrument-background segmentation in surgical scenes. No large datasets comparable to industry standards in the computer vision community are available for this task. To circumvent this problem, we propose to automate the creation of a realistic training dataset by exploiting techniques stemming from special effects and harnessing them to target training performance rather than visual appeal. Foreground data is captured by placing sample surgical instruments over a chroma key (a.k.a. green screen) in a controlled environment, thereby making extraction of the relevant image segment straightforward. Multiple lighting conditions and viewpoints can be captured and introduced in the simulation by moving the instruments and camera and modulating the light source. Background data is captured by collecting videos that do not contain instruments. In the absence of pre-existing instrument-free background videos, minimal labeling effort is required, just to select frames that do not contain surgical instruments from videos of surgical interventions freely available online. We compare different methods to blend instruments over tissue and propose a novel data augmentation approach that takes advantage of the plurality of options. We show that by training a vanilla U-Net on semi-synthetic data only and applying a simple post-processing, we are able to match the results of the same network trained on a publicly available, manually labeled real dataset.
Robust Lexically Mediated Compensation for Coarticulation: Christmash Time Is Here Again
First published: 20 April 2021
A long-standing question in cognitive science is how high-level knowledge is integrated with sensory input. For example, listeners can leverage lexical knowledge to interpret an ambiguous speech sound, but do such effects reflect direct top-down influences on perception or merely postperceptual biases? A critical test case in the domain of spoken word recognition is lexically mediated compensation for coarticulation (LCfC). Previous LCfC studies have shown that a lexically restored context phoneme (e.g., /s/ in Christma#) can alter the perceived place of articulation of a subsequent target phoneme (e.g., the initial phoneme of a stimulus from a tapes-capes continuum), consistent with the influence of an unambiguous context phoneme in the same position. Because this phoneme-to-phoneme compensation for coarticulation is considered sublexical, scientists agree that evidence for LCfC would constitute strong support for top-down interaction. However, results from previous LCfC studies have been inconsistent, and positive effects have often been small. Here, we conducted extensive piloting of stimuli prior to testing for LCfC. Specifically, we ensured that context items elicited robust phoneme restoration (e.g., that the final phoneme of Christma# was reliably identified as /s/) and that unambiguous context-final segments (e.g., a clear /s/ at the end of Christmas) drove reliable compensation for coarticulation for a subsequent target phoneme. We observed robust LCfC in a well-powered, preregistered experiment with these pretested items (N = 40) as well as in a direct replication study (N = 40). These results provide strong evidence in favor of computational models of spoken word recognition that include top-down feedback.
GIFT-Grab: Real-time C++ and Python multi-channel video capture, processing and encoding API
GIFT-Grab is an open-source API for acquiring, processing and encoding video streams in real time. GIFT-Grab supports video acquisition using various frame-grabber hardware as well as from standard-compliant network streams and video files. The current GIFT-Grab release allows for multi-channel video acquisition and encoding at the maximum frame rate of supported hardware, 60 frames per second (fps). GIFT-Grab builds on well-established, highly configurable multimedia libraries including FFmpeg and OpenCV. GIFT-Grab exposes a simplified high-level API, aimed at facilitating integration into client applications with minimal coding effort. The core implementation of GIFT-Grab is in C++11. GIFT-Grab also features a Python API compatible with the widely used scientific computing packages NumPy and SciPy. GIFT-Grab was developed for capturing multiple simultaneous intra-operative video streams from medical imaging devices. Yet due to the ubiquity of video processing in research, GIFT-Grab can be used in many other areas. GIFT-Grab is hosted and managed on the software repository of the Centre for Medical Image Computing (CMIC) at University College London, and is also mirrored on GitHub. In addition, it is available for installation from the Python Package Index (PyPI) via the pip installation tool.
Cloning, in silico structural characterization and expression analysis of MfAtr4, an ABC transporter from the banana pathogen Mycosphaerella fijiensis
ABC transporters are membrane proteins that use the energy released by ATP hydrolysis to drive the transport of compounds across biological membranes. In some plant-pathogenic fungi, ABC transporters act as virulence factors by mediating the export of plant defense compounds or fungal virulence factors. Mycosphaerella fijiensis, the causal agent of black Sigatoka disease in banana, is the main constraint on the banana industry worldwide. So far, little is known about the molecular mechanisms it uses to infect its host. In this study, degenerate primers designed from fungal ABC transporters known to be involved in virulence were used to isolate homologs from M. fijiensis. Here, we report the full cloning of MfAtr4, a putative ortholog of MgAtr4, an ABC transporter of the related Mycosphaerella graminicola with a function in virulence. Similarities and differences with its presumed ortholog MgAtr4 are described, and the putative function of MfAtr4 is discussed. Analysis of MfAtr4 gene expression in field banana samples exhibiting visible symptoms of black Sigatoka disease indicated higher expression of MfAtr4 during the first symptomatic stages than in the late necrotrophic phases, suggesting a role for MfAtr4 in the early stages of pathogenic development of M. fijiensis. Key words: ABC transporters, virulence factors, MgAtr4 ortholog, Mycosphaerella fijiensis, black Sigatoka, Musa sp.
Extension of the SIESTA MHD equilibrium code to free-plasma-boundary problems
SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for three-dimensional magnetic configurations. Since SIESTA does not assume closed magnetic surfaces, the solution can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. That is, the shape of the plasma edge, assumed to be a magnetic surface, was kept fixed as the solution iteratively converged to equilibrium. This condition somewhat restricts the possible applications of SIESTA. In this paper, we discuss an extension that enables SIESTA to address free-plasma-boundary problems, opening up the possibility of investigating problems in which the plasma boundary is perturbed either externally or internally. As an illustration, SIESTA is applied to a configuration of the W7-X stellarator. This research was funded in part by the Ministerio de Economía, Industria y Competitividad of Spain, Grant No. ENE2015-68265. This research was carried out in part at the Max-Planck-Institute for Plasma Physics in Greifswald (Germany), whose hospitality is gratefully acknowledged. This research was supported in part by the U.S. Department of Energy, Office of Fusion Energy Sciences, under Award DE-AC05-00OR22725. SIESTA runs have been carried out on Uranus, a supercomputer cluster located at Universidad Carlos III de Madrid and funded jointly by the European Regional Development Funds (EU-FEDER), Project No. UNC313-4E-2361, and by the Ministerio de Economía, Industria y Competitividad via National Project Nos. ENE2009-12213-C03-03, ENE2012-33219, and ENE2012-31753.
P-P Total Cross Sections at VHE from Accelerator Data
A comparison of p-p total cross-section estimates at very high energies, from accelerators and from cosmic rays, shows a disagreement of more than 10%, a discrepancy beyond statistical errors. Here we use a phenomenological model based on the Multiple-Diffraction approach to successfully describe data at accelerator energies. The predictions of the model are compared with data. On the basis of regression analysis we determine confident error bands, analyzing the sensitivity of our predictions to the data employed for extrapolation: using data at 546 GeV and 1.8 TeV, our extrapolations for p-p total cross-sections are compatible only with the Akeno cosmic-ray data, predicting a slower rise with energy than other cosmic-ray results and other extrapolation methods. We discuss our results in the light of constraints from future accelerator and cosmic-ray experimental results. Comment: 26 pages and 11 figures
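The two-anchor extrapolation strategy mentioned in the abstract can be illustrated with a toy ln²(√s) parametrization. Both the functional form and the cross-section values below are illustrative assumptions, not the paper's Multiple-Diffraction model or its data:

```python
import math

def fit_ln2(sqrt_s1, sigma1, sqrt_s2, sigma2):
    """Solve sigma(sqrt_s) = a + b * ln^2(sqrt_s) exactly through two anchors.

    sqrt_s in GeV, sigma in mb. Two points determine the two parameters.
    """
    l1, l2 = math.log(sqrt_s1) ** 2, math.log(sqrt_s2) ** 2
    b = (sigma2 - sigma1) / (l2 - l1)
    a = sigma1 - b * l1
    return a, b

# Illustrative anchor values (mb) near the measured range, not the exact data.
a, b = fit_ln2(546.0, 62.0, 1800.0, 73.0)
sigma_cosmic = a + b * math.log(30000.0) ** 2  # extrapolate toward cosmic-ray energies
```

Varying which anchor points enter the fit, as the authors do, shifts `a` and `b` and hence the extrapolated value, which is the sensitivity the error bands quantify.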
IKs Computational Modeling to Enforce the Investigation of D242N, a KV7.1 LQTS Mutation
A KCNQ1 mutation, D242N, was found in a pair of twins and characterized at the cellular level. To investigate whether and how the mutation causes the clinically observed loss of adaptation to fast heart rates, we performed a computational study. First, we identified a new IKs model based on voltage-clamp experimental data. We then included this formulation in the O'Hara-Rudy (ORd) human action potential model and simulated the effects of the mutation. We also included adrenergic stimulation in the action potential model, since basal adrenergic tone is likely to affect the influence of IKs on QTc in vivo. Finally, we simulated the pseudo-ECG, taking into account the heterogeneity of the cardiac wall. At the basal rate (60 bpm), the mutation had negligible effects for all cell types, whereas at the high rate (180 bpm), with concomitant β-adrenergic stimulation (mimicking exercise conditions), the mutant AP failed to adapt its duration to the same extent as the wild-type AP (e.g. 281 ms vs. 267 ms in M cells), due to a smaller IKs current. Pseudo-ECG results show only a slight rate adaptation, and the simulated QTc was significantly prolonged from 387 ms to 493 ms, similar to experimental recordings.
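The smaller IKs current underlying the lost rate adaptation can be illustrated with a simplified two-gate current formulation in the spirit of the ORd model. The conductance value, gate values, and the 50% scaling below are placeholders, not the parameters identified in the study:

```python
def i_ks(v_mv, xs1, xs2, g_ks=0.0034, e_ks=-88.0):
    """Simplified slow delayed-rectifier current: I_Ks = G_Ks * xs1 * xs2 * (V - E_Ks).

    v_mv is membrane potential (mV), xs1/xs2 are activation gates in [0, 1],
    g_ks a maximal conductance, e_ks the reversal potential (both assumed values).
    """
    return g_ks * xs1 * xs2 * (v_mv - e_ks)

# A loss-of-function mutation can be mimicked by scaling the conductance down
# (an assumed 50% reduction here, purely for illustration).
wild_type = i_ks(20.0, 0.5, 0.5)
mutant = i_ks(20.0, 0.5, 0.5, g_ks=0.0034 * 0.5)
```

With less repolarizing IKs at depolarized potentials, the mutant action potential shortens less at fast rates, which is the adaptation failure the simulations reproduce.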