94 research outputs found
Evaluation of 3D image-treatment algorithms applied to optical-sectioning microscopy
Information extracted from biological specimens is inherently three-dimensional. Though sometimes harder to handle, three-dimensional (3D) data provide a better understanding of biological structures and events than their two-dimensional (2D) projections. This explains why optical-sectioning techniques are currently being explored and enhanced. The main objective of the present work was to evaluate the relevance of image-treatment algorithms, comprising preprocessing methods (such as image averaging, background correction and intensity normalization) and processing methods (deblurring and restoration deconvolution). This was done by implementing a quantification algorithm based on the Laplacian and a bright-point detector. The algorithms were applied to a 3D cell-adhesion skin model based on a specimen commonly used by our research group. Results indicate that certain preprocessing methods are required to enhance the performance of the processing algorithms, while others must not be applied in order to ensure adequate and precise quantification.
IV Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
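To make the processing step concrete, the following is a minimal sketch of the kind of Laplacian-based quantification and bright-point detection this abstract describes. It assumes a NumPy/SciPy environment; the function names, thresholds and neighbourhood size are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy import ndimage

def laplacian_quantification(stack):
    # Apply a 3D Laplacian filter to an optical-sectioning stack and sum the
    # edge response above an assumed mean + 2*std cutoff (illustrative only).
    lap = np.abs(ndimage.laplace(stack.astype(float)))
    cutoff = lap.mean() + 2 * lap.std()  # assumed heuristic threshold
    return lap[lap > cutoff].sum()

def detect_bright_points(stack, size=3):
    # Mark voxels that are local intensity maxima within a size^3 neighbourhood
    # and brighter than an assumed global cutoff ("bright points").
    vol = stack.astype(float)
    local_max = ndimage.maximum_filter(vol, size=size) == vol
    bright = vol > vol.mean() + 2 * vol.std()  # assumed brightness cutoff
    return np.argwhere(local_max & bright)

Under this sketch, the quantification value and the number of detected bright points would be compared across raw, preprocessed and deconvolved stacks to judge whether a given treatment helps or hinders quantification.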
Enhancement of an automatic algorithm for deconvolution and quantification of three-dimensional microscopy images
In previous work we designed and developed a software tool for the optimization of multidimensional-image processing, consisting of an automatic restoration-deconvolution method (positivity-constrained deconvolution) and three image-restoration indicators (Full Width at Half Maximum, Contrast-to-Noise Ratio and Signal-to-Noise Ratio) used to assess the quality of restoration quantitatively. Since the algorithm was designed as uncoupled modules, we were able to introduce two new image-restoration parameters (two three-dimensional Tenengrad-based indicators) without major modifications to the code. The enhanced version of the algorithm was used to process raw three-dimensional images using several experimental Point Spread Functions; the raw images were obtained by wide-field fluorescence microscopy of epidermal E-cadherin expression in Rhinella arenarum embryos and of fluorescent microspheres. The image-restoration indicators and the performance of the previous and enhanced versions of the algorithm were compared. Results show that the Tenengrad-based indicators concur with the previously used ones and that the new modules do not increase processing time significantly.
Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
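Tenengrad-based indicators are sharpness measures built from squared Sobel-gradient magnitudes. The sketch below shows one plausible 3D formulation, assuming NumPy/SciPy; it is a hedged illustration of the idea, not the published module.

import numpy as np
from scipy import ndimage

def tenengrad_3d(stack):
    # Tenengrad-style sharpness indicator for a 3D image: mean squared
    # magnitude of the Sobel gradient along the three axes (illustrative).
    vol = stack.astype(float)
    gx = ndimage.sobel(vol, axis=0)
    gy = ndimage.sobel(vol, axis=1)
    gz = ndimage.sobel(vol, axis=2)
    return np.mean(gx ** 2 + gy ** 2 + gz ** 2)

# A restored (deconvolved) stack is expected to score higher than the raw one:
# ratio = tenengrad_3d(restored_stack) / tenengrad_3d(raw_stack)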
Publishing data to support the fight against human vector-borne diseases
Vector-borne diseases are responsible for more than 17% of human infectious-disease cases. In most situations, effective control of debilitating and deadly vector-borne diseases (VBDs), such as malaria, dengue, chikungunya, yellow fever, Zika and Chagas disease, requires up-to-date, robust and comprehensive information on the presence, diversity, ecology, bionomics and geographic spread of the organisms that carry and transmit the infectious agents. Huge gaps exist in the information related to these vectors, creating an essential need for campaigns to mobilise and share data. The publication of data papers is an effective tool for overcoming this challenge. These peer-reviewed articles provide scholarly credit for researchers whose vital work of assembling and publishing well-described, properly formatted datasets often fails to receive appropriate recognition. To address this, GigaScience's sister journal GigaByte partnered with the Global Biodiversity Information Facility (GBIF) to publish a series of data papers, with support from the Special Programme for Research and Training in Tropical Diseases (TDR), hosted by the World Health Organisation (WHO). Here we outline the initial results of this targeted approach to sharing data and describe its importance for controlling VBDs and improving public health.
Staphylococcus aureus Survives with a Minimal Peptidoglycan Synthesis Machine but Sacrifices Virulence and Antibiotic Resistance
Many important cellular processes are performed by molecular machines, composed of multiple proteins that physically interact to execute biological functions. An example is the bacterial peptidoglycan (PG) synthesis machine, responsible for the synthesis of the main component of the cell wall and the target of many contemporary antibiotics. One approach for identifying the essential components of a cellular machine is to determine its minimal protein composition. Staphylococcus aureus is a Gram-positive pathogen, renowned for its resistance to many commonly used antibiotics and its prevalence in hospitals. Its genome encodes a low number of proteins with PG synthesis activity (nine proteins) compared with other model organisms, and it is therefore a good model for the study of a minimal PG synthesis machine. We deleted seven of the nine genes encoding PG synthesis enzymes from the S. aureus genome without affecting normal growth or cell morphology, generating a strain capable of PG biosynthesis catalyzed by only two penicillin-binding proteins, PBP1 and the bifunctional PBP2. However, multiple PBPs are important in clinically relevant environments: bacteria with a minimal PG synthesis machinery became highly susceptible to cell wall-targeting antibiotics and host lytic enzymes, and displayed impaired virulence in a Drosophila infection model that depends on the presence of specific peptidoglycan receptor proteins, namely PGRP-SA. The fact that S. aureus can grow and divide with only two active PG-synthesizing enzymes shows that most of these enzymes are redundant in vitro and identifies the minimal PG synthesis machinery of S. aureus. However, a complex molecular machine is important in environments other than in vitro growth, as the expendable PG synthesis enzymes play an important role in the pathogenicity and antibiotic resistance of S. aureus.
The Changing Landscape for Stroke Prevention in AF: Findings From the GLORIA-AF Registry Phase 2
Background GLORIA-AF (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients with Atrial Fibrillation) is a prospective, global registry program describing antithrombotic treatment patterns in patients with newly diagnosed nonvalvular atrial fibrillation at risk of stroke. Phase 2 began when dabigatran, the first non-vitamin K antagonist oral anticoagulant (NOAC), became available. Objectives This study sought to describe phase 2 baseline data and compare these with the pre-NOAC era collected during phase 1. Methods During phase 2, 15,641 consenting patients were enrolled (November 2011 to December 2014); 15,092 were eligible. This pre-specified cross-sectional analysis describes eligible patients' baseline characteristics. Atrial fibrillation disease characteristics, medical outcomes, and concomitant diseases and medications were collected. Data were analyzed using descriptive statistics. Results Of the total patients, 45.5% were female; median age was 71 (interquartile range: 64, 78) years. Patients were from Europe (47.1%), North America (22.5%), Asia (20.3%), Latin America (6.0%), and the Middle East/Africa (4.0%). Most had high stroke risk (CHA2DS2-VASc [Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, previous Stroke, Vascular disease, Age 65 to 74 years, Sex category] score ≥2; 86.1%); 13.9% had moderate risk (CHA2DS2-VASc = 1). Overall, 79.9% received oral anticoagulants, of whom 47.6% received NOAC and 32.3% vitamin K antagonists (VKA); 12.1% received antiplatelet agents; 7.8% received no antithrombotic treatment. For comparison, the proportion of phase 1 patients (of N = 1,063 all eligible) prescribed VKA was 32.8%, acetylsalicylic acid 41.7%, and no therapy 20.2%. In Europe in phase 2, treatment with NOAC was more common than VKA (52.3% and 37.8%, respectively); 6.0% of patients received antiplatelet treatment; and 3.8% received no antithrombotic treatment. In North America, 52.1%, 26.2%, and 14.0% of patients received NOAC, VKA, and antiplatelet drugs, respectively; 7.5% received no antithrombotic treatment. NOAC use was less common in Asia (27.7%), where 27.5% of patients received VKA, 25.0% antiplatelet drugs, and 19.8% no antithrombotic treatment. Conclusions The baseline data from GLORIA-AF phase 2 demonstrate that in newly diagnosed nonvalvular atrial fibrillation patients, NOAC have been highly adopted into practice, becoming more frequently prescribed than VKA in Europe and North America. Worldwide, however, a large proportion of patients remain undertreated, particularly in Asia and North America. (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients With Atrial Fibrillation [GLORIA-AF]; NCT01468701)
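For reference, the CHA2DS2-VASc score used above for stroke-risk stratification sums one point per risk factor, with two points for age ≥75 years and for prior stroke. A minimal sketch of that arithmetic follows; the field names are illustrative, not the registry's data model.

def cha2ds2_vasc(chf, hypertension, age, diabetes, prior_stroke,
                 vascular_disease, female):
    # CHA2DS2-VASc stroke-risk score (0-9); arguments are booleans except
    # age in years. Field names are illustrative, not the registry's schema.
    score = 0
    score += 1 if chf else 0               # Congestive heart failure
    score += 1 if hypertension else 0      # Hypertension
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # Age bands
    score += 1 if diabetes else 0          # Diabetes mellitus
    score += 2 if prior_stroke else 0      # Previous stroke
    score += 1 if vascular_disease else 0  # Vascular disease
    score += 1 if female else 0            # Sex category (female)
    return score

# Example: a 72-year-old woman with hypertension scores 3 (high risk, score >= 2).
assert cha2ds2_vasc(False, True, 72, False, False, False, True) == 3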
- …