
    Improving Maternal and Fetal Cardiac Monitoring Using Artificial Intelligence

    Early diagnosis of possible risks in the physiological status of the fetus and mother during pregnancy and delivery is critical and can reduce mortality and morbidity. For example, early detection of life-threatening congenital heart disease may increase the survival rate and reduce morbidity while allowing parents to make informed decisions. Studying cardiac function requires a variety of signals to be collected; in practice, several heart monitoring methods, such as electrocardiography (ECG) and photoplethysmography (PPG), are commonly used. Although there are several methods for monitoring fetal and maternal health, research is underway to enhance their mobility, accuracy, automation, and noise resistance so that they can be used extensively, even at home. Artificial intelligence (AI) can help to design a precise and convenient monitoring system. To achieve these goals, the following objectives are defined in this research.
    The first step for a signal acquisition system is to obtain high-quality signals. As the first objective, a signal processing scheme is explored to improve the signal-to-noise ratio (SNR) and extract the desired signal from a noisy one with negative SNR (i.e., the noise power is greater than the signal power). ECG and PPG signals are sensitive to noise from a variety of sources, increasing the risk of misinterpretation and interfering with the diagnostic process. The noise typically arises from power line interference, white noise, electrode contact noise, muscle contraction, baseline wandering, instrument noise, motion artifacts, and electrosurgical noise. Even a slight variation in the acquired ECG waveform can impair the understanding of the patient's heart condition and affect the treatment procedure. Recent solutions, such as adaptive and blind source separation (BSS) algorithms, still have drawbacks, such as the need for a model of the noise or of the desired signal, tuning and calibration, and inefficiency when dealing with excessively noisy signals. Therefore, the goal of this step is to develop a robust algorithm that can estimate the noise, even when the SNR is negative, using a BSS method, and remove it with an adaptive filter.
    The second objective concerns monitoring maternal and fetal ECG. Previous non-invasive methods used the maternal abdominal ECG (MECG) to extract the fetal ECG (FECG). These methods need to be calibrated to generalize well: for each new subject, calibration against a trusted device is required, which is cumbersome, time-consuming, and itself susceptible to errors. We explore deep learning (DL) models for domain mapping, such as Cycle-Consistent Adversarial Networks (CycleGAN), to map MECG to FECG and vice versa. The advantage of the proposed DL method over state-of-the-art approaches, such as adaptive filters or blind source separation, is that it generalizes well to unseen subjects. Moreover, it does not need calibration, is not sensitive to the heart rate variability of the mother and fetus, and can handle low-SNR conditions.
    Thirdly, an AI-based system that can measure continuous systolic blood pressure (SBP) and diastolic blood pressure (DBP) with minimal electrode requirements is explored. The most common method of measuring blood pressure uses cuff-based equipment, which cannot monitor blood pressure continuously, requires calibration, and is difficult to use. Other solutions use a synchronized ECG and PPG combination, which is still inconvenient and challenging to synchronize. The proposed method overcomes these issues by using only the PPG signal. Using only PPG is more convenient, since it requires a single sensor on the finger, where acquisition is more resilient to motion-induced errors.
    The fourth objective is to detect anomalies in FECG data. The requirement of thousands of manually annotated samples is a concern for state-of-the-art detection systems, especially for FECG, where few publicly available datasets are annotated for each FECG beat. Therefore, we utilize active learning and transfer learning to train an FECG anomaly detection system with the fewest training samples and high accuracy. A model is first trained to detect ECG anomalies in adults and is then adapted to detect anomalies in FECG. Only the most influential samples from the training set are selected for training, which minimizes the annotation effort.
    Because of physician shortages and rural geography, pregnant women's access to prenatal care might be improved through remote monitoring, especially where access to prenatal care is limited. Increased compliance with prenatal treatment and linked care among various providers are two possible benefits of remote monitoring. Maternal and fetal remote monitoring can be effective only if the recorded signals are transmitted correctly. Therefore, the last objective is to design a compression algorithm that compresses signals (such as ECG) with a higher ratio than the state of the art and decompresses them quickly without distortion. The proposed compression is fast thanks to a time-domain B-spline approach, and the compressed data can be used for visualization and monitoring without decompression owing to the B-spline properties. Moreover, a stochastic optimization is designed to retain signal quality, so the signal is not distorted for diagnostic purposes, while achieving a high compression ratio.
    In summary, the components of an end-to-end system for day-to-day maternal and fetal cardiac monitoring can be envisioned as a combination of all the tasks listed above. PPG and ECG recorded from the mother can be denoised using the deconvolution strategy; compression can then be employed to transmit the signals. The trained CycleGAN model can extract the FECG from the MECG, and the model trained with active transfer learning can detect anomalies in both MECG and FECG. Simultaneously, maternal blood pressure is retrieved from the PPG signal. This information can be used to monitor the cardiac status of mother and fetus and to fill in reports such as the partogram.
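    The denoising objective above combines BSS-based noise estimation with adaptive filtering. The sketch below illustrates only the adaptive-cancellation half of that idea, using a standard normalized-LMS filter; the noise reference `noise_ref` is a hypothetical stand-in for the output of a BSS stage, and the whole example is a toy illustration rather than the algorithm developed in the thesis.

```python
import numpy as np

def nlms_denoise(noisy, noise_ref, order=16, mu=0.5, eps=1e-8):
    """Normalized-LMS adaptive noise canceller.

    noisy     : observed signal = clean signal + correlated noise
    noise_ref : reference correlated with the noise (e.g. from a BSS stage)
    Returns the error signal, which approximates the clean signal.
    """
    w = np.zeros(order)
    clean_est = np.zeros_like(noisy)
    for n in range(order, len(noisy)):
        x = noise_ref[n - order + 1:n + 1][::-1]   # most recent reference samples
        y = w @ x                                  # current noise estimate
        e = noisy[n] - y                           # cancel the estimated noise
        w += mu * e * x / (x @ x + eps)            # NLMS weight update
        clean_est[n] = e
    return clean_est

# toy usage: a 2 Hz "ECG-like" tone buried in filtered white noise (negative SNR)
fs = 500
t = np.arange(0, 10, 1 / fs)
clean = np.sin(2 * np.pi * 2 * t)
noise_ref = np.random.randn(t.size)
noise = np.convolve(noise_ref, [0.6, 0.3, 0.1], mode="same")
recovered = nlms_denoise(clean + 3 * noise, noise_ref)
```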

    Efficient Encoding of Wireless Capsule Endoscopy Images Using Direct Compression of Colour Filter Array Images

    Since its invention in 2001, wireless capsule endoscopy (WCE) has played an important role in the endoscopic examination of the gastrointestinal tract. During this period, WCE has undergone tremendous technological advances, making it the first-line modality for small-bowel diseases ranging from bleeding to cancer. Current research efforts are focused on evolving WCE to include functionality such as drug delivery, biopsy, and active locomotion. For the integration of these functionalities into WCE, two critical prerequisites are enhancing image quality and reducing power consumption. An efficient image compression solution is required to retain the highest image quality while reducing the transmission power. The issue is more challenging because image sensors in WCE capture images in Bayer colour filter array (CFA) format, so standard compression engines provide inferior compression performance. The focus of this thesis is to design an optimized image compression pipeline to encode capsule endoscopic (CE) images efficiently in CFA format. To this end, this thesis proposes two image compression schemes. First, a lossless image compression algorithm is proposed, consisting of an optimum reversible colour transformation, a low-complexity prediction model, a corner clipping mechanism, and a single-context adaptive Golomb-Rice entropy encoder. The derivation of the colour transformation that provides the best performance for a given prediction model is treated as an optimization problem. The low-complexity prediction model works in raster order and requires no buffer memory. The colour transformation yields lower inter-colour correlation and allows efficient independent encoding of the colour components. The second compression scheme is a lossy algorithm with an integer discrete cosine transform at its core. Using statistics obtained from a large dataset of CE images, an optimum colour transformation is derived using principal component analysis (PCA). The transformed coefficients are quantized using an optimized quantization table designed to discard medically irrelevant information. A fast demosaicking algorithm is developed to reconstruct the colour image from the lossy CFA image in the decoder. Extensive experiments and comparisons with state-of-the-art lossless image compression methods establish the superiority of the proposed methods as simple and efficient image compression algorithms. The lossless algorithm can transmit the image losslessly within the available bandwidth, while performance evaluation of the lossy algorithm indicates that it can deliver high-quality images at low transmission power and low computational cost.
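    The lossless scheme above relies on a Golomb-Rice entropy encoder for the prediction residuals. Below is a minimal, self-contained sketch of plain Golomb-Rice coding with a fixed parameter k, assuming signed residuals are first mapped to non-negative integers by a zig-zag mapping; the context adaptation and corner-clipping mechanism of the proposed encoder are not reproduced here.

```python
def zigzag(residual):
    """Map a signed prediction residual to a non-negative integer."""
    return (residual << 1) if residual >= 0 else (-residual << 1) - 1

def rice_encode(values, k):
    """Golomb-Rice code each non-negative integer with parameter k.

    Each value v is split into a quotient q = v >> k (coded in unary as
    q ones followed by a zero) and a k-bit binary remainder.
    """
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                                # unary quotient
        bits.extend((r >> i) & 1 for i in range(k - 1, -1, -1))   # MSB-first remainder
    return bits

def rice_decode(bits, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:          # read unary quotient
            q, i = q + 1, i + 1
        i += 1                       # skip terminating zero
        r = 0
        for _ in range(k):           # read k-bit remainder
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out

residuals = [0, -1, 3, -4, 2]
mapped = [zigzag(r) for r in residuals]
code = rice_encode(mapped, k=2)
assert rice_decode(code, k=2, count=len(mapped)) == mapped
```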

    Neuroendoscopy Adapter Module Development for Better Brain Tumor Image Visualization

    The exploration and classification of brain magnetic resonance images has received significant attention in recent years, and various computer-aided diagnosis solutions have been proposed to support radiologists in decision-making. In this context, adequate image classification is strongly required, as the most critical brain tumors often develop from subdural hematoma cells, which may be a common type in adults. In the healthcare setting, brain MRIs are used to identify tumors, and various computerized diagnosis systems have been suggested to help medical professionals in clinical decision-making. Neuroendoscopy is the gold standard for discovering brain tumors; nevertheless, typical neuroendoscopy can overlook ripped growths. Neuroendoscopy is a minimally invasive surgical procedure in which the neurosurgeon removes the tumor through small holes in the skull or through the mouth or nose. It enables neurosurgeons to access areas of the brain that cannot be reached with traditional surgery and to remove the tumor without cutting or harming other parts of the skull. We focused on determining whether the visualization of ripped tumor lesions is improved by autofluorescence imaging and narrow-band imaging evaluation combined with the latest neuroendoscopy technique. In addition, over the last several years pathology laboratories have begun to move toward an entirely digital workflow, with electronic slides as the key element of this transition. Besides the benefits for storing and browsing image data, electronic slides facilitate the application of image analysis approaches that seek to develop quantitative attributes to assist pathologists in their work. However, such systems also present difficulties in execution and handling; hence, the conventional method needs automation. We developed and employed an aliasing search method, incorporated into a new Neuroendoscopy Adapter Module (NAM), to determine the target of interest and find the best-focused image position.

    Optimización del diseño estructural de pavimentos asfálticos para calles y carreteras

    The construction of asphalt pavements in streets and highways is an activity that requires optimizing the consumption of significant economic and natural resources. Pavement design optimization must reconcile contradictory objectives according to the availability of resources and users' needs. This dissertation explores the application of metaheuristics to optimize the design of asphalt pavements using an incremental design based on the prediction of damage and vehicle operating costs (VOC). The costs are proportional to energy and resource consumption and polluting emissions. The evolution of asphalt pavement design and of metaheuristic optimization techniques on this topic was reviewed. Four computer programs were developed: (1) UNLEA, a program for the structural analysis of multilayer systems; (2) PSO-UNLEA, a program that uses the particle swarm optimization (PSO) metaheuristic for the backcalculation of pavement moduli; (3) UNPAVE, an incremental pavement design program based on the equations of the North American MEPDG that includes the computation of vehicle operating costs based on the IRI; and (4) PSO-PAVE, a PSO program to search for thicknesses that optimize the design considering construction and vehicle operating costs. The case studies show that the backcalculation and structural design of pavements can be optimized by PSO considering restrictions on the thicknesses and the selection of materials. Future developments should reduce the computational cost and calibrate the pavement performance and VOC models.
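    As a rough illustration of how PSO-PAVE searches layer thicknesses, the sketch below runs a generic particle swarm over a hypothetical cost function that trades construction cost against a damage/VOC proxy; the function `total_cost`, the per-layer prices and the thickness bounds are invented for illustration and stand in for the MEPDG-based incremental design and IRI-based VOC models used in the actual programs.

```python
import numpy as np

def total_cost(thickness):
    """Hypothetical placeholder: construction cost grows with layer thickness,
    while a crude proxy for damage/vehicle operating cost decreases with it."""
    construction = thickness @ np.array([3.0, 1.5, 0.8])     # cost per cm per layer
    damage_proxy = 500.0 / (1.0 + thickness.sum())            # stands in for MEPDG + IRI/VOC models
    return construction + damage_proxy

def pso(cost, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(lower)
    rng = np.random.default_rng(0)
    x = rng.uniform(lower, upper, (n_particles, dim))          # positions = layer thicknesses (cm)
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.apply_along_axis(cost, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)                       # enforce thickness limits
        val = np.apply_along_axis(cost, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, cost(gbest)

# asphalt / base / subbase thickness bounds in cm (illustrative only)
best, best_cost = pso(total_cost,
                      lower=np.array([5.0, 15.0, 15.0]),
                      upper=np.array([30.0, 60.0, 60.0]))
```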

    Diabetic retinopathy diagnosis through multi-agent approaches

    Diabetic retinopathy has been revealed as a serious public health problem in the Western world, since it is the most common cause of vision impairment among people of working age. Early diagnosis and adequate treatment can prevent loss of vision; thus, a regular screening program to detect diabetic retinopathy in its early stages could be effective for the prevention of blindness. Due to its characteristics, digital color fundus photography has been the preferred eye examination method adopted in these programs. Nevertheless, owing to the growing incidence of diabetes in the population, ophthalmologists have to review a huge number of images, so the development of computational tools that can assist the diagnosis is of major importance. Several works have been published in recent years for this purpose, but an automatic system for clinical practice has yet to emerge. In general, these algorithms are used to normalize, segment and extract information from images, which is then used by classifiers that aim to classify the regions of the fundus image. These methods are mostly based on global approaches that cannot be locally adapted to the image properties and, therefore, none of them performs as needed because of the complexity of fundus images.
    This thesis focuses on the development of new tools based on multi-agent approaches to assist the early diagnosis of diabetic retinopathy. Automatic segmentation of fundus images for diabetic retinopathy diagnosis should comprise both pathological (dark and bright lesions) and anatomical (optic disc, blood vessels and fovea) features. Accordingly, systems for optic disc detection, bright lesion segmentation, blood vessel segmentation and dark lesion segmentation were implemented and, when possible, compared to approaches already described in the literature. Two kinds of agent-based systems were investigated and applied to digital color fundus photographs: an ant colony system, and a multi-agent system composed of reactive agents with interaction mechanisms between them. The ant colony system was used for optic disc detection and bright lesion segmentation; multi-agent system models were developed for blood vessel segmentation and small dark lesion segmentation. The multi-agent models created in this study are not image processing techniques on their own; rather, they are used as tools to improve the results of traditional algorithms at the micro level. The results of all the proposed approaches are very promising and reveal that the systems created perform better than other recent methods described in the literature. The main scientific contribution of this thesis is therefore to prove that multi-agent approaches can be efficient in segmenting structures in retinal images. Such an approach goes beyond classic image processing algorithms, which are limited to macro results and do not consider the local characteristics of images. Hence, multi-agent approaches could be a fundamental tool for developing very efficient systems to be used in screening programs for the early diagnosis of diabetic retinopathy. Work supported by FEDER funds through the "Programa Operacional Factores de Competitividade – COMPETE" and by national funds through FCT - Fundação para a Ciência e a Tecnologia. C. Pereira thanks FCT for grant SFRH/BD/61829/2009.
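    To give a flavour of the ant colony system mentioned above, the toy sketch below lets ants walk over a grayscale image, biased by pheromone and by pixel brightness as heuristic information, so that the accumulated pheromone map highlights bright structures; the parameters and the thresholding rule are illustrative assumptions, not the segmentation system developed in the thesis.

```python
import numpy as np

def ant_colony_bright_map(img, n_ants=200, n_steps=300,
                          alpha=1.0, beta=2.0, rho=0.05, deposit=0.1, seed=0):
    """Toy ant colony system: ants prefer bright neighbours (heuristic) and
    previously visited paths (pheromone); the pheromone map marks bright areas."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    heur = img / (img.max() + 1e-9)                 # brightness as heuristic information
    tau = np.full((h, w), 1e-3)                     # pheromone field
    ants = np.column_stack([rng.integers(0, h, n_ants),
                            rng.integers(0, w, n_ants)])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    for _ in range(n_steps):
        for a in range(n_ants):
            r, c = ants[a]
            cand = [((r + dr) % h, (c + dc) % w) for dr, dc in moves]
            p = np.array([(tau[rc] ** alpha) * (heur[rc] ** beta + 1e-9)
                          for rc in cand])
            nxt = cand[rng.choice(len(cand), p=p / p.sum())]
            tau[nxt] += deposit * heur[nxt]          # deposit pheromone on bright pixels
            ants[a] = nxt
        tau *= (1.0 - rho)                           # evaporation
    return tau

# usage on a synthetic image containing one bright blob
img = np.zeros((64, 64))
img[20:28, 30:38] = 1.0
pmap = ant_colony_bright_map(img)
lesion_mask = pmap > pmap.mean() + 2 * pmap.std()
```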

    Chemometric tools for automated method-development and data interpretation in liquid chromatography

    The thesis explores the challenges and advancements in the field of liquid chromatography (LC), particularly focusing on complex sample analysis using high-resolution mass spectrometry (MS) and two-dimensional (2D) LC techniques. The research addresses the need for efficient optimization and data-handling strategies in modern LC practice. The thesis is divided into several chapters, each addressing specific aspects of LC and polymer analysis. Chapter 2 provides an overview of the need for chemometric tools in LC practice, discussing methods for processing and analyzing data from 1D and 2D-LC systems and how chemometrics can be utilized for method development and optimization. Chapter 3 introduces a novel approach for interpreting the molecular-weight distribution and intrinsic viscosity of polymers, allowing quantitative analysis of polymer properties without prior knowledge of their interactions. This method correlates the curvature parameter of the Mark-Houwink plot with the polymer's structural and chemical properties. Chapters 4 and 5 focus on the analysis of cellulose ethers (CEs), essential in various industrial applications. A new method is presented for mapping the substitution degree and composition of CE samples, providing detailed compositional distributions. Another method involves a comprehensive 2D LC-MS/MS approach for analyzing hydroxypropyl methyl cellulose (HPMC) monomers, revealing subtle differences in composition between industrial HPMC samples. Chapter 6 introduces AutoLC, an algorithm for automated and interpretive development of 1D-LC separations. It uses retention modeling and Bayesian optimization to achieve optimal separation within a few iterations, significantly improving the efficiency of gradient LC separations. Chapter 7 focuses on the development of an open-source algorithm for automated method development in 2D-LC-MS systems. This algorithm improves separation performance by refining gradient profiles and accurately predicting peak widths, enhancing the reliability of complex gradient LC separations. Chapter 8 addresses the challenge of gradient deformation in LC instruments. An algorithm based on the stable function corrects instrument-specific gradient deformations, enabling accurate determination of analyte retention parameters and improving data comparability between different sources. Chapter 9 introduces a novel approach using capacitively-coupled-contactless-conductivity detection (C4D) to measure gradient profiles without adding tracer components. This method enhances inter-system transferability of retention models for polymers, overcoming the limitations of UV-absorbance detectable tracer components. Chapter 10 discusses practical choices and challenges faced in the thesis chapters, highlighting the need for well-defined standard samples in industrial polymer analysis and emphasizing the importance of generalized problem-solving approaches. The thesis identifies future research directions, emphasizing the importance of computational-assisted methods for polymer analysis, the utilization of online reaction modulation techniques, and exploring continuous distributions obtained through size-exclusion chromatography (SEC) in conjunction with triple detection. Chemometric tools are recognized as essential for gaining deeper insights into polymer chemistry and improving data interpretation in the field of LC
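    Chapter 6 couples retention modeling with Bayesian optimization; the sketch below is a much-simplified stand-in that uses the linear solvent-strength (LSS) retention model and a coarse grid search over the gradient slope to maximize a crude minimum-resolution proxy. The analyte parameters, initial composition and constant peak width are hypothetical values chosen only to make the example runnable.

```python
import numpy as np

def retention_time(logk_w, S, phi0, slope, t0=1.0, dt=0.01):
    """Numerically solve the gradient-elution equation
    integral_0^{t_e} dt / (t0 * k(phi(t))) = 1, with the LSS model
    log10 k = logk_w - S * phi and a linear gradient phi(t) = phi0 + slope * t.
    Returns the retention time t_e + t0."""
    t, integral = 0.0, 0.0
    while integral < 1.0 and t < 200.0:
        phi = min(phi0 + slope * t, 1.0)
        k = 10.0 ** (logk_w - S * phi)
        integral += dt / (t0 * k)
        t += dt
    return t + t0

def min_resolution(times, peak_width=0.15):
    """Crude proxy: smallest gap between adjacent peaks over an assumed width."""
    times = np.sort(times)
    return np.min(np.diff(times)) / peak_width

# hypothetical analytes described by (log k_w, S) pairs
analytes = [(2.0, 4.0), (2.3, 4.5), (2.8, 5.0), (3.0, 4.8)]

best = None
for slope in np.linspace(0.005, 0.05, 20):          # candidate gradient slopes
    tr = [retention_time(lkw, S, phi0=0.05, slope=slope) for lkw, S in analytes]
    score = min_resolution(tr)
    if best is None or score > best[1]:
        best = (slope, score)
print("best gradient slope:", best[0], "min-resolution proxy:", round(best[1], 2))
```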

    A constraint-based approach for assessing the capabilities of existing designs to handle product variation

    All production machinery is designed with an inherent capability to handle slight variations in product. This is initially achieved by providing adjustments, through user settings or complete sets of change parts, that allow changes such as variations in pack size to be accommodated. By appropriate use of these abilities, most product variations can be handled. However, when extreme setup conditions (major changes in product size and configuration) are considered, there is no guarantee that the existing machines are able to cope. The problem is even more difficult when completely new product families are proposed to be made on an existing product line. Such changes in product range are becoming more common as producers respond to demands for ever-increasing customization and product differentiation. An issue exists due to the lack of knowledge of the capabilities of the machines being employed. This often forces the producer to undertake a series of practical product trials; these, however, can only be undertaken once the product form has been decided and produced in sufficient numbers, leaving little opportunity to make changes that could greatly improve the potential output of the line and reduce waste. There is thus a need for a supportive modelling approach that allows the effect of variation in products to be analyzed together with an understanding of the manufacturing machine capability. Only through their analysis and interaction can the capabilities be fully understood and refined to make production possible. This thesis presents a constraint-based approach that offers a solution to the problems above. Using this approach, it has been shown that a generic process can be formed to identify the limiting factors (constraints) of the variant products to be processed. These identified constraints can be mapped to form the potential limits of performance for the machine. These limits of performance (performance envelopes) can be employed to assess the design's capability to cope with product variation. The approach is successfully demonstrated on three industrial case studies.
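    A minimal sketch of the performance-envelope idea is given below: each identified limiting factor is represented as an allowed range on a product attribute, and a proposed variant is assessed by checking which constraints it violates. The attribute names and numeric limits are hypothetical and merely illustrate the kind of check the constraint-based approach supports.

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    """One limiting factor of the machine, expressed as an allowed range
    on a product attribute (units are embedded in the attribute name)."""
    attribute: str
    lower: float
    upper: float

    def violated_by(self, product):
        value = product[self.attribute]
        return not (self.lower <= value <= self.upper)

# hypothetical performance envelope for one packaging machine
envelope = [
    Constraint("pack_height_mm", 80, 260),
    Constraint("pack_width_mm", 50, 180),
    Constraint("pack_mass_g", 100, 1200),
    Constraint("line_rate_packs_per_min", 20, 120),
]

def assess(product, envelope):
    """Return the constraints a proposed variant would violate;
    an empty list means the variant lies inside the performance envelope."""
    return [c for c in envelope if c.violated_by(product)]

new_variant = {"pack_height_mm": 300, "pack_width_mm": 120,
               "pack_mass_g": 900, "line_rate_packs_per_min": 90}
for c in assess(new_variant, envelope):
    print(f"{c.attribute} outside envelope [{c.lower}, {c.upper}]")
```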

    Ultrasound image processing in the evaluation of labor induction failure risk

    Labor induction is defined as the artificial stimulation of uterine contractions for the purpose of vaginal birth. Induction is prescribed for medical and elective reasons, and success in labor induction is associated with achieving vaginal delivery. Cesarean section is one of the potential risks of labor induction, occurring in about 20% of inductions. A ripe cervix (soft and distensible) is needed for successful labor. During ripening, cervical tissues undergo microstructural changes: collagen becomes disorganized and water content increases. These changes affect the interaction between cervical tissues and sound waves during transvaginal ultrasound scanning and are perceived as gray-level intensity variations in the echographic image. Texture analysis can be used to analyze these variations and provides a means to evaluate cervical ripening non-invasively.
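    Texture analysis of the cervical region can be illustrated with gray-level co-occurrence features, one common family of texture descriptors; the sketch below computes a co-occurrence matrix and its contrast and homogeneity for a placeholder region of interest. The specific features and parameters used in the study are not specified here, so this is a generic example rather than the actual pipeline.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=32):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).
    The image is quantized to `levels` gray levels before counting pairs."""
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    h, w = q.shape
    m = np.zeros((levels, levels))
    for r in range(h - dy):
        for c in range(w - dx):
            m[q[r, c], q[r + dy, c + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Contrast and homogeneity of a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# usage on a placeholder ROI standing in for a cropped transvaginal scan region
roi = np.random.randint(0, 255, (128, 128))
contrast, homogeneity = texture_features(glcm(roi))
```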

    Classification of squamous cell cervical cytology

    Cervical cancer affects a significant number of women in developing countries every day and causes a high number of deaths, with a large economic and social cost. In the fight against cervical cancer, the World Health Organization promotes early detection screening programs based on different techniques, such as conventional cytology (Pap), liquid-medium cytology (CML), human papillomavirus (HPV) DNA testing, and staining with dilute acetic acid and Lugol's iodine solution. Conventional cytology is the most used technique, being widely accepted, inexpensive, and supported by quality control mechanisms. The test has shown a sensitivity of 38% to 84% and a specificity of 90% in multiple studies and has been considered the test of choice for screening [14]. Cervical cancer has not been a public health problem in developed countries for more than three decades, partly because of the implementation of other tests such as CML, which has increased sensitivity to figures between 76% and 99%. This test produces a thin monolayer of cells to be examined. In our country, this technique is far from being applied because of its high cost; consequently, conventional cytology has remained in practice the only possible examination of cervical pathology. In this technique, a sample of cells from the transformation zone of the cervix is taken using a brush or wooden spatula, spread onto a slide, and fixed with a preservative solution. This sample is then sent to a laboratory for staining and microscopic examination to determine whether the cells are normal or not, a task that requires time and expertise. Attempting to alleviate the work burden caused by the number of examinations in the routine clinical scenario, some researchers have proposed the development of computational tools to detect and classify the cells of the cervical transformation zone. In the present work, the transformation zone is first characterized using color and texture descriptors defined in the MPEG-7 standard, and the tissue descriptors are used as the input to a bank of binary classifiers, obtaining a precision of 90% and a sensitivity of 83%. Unlike traditional approaches that extract cell features from previously segmented cells, the present strategy is completely independent of the particular cell shape. Although most works in the domain report higher precision rates, the images used in those works for training and evaluation are very different from what is obtained in cytology laboratories in Colombia. Moreover, most of these methods are applied to monolayer techniques, and therefore their recognition rates are better than what we found in the present investigation. The main aim of the present work was thus to develop a strategy applicable to our real conditions as a pre-screening method, in which case the method should be robust to the many random factors that contaminate image capture. A segmentation strategy is easily misled by all these factors, so our method uses characteristics that are independent of segmentation quality, while minimizing reading time and intra-observer variability, thereby facilitating real application of such screening tools.
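    The classification stage described above feeds tissue descriptors to a bank of binary classifiers. A minimal sketch of that arrangement, assuming MPEG-7-like descriptors are already available as a feature matrix, is shown below using one-vs-rest support vector machines from scikit-learn; the synthetic features and the three-class labelling are placeholders for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# placeholder feature matrix standing in for MPEG-7 color/texture descriptors,
# one row per image region of the transformation zone
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))
y = rng.integers(0, 3, size=600)   # hypothetical labels: 0=normal, 1=low-grade, 2=high-grade

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# bank of binary classifiers: one RBF-SVM per class, combined one-vs-rest
bank = OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale"))
bank.fit(X_tr, y_tr)
print(classification_report(y_te, bank.predict(X_te)))
```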