6 research outputs found

    Colour perception with ImageJ: development of a guide for image analysis of microorganisms that produce natural dyes.

    Colours are perceived by the human eye, which translates the information according to individual perception and preference. Colours can be understood as photons of daylight, where red corresponds to photons of long wavelength (low frequency), yellow and green to intermediate wavelengths, and blue to short wavelength (high frequency). Colour can therefore be quantified and analysed on computer systems to determine the amount of red, green and blue present in each colour. The aim of this work was to create a strategy based on digital image analysis for identifying the colours of microorganisms, in order to assist the selection of dye-producing strains without the need to handle the samples.
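The red/green/blue quantification the abstract describes can be sketched as follows. This is a minimal numpy sketch, not the authors' ImageJ workflow; the `rgb_fractions` helper and the toy image are illustrative assumptions.

```python
import numpy as np

def rgb_fractions(image):
    """Return the overall red, green and blue fractions of an RGB image.

    image: H x W x 3 array of uint8 values (0-255).
    """
    pixels = image.reshape(-1, 3).astype(np.float64)
    totals = pixels.sum(axis=0)          # total R, G and B over all pixels
    return totals / totals.sum()         # normalise to fractions

# Hypothetical 2x2 image: two pure-red and two pure-blue pixels.
img = np.array([[[255, 0, 0], [255, 0, 0]],
                [[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)
r, g, b = rgb_fractions(img)
print(round(r, 2), round(g, 2), round(b, 2))  # 0.5 0.0 0.5
```

A strain producing a red pigment would show a dominant red fraction in such a measurement, without any physical handling of the sample.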

    Automated Tracking of Hand Hygiene Stages

    The European Centre for Disease Prevention and Control (ECDC) estimates that 2.5 million cases of Hospital Acquired Infections (HAIs) occur each year in the European Union. Hand hygiene is regarded as one of the most important preventive measures for HAIs. If it is implemented properly, hand hygiene can reduce the risk of cross-transmission of an infection in the healthcare environment. Good hand hygiene is not only important for healthcare settings. The recent ongoing coronavirus pandemic has highlighted the importance of hand hygiene practices in our daily lives, with governments and health authorities around the world promoting good hand hygiene practices. The WHO has published guidelines of hand hygiene stages to promote good hand washing practices. A significant amount of existing research has focused on the problem of tracking hands to enable hand gesture recognition. In this work, gesture tracking devices and image processing are explored in the context of the hand washing environment. Hand washing videos of professional healthcare workers were carefully observed and analyzed in order to recognize hand features associated with hand hygiene stages that could be extracted automatically. Selected hand features such as palm shape (flat or curved), palm orientation (palms facing or not) and hand trajectory (linear or circular movement) were then extracted and tracked with the help of a 3D gesture tracking device, the Leap Motion Controller. These features were further coupled together to detect the execution of a required WHO hand hygiene stage, "Rub hands palm to palm", with the help of the Leap sensor in real time. In certain conditions, the Leap Motion Controller enables a clear distinction to be made between the left and right hands.
However, whenever the two hands came into contact with each other, sensor data from the Leap, such as palm position and palm orientation, was lost for one of the two hands. Hand occlusion was found to be a major drawback with the application of the device to this use case. Therefore, RGB digital cameras were selected for further processing and tracking of the hands. An image processing technique, using a skin detection algorithm, was applied to extract instantaneous hand positions for further processing, to enable various hand hygiene poses to be detected. Contour and centroid detection algorithms were further applied to track the hand trajectory in hand hygiene video recordings. In addition, feature detection algorithms were applied to a hand hygiene pose to extract the useful hand features. The video recordings did not suffer from occlusion as is the case for the Leap sensor, but the segmentation of one hand from another was identified as a major challenge with images, because the contour detection resulted in a continuous mass when the two hands were in contact. For future work, the data from gesture trackers, such as the Leap Motion Controller, and cameras (with image processing) could be combined to make a robust hand hygiene gesture classification system.
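The skin-detection-plus-centroid pipeline described above can be sketched as follows. This is a minimal numpy sketch assuming a simple red-dominance skin heuristic, not the authors' actual algorithm; the thresholds, function names and toy image are illustrative assumptions.

```python
import numpy as np

def skin_mask(image):
    """Very rough skin heuristic on an RGB image: red channel dominant."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    return (r > 95) & (r > g) & (g > b) & (r - b > 15)

def centroid(mask):
    """Centre of mass (row, col) of the True pixels in a boolean mask."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None          # no skin pixels detected in this frame
    return ys.mean(), xs.mean()

# Hypothetical 4x4 frame with a skin-coloured 2x2 patch in the middle.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (200, 120, 80)
cy, cx = centroid(skin_mask(img))
print(cy, cx)  # 1.5 1.5
```

Tracking the centroid across successive frames gives the hand trajectory; as noted above, the approach degrades when both hands merge into a single connected region.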

    An automated system for the classification and segmentation of brain tumours in MRI images based on the modified grey level co-occurrence matrix

    The development of an automated system for the classification and segmentation of brain tumours in MRI scans remains challenging due to the high variability and complexity of brain tumours. Visual examination of MRI scans to diagnose brain tumours is the accepted standard. However, due to the large number of MRI slices produced for each patient, this is becoming a time-consuming and slow process that is also prone to errors. This study explores an automated system for the classification and segmentation of brain tumours in MRI scans based on texture feature extraction. The research investigates an appropriate technique for feature extraction and the development of a three-dimensional segmentation method. This was achieved by the investigation and integration of several image processing methods related to texture features and segmentation of MRI brain scans. First, the MRI brain scans were pre-processed by image enhancement, intensity normalization, background segmentation and correction of the mid-sagittal plane (MSP) of the brain for any possible skewness of the patient's head. Second, texture features were extracted using a modified grey level co-occurrence matrix (MGLCM) from T2-weighted (T2-w) MRI slices and classified into normal and abnormal using a multi-layer perceptron neural network (MLP). The texture feature extraction method starts from the standpoint that the human brain structure is approximately symmetric around the MSP of the brain. The extracted features measure the degree of symmetry between the left and right hemispheres of the brain, which is used to detect abnormalities in the brain. This will enable clinicians to quickly reject the MRI brain scans of patients who have a normal brain and to focus on those who have pathological brain features.
Finally, the bounding 3D-boxes based genetic algorithm (BBBGA) was used to identify the location of the brain tumour and segment it automatically using the three-dimensional active contour without edge (3DACWE) method. The research was validated using two datasets: a real dataset collected from the MRI Unit in Al-Kadhimiya Teaching Hospital in Iraq in 2014, and the standard benchmark multimodal brain tumour segmentation (BRATS 2013) dataset. The experimental results on both datasets demonstrated the efficacy of the proposed system in the successful classification and segmentation of brain tumours in MRI scans. The achieved classification accuracies were 97.8% for the collected dataset and 98.6% for the standard dataset, while the segmentation Dice scores were 89% for the collected dataset and 89.3% for the standard dataset.
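The grey level co-occurrence matrix at the core of the texture features can be sketched as follows. This is a plain GLCM with a single Haralick contrast feature, not the authors' modified MGLCM or their symmetry measure; the function names and toy image are illustrative assumptions.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalised grey level co-occurrence matrix for offset (dx, dy).

    image: 2D array of integer grey levels in [0, levels).
    Counts how often grey level i co-occurs with grey level j
    at the given pixel offset, then normalises to probabilities.
    """
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """Haralick contrast feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(m.shape)
    return ((i - j) ** 2 * m).sum()

# Hypothetical 2x2 slice with two grey levels and strong horizontal contrast.
slice_ = np.array([[0, 1],
                   [0, 1]])
m = glcm(slice_, levels=2)
print(contrast(m))  # 1.0
```

In the symmetry-based scheme the abstract describes, such features would be computed for the left and right hemispheres and compared, with large differences flagging a possible abnormality.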

    Proceedings...

    The sixth edition of the Meeting, held in digital format on 24 and 25 November 2020 via the Embrapa channel on YouTube, has as its central theme "Bioproducts: adding value to agro-industries", with special emphasis on bio-inputs, and features eight invited speakers from outside Embrapa Agroenergia. This year's event is divided into three parts: I. The Agroenergy in Focus Symposium, on the theme "Biomass for the Bioeconomy", comprising two round tables addressing the topics "Bio-inputs" and "Bioproducts". II. A session presenting the scientific papers of the VI EnPI, submitted in article format, with live presentations in public meeting rooms (via the Google Meet tool). The closing session of the event includes the announcement and awarding of the best RD&I papers presented in the undergraduate, postgraduate and professional categories, sponsored by the Associação Brasileira de Bioinovação (ABBI). III. A Technology Transfer session from Embrapa Agroenergia, entitled InovAR – Diálogos de Inovação Tecnológica, focused on renewable materials. Technical editors: Simone Mendonça, Thaís Fabiana Chan Salum. Held in digital format.