    Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    This study presents efficient vision-based finger detection, tracking, and event identification techniques, together with a low-cost hardware framework, for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs from the scattered infrared light captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability under various ambient lighting conditions and spurious infrared noise. A connected-component analysis procedure is then applied to the bright pixels obtained in the previous stage to extract the connected components of the touch blobs. After the touch blobs are extracted from each captured frame, a blob tracking and event recognition process analyzes their spatial and temporal information across consecutive frames to determine the touch events and actions performed by users; this process also refines the detection results and corrects errors and occlusions introduced during blob extraction. The process comprises two phases. First, the blob tracking phase establishes motion correspondence between blobs in succeeding frames by analyzing their spatial and temporal features. Second, the touch event recognition phase identifies meaningful touch events, such as finger moving, rotating, pressing, hovering, and clicking actions, from the motion information of the touch blobs. Experimental results demonstrate that the proposed system is feasible and effective for multi-touch sensing applications under various operational environments and conditions.
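
    The per-frame blob-extraction stage described above can be pictured with a brief sketch. The snippet below is a minimal illustration rather than the authors' implementation: it substitutes scikit-image's multi-Otsu thresholding for the paper's automatic multilevel histogram thresholding, keeps only the brightest class as candidate touch pixels, and then applies connected-component labelling; the `frame` array and the minimum-area filter used to suppress noise are assumptions.

```python
# Hedged sketch of the bright-blob extraction stage (not the authors' code).
# Assumes `frame` is a single grayscale infrared image as a 2-D numpy array.
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.measure import label, regionprops

def extract_touch_blobs(frame: np.ndarray, min_area: int = 20):
    """Return (centroid, area) for each candidate touch blob in one frame."""
    # Automatic multilevel thresholding; the highest threshold separates the
    # brightest class, which corresponds to scattered-IR touch blobs.
    thresholds = threshold_multiotsu(frame, classes=3)
    bright = frame > thresholds[-1]

    # Connected-component analysis on the bright pixels.
    labeled = label(bright, connectivity=2)
    blobs = []
    for region in regionprops(labeled):
        if region.area >= min_area:  # suppress small spurious IR noise
            blobs.append((region.centroid, region.area))
    return blobs
```

    A tracking stage would then associate the returned centroids across consecutive frames, for example by nearest-neighbour matching, before touch events are recognized.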

    The Role of Ecological Interactions in Polymicrobial Biofilms and their Contribution to Multiple Antibiotic Resistance

    The primary objectives of this research were to demonstrate that (1) antibiotic-resistant bacteria can promote the survival of antibiotic-sensitive organisms when the two are grown together as biofilms in the presence of antibiotics, (2) community-level multiple antibiotic resistance of polymicrobial consortia can enable biofilm formation despite the presence of multiple antibiotics, and (3) biofilms may benefit plasmid retention and heterologous protein production in the absence of selective pressure. Quantitative analyses of confocal data showed that ampicillin-resistant organisms supported populations of ampicillin-sensitive organisms at steady-state ampicillin concentrations 13 times greater than the concentration that inhibits sensitive cells inoculated alone. The reaction rate of the resistance mechanism influenced the degree of protection. Spectinomycin-resistant organisms did not support their sensitive counterparts, although flow cytometry indicated that GFP production by the sensitive strain was improved. When both organisms were grown in both antibiotics, larger numbers of substratum-attached pairs at 2 hours resulted in greater biofilm formation at 48 hours. For biofilms grown in both antibiotics, a benefit to the spectinomycin-resistant organism's population size was detectable, but the only benefit to ampicillin-resistant organisms was in terms of GFP production. An initial attachment ratio of 5 spectinomycin-resistant organisms to 1 ampicillin-resistant organism resulted in optimal biofilm formation at 48 hours. Biofilms also enhanced the stability of high-copy-number plasmids and heterologous protein production. In the absence of antibiotic selective pressure, plasmid DNA was not detected after 48 hours in chemostats, where the faster growth rate of plasmid-free cells contributed to the washout of plasmid-retaining cells. In contrast, the plasmid copy number per cell in biofilms grown without antibiotic selective pressure steadily increased over a six-day period, and flow cytometric monitoring of bacteria grown in biofilms indicated that 95 percent of the population was producing GFP at 48 hours. This research supports the idea that ecological interactions between bacteria contribute to biofilm development in the presence of antibiotics, demonstrates that community-level multiple antibiotic resistance is a factor in biofilm recalcitrance against antibiotics, and suggests that biofilms may provide an additional tool for stabilizing high-copy-number plasmids used for heterologous protein production.

    Image Segmentation of Bacterial Cells in Biofilms

    Bacterial biofilms are three-dimensional cell communities that live embedded in a self-produced extracellular matrix. Because of the protective properties of this dense coexistence of microorganisms, single bacteria inside the communities are hard to eradicate with antibacterial agents and bacteriophages. This increased resilience gives rise to severe problems in medical and technological settings. To fight the bacterial cells, a detailed understanding of the underlying mechanisms of biofilm formation and development is required. Owing to spatio-temporal variations in environmental conditions inside a single biofilm, these mechanisms can only be investigated by probing single cells at different locations over time. Currently, the mechanistic information is primarily encoded in volumetric image data gathered with confocal fluorescence microscopy. To quantify features of single-cell behaviour, single objects need to be detected. This identification of objects inside biofilm image data is called segmentation and is a key step towards understanding the biological processes inside biofilms. In the first part of this work, a user-friendly computer program is presented that simplifies the analysis of bacterial biofilms. It provides a comprehensive set of tools to segment, analyse, and visualize fluorescence microscopy data without writing a single line of analysis code, enabling faster feedback loops between experiment and analysis and quick insights into the gathered data. The single-cell segmentation accuracy of a recent segmentation algorithm is then discussed in detail; points for improvement are identified and a new, optimized segmentation approach is presented. The improved algorithm achieves superior segmentation accuracy on bacterial biofilms compared with current state-of-the-art algorithms. Finally, the possibility of deep-learning-based end-to-end segmentation of biofilm data is investigated. A method for the quick generation of training data is presented, and two single-cell segmentation approaches developed for eukaryotic cells are adapted to the segmentation of bacterial biofilms.

    Bacterial biofilms are three-dimensional cell clusters that produce their own matrix. This self-produced matrix offers the cells collective protection against external stress factors. These stress factors can be abiotic, such as temperature and nutrient fluctuations, or biotic, such as antibiotic treatment or bacteriophage infections. As a consequence, individual cells within these microbial communities exhibit increased resilience and pose a major challenge for medicine and technical applications. To combat biofilms effectively, the mechanisms underlying their growth and development must be deciphered. Because of the high cell density within the communities, these mechanisms are not spatially and temporally invariant but depend, for example, on metabolite, nutrient, and oxygen gradients. Observations at the single-cell level are therefore indispensable for describing them. The non-invasive investigation of individual cells within a biofilm relies on confocal fluorescence microscopy. Extracting cell properties from the acquired three-dimensional image data requires detecting the individual cells; the digital reconstruction of cell morphology plays a particularly important role here. It is obtained by segmenting the image data, whereby individual image elements are assigned to the imaged objects, so that the objects can be distinguished from one another and their properties extracted. In the first part of this work, a user-friendly computer program is presented that considerably simplifies the segmentation and analysis of fluorescence microscopy data. It provides an extensive selection of traditional segmentation algorithms, parameter calculations, and visualization options. All functions are accessible without programming knowledge and are thus available to a broad group of users. The implemented functions make it possible to significantly shorten the time between a performed experiment and a completed data analysis; through a rapid succession of continuously adapted experiments, scientific insights into biofilms can be gained quickly. As a complement to existing methods for single-cell segmentation in biofilms, an improvement is presented that surpasses the accuracy of previous filter-based algorithms and represents a further step towards temporally and spatially resolved single-cell tracking within bacterial biofilms. Finally, the applicability of deep learning algorithms for segmentation in biofilms is evaluated. To this end, a method is presented that drastically reduces the annotation effort for training data compared with fully manual annotation. The generated data are used to train the algorithms, and the segmentation accuracy is examined on experimental data.
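
    As a rough illustration of the classical, filter-based style of single-cell segmentation discussed above, the following sketch runs a smoothing, thresholding, distance-transform, and seeded-watershed pipeline on a three-dimensional confocal volume. It is a generic scikit-image example under the assumption of a `volume` array with bright cells on a dark background, not the program or the optimized algorithm developed in this work.

```python
# Generic single-cell segmentation sketch for a 3-D fluorescence volume
# (illustrative only; not the tool or algorithm presented in the thesis).
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_cells(volume: np.ndarray) -> np.ndarray:
    """Return a labelled volume with one integer label per candidate cell."""
    smoothed = gaussian(volume.astype(float), sigma=1)   # denoise
    binary = smoothed > threshold_otsu(smoothed)         # foreground mask
    distance = ndi.distance_transform_edt(binary)        # distance to background
    # Seeds: local maxima of the distance map, roughly one per cell.
    peaks = peak_local_max(distance, min_distance=3, labels=binary)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Seeded watershed restricted to the foreground mask splits touching cells.
    return watershed(-distance, markers, mask=binary)
```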

    Finding Binding Sites in ChIP-Seq Data via a Linear-time Multi-level Thresholding Algorithm

    Chromatin immunoprecipitation sequencing (ChIP-Seq) has emerged as a superior alternative to microarray technology, as it provides higher resolution, less noise, greater coverage, and a wider dynamic range. While ChIP-Seq enables probing of DNA-protein interactions over the entire genome, it requires sophisticated tools to recognize hidden patterns and extract meaningful data. Over the years, various attempts have produced several algorithms that use different heuristics to determine the individual peaks corresponding to unique DNA-protein binding sites. However, finding all the binding sites with high accuracy in a reasonable time is still a challenge. In this work, we propose a multi-level thresholding algorithm, which we call LinMLTBS, to identify the enriched regions in ChIP-Seq data. Although various suboptimal heuristics have been proposed for multi-level thresholding, we emphasize the use of an algorithm capable of obtaining an optimal solution while maintaining linear-time complexity. Tests on various ENCODE project datasets show that our approach attains higher accuracy than previously proposed peak finders while retaining a reasonable processing speed.
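
    To make the idea of multi-level thresholding on ChIP-Seq coverage concrete, the sketch below applies a generic multi-level thresholder (scikit-image's multi-Otsu) to a binned coverage track and reports the stretches assigned to the highest intensity class as candidate enriched regions. It is only an illustration of the principle, not the LinMLTBS algorithm; the synthetic coverage profile and the choice of three classes are assumptions.

```python
# Illustrative multi-level thresholding of a ChIP-Seq coverage track
# (multi-Otsu stands in for a generic multi-level thresholder; this is
# NOT the linear-time LinMLTBS algorithm described in the work).
import numpy as np
from skimage.filters import threshold_multiotsu

def candidate_enriched_regions(coverage: np.ndarray, classes: int = 3):
    """Return (start, end) bin indices of regions in the top intensity class."""
    thresholds = threshold_multiotsu(coverage, classes=classes)
    enriched = coverage > thresholds[-1]      # highest class = enriched bins

    regions, start = [], None
    for i, flag in enumerate(enriched):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            regions.append((start, i))
            start = None
    if start is not None:
        regions.append((start, len(enriched)))
    return regions

# Hypothetical example: flat background with one enriched stretch.
coverage = np.concatenate([np.random.poisson(2, 500),
                           np.random.poisson(40, 50),
                           np.random.poisson(2, 500)]).astype(float)
print(candidate_enriched_regions(coverage))
```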

    A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogeneous Dual-Core Embedded System Architecture

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS) implemented as a set of embedded software components and modules, and integrates these modules into a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes road-scene frames of the area ahead of the host vehicle, captured by CCD sensors mounted on the vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image-grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.
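
    As a loose desktop illustration of the bright-object sensing idea behind nighttime vehicle detection, the sketch below thresholds a grayscale road-scene frame, extracts candidate head/tail-light blobs with OpenCV contours, and applies a crude proximity rule for a collision warning. None of this reflects the actual embedded ARM-DSP implementation; the threshold, area limit, and warning rule are invented for the example.

```python
# Toy sketch of nighttime bright-object detection and a naive collision rule
# (desktop OpenCV illustration, not the embedded VIDASS implementation;
# all numeric limits below are illustrative assumptions).
import cv2

def detect_bright_objects(gray_frame, min_area=30, threshold=200):
    """Find bright blobs (candidate head/tail lights) in a grayscale frame."""
    _, binary = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]   # (x, y, w, h) per blob

def collision_warning(boxes, frame_height, near_fraction=0.8):
    """Warn if any detected light sits low in the frame, i.e. close ahead."""
    return any(y + h > near_fraction * frame_height for x, y, w, h in boxes)
```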

    Intraoperative Quantification of Bone Perfusion in Lower Extremity Injury Surgery

    Orthopaedic surgery is one of the most common surgical categories. In particular, lower extremity injuries sustained from trauma can be complex and life-threatening and are addressed through orthopaedic trauma surgery. Timely evaluation and surgical debridement following lower extremity injury are essential, because devitalized bone and tissue lead to high surgical site infection rates. However, current clinical judgment of what constitutes “devitalized tissue” is subjective and dependent on surgeon experience, so imaging techniques for guiding surgical debridement are needed to control infection rates and improve patient outcomes. In this thesis, computational models of fluorescence-guided debridement in lower extremity injury surgery are developed by quantifying bone perfusion intraoperatively with a dynamic contrast-enhanced fluorescence imaging (DCE-FI) system. Perfusion is an important determinant of tissue viability, so quantifying perfusion is essential for fluorescence-guided debridement. Chapters 3-7 explore the performance of DCE-FI from benchtop to translation: we proposed a modified fluorescent-microsphere quantification technique using a cryomacrotome in an animal model, which measures periosteal and endosteal bone perfusion separately and thereby validates the perfusion measurements obtained with DCE-FI; we developed a pre-clinical rodent contaminated-fracture model to correlate DCE-FI with infection risk and compared it with multi-modality scanning; in clinical studies, we investigated first-pass kinetic parameters of DCE-FI and arterial input functions to characterize perfusion changes during lower limb amputation surgery; we conducted the first in-human use of dynamic contrast-enhanced texture analysis for orthopaedic trauma classification, showing that spatiotemporal features from DCE-FI can classify bone perfusion intraoperatively with high accuracy and sensitivity; and we established a clinical machine-learning model for predicting infection risk in open fracture surgery, enabling pixel-scale prediction of infection risk. In conclusion, pharmacokinetic and spatiotemporal patterns of dynamic contrast-enhanced imaging show great potential for quantifying bone perfusion and predicting bone infection. This work aims to decrease surgical site infection risk and improve success rates of lower extremity injury surgery.
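
    The first-pass kinetics mentioned above can be illustrated with a small sketch that extracts generic curve descriptors (peak enhancement, time to peak, and maximum ingress slope) from an intensity-time curve for one pixel or region. The parameter definitions, the baseline window, and the synthetic bolus curve are illustrative assumptions, not the specific kinetic model used in the thesis.

```python
# Illustrative first-pass kinetic descriptors from a DCE-FI intensity-time
# curve (generic definitions, not the exact pharmacokinetic model of the thesis).
import numpy as np

def first_pass_parameters(t: np.ndarray, intensity: np.ndarray):
    """Return peak enhancement, time-to-peak, and maximum ingress slope."""
    baseline = intensity[:5].mean()            # assume first samples are pre-bolus
    curve = intensity - baseline
    i_peak = int(np.argmax(curve))
    peak = float(curve[i_peak])
    time_to_peak = float(t[i_peak] - t[0])
    slope = float(np.max(np.gradient(curve[:i_peak + 1], t[:i_peak + 1])))
    return {"peak": peak, "time_to_peak": time_to_peak, "ingress_slope": slope}

# Hypothetical example: a gamma-variate-like bolus curve sampled at 1 Hz.
t = np.arange(0, 120, 1.0)
intensity = 5 + 100 * (t / 20) ** 2 * np.exp(-(t / 20))
print(first_pass_parameters(t, intensity))
```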

    Automated Teeth Extraction and Dental Caries Detection in Panoramic X-ray

    Dental caries is one of the most common chronic diseases, affecting the majority of people at least once during their lifetime. This expensive disease accounts for 5-10% of the healthcare budget in developing countries. Caries lesions appear as the result of dental biofilm metabolic activity, caused by bacteria (most prominently Streptococcus mutans) feeding on uncleaned sugars and starches in the oral cavity. Also known as tooth decay, caries is primarily diagnosed by general dentists based solely on clinical assessment. Since in many cases dental problems cannot be detected by simple observation, dental X-ray imaging is a standard tool for domain experts, i.e. dentists and radiologists, to identify dental diseases such as proximal caries. Among the different dental radiography methods, panoramic or orthopantomogram (OPG) imaging is commonly performed as the initial step of assessment. OPG images are captured with a small dose of radiation and can depict the entire patient dentition in a single image. Dental caries can nonetheless be hard to identify by general dentists relying only on visual inspection of dental radiographs: tooth decay can easily be misinterpreted as shadows for various reasons, such as low image quality. Moreover, OPG images are of relatively poor quality, and structures lack strong edges because of low contrast, uneven exposure, and similar factors, so disease detection in panoramic radiography is a very challenging task. With the recent development of Artificial Intelligence (AI) in dentistry, and with the introduction of Convolutional Neural Networks (CNNs) for image classification, developing medical decision support systems has become a topic of interest in both academia and industry. More accurate CNN-based decision support systems can enhance dentists' diagnostic performance and thereby improve dental care for patients. In this thesis, the first automated teeth extraction system for panoramic images using evolutionary algorithms is proposed. In contrast to intraoral radiography methods, panoramic radiographs are captured with the X-ray film outside the patient's mouth, so they contain regions outside the jaw, which makes teeth segmentation extremely difficult. Because the caries detection model requires a separate image of each tooth, segmentation of teeth from the OPG image is essential, yet the absence of significant pixel-intensity differences between regions in OPG radiography makes it hard to implement. Consequently, an automated system is introduced that takes an OPG as input and outputs images of single teeth. Since only a few studies have addressed this task for panoramic radiography, there is room for improvement. A genetic algorithm is applied along with several image processing methods to perform teeth extraction through jaw extraction, jaw separation, and teeth-gap valley detection. The proposed system is compared to the state of the art in teeth extraction on other image types. After the teeth are segmented from each image, a model based on various untrained and pretrained CNN architectures is proposed to detect dental caries in each tooth. An autoencoder-based model along with well-known CNN architectures is used for feature extraction, followed by capsule networks to perform classification. The dataset of panoramic X-rays was prepared by the authors, with labels provided with the help of an expert radiologist. The proposed model demonstrates an acceptable detection rate of 86.05% and an increase in caries detection speed. Considering the challenges of performing such a task on low-quality OPG images, this work is a step towards a fully automated, efficient caries detection model to assist domain experts.
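
    The teeth-gap valley idea can be sketched with a simple projection-profile example: summing pixel intensities down each column of a cropped jaw image and treating smoothed local minima as candidate gaps between adjacent teeth, which then define single-tooth crops. This is only a conceptual stand-in; the thesis couples such image-processing steps with a genetic algorithm, and the smoothing window and minimum gap separation below are assumptions.

```python
# Conceptual sketch of teeth-gap valley detection via a vertical projection
# profile (a simplification; not the genetic-algorithm pipeline of the thesis).
import numpy as np
from scipy.signal import find_peaks

def gap_valley_columns(jaw_image: np.ndarray, min_separation: int = 40):
    """Return column indices of candidate gaps between adjacent teeth."""
    profile = jaw_image.astype(float).sum(axis=0)                   # column sums
    profile = np.convolve(profile, np.ones(15) / 15, mode="same")   # smooth
    # Valleys of the projection are peaks of its negation.
    valleys, _ = find_peaks(-profile, distance=min_separation)
    return valleys

def split_into_teeth(jaw_image: np.ndarray):
    """Crop single-tooth sub-images between consecutive gap valleys."""
    cols = [0, *gap_valley_columns(jaw_image), jaw_image.shape[1]]
    return [jaw_image[:, a:b] for a, b in zip(cols[:-1], cols[1:])]
```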
