16,856 research outputs found

    Automatic Color Inspection for Colored Wires in Electric Cables

    In this paper, an automatic optical inspection system for checking the sequence of colored wires in electric cables is presented. The system can inspect cables with flat connectors differing in the type and number of wires. This variability is handled automatically by a self-learning subsystem and requires neither manual input from the operator nor loading new data into the machine. The system is coupled to a connector crimping machine; once the model of a correct cable is learned, it can automatically inspect each cable the machine assembles. The main contributions of this paper are: (i) the self-learning system; (ii) a robust segmentation algorithm for extracting wires from images even when they are strongly bent and partially overlapped; (iii) a color recognition algorithm able to cope with highlights and different finishes of the wire insulation. We report the system's evaluation over a period of several months during the actual production of large batches of different cables; tests demonstrated a high level of accuracy and the absence of false negatives, which is key to guaranteeing defect-free production.
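The color recognition step has to ignore specular highlights, which carry little hue information. A minimal sketch of hue-based wire labeling with highlight rejection — the color set, saturation threshold, and majority vote are illustrative assumptions, not the paper's actual algorithm:

```python
import colorsys

# Illustrative nominal hue centers (degrees) for common wire colors.
WIRE_HUES = {"red": 0, "yellow": 60, "green": 120, "blue": 240}

def classify_pixel(r, g, b):
    """Label an RGB pixel (0-255) by nearest hue; return None for
    unsaturated pixels (specular highlights, glare), which carry no
    reliable hue information."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.3:                      # highlight / washed-out pixel
        return None
    hue = h * 360
    # circular hue distance to each nominal color
    return min(WIRE_HUES, key=lambda c: min(abs(hue - WIRE_HUES[c]),
                                            360 - abs(hue - WIRE_HUES[c])))

def classify_wire(pixels):
    """Majority vote over a wire segment's pixels, skipping highlights."""
    votes = {}
    for r, g, b in pixels:
        label = classify_pixel(r, g, b)
        if label is not None:
            votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

Voting over all of a wire's pixels makes the label robust to a few glare pixels, which is the property the abstract's algorithm needs on glossy insulation.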

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms are developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed. This causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated by performing a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning the cortical surface is developed. The method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
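The EM classification at the core of the cortical segmentation can be illustrated with a minimal 1-D two-Gaussian mixture fit. This is a sketch only: it omits the thesis's explicit partial-volume correction and any spatial priors, and the initialization is a naive assumption:

```python
import math

def em_two_gaussians(xs, iters=50):
    """Minimal 1-D EM fit of a two-Gaussian intensity mixture,
    showing the E-step/M-step structure of tissue classification."""
    mu = [min(xs), max(xs)]          # naive initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each class for each voxel
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate mixture weights, means, and variances
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return mu, var, pi
```

In the neonatal setting the interesting extension is exactly what this sketch leaves out: voxels whose intensity is a mixture of grey and white matter get their own explicit treatment rather than being forced into one of the two classes.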

    Holaaa!! Writin like u talk is kewl but kinda hard 4 NLP

    We present work in progress aiming to build tools for the normalization of User-Generated Content (UGC). As we will see, the task requires the revisiting of the initial steps of NLP processing, since UGC (micro-blog, blog, and, generally, Web 2.0 user texts) presents a number of non-standard communicative and linguistic characteristics, and is in fact much closer to oral and colloquial language than to edited text. We present and characterize a corpus of UGC text in Spanish from three different sources: Twitter, consumer reviews and blogs. We motivate the need for UGC text normalization by analyzing the problems found when processing this type of text through a conventional language processing pipeline, particularly in the tasks of lemmatization and morphosyntactic tagging, and finally we propose a strategy for automatically normalizing UGC using a selector of correct forms on top of a pre-existing spell-checker.
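The "selector of correct forms on top of a spell-checker" idea can be sketched as candidate generation followed by a frequency-based choice. The tiny lexicon and its counts are invented for illustration, and `difflib` merely stands in for the pre-existing spell-checker's candidate generation:

```python
import difflib

# Toy Spanish lexicon with invented unigram counts; stands in for the
# pre-existing spell-checker's dictionary mentioned in the abstract.
LEXICON = {"que": 100, "bueno": 40, "quiero": 30, "casa": 25}

def normalize_token(token, lexicon=LEXICON):
    """Known tokens pass through; otherwise generate near-miss
    candidates (difflib plays the spell-checker's role here) and
    select the most frequent correct form."""
    tok = token.lower()
    if tok in lexicon:
        return tok
    candidates = difflib.get_close_matches(tok, list(lexicon), n=3, cutoff=0.6)
    if not candidates:
        return tok          # leave out-of-vocabulary forms untouched
    return max(candidates, key=lambda c: lexicon[c])
```

A real selector would rank candidates with context (e.g. a language model) rather than raw frequency; the point of the sketch is only the two-stage structure.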

    Applications of Computer Vision Technologies of Automated Crack Detection and Quantification for the Inspection of Civil Infrastructure Systems

    Many components of existing civil infrastructure systems, such as road pavement, bridges, and buildings, suffer from rapid aging, which requires enormous national resources from federal and state agencies to inspect and maintain them. Cracking is one of the most important material and structural defects, which must be inspected not only for good maintenance of civil infrastructure with a high quality of safety and serviceability, but also for the opportunity to provide early warning against failure. Conventional human visual inspection is still considered the primary inspection method. However, it is well established that human visual inspection is subjective and often inaccurate. In order to improve current manual visual inspection for crack detection and evaluation of civil infrastructure, this study explores the application of computer vision techniques as a non-destructive evaluation and testing (NDE&T) method for automated crack detection and quantification for different civil infrastructures. In this study, computer vision-based algorithms were developed and evaluated to deal with the different situations of field inspection that inspectors could face in crack detection and quantification. The depth, the distance between camera and object, is a necessary extrinsic parameter that has to be measured to quantify crack size, since other parameters, such as focal length, resolution, and camera sensor size, are intrinsic and usually known from camera manufacturers. Thus, computer vision techniques were evaluated in different crack inspection applications with constant and variable depths. For the fixed-depth applications, computer vision techniques were applied to two field studies: 1) automated crack detection and quantification for road pavement using the Laser Road Imaging System (LRIS), and 2) automated crack detection on bridge cable surfaces using a cable inspection robot. For the variable-depth applications, two field studies were conducted: 3) automated crack recognition and width measurement of concrete bridges' cracks using a high-magnification telescopic lens, and 4) automated crack quantification and depth estimation using wearable glasses with stereovision cameras. From these realistic field applications of computer vision techniques, a novel self-adaptive image-processing algorithm was developed using a series of morphological transformations to connect fragmented crack pixels in digital images. The crack-defragmentation algorithm was evaluated with road pavement images. The results showed that the accuracy of automated crack detection, using an artificial neural network classifier, was significantly improved by reducing both false positives and false negatives. Using up to six crack features, including area, length, orientation, texture, intensity, and wheel-path location, crack detection accuracy was evaluated to find the optimal sets of crack features. Lab and field test results of the different inspection applications show that the proposed computer vision-based crack detection and quantification algorithms can detect and quantify cracks on different structures' surfaces and at different depths. Some guidelines for applying computer vision techniques are also suggested for each crack inspection application.
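The idea of connecting fragmented crack pixels with morphological transformations can be sketched as a plain binary closing (dilation followed by erosion). The paper's self-adaptive algorithm uses a tuned series of such operations; this toy version uses a single fixed square structuring element:

```python
def dilate(img, k=1):
    """Binary dilation with a (2k+1)x(2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[yy][xx]
                   for yy in range(max(0, y - k), min(h, y + k + 1))
                   for xx in range(max(0, x - k), min(w, x + k + 1))):
                out[y][x] = 1
    return out

def erode(img, k=1):
    """Binary erosion with the same structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(img[yy][xx]
                   for yy in range(max(0, y - k), min(h, y + k + 1))
                   for xx in range(max(0, x - k), min(w, x + k + 1))):
                out[y][x] = 1
    return out

def close_gaps(img, k=1):
    """Morphological closing (dilate, then erode): crack fragments
    closer than about 2k pixels merge, while isolated fragments and
    overall crack width are left unchanged."""
    return erode(dilate(img, k), k)
```

Small gaps are bridged while wide gaps are not, which is exactly the defragmentation behavior needed before feeding crack candidates to a classifier.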

    Automatic artifacts removal from epileptic EEG using a hybrid algorithm

    Electroencephalogram (EEG) examination plays a very important role in the clinical diagnosis of disorders related to epilepsy. However, epileptic EEG is often contaminated with artifacts such as electrocardiogram (ECG), electromyogram (EMG) and electrooculogram (EOG). These artifacts confuse EEG interpretation, while rejecting EEG segments containing artifacts results in substantial data loss and is very time-consuming. The purpose of this study is to develop a novel algorithm for removing artifacts from epileptic EEG automatically. The collected multi-channel EEG data are decomposed into statistically independent components with Independent Component Analysis (ICA). Then temporal and spectral features of each independent component, including Hurst exponent, skewness, kurtosis, largest Lyapunov exponent and frequency-band energy extracted with wavelet packet decomposition, are calculated to quantify the characteristics of different artifact components. These features are fed into a trained support vector machine to determine whether the independent components represent EEG activity or artifactual signals. Finally, artifact-free EEGs are obtained by reconstructing the signal from the artifact-free components. The method is evaluated on EEG recordings acquired from 15 epilepsy patients. Compared with previous work, the proposed method can remove artifacts such as baseline drift, ECG, EMG, EOG, and power frequency interference automatically and efficiently, while retaining features important for epilepsy diagnosis such as interictal spikes and ictal segments.
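Two of the temporal features listed, skewness and kurtosis, can be computed directly from a component's samples. The threshold rule below is a toy stand-in for the paper's trained SVM, and its value is an illustrative assumption:

```python
import math

def skew_kurtosis(x):
    """Sample skewness and (non-excess) kurtosis of one ICA component."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2)
    return skew, kurt

def looks_artifactual(component, kurt_thresh=5.0):
    """Toy decision rule: spiky, heavy-tailed components (eye blinks,
    ECG peaks) have high kurtosis. The paper instead feeds several
    such features into a trained SVM; this threshold is illustrative."""
    _, kurt = skew_kurtosis(component)
    return kurt > kurt_thresh
```

A single-feature threshold is fragile (interictal spikes are also spiky), which is presumably why the paper combines several temporal and spectral features in a classifier rather than thresholding any one of them.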

    Vulnerability Assessment Enhancement for Middleware for Computing and Informatics

    Security in Grid computing is often an afterthought. However, assessing the security of middleware systems is of the utmost importance because they manage critical resources owned by different organizations. To fulfill this objective we use First Principles Vulnerability Assessment (FPVA), an innovative analyst-centric (manual) methodology that goes beyond current automated vulnerability tools. FPVA involves several stages for characterizing the analyzed system and its components. Based on the evaluation of several middleware systems, we have found that there is a gap between the initial and final stages of FPVA, which is currently filled by the security practitioner's expertise. We claim that this expertise can be systematically codified so as to automatically indicate which components should be assessed, and why. In this paper we introduce the key elements of our approach: vulnerability graphs, a Vulnerability Graph Analyzer, and a knowledge base of security configurations.
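One way to picture a vulnerability graph is as reachability analysis over component interactions: components that untrusted input can influence are the ones to prioritize for manual assessment. The component names and edges below are hypothetical, and this BFS is a drastic simplification of what a Vulnerability Graph Analyzer would do:

```python
from collections import deque

# Hypothetical middleware components; an edge means "untrusted data
# or control can flow to". Names are illustrative, not from FPVA.
GRAPH = {
    "network_input": ["parser"],
    "parser": ["scheduler", "logger"],
    "scheduler": ["job_runner"],
    "job_runner": ["filesystem"],
    "logger": [],
    "filesystem": [],
}

def components_to_assess(graph, source):
    """Breadth-first reachability from an untrusted source: every
    component it can influence is a candidate for manual assessment,
    the kind of prioritization the paper aims to automate."""
    seen, queue = set(), deque([source])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(graph.get(node, ()))
    return sorted(seen - {source})
```

In practice the analysis would also weight edges by trust boundaries and privilege levels drawn from the knowledge base, not just raw connectivity.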