
    Live Tool Condition Monitoring of SiAlON Inserted Tools whilst Milling Nickel-Based Super Alloys

    Cutting tools with ceramic inserts are often used to machine many types of super alloys, mainly due to their high strength and thermal resistance. Nevertheless, during the cutting process, the plastic flow wear generated in these inserts initiates and propagates cracks under high temperature and high mechanical stress, leading to highly variable failure of the cutting tool. Furthermore, in high-speed rough machining of nickel-based super alloys, such as Inconel 718 and Waspaloy, it is recommended to avoid the use of any type of coolant. This, in turn, enables clear visualization of the cutting sparks, which in these machining tasks are quite distinctive. The present doctoral thesis attempts to set the basis of a potential Tool Condition Monitoring (TCM) system that could use vision-based sensing to estimate the amount of tool wear. This TCM system is built around the research hypothesis that a relationship exists between the continuous wear that ceramic SiAlON (solid solutions based on the Si3N4 structure) inserts experience during a high-speed machining process and the evolution of sparks created during the same process. A successful TCM system of this kind could be implemented at an industrial level to provide a live status of the cutting tool's condition, potentially improving the effectiveness of these machining tasks whilst preventing tool failure and workpiece damage. During this research, sparks were analyzed through various visual methods in three main experiments. Four studies were developed using these experiments to support and create a final predictive approach to the TCM system. These studies are described in the thesis chapters and include a wear assessment of SiAlON ceramics, an analysis of the optimal image acquisition systems and parameters for this research, a study of the research hypothesis, and finally, an approach to tool wear prediction using Neural Networks (NN).
To carry out these studies, an overall methodology was structured to perform experiments and to process spark evolution data: image processing algorithms were built to extract spark area and intensity. Towards the end of the thesis, these spark features were used, along with measured values of tool wear, namely notch, flank and crater wear, to build a Neural Network for tool wear prediction.
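The spark feature extraction described above (area and intensity from an image) can be sketched as a simple brightness-thresholding step. A minimal sketch, assuming a grayscale frame and a hypothetical threshold value; the function name and parameters are illustrative, not the thesis's actual algorithm:

```python
import numpy as np

def spark_features(frame, threshold=200):
    """Extract spark area (bright-pixel count) and mean spark intensity
    from a grayscale frame; pixels at or above `threshold` (an assumed
    value) are treated as spark."""
    mask = frame >= threshold
    area = int(mask.sum())
    intensity = float(frame[mask].mean()) if area else 0.0
    return area, intensity

# Synthetic 8-bit frame with one bright 10x10 "spark" region
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:50, 40:50] = 250
print(spark_features(frame))  # (100, 250.0)
```

Per-frame feature pairs like these, tracked over a machining pass, would form the input sequence for a wear-prediction network.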

    EEG-based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications.

    Brain-computer interfaces (BCIs) enhance the capability of human brain activity to interact with the environment. Recent advancements in technology and machine learning algorithms have increased interest in electroencephalographic (EEG)-based BCI applications. EEG-based intelligent BCI systems can facilitate continuous monitoring of fluctuations in human cognitive states under monotonous tasks, which benefits both people in need of healthcare support and researchers in various domains. In this review, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, filling gaps in the systematic summaries of the past five years. Specifically, we first review the current status of BCIs and of signal sensing technologies for collecting reliable EEG signals. Then, we present state-of-the-art computational intelligence techniques, including fuzzy models and transfer learning in machine learning and deep learning algorithms, used to detect, monitor, and maintain human cognitive states and task performance in prevalent applications. Finally, we present several innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCI research.

    Pattern Recognition

    A wealth of advanced pattern recognition algorithms is emerging from the interdisciplinary field between technologies for effective visual features and the human-brain cognition process. Effective visual features are made possible by rapid developments in appropriate sensor equipment, novel filter designs, and viable information processing architectures, while the understanding of the human-brain cognition process broadens the ways in which computers can perform pattern recognition tasks. The present book is intended to collect representative research from around the globe focusing on low-level vision, filter design, features and image descriptors, data mining and analysis, and biologically inspired algorithms. The 27 chapters covered in this book disclose recent advances and new ideas in promoting the techniques, technology and applications of pattern recognition.

    Summer 2007 Research Symposium Abstract Book

    Summer 2007 volume of abstracts for science research projects conducted by Trinity College students.

    Artificial intelligence within the interplay between natural and artificial computation: Advances in data science, trends and applications

    Artificial intelligence and all its supporting tools, e.g. machine and deep learning in computational intelligence-based systems, are rebuilding our society (economy, education, lifestyle, etc.) and promising a new era for the social welfare state. In this paper we summarize recent advances in data science and artificial intelligence within the interplay between natural and artificial computation. A review of recent works published in the latter field and of the state of the art is given in a comprehensive and self-contained way to provide a baseline framework for the international community in artificial intelligence. Moreover, this paper aims to provide a complete analysis and some relevant discussions of the current trends and insights within several theoretical and application fields covered in the essay, from theoretical models in artificial intelligence and machine learning to the most prospective applications in robotics, neuroscience, brain-computer interfaces, medicine and society in general. Funding: BMS - Pfizer (U01 AG024904); Spanish Ministry of Science, projects TIN2017-85827-P, RTI2018-098913-B-I00, PSI2015-65848-R, PGC2018-098813-B-C31, PGC2018-098813-B-C32, RTI2018-101114-B-I, TIN2017-90135-R, RTI2018-098743-B-I00 and RTI2018-094645-B-I00; the FPU program (FPU15/06512, FPU17/04154) and Juan de la Cierva (FJCI-2017–33022); Autonomous Government of Andalusia (Spain) project UMA18-FEDERJA-084; Consellería de Cultura, Educación e Ordenación Universitaria of Galicia: ED431C2017/12, accreditation 2016–2019, ED431G/08, ED431C2018/29, and grant ED431F2018/02; Comunidad de Madrid, Y2018/EMT-5062. PPMI, a public-private partnership, is funded by The Michael J. Fox Foundation for Parkinson's Research and funding partners, including Abbott, Biogen Idec, F. Hoffman-La Roche Ltd., GE Healthcare, Genentech and Pfizer Inc.

    A Better Looking Brain: Image Pre-Processing Approaches for fMRI Data

    Researchers in the field of functional neuroimaging have faced a long-standing problem: pre-processing low spatial resolution data without losing the meaningful details within. Commonly, brain function is recorded by a technique known as echo-planar imaging, which represents the measure of blood flow (the BOLD signal) through a particular location in the brain as an array of intensity values changing over time. This approach to recording a movie of blood flow in the brain is known as fMRI. Neural activity is then studied from the temporal correlation patterns existing within the fMRI time series. However, the resulting images are noisy and contain low spatial detail, making it imperative to pre-process them appropriately to derive meaningful activation patterns. Two of the several standard preprocessing steps employed just before the analysis stage are denoising and normalization. Fundamentally, it is difficult to perfectly remove noise from an image without making assumptions about the signal and noise distributions. A convenient and commonly used alternative is to smooth the image with a Gaussian filter, but this method suffers from obvious drawbacks, primarily loss of spatial detail. A greater challenge arises when we attempt to derive average activation patterns from fMRI images acquired from a group of individuals. The brain of one individual differs from others both structurally and functionally. Commonly, the inter-individual differences in anatomical structures are compensated for by co-registering each subject's data to a common normalization space, a step known as spatial normalization. However, there are no existing methods to compensate for the differences in the functional organization of the brain. This work presents first steps towards data-driven, robust algorithms for fMRI image denoising and multi-subject image normalization that utilize the inherent information within fMRI data.
In addition, a new validation approach based on the spatial shape of the activation regions is presented to quantify the effects of preprocessing and to record the differences in activation patterns between individual subjects or between two groups such as healthy controls and patients with mental illness. Qualitative and quantitative results of the proposed framework compare favorably against existing and widely used model-driven approaches such as Gaussian smoothing and structure-based spatial normalization. This work is intended to provide neuroscience researchers with tools to derive more meaningful activation patterns, to accurately identify imaging biomarkers for various neurodevelopmental diseases, and to maximize the specificity of a diagnosis.
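The Gaussian-smoothing trade-off mentioned in the abstract (noise suppression at the cost of spatial detail) can be illustrated on a 1-D signal. A minimal sketch under assumed parameters; the kernel construction and sigma value are illustrative, not the thesis's pipeline:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Discrete Gaussian kernel normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)  # common truncation choice (assumed)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(signal, sigma):
    """Gaussian smoothing via convolution ('same' keeps the length)."""
    return np.convolve(signal, gaussian_kernel(sigma), mode="same")

# Noisy step signal: smoothing suppresses the noise but blurs the edge,
# which is the loss of spatial detail discussed above.
rng = np.random.default_rng(0)
sig = np.concatenate([np.zeros(50), np.ones(50)]) + rng.normal(0, 0.2, 100)
out = smooth(sig, sigma=2.0)
print(out.var() < sig.var())  # True: variance drops after smoothing
```

The same blurring applies in 2-D/3-D fMRI volumes, where the smeared edges correspond to lost anatomical boundaries between activation regions.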

    Advances in Robotics, Automation and Control

    The book presents an excellent overview of recent developments in the different areas of Robotics, Automation and Control. Through its 24 chapters, this book presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Throughout this book, we also find navigation and vision algorithms, automatic handwriting comprehension and speech recognition systems that will be included in the next generation of productive systems.

    Visual Representation Learning with Minimal Supervision

    Computer vision aims to provide computers with the human abilities of understanding and interpreting the visual surroundings. An essential element of comprehending the environment is extracting relevant information from complex visual data so that the desired task can be solved. For instance, to distinguish cats from dogs the feature 'body shape' is more relevant than 'eye color' or the 'number of legs'. In traditional computer vision it is conventional to develop handcrafted functions that extract specific low-level features, such as edges, from visual data. However, in order to solve a particular task satisfactorily we require a combination of several features. Thus, the approach of traditional computer vision has the disadvantage that whenever a new task is addressed, a developer needs to manually specify all the features the computer should look for. For that reason, recent works have primarily focused on developing new algorithms that teach the computer to autonomously detect relevant and task-specific features. Deep learning has been particularly successful in this regard. In deep learning, artificial neural networks automatically learn to extract informative features directly from visual data. The majority of deep learning strategies require a dataset with annotations that indicate the solution of the desired task. The main bottleneck is that creating such a dataset is very tedious and time-intensive, considering that every sample needs to be annotated manually. This thesis presents new techniques that attempt to keep the amount of human supervision to a minimum while still reaching satisfactory performance on various visual understanding tasks. In particular, this thesis focuses on self-supervised learning algorithms that train a neural network on a surrogate task where no human supervision is required. We create an artificial supervisory signal by breaking the order of visual patterns and asking the network to recover the original structure.
Besides demonstrating the abilities of our model on common computer vision tasks such as action recognition, we additionally apply our model to biomedical scenarios. Many research projects in medicine involve profuse manual processes that extend the duration of developing successful treatments. Taking the example of analyzing the motor function of neurologically impaired patients, we show that our self-supervised method can help to automate tedious, visually based processes in medical research. In order to perform a detailed analysis of motor behavior and, thus, provide a suitable treatment, it is important to discover and identify the negatively affected movements. Therefore, we propose a magnification tool that can detect and enhance subtle changes in motor function, including motor behavior differences across individuals. In this way, our automatic diagnostic system not only analyzes apparent behavior but also facilitates the perception and discovery of impaired movements. Learning a feature representation without requiring annotations significantly reduces human supervision. However, using annotated datasets generally leads to better performance than self-supervised learning methods. Hence, we additionally examine semi-supervised approaches which efficiently combine a few annotated samples with large unlabeled datasets. Consequently, semi-supervised learning represents a good trade-off between annotation time and accuracy.
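The surrogate task of breaking and recovering the order of visual patterns can be sketched as generating a permutation pseudo-label from image patches. A minimal sketch under assumed details (the patch-grid size, function name, and toy image are illustrative, not the thesis's actual setup):

```python
import numpy as np

def make_permutation_sample(image, grid=3, rng=None):
    """Split `image` into grid x grid patches, shuffle them, and return
    the shuffled patches together with the permutation used.  A network
    would be trained to predict `perm` (i.e. to recover the original
    spatial order), so no human annotation is needed."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[0] // grid, image.shape[1] // grid
    patches = [image[i * h:(i + 1) * h, j * w:(j + 1) * w]
               for i in range(grid) for j in range(grid)]
    perm = rng.permutation(len(patches))   # the self-supervised target
    shuffled = [patches[p] for p in perm]
    return shuffled, perm

# Toy 6x6 "image": the original order is recoverable from `perm` alone.
image = np.arange(36.0).reshape(6, 6)
shuffled, perm = make_permutation_sample(image, rng=np.random.default_rng(0))
restored = [shuffled[list(perm).index(k)] for k in range(9)]
```

Because the supervisory signal is derived from the data itself, such samples can be generated on the fly for any unlabeled image collection.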

    Multiple sports concussion in male rugby players: a neurocognitive and neuroimaging study

    Objective: Following a sport-related concussion (SRC), visible symptoms generally dissipate 7-10 days post-injury. However, little is known about the cumulative effects of SRCs, both in terms of structural damage to the white matter of the brain and in terms of neurocognitive performance. To address this issue, the relationship between the number of SRCs (frequency), axonal white matter (WM) damage and neurocognitive performance was examined. There were three predictions. First, increases in SRC frequency will be associated with decreases in performance on neurocognitive tests. Second, the frequency of SRC will be associated with axonal injury measured in three WM tracts: the corpus callosum, the fronto-occipital fasciculus and the inferior longitudinal fasciculus. Third, less accurate and slower performance on a response inhibition task (STOP-IT) will be associated with greater axonal injury. Methods: A cross-sectional correlational design was utilised. Participants were rugby players with a history of SRC, rugby players with no history of SRC and control athletes (N=40), who completed a neurocognitive test battery and had a DTI brain scan. The neurocognitive battery consisted of the following standardised tests: the Speed and Capacity of Language Processing Test, the CogState Electronic Battery, the Stroop Colour and Word Test, the Controlled Oral Word Association Test, the Trail Making Test and the experimental STOP-IT Electronic Test. White matter axonal injury was measured by DTI using fractional anisotropy (FA) and mean diffusivity (MD) metrics. The DTI data were processed using FSL to extract FA and MD metrics in three a-priori regions of interest. Results: Spearman's correlation analyses did not find significant associations between SRC frequency and neurocognitive performance on the FAS (rs=0.053, 95% CI [-0.27, 0.36]), TMT-A (rs=0.058, 95% CI [-0.26, 0.37]), TMT-B (rs= -0.046, 95% CI [-0.27, 0.36]) or the Stroop Interference (rs= -0.25, 95% CI [-0.07, 0.52]).
Similarly, no significant Spearman's correlations were found between SRC frequency and the computerised neurocognitive tests: STOP-IT SSRT (rs= -0.04, 95% CI [-0.28, 0.35]), STOP-IT Accuracy (rs= -0.05, 95% CI [-0.27, 0.36]), CogState Detection subtest (rs= -0.15, 95% CI [-0.17, 0.44]), CogState Identification subtest (rs= -0.065, 95% CI [-0.26, 0.37]), CogState One card learning subtest (rs= 0.24, 95% CI [-0.08, 0.52]) or the CogState One back task subtest (rs= 0.06, 95% CI [-0.26, 0.37]). In terms of the DTI data, there were no significant associations between SRC frequency and axonal injury measured by FA values in the CC (rs= 0.005, 95% CI [-0.31, 0.32]), ILF (rs= 0.028, 95% CI [-0.29, 0.34]) or FOF (rs= -0.022, 95% CI [-0.30, 0.33]). The same pattern was found for MD values in the CC (rs= 0.081, 95% CI [-0.24, 0.39]), ILF (rs= -0.16, 95% CI [-0.16, 0.45]) or FOF (rs= -0.15, 95% CI [-0.17, 0.44]). Finally, there were no significant Spearman's correlations between axonal injury FA values and the STOP-IT SSRT in any of the ROIs: CC (rs= 0.005, 95% CI [-0.31, 0.32]), ILF (rs= 0.028, 95% CI [-0.29, 0.34]) or FOF (rs= -0.022, 95% CI [-0.30, 0.33]). Equally, there were no significant correlations between MD values and the STOP-IT SSRT in the CC (rs= -0.028, 95% CI [-0.29, 0.34]), ILF (rs= -0.16, 95% CI [-0.16, 0.45]) or FOF (rs= -0.15, 95% CI [-0.17, 0.44]). Likewise, there were no significant Spearman's correlations between accuracy on the STOP-IT and FA values in any of the ROIs: CC (rs= 0.19, 95% CI [-0.13, 0.48]), ILF (rs= -0.045, 95% CI [-0.27, 0.35]) and FOF (rs= -0.032, 95% CI [-0.29, 0.34]), or MD values in the CC (rs= -0.11, 95% CI [-0.21, 0.41]), ILF (rs= 0.017, 95% CI [-0.30, 0.33]) or FOF (rs= 0.082, 95% CI [-0.24, 0.39]). This study did not find support for the hypothesis that cumulative SRCs are associated with poorer performance on neurocognitive tests or with axonal injury as measured by the FA and MD DTI metrics.
Conclusion: The null findings suggest that there are no cumulative effects of SRCs. The current findings are inconsistent with previous cross-sectional research indicating that long-term changes to diffusivity measures are present after single SRCs, as well as cumulative effects in contact sport athletes. Likewise, they are at odds with evidence suggesting that neurocognitive performance can be affected after three SRCs. The study needs to be extended to include a larger sample to ensure the results are not due to low statistical power.
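The Spearman's correlations reported above are, in essence, Pearson correlations computed on ranks, with an approximate confidence interval via the Fisher z-transform. A minimal sketch (tie handling omitted, CI approximate); this is illustrative rather than the study's actual analysis code:

```python
import numpy as np

def spearman_r(x, y):
    """Spearman's rank correlation: the Pearson correlation of the
    ranks.  Simplified: tied values would need average ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

def fisher_ci(r, n, z=1.96):
    """Approximate 95% CI for a correlation via the Fisher z-transform,
    of the kind reported next to each rs value above."""
    fz = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    return float(np.tanh(fz - z * se)), float(np.tanh(fz + z * se))

# A perfectly monotone relationship yields rs close to 1.0
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = x ** 3
print(spearman_r(x, y))
print(fisher_ci(0.5, n=40))  # CI brackets the point estimate
```

With n=40, the Fisher-based interval around even a moderate rs spans roughly 0.6 correlation units, consistent with the wide CIs reported and the authors' call for a larger sample.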