
    The Structure and Function of the Retina in Multiple Sclerosis

    Get PDF
    Background: Multiple sclerosis (MS) is a complex, heterogeneous autoimmune inflammatory disease with a prolonged and variable time course. The visual system is frequently implicated, either as the presenting symptom or with advancement of the disease. This has been documented in the literature as changes in visual acuity (VA) accompanied by functional changes in the optic nerve, measured with the visual evoked potential (VEP), and possible retrograde degeneration involving the retinal ganglion cells, measured with the pattern reversal electroretinogram (PERG). However, inflammatory episodes may be clinical or subclinical in nature and may go unrecognised. Although the retina shares the same embryological origin as the optic nerve, the effect of inflammation in MS on the retina is less well known. The research hypothesis was that there is a measurable difference in the function of retinal cells in patients with newly diagnosed multiple sclerosis, suggestive of inflammatory retinopathy, compared to healthy controls. The overall aim was to investigate any differences in the electrophysiological function of the visual pathway of patients newly diagnosed with MS compared to healthy controls. Methods: The visual system was explored with clinical (VA), electrophysiological (VEP and pattern and flash electroretinography (ERG)) and structural (OCT) measures in patients presenting to a specialist service with symptoms suggestive of MS. This prospective case-control study investigates the visual pathway at the earliest stage of the disease to look for differences in structure and function between patients and healthy volunteers that might serve as a biomarker in the future. Results: A number of variables were significantly different between the two groups; logistic regression analysis identified VA (p = 0.038) and VEP P100 peak-time (p = 0.014) from the right eye as significant. Dividing the participants by prolongation of the VEP P100 peak-time, as defined in clinical practice, identified a number of ERG amplitude variables, as well as VA, that were consistently different between the groups regardless of symptoms. Conclusion: The study confirms optic nerve involvement in MS, with VEP and VA abnormalities in this cohort consistent with the literature. Additionally, VA and some ERG amplitude variables were significantly reduced in participants with MS when grouped according to VEP P100 peak-time, suggesting inner and outer retinal changes. Further work would be required to confirm these findings. No OCT structural changes were found in any of the analyses, which included macular thickness, the ganglion cell layer and the retinal nerve fibre layer. Keywords: multiple sclerosis (MS), visual evoked potential (VEP), pattern electroretinogram (PERG), electroretinogram (ERG), optical coherence tomography (OCT)
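
    As an illustration of the analysis described in the Results above, a binary logistic regression can relate MS status to the right-eye measures reported as significant. The sketch below uses pandas and statsmodels; the file name and column names are hypothetical placeholders, not the study's actual dataset.

    # Hypothetical sketch: logistic regression of MS status on right-eye visual measures.
    # Column names (va_right, vep_p100_right, ms_diagnosis) are illustrative placeholders.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("visual_measures.csv")                   # one row per participant
    X = sm.add_constant(df[["va_right", "vep_p100_right"]])   # predictors: VA and VEP P100 peak-time
    y = df["ms_diagnosis"]                                    # 1 = newly diagnosed MS, 0 = healthy control

    model = sm.Logit(y, X).fit()
    print(model.summary())                                    # per-predictor coefficients and p-values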

    The link between SARS-CoV-2 related microglial reactivity and astrocyte pathology in the inferior olivary nucleus

    Get PDF
    The pathological involvement of the central nervous system in SARS-CoV-2 (COVID-19) patients is established. The burden of pathology is most pronounced in the brain stem, including the medulla oblongata. Hypoxic/ischemic damage is the most frequent neuropathologic abnormality. Other neuropathologic features include neuronophagia, microglial nodules, and hallmarks of neurodegenerative diseases: astrogliosis and microglial reactivity. It is still unknown whether these pathologies are secondary to hypoxia alone or to an inflammatory response combined with hypoxia. It is also unknown how astrocytes react to neuroinflammation in COVID-19, especially considering evidence supporting the neurotoxicity of certain astrocytic phenotypes. This study aims to define the link between astrocytic and microglial pathology in COVID-19 victims in the inferior olivary nucleus, one of the most severely affected brain regions in COVID-19, and to establish whether COVID-19 pathology is driven by hypoxic damage. Here, we conducted neuropathologic assessments and multiplex-immunofluorescence studies on the medulla oblongata of 18 COVID-19 patients, 10 pre-pandemic patients who died of acute respiratory distress syndrome (ARDS), and 7–8 control patients with no ARDS or COVID-19. The comparison of ARDS and COVID-19 allows us to identify whether the pathology in COVID-19 can be explained by hypoxia alone, which is common to both conditions. Our results showed increased olivary astrogliosis in both ARDS and COVID-19. However, microglial density and microglial reactivity were increased only in COVID-19, in a region-specific manner. Also, olivary hilar astrocytes increased YKL-40 (CHI3L1) in COVID-19, but to a lesser extent than ARDS astrocytes. COVID-19 astrocytes also showed lower levels of Aquaporin-4 (AQP4) and Metallothionein-3 in subsets of COVID-19 brain regions. Cluster analysis on immunohistochemical attributes of astrocytes and microglia identified ARDS and COVID-19 clusters that correlated with clinical history and disease course. Our results indicate that olivary glial pathology and neuroinflammation in COVID-19 cannot be explained solely by hypoxia, and suggest that failure of astrocytes to upregulate the anti-inflammatory YKL-40 may contribute to the neuroinflammation. In addition to the limitations of retrospective studies in establishing causality, our experimental design cannot adequately control for external factors; perturbative studies are needed to confirm the role of the above-described astrocytic phenotypes in neuroinflammation.
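
    The cluster analysis mentioned above groups cases by their quantified glial attributes. The following is a minimal, hypothetical sketch of such a step with SciPy; the input file and the marker columns are illustrative placeholders, not the study's actual measurements.

    # Hypothetical sketch of the clustering step: group cases by glial marker measurements.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.stats import zscore

    # rows = cases (COVID-19, ARDS, controls); columns = e.g. YKL-40, AQP4, MT3, microglial density
    features = np.loadtxt("glial_markers.csv", delimiter=",", skiprows=1)
    Z = linkage(zscore(features, axis=0), method="ward")     # Ward hierarchical clustering on z-scored markers
    labels = fcluster(Z, t=2, criterion="maxclust")          # cut the dendrogram into two clusters
    print(labels)                                            # cluster membership, to be related to clinical history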

    Audio-visual multi-modality driven hybrid feature learning model for crowd analysis and classification

    Get PDF
    The rapid emergence of advanced software systems, low-cost hardware and decentralized cloud computing technologies has broadened the horizon for vision-based surveillance, monitoring and control. However, complex and inferior feature learning over visual artefacts or video streams, especially under extreme conditions, confines the majority of existing vision-based crowd analysis and classification systems. Retrieving event-sensitive or crowd-type-sensitive spatio-temporal features for different crowd types under extreme conditions is a highly complex task. Consequently, it results in lower accuracy, and hence low reliability, which limits existing methods for real-time crowd analysis. Despite numerous efforts in vision-based approaches, the lack of acoustic cues often creates ambiguity in crowd classification. On the other hand, the strategic amalgamation of audio-visual features can enable accurate and reliable crowd analysis and classification. Motivated by this, in this research a novel audio-visual multi-modality driven hybrid feature learning model is developed for crowd analysis and classification. In this work, a hybrid feature extraction model was applied to extract deep spatio-temporal features by using the Gray-Level Co-occurrence Matrix (GLCM) and an AlexNet transfer learning model. After extracting the different GLCM features and AlexNet deep features, horizontal concatenation was performed to fuse the different feature sets. Similarly, for acoustic feature extraction, the audio samples (from the input video) were processed with static (fixed-size) sampling, pre-emphasis, block framing and Hann windowing, followed by extraction of acoustic features such as GTCC, GTCC-Delta, GTCC-Delta-Delta, MFCC, spectral entropy, spectral flux, spectral slope and harmonics-to-noise ratio (HNR). Finally, the extracted audio-visual features were fused to yield a composite multi-modal feature set, which was processed for classification using a random forest ensemble classifier. The multi-class classification yields a crowd-classification accuracy of 98.26%, precision of 98.89%, sensitivity of 94.82%, specificity of 95.57%, and an F-measure of 98.84%. The robustness of the proposed multi-modality-based crowd analysis model confirms its suitability for real-world crowd detection and classification tasks.
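
    The fusion and classification stage described above amounts to concatenating the visual (GLCM + AlexNet) and acoustic (GTCC/MFCC/spectral) feature vectors and training a random forest. Below is a minimal scikit-learn sketch of that stage, assuming the per-clip feature arrays have already been extracted; the file names and the 80/20 split are illustrative, not the paper's exact setup.

    # Late fusion of pre-extracted audio-visual features followed by random forest classification.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    visual = np.load("visual_features.npy")    # shape: (n_clips, n_visual_features)
    audio = np.load("audio_features.npy")      # shape: (n_clips, n_audio_features)
    labels = np.load("crowd_labels.npy")       # crowd-type class per clip

    fused = np.hstack([visual, audio])         # horizontal concatenation = composite multi-modal feature set
    X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2, stratify=labels)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))   # per-class precision, recall and F-measure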

    Machine learning in solar physics

    Full text link
    The application of machine learning in solar physics has the potential to greatly enhance our understanding of the complex processes that take place in the atmosphere of the Sun. By using techniques such as deep learning, we are now in a position to analyze large amounts of data from solar observations and identify patterns and trends that may not have been apparent with traditional methods. This can help us improve our understanding of explosive events like solar flares, which can have a strong effect on the Earth's environment; predicting such hazardous events is becoming crucial for our technological society. Machine learning can also improve our understanding of the inner workings of the Sun itself by allowing us to go deeper into the data and to propose more complex models to explain them. Additionally, the use of machine learning can help to automate the analysis of solar data, reducing the need for manual labor and increasing the efficiency of research in this field. Comment: 100 pages, 13 figures, 286 references; accepted for publication as a Living Review in Solar Physics (LRSP).

    Slitless spectrophotometry with forward modelling: principles and application to atmospheric transmission measurement

    Full text link
    In the next decade, many optical surveys will aim to tackle the question of the nature of dark energy, measuring its equation-of-state parameter at the permil level. This requires trusting the photometric calibration of the survey to a precision never reached so far, controlling many sources of systematic uncertainty. The measurement of the on-site atmospheric transmission for each exposure, or on average for each season or for the full survey, can help reach permil precision for magnitudes. This work aims at proving the ability to use slitless spectroscopy for standard-star spectrophotometry and its use to monitor on-site atmospheric transmission as needed, for example, by the Vera C. Rubin Observatory Legacy Survey of Space and Time supernova cosmology program. We fully deal with the case of a disperser in the filter wheel, which is the configuration chosen for the Rubin Auxiliary Telescope. The theoretical basis of slitless spectrophotometry is at the heart of our forward-model approach to extract spectroscopic information from slitless data. We developed a publicly available software package called Spectractor (https://github.com/LSSTDESC/Spectractor) that implements each ingredient of the model and finally performs a fit of a spectrogram model directly on image data to get the spectrum. We show on simulations that our model allows us to understand the structure of spectrophotometric exposures. We also demonstrate its use on real data, solving specific issues and illustrating how our procedure allows the improvement of the model describing the data. Finally, we discuss how this approach can be used to directly extract atmospheric transmission parameters from data and thus provide the basis for on-site atmosphere monitoring. We show the efficiency of the procedure on simulations and test it on the limited data set available. Comment: 30 pages, 36 figures, submitted to Astronomy and Astrophysics.
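
    To make the forward-modelling idea concrete, here is a deliberately simplified sketch (not Spectractor's actual API): a two-parameter atmospheric transmission model multiplying a reference standard-star spectrum is fitted to an observed spectrum by least squares. The transmission law, spectra and parameter names are illustrative assumptions only.

    # Toy forward model: observed spectrum = reference spectrum x parametric atmospheric transmission.
    import numpy as np
    from scipy.optimize import least_squares

    wavelength = np.linspace(350.0, 1000.0, 500)                  # nm
    reference = np.exp(-((wavelength - 650.0) / 120.0) ** 2)      # placeholder standard-star spectrum

    def model(params, wl):
        grey, aerosol = params
        transmission = grey * np.exp(-aerosol * (wl / 550.0) ** -1.3)   # crude grey + aerosol extinction law
        return reference * transmission

    observed = model([0.9, 0.05], wavelength) + np.random.normal(0.0, 0.005, wavelength.size)

    fit = least_squares(lambda p: model(p, wavelength) - observed, x0=[1.0, 0.0])
    print(fit.x)   # recovered grey term and aerosol parameter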

    Technology for Low Resolution Space Based RSO Detection and Characterisation

    Get PDF
    Space Situational Awareness (SSA) refers to all activities to detect, identify and track objects in Earth orbit. SSA is critical to all current and future space activities and protects space assets by providing access control, conjunction warnings, and monitoring of the status of active satellites. Currently, SSA methods and infrastructure are not sufficient to account for the proliferation of space debris. In response to the need for better SSA, there have been many different areas of research looking to improve it, most of them requiring dedicated ground- or space-based infrastructure. In this thesis, a novel approach for the characterisation of RSOs (Resident Space Objects) from passive low-resolution space-based sensors is presented, along with all the background work performed to enable this novel method. Low-resolution space-based sensors are common on current satellites; with so many of these sensors already in space, using them passively to detect RSOs can greatly augment SSA without expensive infrastructure or long lead times. One of the largest hurdles to overcome in this area of research is the lack of publicly available labelled data with which to test and confirm results. To overcome this hurdle, a simulation software package, ORBITALS, was created. To verify and validate the ORBITALS simulator, it was compared with Fast Auroral Imager images, which are among the only publicly available low-resolution space-based images with auxiliary data. During the development of the ORBITALS simulator it was found that the generation of these simulated images is computationally intensive when propagating the entire space catalog. To overcome this, an upgrade of the currently used propagation method, the Simplified General Perturbations 4 (SGP4) model, was performed to allow the algorithm to run in parallel, reducing the computational time required to propagate entire catalogs of RSOs. From the results it was found that the standard facet model with particle swarm optimisation performed best, estimating an RSO's attitude with 0.66 degrees RMSE accuracy across a sequence and ~1% MAPE accuracy for the optical properties. This accomplished the thesis goal of demonstrating the feasibility of low-resolution passive RSO characterisation from space-based platforms in a simulated environment.
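
    As an illustration of the parallel catalog propagation discussed above, each RSO's two-line element set propagates independently, so the work distributes trivially across processes. The sketch below uses the open-source python-sgp4 package as an assumed stand-in; it is not the thesis's own upgraded SGP4 implementation.

    # Propagate a TLE catalog in parallel; each element of `tles` is a (line1, line2) pair.
    from concurrent.futures import ProcessPoolExecutor
    from sgp4.api import Satrec, jday

    def propagate(tle):
        line1, line2 = tle
        sat = Satrec.twoline2rv(line1, line2)
        jd, fr = jday(2024, 1, 1, 0, 0, 0.0)          # target epoch
        err, position, velocity = sat.sgp4(jd, fr)    # TEME position (km) and velocity (km/s)
        return err, position

    def propagate_catalog(tles):
        # RSOs are independent, so the catalog parallelises across worker processes
        with ProcessPoolExecutor() as pool:
            return list(pool.map(propagate, tles))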

    Using machine learning to predict pathogenicity of genomic variants throughout the human genome

    Get PDF
    More than 6,000 diseases are estimated to be caused by genomic variants. This can happen in many possible ways: a variant may stop the translation of a protein, interfere with gene regulation, or alter splicing of the transcribed mRNA into an unwanted isoform. It is necessary to investigate all of these processes in order to evaluate which variant may be causal for the deleterious phenotype. A great help in this regard are variant effect scores. Implemented as machine learning classifiers, they integrate annotations from different resources to rank genomic variants in terms of pathogenicity. Developing a variant effect score requires multiple steps: annotation of the training data, feature selection, model training, benchmarking, and finally deployment for the model's application. Here, I present a generalized workflow of this process. It makes it simple to configure how information is converted into model features, enabling the rapid exploration of different annotations. The workflow further implements hyperparameter optimization, model validation and ultimately deployment of a selected model via genome-wide scoring of genomic variants. The workflow is applied to train Combined Annotation Dependent Depletion (CADD), a variant effect model that scores SNVs and InDels genome-wide. I show that the workflow can be quickly adapted to novel annotations by porting CADD to the genome reference GRCh38. Further, I demonstrate the integration of deep-neural-network scores as features into a new CADD model, improving the annotation of RNA splicing events. Finally, I apply the workflow to train multiple variant effect models from training data based on variants selected by allele frequency. In conclusion, the developed workflow presents a flexible and scalable method to train variant effect scores. All software and developed scores are freely available from cadd.gs.washington.edu and cadd.bihealth.org.
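
    As a rough illustration of the workflow stages summarised above (annotated training data, feature handling, model training, hyperparameter optimization and scoring), here is a minimal scikit-learn sketch; it is not the actual CADD code, and the file name, columns and model choice are assumptions for illustration.

    # Minimal variant-effect-score training loop: features -> scaled model -> CV grid search -> scores.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    variants = pd.read_parquet("annotated_training_variants.parquet")   # hypothetical annotated training set
    X = variants.drop(columns=["label"])       # annotation-derived features
    y = variants["label"]                      # proxy-pathogenic vs. proxy-benign labels

    pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    search = GridSearchCV(pipeline, {"logisticregression__C": [0.01, 0.1, 1.0]}, cv=5, scoring="roc_auc")
    search.fit(X, y)

    scores = search.best_estimator_.predict_proba(X)[:, 1]   # "deployment": score variants with the selected model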

    Complexity Science in Human Change

    Get PDF
    This reprint encompasses fourteen contributions that offer avenues towards a better understanding of complex systems in human behavior. The phenomena studied here are generally pattern-formation processes that originate in social interaction and psychotherapy. Several accounts are also given of coordination in body movements and in physiological, neuronal and linguistic processes. A common denominator of such pattern formation is that the complexity and entropy of the respective systems become reduced spontaneously, which is the hallmark of self-organization. The various methodological approaches to modelling such processes are presented in some detail, and results from the various methods are systematically compared and discussed. Among these approaches are algorithms for the quantification of synchrony by cross-correlational statistics, surrogate control procedures, recurrence mapping and network models. This volume offers an informative and sophisticated resource for scholars of human change, as well as for students at advanced levels, from graduate to post-doctoral. The reprint is multidisciplinary in nature, binding together the fields of medicine, psychology, physics, and neuroscience.
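
    As a concrete illustration of two of the approaches listed above, cross-correlational synchrony statistics combined with a surrogate control procedure, the following is a small, self-contained numpy sketch on synthetic data; it is illustrative only and not code from any of the fourteen contributions.

    # Quantify synchrony as the maximum lagged cross-correlation and test it against shuffled surrogates.
    import numpy as np

    def max_crosscorr(x, y, max_lag=50):
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        return max(abs(np.corrcoef(x[max(l, 0):len(x) + min(l, 0)],
                                   y[max(-l, 0):len(y) + min(-l, 0)])[0, 1])
                   for l in range(-max_lag, max_lag + 1))

    rng = np.random.default_rng(0)
    a = np.sin(np.linspace(0, 20, 500)) + 0.3 * rng.standard_normal(500)
    b = np.roll(a, 5) + 0.3 * rng.standard_normal(500)        # b lags a by 5 samples

    observed = max_crosscorr(a, b)
    surrogates = [max_crosscorr(a, rng.permutation(b)) for _ in range(200)]   # surrogate control
    p_value = np.mean([s >= observed for s in surrogates])
    print(observed, p_value)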

    Point Cloud Registration for LiDAR and Photogrammetric Data: a Critical Synthesis and Performance Analysis on Classic and Deep Learning Algorithms

    Full text link
    Recent advances in computer vision and deep learning have shown promising performance in estimating rigid/similarity transformations between unregistered point clouds of complex objects and scenes. However, their performance is mostly evaluated using a limited number of datasets from a single sensor (e.g. Kinect or RealSense cameras), lacking a comprehensive overview of their applicability in photogrammetric 3D mapping scenarios. In this work, we provide a comprehensive review of the state-of-the-art (SOTA) point cloud registration methods, where we analyze and evaluate these methods using a diverse set of point cloud data from indoor to satellite sources. The quantitative analysis allows for exploring the strengths, applicability, challenges, and future trends of these methods. In contrast to existing analysis works that introduce point cloud registration as a holistic process, our experimental analysis is based on its inherent two-step process to better comprehend these approaches: feature/keypoint-based initial coarse registration and dense fine registration through cloud-to-cloud (C2C) optimization. More than ten methods, including classic hand-crafted, deep-learning-based feature correspondence, and robust C2C methods, were tested. We observed that the success rate of most of the algorithms is below 40% over the datasets we tested, and that there is still a large margin for improvement upon existing algorithms concerning 3D sparse correspondence search and the ability to register point clouds with complex geometry and occlusions. With the evaluated statistics on three datasets, we identify the best-performing methods for each step, provide our recommendations, and outline future efforts. Comment: 7 figures.
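
    To ground the two-step framing above, the core building block of the dense fine-registration step is the least-squares rigid transform between corresponded points, which has a classical closed-form SVD (Kabsch/Procrustes) solution. The numpy sketch below is a minimal illustrative version of that building block, not any of the evaluated SOTA methods.

    # Closed-form rigid alignment of corresponded 3D points via SVD.
    import numpy as np

    def rigid_transform(source, target):
        """Rotation R and translation t minimising ||target - (source @ R.T + t)||."""
        src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
        H = (source - src_c).T @ (target - tgt_c)                     # cross-covariance of centred points
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
        R = Vt.T @ D @ U.T
        t = tgt_c - src_c @ R.T
        return R, t

    # toy check: recover a known rotation about z and a translation
    theta = np.deg2rad(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])
    src = np.random.rand(100, 3)
    tgt = src @ R_true.T + np.array([0.5, -0.2, 1.0])
    R_est, t_est = rigid_transform(src, tgt)
    print(np.allclose(R_est, R_true))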

    Vortex laser development employing an interferometric output coupler

    Get PDF
    This thesis focuses on the design and implementation of new laser cavity designs for vortex mode generation. The vortex generation method is integrated into both solid-state laser cavities and systems using doped-fibre gain media. Optical vortex laser beams have attracted a lot of attention due to their proposed applications in a wide range of industries, from communication to particle manipulation, microscopy and material processing. More recently, vortex generation directly from the laser has attracted research due to the potential for higher purity, higher power and a compact system. In this work a modified Sagnac interferometer, dubbed the vortex output coupler (VOC), is integrated as the output coupler of a laser cavity and used to convert the fundamental Gaussian intracavity mode into a first-order Laguerre-Gaussian output. The VOC was first implemented in various solid-state cavities, with a vanadate gain medium operating at 1064 nm, where vortex generation was successfully demonstrated. A record from-the-source vortex power of 31.3 W was achieved, with a laser slope efficiency of 62.5%; the mode purity was 95.2% and M2 = 2.25. The handedness of the generated vortex was pure and switchable during operation. First-order Hermite-Gaussian modes with the same power were also demonstrated. The VOC was also shown to function in a pulsed cavity without any detrimental effects. It was found that the VOC has mode-filtering properties, which helped maintain a fundamental Gaussian in the cavity despite mode mismatch between the pump beam and the fundamental Gaussian cavity mode. Fibre laser systems have the advantage of being compact, alignment insensitive and maintaining a close resemblance to the fundamental Gaussian mode through the use of a single-mode fibre. To demonstrate the implementation versatility of the VOC and capitalise on its power-scaling potential, the VOC was integrated into a non-polarisation-maintaining fibre laser system as a bulk output coupler. An ytterbium-doped gain fibre was used, operating at 1064 nm, which allowed the same optics as in previous work to be used for the VOC. 5.08 W of vortex output power was achieved from the fibre laser system, with a mode purity and quality of 96.1% and M2 (X/Y) = 2.03/2.22, respectively. This system was also used as a first-order vortex source for higher-order vortex generation using spiral phase plates. Vortices with orbital angular momentum values of l = +2 and +3 were generated from the first-order vortex (l = +1) input using spiral phase plates that imparted +1 and +2 units of orbital angular momentum (helical phase ramps of 2π and 4π respectively). The VOC is made up of a beamsplitter and three turning mirrors, all of which are components with high optical damage thresholds. By choosing appropriate optical coatings for these components, considering wavelength and polarisation, the VOC can be implemented across the output spectrum, making it incredibly versatile. The VOC is shown to function in a pulsed laser system, with a vortex pulse of 20 ns duration and 303 µJ energy demonstrated in this work. The output mode can be switched between left and right vortex handedness, and also between the first-order Hermite-Gaussian modes, all during operation. This pulsed operation and output-mode versatility make it very interesting for material surface processing, particle levitation and manipulation, free-space communication and broadband, or ultrashort-pulse, vortex generation.
    A VOC-enhanced vortex laser can also be used as a high-power, high-purity first-order Hermite-Gaussian or Laguerre-Gaussian source for further conversion to higher-order modes using other methods. Open Access
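
    For readers unfamiliar with the mode the VOC produces, the sketch below (purely illustrative, not part of the thesis work) evaluates a first-order Laguerre-Gaussian field at its waist with numpy, showing the on-axis intensity null of the donut profile and the single 2π helical phase ramp that carries l = +1 orbital angular momentum; the waist and grid values are arbitrary.

    # First-order Laguerre-Gaussian (LG01) field at the beam waist: donut intensity, helical phase.
    import numpy as np

    w0 = 1.0                                           # beam waist (arbitrary units)
    x = np.linspace(-3, 3, 512)
    X, Y = np.meshgrid(x, x)
    r, phi = np.hypot(X, Y), np.arctan2(Y, X)

    field = (np.sqrt(2) * r / w0) * np.exp(-r**2 / w0**2) * np.exp(1j * phi)   # l = +1 helical phase
    intensity = np.abs(field) ** 2                     # zero on axis: the characteristic donut profile
    phase = np.angle(field)                            # winds through 2*pi once around the axis
    print(intensity[256, 256], intensity.max())        # near-zero on-axis value vs. ring maximum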