
    Analysis of time-varying synchronization of EEG during sentences identification

    Studying the synchronization of EEG signals can help us understand the underlying cognitive processes and detect learning deficiencies, since oscillatory states in the EEG reveal rhythmic synchronous activity in large networks of neurons. Because physiological states and the surrounding environment change as cognitive and information processing takes place in different brain regions at different times, practical EEG recordings are extremely non-stationary processes. To investigate how these distributed brain regions are linked together and how information is exchanged over time, this paper proposes a time-frequency coherence analysis method that offers an alternative way of quantifying synchronization with both temporal and spatial resolution. A wavelet coherence spectrum is defined so that the degree of synchronization and the information flow between different brain regions can be described. Several real EEG recordings are analysed under cognitive tasks of sentence identification in both English and Chinese. The time-varying synchronization between the brain regions involved in sentence processing shows that a common neural network is activated by both English and Chinese sentences. The results of the presented method are helpful for studying English and Chinese learning by Chinese students.
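    The abstract does not give the estimator explicitly, but wavelet coherence between two EEG channels is conventionally computed as the magnitude-squared, time-smoothed cross-wavelet spectrum normalized by the smoothed auto-spectra. Below is a minimal sketch of that standard estimator in Python, assuming PyWavelets and SciPy are available; the wavelet choice, scales and smoothing width are illustrative assumptions and not taken from the paper.

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter1d

        def wavelet_coherence(x, y, fs, scales, wavelet="cmor1.5-1.0", smooth_len=16):
            """Time-frequency wavelet coherence between two equal-length signals.

            Time-smoothing of the cross- and auto-spectra is essential; without it
            the coherence is identically 1 at every point. Parameter defaults are
            illustrative, not the paper's settings.
            """
            # Continuous wavelet transforms of both channels
            Wx, freqs = pywt.cwt(x, scales, wavelet, sampling_period=1.0 / fs)
            Wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=1.0 / fs)

            # Boxcar-smooth the cross-spectrum (real and imaginary parts) and the
            # auto-spectra along the time axis
            cross = Wx * np.conj(Wy)
            Sxy = (uniform_filter1d(cross.real, smooth_len, axis=1)
                   + 1j * uniform_filter1d(cross.imag, smooth_len, axis=1))
            Sxx = uniform_filter1d(np.abs(Wx) ** 2, smooth_len, axis=1)
            Syy = uniform_filter1d(np.abs(Wy) ** 2, smooth_len, axis=1)

            # Magnitude-squared coherence in [0, 1] for every scale and time point
            coherence = np.abs(Sxy) ** 2 / (Sxx * Syy + 1e-12)
            return freqs, coherence

    Evaluating this channel pair by channel pair, for example over electrodes covering language-related regions, yields the kind of time-varying synchronization maps the abstract describes.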

    The potential application of artificial intelligence for diagnosis and management of glaucoma in adults

    BACKGROUND: Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential applications in the detection and management of glaucoma. SOURCES OF DATA: This literature review is based on articles published in peer-reviewed journals. AREAS OF AGREEMENT: There have been significant advances in both AI and imaging techniques that are able to identify the early signs of glaucomatous damage. Machine learning and deep learning algorithms show capabilities equivalent, if not superior, to those of human experts. AREAS OF CONTROVERSY: There are concerns that increased reliance on AI may lead to the deskilling of clinicians. GROWING POINTS: AI has the potential to be used in virtual review clinics, in telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the potential of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable. AREAS TIMELY FOR DEVELOPING RESEARCH: There is a need to determine the external validity of deep learning algorithms and to better understand how 'black box' algorithms reach their results.

    Multiple Imputation Ensembles (MIE) for dealing with missing data

    Missing data is a significant issue in many real-world datasets, yet there are no robust methods for dealing with it appropriately. In this paper, we propose a robust approach to dealing with missing data in classification problems: Multiple Imputation Ensembles (MIE). Our method integrates two approaches, multiple imputation and ensemble methods, and compares two types of ensembles: bagging and stacking. We also propose a robust experimental set-up using 20 benchmark datasets from the UCI machine learning repository. For each dataset, we introduce increasing amounts of data Missing Completely at Random. We first use a number of single and multiple imputation methods to recover the missing values, and then ensemble a number of different classifiers built on the imputed data. We assess the quality of the imputation using dissimilarity measures, and we evaluate MIE performance by comparing classification accuracy on the complete and imputed data. Furthermore, we use the accuracy of simple imputation as a benchmark for comparison. We find that our proposed approach, which combines multiple imputation with ensemble techniques, outperforms the others, particularly as the amount of missing data increases.
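    The core idea of combining multiple imputations with an ensemble can be sketched in a few lines. The following is a simplified, bagging-style illustration using scikit-learn and SciPy; the choice of IterativeImputer, RandomForestClassifier and five imputations are assumptions made for the sketch, not the paper's exact configuration, and the stacking variant is omitted.

        import numpy as np
        from scipy.stats import mode
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.ensemble import RandomForestClassifier

        def mie_predict(X_train, y_train, X_test, n_imputations=5, seed=0):
            """Train one classifier per imputed copy of the data, then majority-vote.

            Bagging-style illustration only; the imputers, base classifiers and
            stacking combiner used in the paper are not reproduced here.
            """
            votes = []
            for i in range(n_imputations):
                # Each run draws a different imputation of the missing entries
                imputer = IterativeImputer(random_state=seed + i, sample_posterior=True)
                X_tr = imputer.fit_transform(X_train)
                X_te = imputer.transform(X_test)
                clf = RandomForestClassifier(n_estimators=200, random_state=seed + i)
                clf.fit(X_tr, y_train)
                votes.append(clf.predict(X_te))
            # Majority vote across the imputation-specific classifiers
            return mode(np.vstack(votes), axis=0).mode.ravel()

    Increasing n_imputations trades computation for stability of both the imputed values and the final vote.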

    Data-Driven Optimization of Public Transit Schedule

    Bus transit systems are the backbone of public transportation in the United States. An important indicator of the quality of service in such infrastructures is on-time performance at stops, with published transit schedules playing an integral role in governing the level of success of the service. However, there are relatively few optimization architectures leveraging stochastic search that focus on optimizing bus timetables with the objective of maximizing the probability of bus arrivals at timepoints with delays within desired on-time ranges. In addition, there is a lack of substantial research considering monthly and seasonal variations of delay patterns integrated with such optimization strategies. To address these gaps, this paper makes the following contributions to the corpus of studies on transit on-time performance optimization: (a) an unsupervised clustering mechanism is presented which groups months with similar seasonal delay patterns; (b) the problem is formulated as a single-objective optimization task, and a greedy algorithm, a genetic algorithm (GA) and a particle swarm optimization (PSO) algorithm are employed to solve it; (c) a detailed discussion of empirical results comparing the algorithms is provided, and a sensitivity analysis on the hyper-parameters of the heuristics is presented along with execution times, which will help practitioners looking at similar problems. The analyses conducted are insightful in the local context of improving public transit scheduling in the Nashville metro region, and informative from a global perspective as an elaborate case study that builds upon the growing corpus of empirical studies using nature-inspired approaches to transit schedule optimization. (20 pages, 6 figures, 2 tables.)
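    As a concrete illustration of the single-objective formulation, the sketch below uses a toy genetic algorithm to search for per-timepoint schedule offsets that maximize the fraction of historical arrivals falling inside an on-time window. The delay data are synthetic, and the on-time window of [-1, +5] minutes, the offset bounds and the GA parameters are assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(42)

        def on_time_fraction(offsets, hist_delays, early=-1.0, late=5.0):
            """Fraction of historical arrivals that are on time (delay within
            [early, late] minutes) after shifting each timepoint's scheduled
            time later by `offsets` minutes."""
            adjusted = hist_delays - offsets[np.newaxis, :]
            return ((adjusted >= early) & (adjusted <= late)).mean()

        def genetic_search(hist_delays, pop_size=60, gens=200, bound=10.0, mut_sd=1.0):
            """Toy single-objective GA over per-timepoint schedule offsets (minutes)."""
            n = hist_delays.shape[1]
            pop = rng.uniform(-bound, bound, size=(pop_size, n))
            for _ in range(gens):
                fitness = np.array([on_time_fraction(ind, hist_delays) for ind in pop])
                parents = pop[np.argsort(fitness)[-pop_size // 2:]]   # truncation selection
                kids = []
                for _ in range(pop_size // 2):
                    a, b = parents[rng.integers(len(parents), size=2)]
                    cut = rng.integers(1, n)                          # one-point crossover
                    kids.append(np.concatenate([a[:cut], b[cut:]]))
                kids = np.array(kids) + rng.normal(0.0, mut_sd, (pop_size // 2, n))
                pop = np.clip(np.vstack([parents, kids]), -bound, bound)
            fitness = np.array([on_time_fraction(ind, hist_delays) for ind in pop])
            return pop[fitness.argmax()], fitness.max()

        # Synthetic delay history: 500 trips x 8 timepoints, delays in minutes
        delays = rng.normal(3.0, 4.0, size=(500, 8))
        best_offsets, best_score = genetic_search(delays)

    A PSO or greedy variant would change only the search loop; the on-time objective stays the same.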

    Two chemically similar stellar overdensities on opposite sides of the plane of the Galaxy

    Our Galaxy is thought to have undergone an active evolutionary history dominated by star formation, the accretion of cold gas and, in particular, mergers up to 10 gigayears ago. The stellar halo reveals rich fossil evidence of these interactions in the form of stellar streams, substructures and chemically distinct stellar components. The impact of dwarf-galaxy mergers on the content and morphology of the Galactic disk is still being explored. Recent studies have identified kinematically distinct stellar substructures and moving groups which may be of extragalactic origin. However, there is mounting evidence that stellar overdensities at the outer disk/halo interface could have been caused by the interaction of a dwarf galaxy with the disk. Here we report a detailed spectroscopic analysis of 14 stars drawn from two stellar overdensities, each lying about 5 kiloparsecs above or below the Galactic plane, locations suggestive of an association with the stellar halo. We find, however, that the chemical compositions of these stars are almost identical, both within and between these groups, and closely match the abundance patterns of Milky Way disk stars. This study hence provides compelling evidence that these stars originate from the disk, and that the overdensities of which they are part were created by tidal interactions of the disk with passing or merging dwarf galaxies. (Accepted for publication in Nature.)

    Accurate Genome Relative Abundance Estimation Based on Shotgun Metagenomic Reads

    Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or variants thereof, and often yield biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) that explicitly models read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using mixture model theory (GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as correcting the overestimated abundance of Bacteroides species in human gut samples, and provide a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
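    The abstract describes GRAMMy only at the level of a mixture model fitted by maximum likelihood, so the following is a generic expectation-maximization sketch of that idea in Python, not the GRAMMy implementation itself: ambiguously assigned reads are represented by a probability matrix over candidate genomes, mixture weights are estimated by EM, and genome length is normalized out at the end. The matrix construction and the length correction shown here are assumptions for illustration.

        import numpy as np

        def em_relative_abundance(read_probs, genome_lengths, n_iter=200, tol=1e-8):
            """Generic EM estimator of genome relative abundance from ambiguous reads.

            read_probs[i, j] is the probability (or alignment-derived score) that
            read i originates from genome j; rows are assumed to have at least one
            nonzero entry. This is a mixture-model sketch in the spirit of the
            abstract, not the GRAMMy code.
            """
            n_reads, n_genomes = read_probs.shape
            mix = np.full(n_genomes, 1.0 / n_genomes)      # mixture weights = read shares
            for _ in range(n_iter):
                # E-step: posterior responsibility of each genome for each read
                resp = read_probs * mix
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: update mixture weights from expected read counts
                new_mix = resp.sum(axis=0) / n_reads
                if np.abs(new_mix - mix).max() < tol:
                    mix = new_mix
                    break
                mix = new_mix
            # Divide out genome length so that longer genomes, which attract more
            # reads, are not over-counted, then renormalize to relative abundance
            abundance = mix / genome_lengths
            return abundance / abundance.sum()

    With equal genome lengths this reduces to the standard mixture-weight estimate; the length division is what turns read shares into genome relative abundances.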

    Preparation of Large Monodisperse Vesicles

    Preparation of monodisperse vesicles is important both for research purposes and for practical applications. While the extrusion of vesicles through small pores (∼100 nm in diameter) results in relatively uniform populations of vesicles, extrusion to larger sizes results in very heterogeneous populations of vesicles. Here we report a simple method for preparing large monodisperse multilamellar vesicles through a combination of extrusion and large-pore dialysis. For example, extrusion of polydisperse vesicles through 5-µm-diameter pores eliminates vesicles larger than 5 µm in diameter. Dialysis of extruded vesicles against 3-µm-pore-size polycarbonate membranes eliminates vesicles smaller than 3 µm in diameter, leaving behind a population of monodisperse vesicles with a mean diameter of ∼4 µm. The simplicity of this method makes it an effective tool for laboratory vesicle preparation with potential applications in preparing large monodisperse liposomes for drug delivery

    All-sky visible and near infrared space astrometry

    The era of all-sky space astrometry began with the Hipparcos mission in 1989, which provided the first very accurate catalogue of apparent magnitudes, positions, parallaxes and proper motions of 120 000 bright stars at the milliarcsec (or milliarcsec per year) accuracy level. Hipparcos has now been superseded by the results of the Gaia mission. The second Gaia data release contained astrometric data for almost 1.7 billion sources with tens-of-microarcsec (or microarcsec per year) accuracy in a vast volume of the Milky Way, and future data releases will further improve on this. Gaia has just completed its nominal 5-year mission (July 2019), but is expected to continue operations for an extended period of an additional 5 years, through to mid-2024. Its final catalogue, to be released ∼2027, will provide astrometry for ∼2 billion sources, with astrometric precisions reaching 10 microarcsec. Why is accurate astrometry so important? The answer is that it provides fundamental data which underpin much of modern observational astronomy, as will be detailed in this White Paper. All-sky visible and near-infrared (NIR) astrometry with a wavelength cutoff in the K band is not focused on just a single or a small number of key science cases. Instead, it is extremely broad, answering key science questions in nearly every branch of astronomy while also providing the dense and accurate visible-NIR reference frame needed for future astronomy facilities.