    Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach.

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain-computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty of moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining, and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. The ESS schema and tools are freely available at www.eegstudy.org, and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org).
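    Because ESS describes each study with XML documents and a fixed file/folder convention, a study shipped as a single unit can be consumed programmatically. The sketch below shows how such a description might be read; the file name study_description.xml and the session/recording element and attribute names are illustrative assumptions, not the actual ESS schema.

```python
# Minimal sketch of reading a hypothetical ESS-style study description.
# The file name "study_description.xml" and the <session>/<recording>
# element and attribute names are assumptions, not the real ESS schema.
import xml.etree.ElementTree as ET
from pathlib import Path

def list_recordings(study_root):
    """Return (session_id, recording_path) pairs described by the study XML."""
    study_root = Path(study_root)
    tree = ET.parse(study_root / "study_description.xml")    # assumed file name
    recordings = []
    for session in tree.getroot().iter("session"):            # assumed element name
        session_id = session.get("id", "unknown")
        for rec in session.iter("recording"):                  # assumed element name
            recordings.append((session_id, study_root / rec.get("file", "")))
    return recordings

if __name__ == "__main__":
    for session_id, path in list_recordings("my_ess_study"):  # assumed study folder
        print(session_id, path)
```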

    Understanding Health and Disease with Multidimensional Single-Cell Methods

    Current efforts in the biomedical sciences and related interdisciplinary fields are focused on gaining a molecular understanding of health and disease, which is a problem of daunting complexity that spans many orders of magnitude in characteristic length scales, from small molecules that regulate cell function to cell ensembles that form tissues and organs working together as an organism. In order to uncover the molecular nature of the emergent properties of a cell, it is essential to measure multiple cell components simultaneously in the same cell. In turn, cell heterogeneity requires multiple cells to be measured in order to understand health and disease in the organism. This review summarizes current efforts towards a data-driven framework that leverages single-cell technologies to build robust signatures of healthy and diseased phenotypes. While some approaches focus on multicolor flow cytometry data and other methods are designed to analyze high-content image-based screens, we emphasize the so-called Supercell/SVM paradigm (recently developed by the authors of this review and collaborators) as a unified framework that captures mesoscopic-scale emergence to build reliable phenotypes. Beyond their specific contributions to basic and translational biomedical research, these efforts illustrate, from a larger perspective, the powerful synergy that might be achieved from bringing together methods and ideas from statistical physics, data mining, and mathematics to solve the most pressing problems currently facing the life sciences. Comment: 25 pages, 7 figures; revised version with minor changes. To appear in J. Phys.: Cond. Mat.
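    As a rough illustration of the Supercell/SVM idea, the sketch below averages randomly drawn groups of single-cell measurements into "supercell" profiles and classifies those profiles with a support vector machine. The synthetic data, group size, and number of groups are assumptions for illustration, not the authors' exact procedure.

```python
# Sketch of a Supercell/SVM-style workflow: aggregate random groups of cells
# into "supercell" profiles, then classify the profiles with an SVM.
# The synthetic markers and group sizes are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_supercells(cells, group_size=20, n_groups=200):
    """Average randomly drawn groups of single-cell measurements."""
    idx = rng.integers(0, len(cells), size=(n_groups, group_size))
    return cells[idx].mean(axis=1)

# Synthetic single-cell data: 5 markers per cell, two phenotypes.
healthy = rng.normal(0.0, 1.0, size=(5000, 5))
diseased = rng.normal(0.4, 1.0, size=(5000, 5))

X = np.vstack([make_supercells(healthy), make_supercells(diseased)])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```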

    Recognition of Human Periodic Movements From Unstructured Information Using A Motion-based Frequency Domain Approach

    Feature-based motion cues play an important role in biological visual perception. We present a motion-based frequency-domain scheme for human periodic motion recognition. As a baseline study of feature-based recognition, we use unstructured feature-point kinematic data obtained directly from a marker-based optical motion capture (MoCap) system, rather than bootstrapping from the low-level image processing of feature detection. Motion power spectral analysis is applied to a set of unidentified trajectories of feature points representing whole-body kinematics. Feature power vectors are extracted from the motion power spectra and mapped to a low-dimensional feature space as motion templates that provide frequency-domain signatures characterising different periodic motions. A new instance of periodic motion is recognised against pre-stored motion templates by seeking the best motion power spectral similarity. We test this method on nine examples of human periodic motion using MoCap data. The recognition results demonstrate that feature-based spectral analysis allows classification of periodic motions from low-level, unstructured interpretation without recovering the underlying kinematics. In contrast to common structure-based spatio-temporal approaches, this motion-based frequency-domain method avoids a time-consuming recovery of underlying kinematic structures in visual analysis and greatly reduces the parameter domain in the presence of human motion irregularities.
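    A hedged NumPy sketch of that pipeline: each marker trajectory is Fourier-transformed, the power spectra are pooled into a single feature power vector, and a query is matched to stored templates by spectral similarity. The windowing, normalisation, spectrum length, and cosine similarity used here are simplifications assumed for illustration, not the exact choices in the paper.

```python
# Sketch of frequency-domain periodic-motion matching: per-marker power
# spectra are pooled into a feature power vector, and a query is matched to
# stored templates by cosine similarity. Details are simplified assumptions.
import numpy as np

def feature_power_vector(trajectories, n_freq=32):
    """trajectories: (n_markers, n_frames, 3) MoCap positions -> pooled spectrum."""
    x = trajectories - trajectories.mean(axis=1, keepdims=True)   # remove static offsets
    spectra = np.abs(np.fft.rfft(x, axis=1)) ** 2                 # power per marker/axis
    pooled = spectra.sum(axis=(0, 2))[1:n_freq + 1]               # drop DC, pool markers/axes
    return pooled / (np.linalg.norm(pooled) + 1e-12)

def classify(query, templates):
    """templates: dict label -> feature power vector; returns best-matching label."""
    q = feature_power_vector(query)
    return max(templates, key=lambda label: float(np.dot(q, templates[label])))
```

    A usage pattern would be to precompute feature_power_vector for one labelled MoCap sequence per motion class and pass the resulting dictionary as templates.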

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and in applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, by reviewing the literature and relating existing works through both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing. Comment: 10 pages, 19 figures.

    Estimating Anthropometric Marker Locations from 3-D LADAR Point Clouds

    An area of interest for improving the identification portion of the system is the extraction of anthropometric markers from a Laser Detection and Ranging (LADAR) point cloud. Analyzing anthropometric markers is a common means of studying how a human moves and has been shown to provide good results in determining certain demographic information about the subject. This research examines a marker extraction method utilizing principal component analysis (PCA), self-organizing maps (SOM), alpha hulls, and basic anthropometric knowledge. The performance of the extraction algorithm is tested by performing gender classification with the calculated markers.
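    As a rough illustration of the PCA step only, the sketch below fits principal axes to a point cloud so a coarse measure such as head-to-foot extent can be read off along the dominant axis. The SOM, alpha-hull, and anthropometric-rule stages described in the thesis are not reproduced, and the synthetic cloud is an assumption for illustration.

```python
# Sketch of the PCA step only: fit principal body axes to a LADAR point cloud
# and read the extent along the dominant axis (a coarse stature proxy for a
# real scan). SOM, alpha-hull, and anthropometric-rule stages are omitted.
import numpy as np
from sklearn.decomposition import PCA

def body_axes(points):
    """points: (n, 3) array of LADAR returns on the subject."""
    pca = PCA(n_components=3).fit(points)
    aligned = pca.transform(points)          # component 0 ~ highest-variance (head-to-foot) axis
    extent = np.ptp(aligned[:, 0])           # spread along the dominant axis
    return pca.components_, extent

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cloud = rng.normal(size=(2000, 3)) * [0.2, 0.15, 0.9]   # synthetic, person-like proportions
    axes, extent = body_axes(cloud)
    print("extent along dominant axis:", round(float(extent), 2))
```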

    Emotion estimation in crowds: a machine learning approach

    Requirements for digitized aircraft spotting (Ouija) board for use on U.S. Navy aircraft carriers

    This thesis evaluates system and process elements to initiate the requirements modeling necessary for a next-generation Digitized Aircraft Spotting (Ouija) Board for use on U.S. Navy aircraft carriers to track and plan aircraft movement. The research examines and evaluates the feasibility and suitability of transforming the existing two-dimensional static board into an electronic, dynamic display that will enhance situational awareness by using sensors and system information from various sources to display a comprehensive operational picture of the current flight and hangar decks aboard aircraft carriers. The authors evaluate the current processes and make recommendations on the elements the new system would display: what information is displayed, which external systems feed information to the display, and how intelligent agents could be used to transform the static display into a powerful decision support tool. Optimally, the Aircraft Handler will use this system to effectively manage the flight and hangar decks to support the projection of air power from U.S. aircraft carriers.
    http://archive.org/details/requirementsford109454447
    Lieutenant Commander, United States Navy
    Lieutenant Commander, United States Navy Reserve
    Approved for public release; distribution is unlimited.

    Cardiac organoid technology and computational processing of cardiac physiology for advanced drug screening applications

    Stem cell technology has gained considerable recognition since its inception for advancing disease modeling and drug screening. This is especially true for tissues that are difficult to study due to tissue sensitivity and limited regenerative capacity, such as the heart. Previous work in stem cell-derived cardiac tissue has explored how we can engineer biologically functional heart tissue by providing the appropriate external stimuli to facilitate tissue development. The goal of this dissertation is to explore the potential of stem cell cardiac organoid models to recapitulate heart development and to implement analytical computational tools for studying cardiac physiology. These new tools were implemented as potential advancements in drug screening applications for better prediction of drug-related cardiotoxicity. Cardiac organoids, generated via micropatterning techniques, were explored to determine how controlling engineering parameters, specifically the geometry, directs tissue fate and organoid function. The advantage of cardiac organoid models is the ability to recapitulate and study human tissue morphogenesis and development, which until now has largely been restricted to animal models. The cardiac organoids demonstrated responsiveness to developmental drug toxicity, manifested as impairments in tissue formation and contractile function. Single-cell genomic characterization of cardiac organoids unveiled a co-emergence of cardiac and endoderm tissue, which is seen in vivo through paracrine signaling between the liver and heart. We then implemented computational tools based on nonlinear mathematical analysis to evaluate the cardiac physiological drug response of stem cell-derived cardiomyocytes. This dissertation discusses in vitro tissue platforms as well as computational tools for studying drug-induced cardiotoxicity. Using these tools, we can extend the current toolbox for understanding cardiac physiology and enable advanced investigations of stem cell-based cardiac tissue engineering.
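    The abstract does not specify which nonlinear measures the computational tools use, so as an illustrative stand-in the sketch below computes one common nonlinear beat-variability descriptor, the Poincaré SD1/SD2 pair, from synthetic inter-beat intervals.

```python
# Sketch of one common nonlinear beat-variability measure (Poincare SD1/SD2)
# applied to inter-beat intervals. This is an illustrative stand-in, not the
# dissertation's specific analysis; the intervals below are synthetic.
import numpy as np

def poincare_sd1_sd2(intervals):
    """intervals: 1-D array of successive beat-to-beat intervals (seconds)."""
    x, y = intervals[:-1], intervals[1:]
    sd1 = np.sqrt(np.var(y - x) / 2.0)   # short-term (beat-to-beat) variability
    sd2 = np.sqrt(np.var(y + x) / 2.0)   # long-term variability
    return sd1, sd2

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ibi = 1.0 + 0.05 * rng.standard_normal(300)   # ~60 bpm with mild variability
    sd1, sd2 = poincare_sd1_sd2(ibi)
    print(f"SD1={sd1:.3f} s, SD2={sd2:.3f} s, SD1/SD2={sd1/sd2:.2f}")
```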

    A Survey of Computer Graphics Facial Animation Methods: Comparing Traditional Approaches to Machine Learning Methods

    Human communication relies on facial expression to convey mood, sentiment, and intent. Realistic facial animation of computer graphics models of human faces can be difficult to achieve because of the many details that must be approximated to generate believable facial expressions. Many theoretical approaches have been researched and implemented to create increasingly accurate animations that can effectively portray human emotions. Even though many of these approaches are able to generate realistic-looking expressions, they typically require considerable artistic intervention to achieve a believable result. To reduce this effort, new approaches that utilize machine learning are being researched to generate believable facial animations with less manual intervention. This survey paper summarizes over 20 research papers related to facial animation, compares traditional animation approaches to newer machine learning methods, and highlights the strengths, weaknesses, and use cases of each approach.
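    For a concrete point of reference on the traditional side, the sketch below implements linear blendshape interpolation, a widely used rig-based technique of the kind such surveys contrast with learned methods. The survey does not single out this method, and the toy mesh, expression offsets, and weights are made up for illustration.

```python
# Minimal sketch of linear blendshape interpolation, a common traditional
# facial-animation technique. The vertex data and weights are illustrative.
import numpy as np

def blend(neutral, deltas, weights):
    """neutral: (V, 3) rest mesh; deltas: name -> (V, 3) offsets; weights: name -> float."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * deltas[name]          # weighted sum of expression offsets
    return out

neutral = np.zeros((4, 3))                                # toy 4-vertex "face"
deltas = {
    "smile": np.array([[0, .1, 0], [0, .1, 0], [0, 0, 0], [0, 0, 0]], float),
    "brow_raise": np.array([[0, 0, 0], [0, 0, 0], [0, .2, 0], [0, .2, 0]], float),
}
frame = blend(neutral, deltas, {"smile": 0.7, "brow_raise": 0.3})
print(frame)
```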