11 research outputs found

    Complexity-based measures inform tai chi’s impact on standing postural control in older adults with peripheral neuropathy

    Get PDF
    Background: Tai Chi training enhances physical function and may reduce falls in older adults with and without balance disorders, yet its effect on postural control, as quantified by the magnitude or speed of center-of-pressure (COP) excursions beneath the feet, is less clear. We hypothesized that COP metrics derived from complex systems theory may better capture the multi-component stimulus that Tai Chi provides to the postural control system, as compared with traditional COP measures. Methods: We performed a secondary analysis of a pilot, non-controlled intervention study that examined the effects of Tai Chi on standing COP dynamics, plantar sensation, and physical function in 25 older adults with peripheral neuropathy. Tai Chi training was based on the Yang style and consisted of three one-hour group sessions per week for 24 weeks. Standing postural control was assessed with a force platform at baseline and at 6, 12, 18, and 24 weeks. The degree of COP complexity, defined by the presence of fluctuations over multiple timescales, was calculated using multiscale entropy analysis. Traditional measures of COP speed and area were also calculated. Foot sole sensation, the six-minute walk (6MW), and the timed up-and-go (TUG) were also measured at each assessment. Results: Traditional measures of postural control did not change from baseline. The COP complexity index (mean±SD) increased from baseline (4.1±0.5) to week 6 (4.5±0.4), and from week 6 to week 24 (4.7±0.4) (p=0.02). Increases in COP complexity from baseline to week 24 correlated with improvements in foot sole sensation (p=0.01), the 6MW (p=0.001), and the TUG (p=0.01). Conclusions: Participants in the Tai Chi program exhibited increased complexity of standing COP dynamics, and these increases were associated with improved plantar sensation and physical function.
Although more research is needed, the results of this non-controlled pilot study suggest that complexity-based COP measures may inform the study of complex mind-body interventions, such as Tai Chi, on postural control in people with peripheral neuropathy or other age-related balance disorders.
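The multiscale entropy procedure behind a COP complexity index can be sketched in a few steps: coarse-grain the time series at successive scales, compute the sample entropy of each coarse-grained series, and sum the entropies across scales. The following is a minimal pure-Python sketch of that idea, not the authors' implementation; the parameters m, r, and the scale range are illustrative choices.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B counts template matches of
    length m and A counts matches of length m+1, within tolerance r
    (default: 0.15 x the series' standard deviation)."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.15 * math.sqrt(sum((v - mean) ** 2 for v in x) / n)

    def matches(length):
        c = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def coarse_grain(x, scale):
    """Average non-overlapping windows of `scale` consecutive samples."""
    usable = len(x) - len(x) % scale
    return [sum(x[i:i + scale]) / scale for i in range(0, usable, scale)]

def complexity_index(x, max_scale=5):
    """Sum of sample entropies of the coarse-grained series, scales 1..max_scale."""
    return sum(sample_entropy(coarse_grain(x, s)) for s in range(1, max_scale + 1))
```

Summing across scales (as here) and taking the area under the entropy-versus-scale curve are both common conventions for reducing the multiscale profile to a single index.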

    AI is a viable alternative to high throughput screening: a 318-target study

    Get PDF
    High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, high-quality X-ray crystal structures, or manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.
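At its core, a virtual screen of an on-demand library reduces to scoring every candidate with a predictive model and keeping the top-ranked compounds for physical testing. A schematic pure-Python sketch follows; the scoring function and the tiny SMILES "library" are stand-ins for illustration, not the AtomNet® model or a real compound collection.

```python
import heapq

def virtual_screen(library, score_fn, top_k=10):
    """Score every candidate and return the top_k highest-scoring ones.

    heapq.nlargest keeps memory at O(top_k) even when the library is
    streamed from a source far too large to hold in memory.
    """
    return heapq.nlargest(top_k, library, key=score_fn)

# Stand-in scoring function: prefer strings of length near 10; a real
# screen would use a trained structure- or ligand-based model instead.
library = ["CCO", "c1ccccc1", "CC(=O)Nc1ccc(O)cc1", "O"]
hits = virtual_screen(library, score_fn=lambda smi: -abs(len(smi) - 10), top_k=2)
```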

    Large-scale annotated dataset for cochlear hair cell detection and classification

    No full text
    Our sense of hearing is mediated by cochlear hair cells, localized within the sensory epithelium called the organ of Corti. There are two types of hair cells in the cochlea, organized in one row of inner hair cells and three rows of outer hair cells. Each cochlea contains a few thousand hair cells, and their survival is essential for our perception of sound because they are terminally differentiated and do not regenerate after insult. It is often desirable in hearing research to quantify the number of hair cells within cochlear samples, both in pathological conditions and in response to treatment. However, the sheer number of cells along the cochlea makes manual quantification impractical. Machine learning can be used to overcome this challenge by automating the quantification process but requires a vast and diverse dataset for effective training. In this study, we present a large collection of annotated cochlear hair-cell datasets, labeled with commonly used hair-cell markers and imaged using various fluorescence microscopy techniques. The collection includes samples from mouse, human, pig, and guinea pig cochlear tissue, from normal conditions and following in-vivo and in-vitro ototoxic drug application. The dataset includes over 90,000 hair cells, all of which have been manually identified and annotated as one of two cell types: inner hair cells and outer hair cells. This dataset is the result of a collaborative effort from multiple laboratories and has been carefully curated to represent a variety of imaging techniques. With suggested usage parameters and a well-described annotation procedure, this collection can facilitate the development of generalizable cochlear hair cell detection models or serve as a starting point for fine-tuning models for other analysis tasks.
By providing this dataset, we aim to give other groups within the hearing research community the opportunity to develop their own tools with which to analyze cochlear imaging data more fully, accurately, and with greater ease.

    Large-scale annotated dataset for cochlear hair cell detection and classification

    No full text
    Our sense of hearing is mediated by cochlear hair cells, of which there are two types, organized in one row of inner hair cells and three rows of outer hair cells. Each cochlea contains 5-15 thousand terminally differentiated hair cells, and their survival is essential for hearing because they do not regenerate after insult. It is often desirable in hearing research to quantify the number of hair cells within cochlear samples, both in pathological conditions and in response to treatment. Machine learning can be used to automate the quantification process but requires a vast and diverse dataset for effective training. In this study, we present a large collection of annotated cochlear hair-cell datasets, labeled with commonly used hair-cell markers and imaged using various fluorescence microscopy techniques. The collection includes samples from mouse, rat, guinea pig, pig, primate, and human cochlear tissue, from normal conditions and following in-vivo and in-vitro ototoxic drug application. The dataset includes over 107,000 hair cells, which have been manually identified and annotated as either inner or outer hair cells. This dataset is the result of a collaborative effort from multiple laboratories and has been carefully curated to represent a variety of imaging techniques. With suggested usage parameters and a well-described annotation procedure, this collection can facilitate the development of generalizable cochlear hair-cell detection models or serve as a starting point for fine-tuning models for other analysis tasks. By providing this dataset, we aim to give other hearing research groups the opportunity to develop their own tools with which to analyze cochlear imaging data more fully, accurately, and with greater ease. Associated code is provided here: https://github.com/indzhykulianlab/hcat-data
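As a minimal illustration of the quantification task this dataset supports, the snippet below tallies inner versus outer hair cells from a list of annotation rows. The (x, y, cell_type) tuple format here is hypothetical; the dataset's actual annotation schema is documented in the linked repository.

```python
from collections import Counter

def count_by_type(rows):
    """Tally annotated hair cells by type (e.g., 'IHC' vs 'OHC')."""
    return Counter(cell_type for _x, _y, cell_type in rows)

# Hypothetical annotation rows: one (x, y, cell_type) tuple per cell.
example = [
    (12.0, 3.5, "IHC"),
    (14.1, 9.2, "OHC"),
    (15.3, 9.8, "OHC"),
    (18.7, 3.6, "IHC"),
]
counts = count_by_type(example)  # counts["IHC"] == 2, counts["OHC"] == 2
```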

    Progression of Geographic Atrophy in Age-related Macular Degeneration

    No full text