
    A Miniaturized Video System for Monitoring Drosophila Behavior

    Long-term spaceflight may induce a variety of harmful effects in astronauts, resulting in altered motor and cognitive behavior. The stresses experienced by humans in space, most significantly weightlessness (microgravity) and cosmic radiation, are difficult to simulate accurately on Earth. In fact, prolonged and concomitant exposure to microgravity and cosmic radiation can only be studied in space. Behavioral studies in space have focused on model organisms, including Drosophila melanogaster. Drosophila is often used because of its short life span and generational cycle, small size, and ease of maintenance. Additionally, the well-characterized genetics of Drosophila behavior on Earth can be applied to the analysis of results from spaceflights, provided that the behavior in space is accurately recorded. In 2001, the BioExplorer project introduced a low-cost option for researchers: the small satellite. While this approach enabled multiple inexpensive launches of biological experiments, it also imposed stringent restrictions on the monitoring systems in terms of size, mass, data bandwidth, and power consumption. Suggested parameters are on the order of a 100-mm cube in volume and 1 kg in mass for the entire payload. For Drosophila behavioral studies, these engineering requirements are not met by commercially available systems. One system that does meet many requirements for behavioral studies in space is the actimeter. Actimeters use infrared light gates to track the number of times a fly crosses a boundary within a small container (3x3x40 mm). Unfortunately, the apparatus needed to monitor several flies at once would exceed the capacity of the small satellite. A system is presented that expands on the actimeter approach to achieve a highly compact, low-power, ultra-low-bandwidth solution for simultaneously monitoring the behavior of multiple flies in space. It also provides a simple, inexpensive alternative to current systems for monitoring Drosophila populations in terrestrial experiments, and could be especially useful in field experiments at remote locations. Two practical limitations of the system should be noted: first, only walking flies can be observed, not flying ones; second, although the system enables population studies, tracking individual flies within the population is not currently possible. The system used video recording and an analog circuit to extract the average change in light as a function of time. Flies were held in a 5-cm diameter Petri dish and illuminated from below by a uniform light source. A miniature, monochrome CMOS (complementary metal-oxide semiconductor) video camera imaged the flies; the camera's automatic gain control did not affect system performance. The camera was positioned 5-7 cm above the Petri dish such that the imaging area was 2.25 sq cm. With this basic setup, still images and continuous video of 15 flies at a time were obtained. To reduce the required data bandwidth by several orders of magnitude, a band-pass filter (0.3-10 Hz) circuit compressed the video signal and extracted changes in image luminance over time. The raw activity signal output of this circuit was recorded on a computer and digitally processed to extract fly movement "events" from the waveform. These events corresponded to flies entering and leaving the image and were used to extract activity parameters such as inter-event duration. The efficacy of the system in quantifying locomotor activity was evaluated by varying the environmental temperature and measuring the resulting activity level of the flies.
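
A minimal sketch of how movement "events" and inter-event durations could be recovered offline from such an activity signal. The 0.3-10 Hz band comes from the abstract; the sampling rate, threshold, refractory period, and function names below are illustrative assumptions, not details from the paper.

```python
# Sketch of offline event extraction from a digitized activity signal.
import numpy as np
from scipy.signal import butter, filtfilt

def extract_events(activity, fs=100.0, threshold=3.0, refractory_s=0.2):
    """Return event times (s) where the activity signal crosses a threshold."""
    # Re-apply a 0.3-10 Hz band-pass digitally to suppress drift and
    # out-of-band noise, mirroring the analog circuit described above.
    b, a = butter(2, [0.3, 10.0], btype="band", fs=fs)
    filtered = filtfilt(b, a, activity)

    # Normalize to robust standard deviations (MAD-based z-score).
    mad = np.median(np.abs(filtered - np.median(filtered))) + 1e-12
    z = (filtered - np.median(filtered)) / (1.4826 * mad)

    # An "event" is a rising threshold crossing; a short refractory period
    # keeps one fly movement from being counted repeatedly.
    crossings = np.flatnonzero((z[1:] >= threshold) & (z[:-1] < threshold)) + 1
    events, last_t = [], -np.inf
    for idx in crossings:
        t = idx / fs
        if t - last_t >= refractory_s:
            events.append(t)
            last_t = t
    return np.asarray(events)

def inter_event_durations(event_times):
    """Inter-event durations (s), one of the activity parameters mentioned above."""
    return np.diff(event_times)
```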

    Toward Continuous, Noninvasive Assessment of Ventricular Function and Hemodynamics: Wearable Ballistocardiography

    Ballistocardiography, the measurement of the reaction forces of the body to cardiac ejection of blood, is one of the few techniques available for unobtrusively assessing the mechanical aspects of cardiovascular health outside clinical settings. Recently, multiple experimental studies involving healthy subjects and subjects with various cardiovascular diseases have demonstrated that the ballistocardiogram (BCG) signal can be used to trend cardiac output, contractility, and beat-by-beat ventricular function for arrhythmias. The majority of these studies have been performed with "fixed" BCG instrumentation, such as weighing scales or chairs, rather than with wearable measurements. Enabling wearable, and thus continuous, recording of BCG signals would greatly expand the capabilities of the technique; however, BCG signals measured with wearable devices are morphologically dissimilar to measurements from "fixed" instruments, which prevents the analysis and interpretation techniques from one domain from being applied directly to the other. In particular, the time intervals between the electrocardiogram (ECG) and the BCG, namely the R-J interval, a surrogate for measuring contractility changes, are significantly different for a wearable accelerometer compared to a "fixed" BCG measurement. This paper addresses the need to quantitatively relate wearable BCG measurements to "fixed" measurements with a systematic experimental approach. With these methods, the same analysis and interpretation techniques developed over the past decade for "fixed" BCG measurements can be successfully translated to wearable measurements.
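
A minimal sketch of beat-by-beat R-J interval estimation, assuming R-peak times are already available from the ECG and the BCG J-wave is taken as the dominant positive peak in a fixed search window after each R-peak. The window bounds and peak-picking rule are assumptions, not values from the paper.

```python
# Sketch of R-J interval estimation from simultaneously recorded ECG and BCG.
import numpy as np

def rj_intervals(ecg_r_times, bcg, fs, search_window_s=(0.1, 0.4)):
    """Return one R-J interval (s) per beat, NaN if no window is available."""
    lo, hi = search_window_s
    intervals = []
    for r_t in ecg_r_times:
        start = int((r_t + lo) * fs)
        stop = int((r_t + hi) * fs)
        if stop > len(bcg):
            intervals.append(np.nan)
            continue
        segment = bcg[start:stop]
        j_idx = start + int(np.argmax(segment))  # J-wave ~ dominant positive peak
        intervals.append(j_idx / fs - r_t)
    return np.asarray(intervals)

# A consistent shortening of the beat-by-beat R-J interval would be read as
# a contractility increase, which is why the interval serves as a surrogate
# for contractility changes in the studies cited above.
```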

    A multi-stage machine learning model on diagnosis of esophageal manometry

    High-resolution manometry (HRM) is the primary procedure used to diagnose esophageal motility disorders. Its interpretation and classification include an initial evaluation of swallow-level outcomes, followed by derivation of a study-level diagnosis based on the Chicago Classification (CC), using a tree-like algorithm. This diagnostic approach to motility disorders using HRM was mirrored in a multi-stage modeling framework developed using a combination of machine learning approaches. Specifically, the framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage. In the swallow-level stage, three models based on convolutional neural networks (CNNs) were developed to predict swallow type, swallow pressurization, and integrated relaxation pressure (IRP). At the study-level stage, model selection was conducted among families of expert-knowledge-based rule models, XGBoost models, and artificial neural network (ANN) models, with the latter two designed and augmented with motivation from the expert knowledge. A simple, model-agnostic strategy of model balancing motivated by Bayesian principles was used, giving rise to model averaging weighted by precision scores. The averaged (blended) models and the individual models were compared and evaluated; the best performance on the test dataset was 0.81 for top-1 prediction and 0.92 for top-2 predictions. This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data. Moreover, the proposed modeling framework could be easily extended to multi-modal tasks, such as diagnosing esophageal patients based on clinical data from both HRM and functional luminal imaging probe panometry (FLIP).
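
A minimal sketch of the study-level blending step described above, assuming sklearn-style models exposing predict/predict_proba and integer class labels, with each model's class probabilities weighted by its validation precision. The abstract does not specify the exact weighting or normalization used in the paper.

```python
# Sketch of precision-weighted model averaging and top-k evaluation.
import numpy as np
from sklearn.metrics import precision_score

def blend_models(models, X_val, y_val, X_test):
    """Precision-weighted average of each model's predicted class probabilities."""
    weights, probs = [], []
    for m in models:
        # Weight each candidate by its macro precision on the validation set.
        w = precision_score(y_val, m.predict(X_val), average="macro", zero_division=0)
        weights.append(w)
        probs.append(m.predict_proba(X_test))
    weights = np.asarray(weights) / (np.sum(weights) + 1e-12)
    return np.tensordot(weights, np.stack(probs), axes=1)  # shape (n_test, n_classes)

def topk_accuracy(blended_probs, y_true, k=2):
    """Top-k accuracy, matching the top-1/top-2 metrics reported above."""
    topk = np.argsort(blended_probs, axis=1)[:, -k:]
    return float(np.mean([y in row for y, row in zip(y_true, topk)]))
```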

    Deep Learning for Distinguishing Normal versus Abnormal Chest Radiographs and Generalization to Unseen Diseases

    Chest radiography (CXR) is the most widely used thoracic clinical imaging modality and is crucial for guiding the management of cardiothoracic conditions. The detection of specific CXR findings has been the main focus of several artificial intelligence (AI) systems. However, the wide range of possible CXR abnormalities makes it impractical to build specific systems to detect every possible condition. In this work, we developed and evaluated an AI system to classify CXRs as normal or abnormal. For development, we used a de-identified dataset of 248,445 patients from a multi-city hospital network in India. To assess generalizability, we evaluated our system on 6 international datasets from India, China, and the United States. Of these datasets, 4 focused on diseases that the AI was not trained to detect: 2 datasets with tuberculosis and 2 datasets with coronavirus disease 2019. Our results suggest that the AI system generalizes to new patient populations and abnormalities. In a simulated workflow where the AI system prioritized abnormal cases, the turnaround time for abnormal cases was reduced by 7-28%. These results represent an important step toward evaluating whether AI can be safely used to flag cases in a general setting where previously unseen abnormalities exist.
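
An illustrative sketch of the kind of workflow simulation described above: a reading queue is reordered so AI-flagged cases are read first, and the mean turnaround time for truly abnormal cases is compared against the first-come-first-served baseline. The reading time and queue contents below are hypothetical, not parameters from the study.

```python
# Sketch of a triage-queue simulation for abnormal-case prioritization.
import numpy as np

def mean_abnormal_turnaround(ai_flags, truth, read_minutes=2.0, prioritize=False):
    """Mean time (minutes) until each truly abnormal case has been read."""
    order = np.arange(len(truth))
    if prioritize:
        # Stable sort: AI-flagged cases first, otherwise original arrival order.
        order = order[np.argsort(~np.asarray(ai_flags, dtype=bool), kind="stable")]
    finish_time = {case: (i + 1) * read_minutes for i, case in enumerate(order)}
    abnormal = np.flatnonzero(truth)
    return float(np.mean([finish_time[c] for c in abnormal]))

# Example with a hypothetical queue of 6 cases (1 = abnormal) and imperfect flags:
truth = np.array([0, 1, 0, 0, 1, 0])
flags = np.array([0, 1, 0, 1, 1, 0])
baseline = mean_abnormal_turnaround(flags, truth, prioritize=False)
triaged = mean_abnormal_turnaround(flags, truth, prioritize=True)
print(f"abnormal-case turnaround: {baseline:.1f} -> {triaged:.1f} minutes")
```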

    ELIXR: Towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders

    Our approach, which we call Embeddings for Language/Image-aligned X-Rays, or ELIXR, leverages a language-aligned image encoder combined with, or grafted onto, a fixed LLM, PaLM 2, to perform a broad range of tasks. We train this lightweight adapter architecture using images paired with corresponding free-text radiology reports from the MIMIC-CXR dataset. ELIXR achieved state-of-the-art performance on zero-shot chest X-ray (CXR) classification (mean AUC of 0.850 across 13 findings), data-efficient CXR classification (mean AUCs of 0.893 and 0.898 across five findings (atelectasis, cardiomegaly, consolidation, pleural effusion, and pulmonary edema) for 1% (~2,200 images) and 10% (~22,000 images) of the training data), and semantic search (0.76 normalized discounted cumulative gain (NDCG) across nineteen queries, including perfect retrieval on twelve of them). Compared to existing data-efficient methods, including supervised contrastive learning (SupCon), ELIXR required two orders of magnitude less data to reach similar performance. ELIXR also showed promise on CXR vision-language tasks, demonstrating overall accuracies of 58.7% and 62.5% on visual question answering and report quality assurance tasks, respectively. These results suggest that ELIXR is a robust and versatile approach to CXR AI.
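
A minimal sketch of how a language-aligned image encoder can support zero-shot CXR classification by comparing an image embedding against text-prompt embeddings. The encoder callables and prompt wording are placeholders; the abstract does not specify ELIXR's actual scoring scheme.

```python
# Sketch of zero-shot classification with a language-aligned image encoder.
import numpy as np

def zero_shot_scores(image_emb, embed_text, findings):
    """Return a probability-like score in [0, 1] for each finding."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scores = {}
    for finding in findings:
        # Hypothetical positive/negative prompts for each finding.
        pos = embed_text(f"chest x-ray with {finding}")
        neg = embed_text(f"chest x-ray without {finding}")
        s_pos, s_neg = cos(image_emb, pos), cos(image_emb, neg)
        # Softmax over the two prompt similarities yields the finding score.
        scores[finding] = float(np.exp(s_pos) / (np.exp(s_pos) + np.exp(s_neg)))
    return scores

# Usage with hypothetical encoders:
# scores = zero_shot_scores(image_encoder(cxr), text_encoder,
#                           ["atelectasis", "cardiomegaly", "pleural effusion"])
```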

    Evaluation and Mitigation of Racial Bias in Clinical Machine Learning Models: Scoping Review

    Background: Racial bias is a key concern regarding the development, validation, and implementation of machine learning (ML) models in clinical settings. Despite the potential of bias to propagate health disparities, racial bias in clinical ML has yet to be thoroughly examined, and best practices for bias mitigation remain unclear. Objective: Our objective was to perform a scoping review to characterize the methods by which the racial bias of ML has been assessed and to describe strategies that may be used to enhance algorithmic fairness in clinical ML. Methods: A scoping review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) Extension for Scoping Reviews. A literature search using the PubMed, Scopus, and Embase databases, as well as Google Scholar, identified 635 records, of which 12 studies were included. Results: Applications of ML were varied and involved diagnosis, outcome prediction, and clinical score prediction performed on data sets including images, diagnostic studies, clinical text, and clinical variables. Of the 12 studies, 1 (8%) described a model in routine clinical use, 2 (17%) examined prospectively validated clinical models, and the remaining 9 (75%) described internally validated models. In addition, 8 (67%) studies concluded that racial bias was present, 2 (17%) concluded that it was not, and 2 (17%) assessed the implementation of bias mitigation strategies without comparison to a baseline model. Fairness metrics used to assess algorithmic racial bias were inconsistent. The most commonly observed metrics were equal opportunity difference (5/12, 42%), accuracy (4/12, 25%), and disparate impact (2/12, 17%). All 8 (67%) studies that implemented methods for mitigation of racial bias successfully increased fairness, as measured by the authors’ chosen metrics. Preprocessing methods of bias mitigation were most commonly used across all studies that implemented them. Conclusions: The broad scope of medical ML applications and potential patient harms demand an increased emphasis on the evaluation and mitigation of racial bias in clinical ML. However, the adoption of algorithmic fairness principles in medicine remains inconsistent and is limited by poor data availability and ML model reporting. We recommend that researchers and journal editors emphasize standardized reporting and data availability in medical ML studies to improve transparency and facilitate evaluation for racial bias.
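
For reference, two of the fairness metrics most often reported in the reviewed studies can be computed from their standard definitions; the review itself does not prescribe an implementation, and the binary group coding below (1 = privileged, 0 = unprivileged) is an assumption.

```python
# Standard-definition implementations of two fairness metrics named above.
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rate: unprivileged group minus privileged group."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def tpr(mask):
        pos = (y_true == 1) & mask
        return np.mean(y_pred[pos]) if pos.any() else np.nan
    return float(tpr(group == 0) - tpr(group == 1))

def disparate_impact(y_pred, group):
    """Ratio of positive-prediction rates: unprivileged over privileged."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_unpriv = np.mean(y_pred[group == 0])
    rate_priv = np.mean(y_pred[group == 1])
    return float(rate_unpriv / rate_priv) if rate_priv > 0 else np.nan
```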