    Generating nonverbal indicators of deception in virtual reality training

    Old Dominion University (ODU) has been performing research on training in virtual environments. The research involves both computer-controlled agents and human participants taking part in a peacekeeping scenario in which various skills-based tasks are trained and evaluated in a virtual environment. The scenario is a checkpoint operation in a typical third-world urban area. The trainee is presented with innocuous encounters until a subtle but highly important change surfaces, and the trainee must react appropriately or risk injury to himself or his teammate. Although the tasks are mainly skill-based, many are closely tied to a judgment the trainee must make. Judgment-based tasks are becoming prevalent, yet they are far more difficult to train and not well understood. Of particular interest is an understanding of the additional constraints that elicit emotional responses in judgment-based military scenarios. This paper describes ongoing research in creating affective component behaviors used to convey cues for anger, nervousness, and deception in Operations Other than War (OOTW) training.

    Automated extraction of knowledge for model-based diagnostics

    The concept of accessing computer-aided design (CAD) databases and automatically extracting a process model is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.
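
    As an illustration only (not the AKG system's actual schema), the sketch below shows one way a frame-like component description carrying label and connectivity data could be represented; the class, field, and component names are hypothetical.

```python
# Illustrative only: a frame-like representation of components and their
# connectivity, of the kind a CAD-to-knowledge-base extractor might emit.
# Field and component names are hypothetical, not the AKG system's schema.
from dataclasses import dataclass, field

@dataclass
class ComponentFrame:
    name: str                      # label taken from the CAD drawing
    component_type: str            # looked up in a component library
    ports: list[str]               # terminals defined by the component type
    connections: dict[str, str] = field(default_factory=dict)  # port -> net label

valve = ComponentFrame("V101", "solenoid_valve", ["inlet", "outlet", "coil"])
valve.connections = {"inlet": "N1", "outlet": "N2", "coil": "CTRL_BUS"}

# A model-based diagnoser would traverse such frames, propagating expected
# behavior along the connectivity and comparing it against observations.
print(valve)
```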

    An Integrated Computer-Aided Robotic System for Dental Implantation

    This paper describes an integrated system for dental implantation that includes both preoperative planning using computer-aided technology and automatic robot operation during the intra-operative stage. A novel two-step registration procedure, aided by a Coordinate Measurement Machine (CMM), was applied to transfer the preoperative plan to the robot's operation. Experiments with a patient-specific phantom were carried out to evaluate the registration error in both position and orientation. After several improvements were adopted, the registration accuracy of the system improved significantly: sub-millimeter accuracy was achieved, with Target Registration Errors (TREs) of 0.38±0.16 mm (N=5). The target orientation errors after registration and after phantom drilling were 0.92±0.16° (N=5) and 1.99±1.27° (N=14), respectively. These results support the ultimate goal of an automated robotic system for dental implantation.
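
    The paper's two-step CMM-based registration procedure is not reproduced here, but the sketch below illustrates the underlying idea such an evaluation rests on: a least-squares rigid registration between planned and measured fiducials (the Kabsch/SVD method), followed by a Target Registration Error check at a separate target point. The fiducial and target coordinates are made-up values.

```python
# Minimal sketch, not the paper's actual pipeline: rigid point-set
# registration via the Kabsch/SVD method, then a Target Registration
# Error (TRE) check at a target point. All coordinates are illustrative.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # proper rotation (det = +1)
    t = dst_c - R @ src_c
    return R, t

# Ground-truth pose used only to fabricate "measured" CMM data.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([5.0, 2.0, 1.0])

planned = np.array([[0, 0, 0], [30, 0, 0], [0, 30, 0], [0, 0, 30]], float)
measured = planned @ R_true.T + t_true

R, t = rigid_register(planned, measured)

# TRE: distance between the transformed planned target and its true position.
target = np.array([10.0, 12.0, 8.0])
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"TRE = {tre:.4f} mm")
```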

    Orientation Invariant ECG-Based Stethoscope Tracking for Heart Auscultation Training on Augmented Standardized Patients

    Auscultation, the act of listening to heart and lung sounds, can reveal substantial information about a patient's health and cardiac-related problems; competent training is therefore key to accurate and reliable diagnosis. Standardized patients (SPs), healthy individuals trained to portray real patients, have been used extensively for such training and other medical teaching techniques; however, the range of symptoms and conditions they can simulate remains limited, since they are only patient actors. In this work, we describe a novel tracking method for placing virtual symptoms in the correct auscultation areas based on recorded ECG signals with various stethoscope diaphragm orientations; this augmented reality simulation would extend the capabilities of SPs and allow medical trainees to hear abnormal heart and lung sounds in a normal SP. ECG signals recorded from two different SPs over a wide range of stethoscope diaphragm orientations were processed and analyzed to accurately distinguish four heart auscultation areas (aortic, mitral, pulmonic, and tricuspid) for any stethoscope orientation. After processing the signals and extracting relevant features, different classifiers were applied to assess the proposed method; accuracies of 95.1% and 87.1% were obtained for SP1 and SP2, respectively. The proposed system provides an efficient, non-invasive, and cost-effective method for training medical practitioners in heart auscultation.
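
    The specific ECG features and classifiers used in the paper are not listed here; the sketch below only illustrates the general pattern the abstract describes, mapping per-placement ECG segments to one of the four auscultation areas with off-the-shelf tools. The features and the synthetic data are placeholders.

```python
# Illustrative sketch only: the paper's exact ECG features and classifiers are
# not given here, so this just shows the generic classify-the-placement pattern.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

AREAS = ["aortic", "mitral", "pulmonic", "tricuspid"]

def simple_features(segment):
    """Toy time-domain features for one ECG segment (1-D array)."""
    return [segment.mean(), segment.std(), np.abs(np.diff(segment)).mean(),
            np.percentile(segment, 95) - np.percentile(segment, 5)]

rng = np.random.default_rng(0)
# Placeholder data: 200 synthetic segments per auscultation area.
X = np.array([simple_features(rng.normal(scale=1 + k, size=1000))
              for k, _ in enumerate(AREAS) for _ in range(200)])
y = np.repeat(np.arange(len(AREAS)), 200)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```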

    Center-to-Limb Variation of the Polarization of Mg II H & K Lines as Measured by CLASP2

    Magnetograms of the upper chromosphere are needed for accurate magnetic extrapolations into the corona. The CLASP2 sounding rocket obtained spatially resolved spectropolarimetric data of the Mg II h & k lines in the upper chromosphere, which can serve as a pathfinder toward routine chromospheric magnetograms. This work presents preliminary results on the center-to-limb variation (CLV) of the linear polarization in the quiet Sun. We compare the observed signals to recent theoretical calculations of the expected polarization that include partial frequency redistribution (PRD), J-state interference, and magneto-optical effects.

    Deep Models for Engagement Assessment With Scarce Label Information

    Task engagement is defined as loadings on energetic arousal (affect), task motivation, and concentration (cognition) [1]. Labeling cognitive state data is usually challenging and expensive, and traditional computational models trained with limited label information for engagement assessment do not perform well because of overfitting. In this paper, we propose two deep models (a deep classifier and a deep autoencoder) for engagement assessment with scarce label information. We recruited 15 pilots to conduct a 4-h flight simulation from Seattle to Chicago and recorded their electroencephalograph (EEG) signals during the simulation. Experts carefully examined the EEG signals and labeled 20 min of the EEG data for each pilot. The EEG signals were preprocessed and power spectral features were extracted. The deep models were pretrained with the unlabeled data and fine-tuned with different proportions of the labeled data (the top 1%, 3%, 5%, 10%, 15%, and 20%) to learn new representations for engagement assessment. The models were then tested on the remaining labeled data. We compared the performance of the new data representations with that of the original EEG features for engagement assessment. Experimental results show that the representations learned by the deep models yielded better accuracies for the six scenarios (77.09%, 80.45%, 83.32%, 85.74%, 85.78%, and 86.52%), based on the different proportions of labeled data used for training, than the corresponding accuracies (62.73%, 67.19%, 73.38%, 79.18%, 81.47%, and 84.92%) achieved with the original EEG features. Deep models are effective for engagement assessment, especially when only limited label information is available for training.
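
    The authors' exact architectures and training schedule are not reproduced here; the sketch below (PyTorch, with placeholder dimensions and random stand-in data) only illustrates the two-stage pattern the abstract describes: unsupervised autoencoder pretraining on unlabeled feature vectors, followed by supervised fine-tuning of the encoder plus a small classifier head on a small labeled subset.

```python
# Hedged sketch, not the authors' exact models: autoencoder pretraining on
# unlabeled EEG feature vectors, then fine-tuning the encoder with a small
# classifier head on a small labeled fraction. Dimensions are placeholders.
import torch
import torch.nn as nn

n_features = 64                              # e.g. PSD features per epoch

encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                        nn.Linear(32, 16), nn.ReLU())
decoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                        nn.Linear(32, n_features))

X_unlabeled = torch.randn(5000, n_features)  # placeholder unlabeled data
X_labeled = torch.randn(100, n_features)     # placeholder labeled subset
y_labeled = torch.randint(0, 2, (100,))      # engaged vs. not engaged

# 1) Unsupervised pretraining: reconstruct the unlabeled feature vectors.
ae = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(X_unlabeled), X_unlabeled)
    loss.backward()
    opt.step()

# 2) Supervised fine-tuning of the pretrained encoder on the labeled fraction.
model = nn.Sequential(encoder, nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X_labeled), y_labeled)
    loss.backward()
    opt.step()
```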

    Automatic Diagnosis for Prostate Cancer Using Run-Length Matrix Method

    Prostate cancer is the most common type of cancer and the second leading cause of cancer death among men in the US [1]. Quantitative assessment of prostate histology offers the potential for automatic classification of prostate lesions and prediction of response to therapy. Traditionally, prostate cancer diagnosis is made by analyzing prostate-specific antigen (PSA) levels and histopathological images of biopsy samples under the microscope. In this application, we utilize a texture analysis method based on the run-length matrix to identify tissue abnormalities in prostate histology. A tissue sample was collected from a radical prostatectomy, H&E fixed, and assessed by a pathologist as normal tissue or prostatic carcinoma (PCa). The sample was then digitized at 50X magnification. We divided the digitized image into sub-regions of 20 × 20 pixels and classified each sub-region as normal or PCa using texture analysis. In the texture analysis, we computed texture features for each sub-region based on the Gray-Level Run-Length Matrix (GL-RLM). These features include LGRE, HGRE, and RPC from the run-length matrix, plus the mean and standard deviation of the pixel intensity. We utilized a feature selection algorithm to select a set of effective features and used a multi-layer perceptron (MLP) classifier to distinguish normal regions from PCa. In total, the histological image was divided into 42 PCa and 6280 normal regions. Three-fold cross-validation results show that the proposed method achieves an average classification accuracy of 89.5%, with a sensitivity and specificity of 90.48% and 89.49%, respectively.
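
    The LGRE, HGRE, and RPC features named above have standard run-length-matrix definitions; as a rough illustration, the sketch below computes them for horizontal runs on a small quantized patch. The gray-level quantization and run directions used in the paper may differ.

```python
# Sketch of standard GLRLM features for one direction (horizontal runs).
# The paper's gray-level quantization and run directions may differ.
import numpy as np

def glrlm_features(patch, n_levels):
    """Return (LGRE, HGRE, RPC) for horizontal runs in a 2-D patch whose
    values are integer gray levels in 1..n_levels."""
    max_run = patch.shape[1]
    P = np.zeros((n_levels, max_run))      # P[i-1, j-1]: runs of level i, length j
    for row in patch:
        start = 0
        for k in range(1, len(row) + 1):
            if k == len(row) or row[k] != row[start]:
                P[row[start] - 1, k - start - 1] += 1
                start = k
    n_runs = P.sum()
    levels = np.arange(1, n_levels + 1)[:, None]
    lgre = (P / levels**2).sum() / n_runs  # low gray-level run emphasis
    hgre = (P * levels**2).sum() / n_runs  # high gray-level run emphasis
    rpc = n_runs / patch.size              # run percentage
    return lgre, hgre, rpc

patch = np.array([[1, 1, 2, 2], [3, 3, 3, 1], [2, 2, 1, 1]])  # toy 3x4 patch
print(glrlm_features(patch, n_levels=3))
```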

    Design and Comparison of Immersive Interactive Learning and Instructional Techniques for 3D Virtual Laboratories

    This work presents the design, development, and testing of 3D virtual laboratories for practice, specifically in undergraduate mechanical engineering laboratories. The 3D virtual laboratories, implemented in two virtual environments, 3DTV and the Computer Automated Virtual Environment (CAVE), serve as pre-lab sessions performed before the actual physical laboratory experiment. The study compares the influence of two instructional methods (conventional lecture-based and inquiry-based) in the two virtual environments, and the results are compared with pre-lab sessions using a traditional paper-based lab manual. The evaluation is done by conducting performance and quantitative assessments of students' pre- and post-laboratory performance. The results demonstrate that students in the virtual modules (3DTV and CAVE) performed significantly better in the actual physical experiment than students in the control group in terms of overall familiarity with the experiment and its procedure and the conceptual knowledge associated with it. © 2015 by the Massachusetts Institute of Technology.

    Engineering Collaborations in Medical Modeling and Simulation

    Fifty years ago, computer science was just beginning to see common acceptance as a growing discipline, and very few universities had a computer science department, although other departments were utilizing computers and software to enhance their methodologies. We believe modeling and simulation (M&S) is on a similar path. Many other disciplines utilize M&S to enhance their methodologies, but we also believe that M&S fundamentals can be essential to making better decisions: utilizing the appropriate model for the problem at hand, expanding the solution space through simulation, and understanding it through visualization and proper analyses. After our students learn these fundamentals, we offer them the opportunity to apply them to varied application areas. One such application area is medical M&S, a broad area involving anatomical modeling, planning and training simulations, image-guided procedures, and more. In this paper, we share several research projects involving medical M&S and the collaborations that make them possible.

    Engagement Assessment Using EEG Signals

    In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach consisting of data preprocessing, feature extraction, and engagement state classification. During data preprocessing, spikes, baseline drift, and saturation caused by the recording devices are identified and eliminated from the EEG signals, and a wavelet-based method is used to remove ocular and muscular artifacts from the EEG recordings. In feature extraction, power spectral densities in 1 Hz bins are calculated as features, and these features are analyzed using the Fisher score and one-way ANOVA. In the classification step, a committee classifier is trained on the extracted features to assess engagement status. Finally, experimental results showed significant differences in the extracted features among subjects; a feature normalization procedure was implemented to mitigate these differences and significantly improved engagement assessment performance.
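
    As a rough illustration of the feature-extraction and normalization steps described above (not the authors' actual code), the sketch below computes 1 Hz-bin power spectral density features with Welch's method and applies a per-subject z-score normalization; the sampling rate, channel count, band limits, and the specific normalization scheme are assumptions.

```python
# Sketch: 1 Hz-bin power spectral density features via Welch's method plus a
# per-subject z-score normalization step. Sampling rate, channel count, band
# limits, and the normalization itself are placeholder assumptions.
import numpy as np
from scipy.signal import welch

FS = 256                                     # sampling rate (Hz), assumed

def psd_features(eeg_epoch, fmin=1, fmax=40):
    """eeg_epoch: (n_channels, n_samples). Returns per-channel band powers
    in 1 Hz bins from fmin to fmax, flattened to one feature vector."""
    freqs, psd = welch(eeg_epoch, fs=FS, nperseg=FS, axis=-1)  # 1 Hz resolution
    feats = [psd[:, (freqs >= f) & (freqs < f + 1)].mean(axis=-1)
             for f in range(fmin, fmax)]
    return np.concatenate(feats)

def normalize_per_subject(X):
    """Z-score each feature within one subject to reduce inter-subject shifts."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

epochs = np.random.randn(50, 8, FS * 4)      # 50 epochs, 8 channels, 4 s each
X = np.vstack([psd_features(ep) for ep in epochs])
X = normalize_per_subject(X)
```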