
    Creating smarter teaching and training environments: innovative set-up for collaborative hybrid learning

    This paper brings together previous work from a number of research projects and teaching initiatives in an effort to introduce good practice in setting up supportive environments for collaborative learning. The paper discusses prior use of social media in learning support, the role of dashboards for learning analytics in Global Software Development training, the use of optical head-mounted displays for feedback, and the use of NodeXL visualization in managing distributed teams. The scope of the paper is to provide a structured approach to organizing the creation of smarter teaching and training environments and to explore ways to coordinate learning scenarios using various techniques. The paper also discusses the challenges of integrating multiple innovative features in educational contexts. Finally, the paper attempts to investigate the use of smart laboratories in establishing additional learning support and to gather primary data from blended and hybrid learning pilot studies.

    Can We Calculate Mean Arterial Pressure in Humans?

    Mean arterial pressure (MAP) is either measured with an oscillometric cuff, with systolic (SBP) and diastolic (DBP) blood pressures then estimated by an unknown algorithm; or SBP and DBP are measured via auscultation and MAP is calculated from SBP, DBP, and a form factor (FF; equation: MAP = [(SBP-DBP)*FF]+DBP). The typical FF used is 0.33, though others (e.g. 0.4) have been proposed. Recent work indicates that estimating aortic MAP via a FF leads to inaccurate values that should therefore be interpreted with caution; whether this is the case for local MAP is unknown. While the implications for hypertension (HTN) diagnosis are minimal, the calculation of local MAP is essential to the study of blood pressure regulation and exercise hemodynamics in patient populations (e.g. heart failure). PURPOSE: To compare local MAP calculated from catheter waveforms using a FF against MAP derived from the pressure-time integral (PTI; i.e. the average pressure across the cardiac cycle) measured via radial arterial catheterization. METHODS: We analyzed radial arterial catheter waveforms from 39 patients (age: 71±7 years; BMI: 38.4±6.7 kg/m²; female: 66%; HTN prevalence: 97%) with heart failure with preserved ejection fraction (HFpEF) at rest and during cycling exercise at 20 Watts. We compared the PTI (from the catheter waveform) with MAP calculated from the peak and nadir of the same waveforms (5-beat averages) using the 0.33 and 0.4 FFs in the FF equation. RESULTS: Compared to the PTI (91±13 mmHg), resting MAP was not significantly different when calculated using the 0.33 FF (91±11 mmHg, P>0.999) but was higher when using the 0.4 FF (96±12 mmHg, P…). CONCLUSION: While the 0.33 FF provides an accurate assessment of MAP on average at rest and during exercise in the radial artery in patients with HFpEF, the limits of agreement are large, reflecting a lack of precision in measurement at the individual level. Indirect calculation of MAP via a FF may lead to inaccurate conclusions regarding the mechanisms of blood pressure regulation both at rest and during exercise testing in this population.
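    The two estimates compared in this abstract can be sketched in a few lines. The waveform below is a synthetic illustrative beat, not patient data, and the sampling rate is chosen arbitrarily:

    ```python
    import numpy as np

    def map_form_factor(sbp, dbp, ff=0.33):
        """Form-factor estimate: MAP = [(SBP - DBP) * FF] + DBP."""
        return (sbp - dbp) * ff + dbp

    def map_pressure_time_integral(pressure):
        """MAP as the time average of a uniformly sampled pressure waveform
        over complete cardiac cycles (pressure-time integral / duration)."""
        return float(np.mean(pressure))

    # Illustrative synthetic beat: 1-s cycle sampled at 100 Hz,
    # diastolic 80 mmHg with a 40-mmHg systolic upstroke
    t = np.linspace(0, 1, 100, endpoint=False)
    wave = 80 + 40 * np.clip(np.sin(2 * np.pi * t), 0, None)

    print(map_form_factor(120, 80, ff=0.33))          # 0.33 form factor
    print(map_form_factor(120, 80, ff=0.4))           # 0.4 form factor
    print(round(map_pressure_time_integral(wave), 1))  # waveform average
    ```

    For this toy waveform the 0.33 form factor lands close to the time average, mirroring the abstract's finding that 0.33 is accurate on average while 0.4 overestimates.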

    Uniformity of imaging spectrometry data products


    Automatic calibration and correction scheme for APEX (Airborne Prism Experiment)

    Hyperspectral sensors provide a large amount of both spatial and spectral information. Calibration plays an important role in the efficient use of such a rich data source; however, it is extremely time consuming if undertaken with traditional strategies. Recent studies demonstrated that various non-uniformities and detector imperfections drastically degrade hyperspectral data quality if not characterized and corrected for. The APEX (Airborne Prism Experiment) spectrometer adopts an automatic calibration and characterization strategy with the ultimate goal of providing scientific products of very high accuracy. This strategy relies on the control test master (CTM), an advanced software/hardware system able to independently control the instrumentation and to process, online or offline, the large amount of data acquired to characterize such a sophisticated instrument. Those data, once processed by the master processor, generate several coefficients that in turn feed the processing and archiving facility (PAF), a software module that calibrates the acquired scenes and corrects for artefacts and non-uniformities.
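    As a rough illustration of what a calibration module like the PAF does with such coefficients, a minimal per-pixel radiometric correction might look like the sketch below. The dark-current and gain values are hypothetical; the actual APEX coefficient set is far richer:

    ```python
    import numpy as np

    def calibrate_frame(raw_dn, dark, gain):
        """Convert raw digital numbers to calibrated units:
        L = gain * (DN - dark), applied per detector element so that
        pixel-to-pixel non-uniformities are corrected."""
        return gain * (raw_dn.astype(float) - dark)

    # Hypothetical 4x3 detector frame (spatial x spectral)
    raw = np.array([[105, 210, 330],
                    [98, 205, 322],
                    [110, 215, 335],
                    [102, 208, 328]])
    dark = np.full((4, 3), 100.0)  # per-pixel dark-current offset
    gain = np.full((4, 3), 0.05)   # per-pixel radiometric gain
    print(calibrate_frame(raw, dark, gain))
    ```

    Because `dark` and `gain` are stored per detector element rather than as scalars, the same code also removes fixed-pattern non-uniformities across the array.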

    A Faceoff with Hazardous Noise: Noise Exposure and Hearing Threshold Shifts of Indoor Hockey Officials

    Noise exposure and hearing thresholds of indoor hockey officials of the Western States Hockey League were measured to assess the impact of hockey game noise on hearing sensitivity. Twenty-nine hockey officials who officiated the league in an arena in southeastern Wyoming in October, November, and December 2014 participated in the study. Personal noise dosimetry was conducted to determine if officials were exposed to an equivalent sound pressure level greater than 85 dBA. Hearing thresholds were measured before and after hockey games to determine if a temporary threshold shift (TTS) in hearing of 10 dB or greater occurred. Pure-tone audiometry was conducted in both ears at 500, 1000, 2000, 3000, 4000, 6000, and 8000 Hz. All noise exposures were greater than 85 dBA, with a mean personal noise exposure level of 93 dBA (SD = 2.2), providing 17.7% (SD = 6.3) of the officials' daily noise dose according to the OSHA criteria. Hearing threshold shifts of 10 dB or greater were observed in 86.2% (25/29) of officials, with 36% (9/25) of those threshold shifts equaling 15 dB or greater. The largest proportion of hearing threshold shifts occurred at 4000 Hz, comprising 35.7% of right-ear shifts and 31.8% of left-ear shifts. The threshold shifts between the pre- and post-game audiometry were statistically significant in the left ear at 500 (p=.019), 2000 (p=.0009), 3000 (p<.0001) and 4000 Hz (p=.0002), and in the right ear at 2000 (p=.0001), 3000 (p=.0001) and 4000 Hz (p<.0001), based on Wilcoxon rank-sum analysis.
Although not statistically significant at alpha = 0.05, logistic regression indicated that with each one-dB increase in equivalent sound pressure level measured by personal noise dosimetry, the odds of a ≥ 10 dB TTS increased in the left ear at 500 (OR=1.33, 95% CI 0.73-2.45), 3000 (OR=1.02, 95% CI 0.68-1.51), 4000 (OR=1.26, 95% CI 0.93-1.71) and 8000 Hz (OR=1.22, 95% CI 0.76-1.94), and in the right ear at 6000 (OR=1.03, 95% CI 0.14-7.84) and 8000 Hz (OR=1.29, 95% CI 0.12-13.83). These findings suggest that indoor hockey officials are exposed to hazardous levels of noise, that they experience temporary hearing loss after officiating games, and that a hearing conservation program is warranted. Further research on temporary threshold shifts has the potential to identify officials of other sporting events who are at increased risk of noise-induced hearing loss.
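    The OSHA daily dose quoted above follows the standard 90-dBA criterion with a 5-dB exchange rate. A minimal sketch of that calculation for a single steady exposure (the one-hour duration is chosen for illustration only):

    ```python
    def osha_permissible_hours(level_dba, criterion=90.0, exchange=5.0):
        """Permissible exposure time T = 8 / 2**((L - 90) / 5) under
        OSHA's 90-dBA criterion with a 5-dB exchange rate."""
        return 8.0 / 2 ** ((level_dba - criterion) / exchange)

    def osha_dose_percent(level_dba, hours):
        """Daily noise dose D = 100 * C / T for one steady exposure,
        where C is the actual duration and T the permissible duration."""
        return 100.0 * hours / osha_permissible_hours(level_dba)

    # Example: the study's mean level of 93 dBA, for one hour of exposure
    print(round(osha_permissible_hours(93.0), 2))  # 5.28 h permitted
    print(round(osha_dose_percent(93.0, 1.0), 1))  # 18.9 % of daily dose
    ```

    A multi-segment exposure sums the per-segment C/T ratios; the sketch above covers only the single-level case.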

    Cluster versus grid for large-volume hyperspectral image preprocessing

    The handling of satellite or airborne earth observation data for scientific applications minimally requires pre-processing to convert raw digital numbers into scientific units. However, depending on sensor characteristics and architecture, additional work may be needed to achieve spatial and/or spectral uniformity. Standard higher-level processing also typically involves orthorectification and atmospheric correction. Fortunately, some of the computational tasks required to perform radiometric and geometric calibration can be decomposed into highly independent subtasks, making this processing highly parallelizable. Such "embarrassingly parallel" problems provide the luxury of being able to choose between cluster- or grid-based solutions to perform these functions. Perhaps the most convenient solutions are grid-based, since most research groups making these kinds of measurements are likely to have access to a LAN whose spare computing resources could be non-obtrusively employed in a grid. However, since many higher-level scientific applications of earth observation data may be composed of more highly interdependent subtasks, the parallel computing resources allocated for those tasks might also be made available for low-level pre-processing as well. We look at two modules developed for our prototype data calibration processor for APEX, an airborne imaging spectrometer, which have been implemented on both a cluster and a grid, allowing us to make observations and comparisons of the two approaches.
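    A minimal sketch of the "embarrassingly parallel" decomposition described above, using a local process pool as a stand-in for a cluster or grid scheduler. The per-line calibration and the coefficient values are illustrative, not the APEX processor's actual code:

    ```python
    from multiprocessing import Pool

    import numpy as np

    def calibrate_line(args):
        """Radiometric calibration of one scan line (DN -> scientific units).
        Each line depends only on its own data, so lines can be processed
        in any order on any worker: an embarrassingly parallel workload."""
        line, dark, gain = args
        return gain * (line - dark)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic raw cube: 1000 scan lines x 512 spectral bands
        cube = rng.integers(100, 400, size=(1000, 512)).astype(float)
        dark, gain = 100.0, 0.05  # illustrative scalar coefficients
        with Pool() as pool:  # a cluster/grid scheduler distributes the same way
            lines = pool.map(calibrate_line, [(l, dark, gain) for l in cube])
        calibrated = np.vstack(lines)
        print(calibrated.shape)  # (1000, 512)
    ```

    The same `calibrate_line` task graph maps onto either back-end; what differs between cluster and grid is scheduling, data movement, and reliability, not the decomposition itself.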