7 research outputs found

    Development of a prototype sensor-integrated urine bag for real-time measuring.

    Urine output is a rapid bedside test of kidney function, and reduced output is a common biomarker of acute kidney injury (AKI). The consensus definition of AKI uses a urine output below 0.5 ml/kg/hour for six hours or longer. When a patient is suspected of AKI, urine output must be monitored hourly, a task that consumes considerable nursing time and is prone to human error. Moreover, the available evidence in the literature indicates that more frequent monitoring can improve clinical decision making and patient outcomes, yet nurses cannot realistically dedicate their time to manual minute-by-minute measurement. To date, no reliable device is in routine clinical use: the literature reports only a few automated devices able to monitor urine output, reduce nurse workload, and improve work performance, and these still have limitations when measuring human urine. This thesis presents the development and testing of such a device. The research aimed to build a prototype that measures small urine volumes and transmits the information wirelessly to a cloud database using inexpensive, low-complexity components, providing real-time measurement and generating records in the cloud database without requiring any intervention by the nurse. The initial experiment measured small liquid volumes using a drop-volume calculation technique: an optical sensor placed in a medical dropper counted drops, and the test yielded a mean absolute percentage error of 3.96% when measuring 35 ml of liquid, compared against the ISO standard. A second prototype combined multiple sensors, including a photo-interrupter sensor, an infrared proximity sensor, and an ultrasonic sensor, to detect dripping and urine flow; the optical sensor remained the most accurate of all. The final prototype combines an optical drop-detection sensor, used to calculate urine flow rate and volume, with a weight scale that measures the collected urine in a commercial urine meter. The prototype also alerts the nurse in two scenarios: when urine production does not meet the target, and when the urine container is almost full, the system automatically generates a warning alarm. A series of experiments was conducted under the guidance of medical professionals to verify correct operation and measurement accuracy. The results improved on the previous prototype: the mean error of this version is 1.975%, or approximately ±1.215 ml, when measuring 35 ml of urine at the average urine density of 1.020. These tests confirm the potential of the device to assist nurses in monitoring urine output with accurate measurements. As far as can be ascertained, the use of cloud-based technology for this purpose has not previously been reported in the literature. These results illustrate the capability, suitability, and limitations of the chosen technology.
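    To make the measurement and alerting logic concrete, the following is a minimal sketch in Python. It is an illustration, not the thesis firmware: the 20-drops-per-ml factor (a standard macro-drip set, 0.05 ml per drop), the 90% container-full cutoff, and all class and method names are assumptions, while the 1.020 density and the 0.5 ml/kg/hour threshold come from the abstract.

    ```python
    # Minimal sketch of the drop-counting and alerting logic described above.
    # DROPS_PER_ML and the 90% cutoff are illustrative assumptions.
    from dataclasses import dataclass

    DROPS_PER_ML = 20                # assumed macro-drip set: 0.05 ml per drop
    URINE_DENSITY_G_PER_ML = 1.020   # average urine density used in the thesis
    OLIGURIA_ML_PER_KG_H = 0.5       # consensus AKI urine-output threshold

    @dataclass
    class UrineMonitor:
        patient_weight_kg: float
        container_capacity_ml: float
        drop_count: int = 0
        elapsed_hours: float = 0.0

        def record_drop(self) -> None:
            # Would be called from the optical sensor's interrupt handler.
            self.drop_count += 1

        @property
        def volume_ml(self) -> float:
            # Volume estimated from counted drops.
            return self.drop_count / DROPS_PER_ML

        def volume_from_weight_ml(self, grams: float) -> float:
            # Cross-check against the weight scale: volume = mass / density.
            return grams / URINE_DENSITY_G_PER_ML

        def flow_ml_per_kg_h(self) -> float:
            if self.elapsed_hours == 0:
                return 0.0
            return self.volume_ml / self.patient_weight_kg / self.elapsed_hours

        def alarms(self) -> list[str]:
            # The two alert scenarios described in the abstract.
            alerts = []
            if self.flow_ml_per_kg_h() < OLIGURIA_ML_PER_KG_H:
                alerts.append("urine output below target")
            if self.volume_ml >= 0.9 * self.container_capacity_ml:
                alerts.append("urine container almost full")
            return alerts

    # Example: 700 drops in one hour for a 70 kg patient -> 35 ml, 0.5 ml/kg/h.
    monitor = UrineMonitor(patient_weight_kg=70, container_capacity_ml=500)
    for _ in range(700):
        monitor.record_drop()
    monitor.elapsed_hours = 1.0
    print(monitor.volume_ml, monitor.flow_ml_per_kg_h(), monitor.alarms())
    ```

    In a real device the drop estimate and the weight-scale estimate would be reconciled rather than used independently; the sketch keeps them separate only to show both calculations.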

    Data_Sheet_4_Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository.PDF

    No full text
    Data from 255 Thai patients with chronic pain were collected at Chiang Mai Medical School Hospital. After the patients self-rated their level of pain, a smartphone camera was used to capture their faces for 10 s at a one-meter distance. For those unable to self-rate, a video recording was taken immediately after the movement that caused the pain. A trained assistant rated each video clip using the Pain Assessment in Advanced Dementia (PAINAD) scale, classifying pain into three levels: mild, moderate, and severe. OpenFace was used to convert the video clips into 18 facial action units (FAUs). Six classification models were used: logistic regression, multilayer perceptron, naïve Bayes, decision tree, k-nearest neighbors (KNN), and support vector machine (SVM). Among the models restricted to the FAUs described in the literature (FAUs 4, 6, 7, 9, 10, 25, 26, 27, and 45), the multilayer perceptron was the most accurate, at 50%. Among models using machine-learning-selected features, an SVM using FAUs 1, 2, 4, 7, 9, 10, 12, 20, 25, and 45 plus gender achieved the best accuracy, 58%. Our open-source pipeline for automatically extracting FAUs from video clips is therefore not yet robust enough for classifying pain in the elderly. A consensus method for transforming facial-recognition algorithm outputs so that they are comparable to human ratings, together with international good practice for reciprocal data sharing, may improve the accuracy and feasibility of machine-learning facial pain raters.
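    To make the modeling step concrete, the sketch below shows how such an SVM could be trained on per-clip FAU features. It is a minimal illustration under stated assumptions, not the authors' code: the CSV name and column layout are hypothetical, and OpenFace 2.x is assumed to emit per-frame AU intensity columns named like "AU04_r" that have been averaged into one row per clip.

    ```python
    # Hypothetical sketch of the best-performing model above: an SVM over ten
    # OpenFace action-unit features plus gender. File and column names are
    # assumptions, not taken from the study.
    import pandas as pd
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # FAUs reported for the best SVM model in the abstract.
    FAU_COLUMNS = [f"AU{n:02d}_r" for n in (1, 2, 4, 7, 9, 10, 12, 20, 25, 45)]

    df = pd.read_csv("pain_clips_features.csv")   # hypothetical per-clip features
    X = df[FAU_COLUMNS + ["gender"]]              # assumed 0/1-encoded gender
    y = df["pain_level"]                          # mild / moderate / severe

    # Scale features, then fit an RBF-kernel SVM; report 5-fold CV accuracy.
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.2f}")
    ```

    Cross-validated accuracy is shown because, with 255 clips and three classes, a single train/test split would give a noisy estimate; the study's reported 50% and 58% figures should be read against a ~33% chance baseline.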

    Data_Sheet_3_Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository.PDF

    No full text

    Data_Sheet_6_Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository.PDF

    No full text

    Data_Sheet_1_Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository.PDF

    No full text

    Data_Sheet_2_Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository.PDF

    No full text

    Data_Sheet_5_Classification of elderly pain severity from automated video clip facial action unit analysis: A study from a Thai data repository.PDF

    No full text