11 research outputs found

    Synthesis and Characterization of Polycarbonate Copolymers Containing Benzoyl Groups on the Side Chain for Scratch Resistance

    The purpose of this study was to enhance the scratch resistance of polycarbonate copolymers by using the 3,3′-dibenzoyl-4,4′-dihydroxybiphenyl (DBHP) monomer, which contains benzoyl moieties at the ortho positions. The DBHP monomer was synthesized from 4,4′-dihydroxybiphenyl and benzoyl chloride, followed by a Friedel–Crafts rearrangement with AlCl3. The polymerizations were conducted by the low-temperature procedure, carried out in methylene chloride using triphosgene, triethylamine, bisphenol-A, and DBHP. The chemical structures of the polycarbonate copolymers were confirmed by 1H-NMR. The thermal properties of the copolymers were investigated by thermogravimetric analysis and differential scanning calorimetry, and the surface morphologies were assessed by atomic force microscopy. The scratch resistance of the homopolymer film (100 μm) improved from 6B to 1B, and the contact angle of a sessile water drop on the homopolymer film also increased.

    Siamese Architecture-Based 3D DenseNet with Person-Specific Normalization Using Neutral Expression for Spontaneous and Posed Smile Classification

    No full text
    Clinical studies have demonstrated that spontaneous and posed smiles differ spatiotemporally in facial muscle movements, such as laterally asymmetric movements that engage different facial muscles. In this study, a model was developed that classifies videos of the two types of smile using a 3D convolutional neural network (CNN) with a Siamese architecture, taking a neutral expression as reference input. The proposed model makes the following contributions. First, it mitigates the problem caused by differences in appearance between individuals, because it learns the spatiotemporal differences between an individual's neutral expression and their spontaneous and posed smiles. Second, using a neutral expression as an anchor improves model accuracy compared with the conventional method using genuine and imposter pairs. Third, by using a neutral expression as the anchor image, a fully automated classification system for spontaneous and posed smiles can be built. In addition, visualizations were designed for the Siamese architecture-based 3D CNN to analyze the accuracy improvement and to compare the proposed and conventional methods through feature analysis using principal component analysis (PCA).
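    The anchor idea above can be sketched in a toy form: both clips pass through the same encoder, and the classifier only sees features relative to the neutral anchor. This is a plain-Python illustration with a trivial motion statistic standing in for the shared 3D-CNN branch; `encode` and `smile_score` are hypothetical names, not the authors' code.

```python
def encode(clip):
    """Toy shared 'encoder': summarize a clip (a list of frames, each a
    flat list of pixel intensities) by its mean inter-frame motion.
    It stands in for the shared 3D-CNN branch of the Siamese network."""
    diffs = [sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
             for prev, cur in zip(clip, clip[1:])]
    return sum(diffs) / len(diffs)

def smile_score(neutral_clip, smile_clip):
    """Siamese idea: both clips go through the SAME encoder, and the
    classifier sees only the difference relative to the neutral anchor,
    which factors out per-person appearance."""
    return encode(smile_clip) - encode(neutral_clip)

neutral = [[0, 0, 0, 0] for _ in range(5)]   # still face
smile = [[v, v, v, v] for v in range(5)]     # steadily moving face
print(smile_score(neutral, smile))           # prints 1.0
```

    Because only the difference from each person's own neutral clip reaches the classifier, two people with very different faces but similar smile dynamics map to similar scores.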

    A Real-Time Remote Respiration Measurement Method with Improved Robustness Based on a CNN Model

    No full text
    Human respiration reflects meaningful information, such as one’s health and psychological state. Respiration rate is an important indicator in medicine because it is directly related to life, death, and the onset of serious disease. In this study, we propose a noncontact method to measure respiration. Our approach uses a standard RGB camera and does not require any special equipment. Measurement is performed automatically by detecting body landmarks to identify regions of interest (RoIs). We adopt a learning model trained to measure motion and respiration by analyzing movement in RoI images, giving high robustness to background noise. We collected a remote respiration measurement dataset to train the proposed method and compared its measurement performance with that of representative existing methods. Experimentally, the proposed method performed similarly to existing methods in a stable environment with restricted motion, but significantly better in the presence of motion noise owing to its robustness. In an environment with partial occlusion and small body movements, the error of the existing methods was 4–8 bpm, whereas the error of the proposed method was around 0.1 bpm. In addition, by measuring the time required for each step of the respiration measurement process, we confirmed that the proposed method runs in real time at over 30 FPS using only a standard CPU. Since the proposed approach shows state-of-the-art accuracy, with an error of 0.1 bpm in the wild, it can be extended in future research to applications such as medicine, home healthcare, emotional marketing, forensic investigation, and fitness.
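    The final step of any such pipeline, turning a per-frame motion signal into breaths per minute, can be sketched with a naive DFT restricted to a plausible breathing band. This is an illustrative stand-in, not the paper's CNN; the 0.1–0.5 Hz band (6–30 bpm) is an assumption.

```python
import math

def respiration_rate_bpm(signal, fps):
    """Estimate respiration rate by finding the dominant frequency of a
    motion signal with a naive DFT, restricted to 0.1-0.5 Hz (6-30 bpm)."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]
    best_freq, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        freq = k * fps / n
        if not 0.1 <= freq <= 0.5:
            continue
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
    return best_freq * 60.0

fps = 30
# 20 s synthetic chest-motion signal breathing at 0.2 Hz = 12 bpm
sig = [math.sin(2 * math.pi * 0.2 * i / fps) for i in range(20 * fps)]
print(round(respiration_rate_bpm(sig, fps), 1))  # prints 12.0
```

    A real system would feed the model's per-frame RoI motion estimates into this step instead of a synthetic sine.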

    Multi-Currency Integrated Serial Number Recognition Model of Images Acquired by Banknote Counters

    No full text
    The objective of this study was to establish an automated system for recognizing banknote serial numbers by developing a deep learning (DL)-based optical character recognition framework. An integrated serial number recognition model was developed for the banknotes of four countries: South Korea (KRW), the United States (USD), India (INR), and Japan (JPY). One-channel image data obtained from banknote counters were used. The dataset for multi-currency integrated serial number recognition contains about 150,000 images. The class imbalance problem and model accuracy were improved through data augmentation based on geometric transforms that reflect the range of errors occurring when a bill is inserted into the counter. In addition, by fine-tuning the recognition network, it was confirmed that recognizing the serial numbers of all four currencies together outperformed recognizing each currency separately from a single-currency dataset, and that training on the diverse serial numbers of multiple currencies improved generalization. Although there is a tradeoff between inference speed and recognition accuracy when counter-specific data augmentation and a one-stage object detector are used, the proposed method achieves real-time processing of less than 30 ms per image and character recognition with 99.99% accuracy.
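    The geometric augmentation idea, generating shifted copies within the positional error range a counter can introduce, can be sketched in plain Python. `translate` and `augment` are hypothetical names, and the shift bound is illustrative, not the paper's measured error range.

```python
import random

def translate(img, dx, dy, fill=0):
    """Shift a 2D grayscale image (list of rows) by (dx, dy) pixels,
    padding vacated cells with `fill` -- mimicking the offset introduced
    when a bill enters the counter slightly misaligned."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def augment(img, n, max_shift=2, seed=0):
    """Generate n randomly shifted copies within the expected error range,
    e.g. to rebalance under-represented serial-number character classes."""
    rng = random.Random(seed)
    return [translate(img, rng.randint(-max_shift, max_shift),
                      rng.randint(-max_shift, max_shift))
            for _ in range(n)]

patch = [[0, 1, 0],
         [1, 1, 1],
         [0, 1, 0]]
copies = augment(patch, n=4, max_shift=1)
print(len(copies))  # prints 4
```

    Keeping the shifts inside the mechanically possible range avoids training on deformations the counter can never produce.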

    Analyzing Facial and Eye Movements to Screen for Alzheimer’s Disease

    No full text
    Brain disease can be screened using eye movements. Degenerative brain disorders change eye movement because they affect not only memory and cognition but also the cranial nervous system involved in eye movement. We compared the facial and eye movement patterns of patients with mild Alzheimer’s disease and cognitively normal people to analyze the neurological signs of dementia. After detecting the facial landmarks, the coordinate values of the movements were extracted. We used Spearman’s correlation coefficient to examine associations between horizontal and vertical facial and eye movements. We analyzed the correlation between facial and eye movements without special eye-tracking equipment or complex conditions in order to measure the behavioral aspect of natural human gaze. As a result, we found differences between patients with Alzheimer’s disease and cognitively normal people: patients with Alzheimer’s disease tended to move their face and eyes simultaneously in the vertical direction, whereas cognitively normal people did not, as confirmed by a Mann–Whitney–Wilcoxon test. Our findings suggest that objective and accurate measurement of facial and eye movements can be used to screen such patients quickly. The use of camera-based testing for the early detection of patients showing signs of neurodegeneration could have a significant impact on public dementia care.
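    Spearman's coefficient is simply Pearson correlation computed on ranks, which makes it robust to monotone but nonlinear coupling between face and eye motion. A stdlib sketch (the motion values are made up for illustration, not the study's data):

```python
def rank(values):
    """Average ranks (1-based); tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

face_y = [0.1, 0.3, 0.2, 0.5, 0.4]  # hypothetical vertical face motion
eye_y = [0.2, 0.4, 0.6, 0.9, 0.5]   # hypothetical vertical eye motion
print(round(spearman(face_y, eye_y), 3))  # prints 0.7
```

    A rho near 1 for the vertical axis would reflect the simultaneous face-and-eye movement reported for the Alzheimer's group.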

    Analysis of Nursing Students’ Nonverbal Communication Patterns during Simulation Practice: A Pilot Study

    No full text
    Therapeutic communication, of which nonverbal communication is a vital component, is an essential skill for professional nurses. The aim of this study was to assess the possibility of incorporating computer analysis programs into nursing education to improve the nonverbal communication skills of those preparing to become professional nurses. In this pilot observational study, the research team developed a computer program for nonverbal communication analysis covering facial expressions and poses. Video clips were captured during simulation practice by ten third- and fourth-year nursing students at a university in South Korea, covering two scenarios of communication with a child’s mother regarding the child’s pre- and post-catheterization care. The dominant facial expressions varied, with sadness (30.73%), surprise (30.14%), and fear (24.11%) being the most prevalent, while happiness (7.96%) and disgust (6.79%) were less common. The participants generally made eye contact with the mother, but there were no instances of light touch by hand, and the physical distance maintained was outside the range typical of such nonverbal communication situations. These results confirm the potential of facial expression and pose analysis programs for communication education in nursing practice.
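    Expression shares like those above come from tallying the dominant expression per frame over a session. A minimal sketch, using made-up frame labels rather than the study's data:

```python
from collections import Counter

# Hypothetical per-frame dominant-expression labels from the analysis program
frames = (["sadness"] * 31 + ["surprise"] * 30 + ["fear"] * 24 +
          ["happiness"] * 8 + ["disgust"] * 7)

counts = Counter(frames)
shares = {emo: round(100 * n / len(frames), 2) for emo, n in counts.items()}
print(shares["sadness"], shares["disgust"])  # prints 31.0 7.0
```

    Aggregating per frame rather than per clip weights each expression by how long it was held, which is what makes the percentages comparable across students.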

    Studies of Grafted and Sulfonated Spiro Poly(isatin-ethersulfone) Membranes by Super Acid-Catalyzed Reaction

    No full text
    Spiro poly(isatin-ethersulfone) polymers were prepared from isatin and bis-2,6-dimethylphenoxyphenylsulfone by superacid-catalyzed polyhydroxyalkylation reactions. We designed and synthesized bis-2,6-dimethylphenoxyphenylsulfone, in which two methyl groups provide steric hindrance at the meta positions, because this structure minimizes crosslinking reactions during superacid-catalyzed polymerization. In addition, sulfonic acid groups were placed in both the side chains and the main chains to form a better polymer chain morphology and improve proton conductivity. The sulfonation was performed in two steps: first with 3-bromo-1-propanesulfonic acid potassium salt, and then in concentrated sulfuric acid. The membrane morphology was studied by tapping-mode atomic force microscopy (AFM). The phase contrast between the hydrophobic polymer main chain and the hydrophilic sulfonated units indicated a well phase-separated structure. The correlations of proton conductivity, ion exchange capacity (IEC), and single-cell performance with the membrane morphology were clearly described.

    Differences in Facial Expressions between Spontaneous and Posed Smiles: Automated Method by Action Units and Three-Dimensional Facial Landmarks

    No full text
    Research on emotion recognition from facial expressions has found evidence of different muscle movements between genuine and posed smiles. To further examine the discrete movement intensities of each facial segment, we explored differences in facial expressions between spontaneous and posed smiles using three-dimensional facial landmarks. Advanced machine analysis was adopted to measure changes in the dynamics of 68 segmented facial regions. A total of 57 normal adults (19 men, 38 women) who displayed adequate posed and spontaneous facial expressions of happiness were included in the analyses. The results indicate that spontaneous smiles have higher intensities in the upper face than in the lower face, whereas posed smiles show higher intensities in the lower part of the face. Furthermore, the 3D facial landmark technique revealed that the left eyebrow displayed stronger intensity during spontaneous smiles than the right eyebrow. These findings suggest a potential application of landmark-based emotion recognition: spontaneous smiles can be distinguished from posed smiles by measuring the relative intensities of the upper and lower face, with attention to left-sided asymmetry in the upper region.
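    The upper/lower intensity comparison can be sketched with the common 68-point landmark convention; the exact upper/lower segmentation used in the paper is an assumption here, and the landmark coordinates below are synthetic.

```python
import math

# dlib-style 68-point indexing: 0-16 jaw, 17-26 brows, 27-35 nose,
# 36-47 eyes, 48-67 mouth (an assumed segmentation, for illustration)
UPPER = list(range(17, 48))   # brows, nose, eyes
LOWER = list(range(48, 68))   # mouth

def mean_displacement(neutral, apex, idx):
    """Mean Euclidean displacement of selected 3D landmarks between the
    neutral frame and the smile apex frame."""
    return sum(math.dist(neutral[i], apex[i]) for i in idx) / len(idx)

# Synthetic frames: upper-face landmarks move 0.2, lower-face move 0.1
neutral = [(float(i), 0.0, 0.0) for i in range(68)]
apex = [(x, 0.2, 0.0) if i < 48 else (x, 0.1, 0.0)
        for i, (x, _, _) in enumerate(neutral)]
upper_i = mean_displacement(neutral, apex, UPPER)
lower_i = mean_displacement(neutral, apex, LOWER)
print(upper_i > lower_i)  # prints True
```

    Under the paper's finding, a ratio of upper to lower intensity above 1 would point toward a spontaneous smile, with left-brow displacement as an additional cue.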