The Quantitative Research of Interaction between Key Parameters and the Effects on Mechanical Property in FDM
A central composite design (CCD) experiment is conducted to evaluate the interactions between key parameters and their effect on mechanical properties in FDM. Layer thickness, deposition velocity, and air gap are considered the key factors, and three levels of each parameter are used in the experiment. The experimental results suggest that all of these parameters affect the bonding degree of the filaments, which in turn determines the final tensile strength of the specimen. A new numerical model is built to describe the cooling process of the fused filament and shows excellent agreement with the measured temperature profile of the filament. The model reveals the forming mechanism of the bonding between filaments and explains, from the temperature perspective, how these parameters act on the final tensile strength of the specimen. It is concluded that the parameters do not act alone; they jointly determine the mechanical property. The air gap plays the predominant role in determining the final tensile strength, followed by layer thickness, while the effect of deposition velocity is the weakest.
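To make the design concrete, the sketch below builds a three-factor central composite design in coded units and fits a quadratic response surface, which is the usual way CCD results are analysed; the factor ranges and tensile-strength values are placeholders, not data from the study.

```python
# Minimal sketch (not the paper's code): a three-factor central composite
# design (CCD) in coded units and a quadratic response-surface fit.
# The response values are placeholders, not measurements from the study.
import itertools
import numpy as np

def ccd_points(k=3, alpha=1.0, n_center=4):
    """Coded CCD points: 2^k factorial, 2k axial (face-centered when alpha=1), center runs."""
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

def quadratic_design_matrix(X):
    """Columns: intercept, linear terms, two-way interactions, squared terms."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

X = ccd_points()  # coded layer thickness, deposition velocity, air gap
y = np.random.default_rng(0).normal(30.0, 2.0, len(X))  # placeholder tensile strengths (MPa)
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(beta)  # fitted response-surface coefficients
```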
Variation of Korotkoff stethoscope sounds during blood pressure measurement: Analysis using a convolutional neural network
Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainty in systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses, and each beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A two-dimensional time-frequency matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBP and DBP were labeled as ‘Korotkoff’. A convolutional neural network was then used to analyse consistency in the sound patterns associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analysed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds, with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% of beats at SBP and 69.5% at DBP were classified as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0.001). Results reached stability for SBP (97.8%, at the 6th beat below SBP) and DBP (98.1%, at the 6th beat above DBP), with no significant differences between adjacent beats (SBP P = 0.74; DBP P = 0.88). There were no significant differences at high cuff pressures, but at low pressures close to diastole there was a small difference (3.3%, P = 0.02). In addition, greater within-subject variability was observed at SBP (21.4%) and DBP (28.9%), with a significant difference between the two (P < 0.02). In conclusion, this study has demonstrated that Korotkoff sounds can be consistently identified during the period below SBP and above DBP, but that at systole and diastole there can be substantial variation, associated with high variability across the three repeat measurements in each subject.
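As a rough illustration of the beat-windowing and time-frequency step, the following sketch cuts 1 s (2000-sample) segments centered on oscillometric pulse peaks and converts each to a spectrogram matrix; the peak detector, spectrogram settings, and the synthetic signals are assumptions, not the study's processing chain.

```python
# Minimal sketch (assumptions, not the study's code): 1-s windows
# (2000 samples at 2 kHz) around oscillometric pulse peaks, each turned
# into a time-frequency matrix suitable as CNN input.
import numpy as np
from scipy.signal import find_peaks, spectrogram

FS = 2000  # sampling rate implied by "1 s, 2000 samples"

def beat_windows(cuff_pressure, stethoscope, half=FS // 2):
    """One stethoscope segment per heartbeat, centered on the cuff-pulse peak."""
    peaks, _ = find_peaks(cuff_pressure, distance=int(0.4 * FS))  # >= 0.4 s between beats
    segments = []
    for p in peaks:
        if half <= p < len(stethoscope) - half:
            segments.append(stethoscope[p - half:p + half])
    return np.array(segments)

def time_frequency(segment, nperseg=128, noverlap=96):
    """2-D time-frequency matrix (log spectrogram) for one beat."""
    _, _, sxx = spectrogram(segment, fs=FS, nperseg=nperseg, noverlap=noverlap)
    return np.log1p(sxx)

# Placeholder signals; in the study these come from the digitised recordings.
t = np.arange(10 * FS) / FS
cuff = 150 - 3 * t + np.sin(2 * np.pi * 1.2 * t)        # deflating cuff with pulses
sounds = np.random.default_rng(0).normal(0, 1, t.size)  # stand-in stethoscope signal
features = np.stack([time_frequency(s) for s in beat_windows(cuff, sounds)])
print(features.shape)  # (beats, freq_bins, time_bins) ready for a CNN
```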
Multimodal Neuroimaging Predictors for Cognitive Performance Using Structured Sparse Learning
Poster abstract. Regression models have been widely studied to investigate whether multimodal neuroimaging measures can be used as effective biomarkers for predicting cognitive outcomes in the study of Alzheimer's Disease (AD). Most existing models overlook the interrelated structures either within the neuroimaging measures or between the cognitive outcomes, and thus may have limited power to yield optimal solutions. To address this issue, we propose to incorporate an L21 norm and/or a group L21 norm (G21 norm) into the regression models. Using ADNI-1 and ADNI-GO/2 data, we apply these models to examine the ability of structural MRI and AV-45 PET scans to predict cognitive measures, including ADAS and RAVLT scores. We focus our analyses on participants with mild cognitive impairment (MCI), a prodromal stage of AD, in order to identify useful patterns for early detection. Compared with traditional linear and ridge regression methods, these new models not only demonstrate superior and more stable predictive performance, but also identify a small set of imaging markers that are biologically meaningful.
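The L21 penalty couples all cognitive outcomes when selecting imaging measures: a measure's regression coefficients are kept or discarded as a whole row. The sketch below solves such a model with a plain proximal-gradient loop; the solver, step size, and toy data are assumptions rather than the paper's implementation.

```python
# Minimal sketch (assumption: simple proximal-gradient solver, not the paper's
# optimiser) of L21-regularised multi-task regression: rows of W (one per
# imaging measure) are shrunk jointly across all cognitive outcomes.
import numpy as np

def l21_regression(X, Y, lam=0.1, n_iter=500):
    """Minimise 0.5*||XW - Y||_F^2 + lam * sum_i ||W[i, :]||_2."""
    d, k = X.shape[1], Y.shape[1]
    W = np.zeros((d, k))
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        G = X.T @ (X @ W - Y)                          # gradient of the data term
        V = W - step * G
        norms = np.linalg.norm(V, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
        W = shrink * V                                 # row-wise group soft-thresholding
    return W

# Toy data: 100 subjects, 50 imaging measures, 3 cognitive scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
W_true = np.zeros((50, 3)); W_true[:5] = rng.normal(size=(5, 3))
Y = X @ W_true + 0.1 * rng.normal(size=(100, 3))
W_hat = l21_regression(X, Y, lam=5.0)
print(np.flatnonzero(np.linalg.norm(W_hat, axis=1)))   # indices of selected measures
```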
ECG Classification Using Wavelet Packet Entropy and Random Forests
The electrocardiogram (ECG) is one of the most important techniques for heart disease diagnosis. Many traditional feature extraction and classification methodologies have been widely applied to ECG analysis. However, their effectiveness and efficiency remain to be improved, and much existing research has not kept training and testing samples from different patients (the so-called inter-patient scheme). To cope with these issues, in this paper we propose a method to classify ECG signals using wavelet packet entropy (WPE) and random forests (RF), following the Association for the Advancement of Medical Instrumentation (AAMI) recommendations and the inter-patient scheme. Specifically, we first decompose the ECG signals by wavelet packet decomposition (WPD), then calculate entropy from the decomposed coefficients as representative features, and finally use RF to build an ECG classification model. To the best of our knowledge, this is the first time that WPE and RF have been used to classify ECG signals following the AAMI recommendations and the inter-patient scheme. Extensive experiments are conducted on the publicly available MIT-BIH Arrhythmia database, and the influence on performance of the mother wavelet and decomposition level for WPD, the type of entropy, and the number of base learners in RF is also discussed. The experimental results are superior to those of several state-of-the-art competing methods, showing that WPE and RF are promising for ECG classification.
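A compact sketch of the WPE-plus-RF pipeline is given below: wavelet packet decomposition of each heartbeat, Shannon entropy of the energy in every terminal node as features, and a random forest classifier. The wavelet (db4), decomposition level (4), entropy definition, and toy data are assumptions, not the paper's exact settings.

```python
# Minimal sketch (assumptions: db4 wavelet, 4-level WPD, Shannon entropy),
# not the paper's exact configuration: wavelet packet entropy features for
# heartbeat segments, classified with a random forest.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_packet_entropy(beat, wavelet="db4", level=4):
    """Shannon entropy of the energy distribution in each terminal WPD node."""
    wp = pywt.WaveletPacket(data=beat, wavelet=wavelet, mode="symmetric", maxlevel=level)
    feats = []
    for node in wp.get_level(level, order="freq"):
        energy = node.data ** 2
        p = energy / (energy.sum() + 1e-12)
        feats.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(feats)  # 2**level features per beat

# Toy beats; in practice these are MIT-BIH heartbeat segments, and the
# training and test patients must be disjoint (inter-patient scheme).
rng = np.random.default_rng(0)
beats = rng.normal(size=(200, 256))
labels = rng.integers(0, 2, size=200)  # placeholder AAMI classes
X = np.vstack([wavelet_packet_entropy(b) for b in beats])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:150], labels[:150])
print(clf.score(X[150:], labels[150:]))
```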
The stability of minimal solution sets for set optimization problems via improvement sets
In this paper we investigate the stability of minimal solution sets for set optimization problems via improvement sets. Sufficient conditions for the upper semicontinuity, lower semicontinuity, and compactness of the E-minimal solution mappings of parametric set optimization problems are given under suitable assumptions. We also give some examples to illustrate our main results.
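For reference, the stability notions named in the abstract are the standard Berge-type semicontinuity properties of a set-valued mapping; the recap below states them for a generic solution mapping S, of which the E-minimal solution mapping studied in the paper is a particular case.

```latex
% Standard (Berge) semicontinuity notions for a set-valued solution mapping
% S : \Lambda \rightrightarrows X; the E-minimal solution mapping is one instance.
\[
\text{u.s.c. at } \lambda_0:\quad
\forall V \text{ open},\ S(\lambda_0) \subseteq V \ \Longrightarrow\
\exists U \ni \lambda_0:\ S(\lambda) \subseteq V \ \ \forall \lambda \in U,
\]
\[
\text{l.s.c. at } \lambda_0:\quad
\forall V \text{ open},\ S(\lambda_0) \cap V \neq \emptyset \ \Longrightarrow\
\exists U \ni \lambda_0:\ S(\lambda) \cap V \neq \emptyset \ \ \forall \lambda \in U.
\]
```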
Hyperchaotic Image Encryption Based on Multiple Bit Permutation and Diffusion
Image security is a hot topic in the era of the Internet and big data. Hyperchaotic image encryption, which can effectively prevent unauthorized users from accessing image content, has become increasingly popular in the image security community. In general, such approaches conduct encryption on pixel-level, bit-level, or DNA-level data, or their combinations, which limits the diversity of processed data levels and hence security. This paper proposes a novel hyperchaotic image encryption scheme based on multiple bit permutation and diffusion, named MBPD, to cope with this issue. Specifically, a four-dimensional hyperchaotic system with three positive Lyapunov exponents is first proposed. Second, a hyperchaotic sequence is generated from the proposed system for the subsequent encryption operations. Third, multiple bit permutation and diffusion (permutation and/or diffusion can be conducted on groups of 1–8 or more bits), determined by the hyperchaotic sequence, is designed. Finally, the proposed MBPD is applied to image encryption. We conduct extensive experiments on a number of public test images to validate the proposed MBPD. The results verify that MBPD can effectively resist different types of attacks and performs better than the compared popular encryption methods.
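The sketch below illustrates the general idea of permutation and diffusion on n-bit groups of the flattened image; the paper's four-dimensional hyperchaotic system is not reproduced, so a logistic map stands in as the keystream source, and the grouping, permutation, and XOR diffusion shown here are an assumed simplification of MBPD.

```python
# Minimal sketch (assumptions throughout): logistic-map keystream as a
# stand-in for the 4-D hyperchaotic system; permutation and diffusion act on
# n-bit groups (n = 1..8) of the flattened image, not the exact MBPD design.
import numpy as np

def keystream(x0=0.3141, r=3.99, length=1024):
    """Placeholder chaotic keystream in [0, 1) from a logistic map."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1 - x)
        xs[i] = x
    return xs

def encrypt_bits(image, n_bits=4, seed_perm=0.3141, seed_diff=0.2718):
    bits = np.unpackbits(image.astype(np.uint8).ravel())
    bits = bits[: len(bits) - len(bits) % n_bits]
    groups = bits.reshape(-1, n_bits)                       # n-bit groups

    perm_key = keystream(seed_perm, length=len(groups))
    permuted = groups[np.argsort(perm_key)]                 # chaotic permutation

    weights = (1 << np.arange(n_bits - 1, -1, -1)).astype(np.uint8)
    values = (permuted @ weights).astype(np.uint8)          # group values
    diff_key = (keystream(seed_diff, length=len(groups)) * (2 ** n_bits)).astype(np.uint8)
    diffused = np.bitwise_xor(values, diff_key)             # diffusion by XOR

    out_bits = ((diffused[:, None] >> np.arange(n_bits - 1, -1, -1)) & 1).astype(np.uint8)
    return np.packbits(out_bits.ravel())

cipher = encrypt_bits(np.random.default_rng(0).integers(0, 256, (8, 8)), n_bits=4)
print(cipher[:8])
```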
Robust Face Recognition via Block Sparse Bayesian Learning
Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms are based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study, we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework, which has many advantages over existing block-sparse-model-based algorithms. Experimental results on the Extended Yale B, the AR, and the CMU PIE face databases show that using BSBL can achieve better recognition rates and higher robustness than state-of-the-art algorithms in most cases.
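To show the sparse-representation recognition pipeline itself, the sketch below codes a test face over a dictionary of training faces and assigns the class with the smallest reconstruction residual; a plain orthogonal matching pursuit is used in place of the BSBL solver, so this is an assumed simplification rather than the paper's method.

```python
# Minimal sketch (assumption: plain orthogonal matching pursuit instead of the
# BSBL solver): every training face is a dictionary atom, the test face is
# coded sparsely, and the class with the smallest residual wins.
import numpy as np

def omp(D, y, k=10):
    """Greedy sparse coding: pick k atoms of D that best explain y."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def classify(D, labels, y, k=10):
    """Assign y to the class whose atoms reconstruct it with the least error."""
    x = omp(D, y, k)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy dictionary: 3 classes x 20 training faces, 500-pixel vectors, unit norm.
rng = np.random.default_rng(0)
D = rng.normal(size=(500, 60)); D /= np.linalg.norm(D, axis=0)
labels = np.repeat([0, 1, 2], 20)
test_face = D[:, 5] + 0.05 * rng.normal(size=500)  # noisy copy of a class-0 face
print(classify(D, labels, test_face))              # likely class 0
```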
A Novel Image Encryption Approach Based on a Hyperchaotic System, Pixel-Level Filtering with Variable Kernels, and DNA-Level Diffusion
With the rapid growth of image transmission and storage, image security has become a hot topic in the information security community. Image encryption is a direct way to ensure image security. This paper presents a novel approach that combines a hyperchaotic system, Pixel-level Filtering with kernels of variable shapes and parameters, and DNA-level Diffusion, termed PFDD, for image encryption. The PFDD consists of four stages. First, a hyperchaotic system is used to generate hyperchaotic sequences for the subsequent operations. Second, dynamic filtering is performed on pixels to change the pixel values; to increase the diversity of the filtering, kernels with variable shapes and parameters determined by the hyperchaotic sequences are used. Third, a global bit-level scrambling is conducted to change the values and positions of pixels simultaneously, and the bit stream is then encoded into DNA-level data. Finally, a novel DNA-level diffusion scheme is proposed to further change the image values. We tested the proposed PFDD on 15 publicly accessible images of different sizes, and the results demonstrate that the PFDD achieves state-of-the-art results in terms of the evaluation criteria, indicating that it is very effective for image encryption.
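The following sketch illustrates only the pixel-level filtering idea: each pixel is replaced by a keyed, modulo-256 combination of itself and two already-filtered neighbours, which keeps the operation invertible for decryption. The logistic-map keystream and the fixed causal kernel are stand-ins for the paper's hyperchaotic sequences and variable-shape kernels.

```python
# Minimal sketch (assumptions: logistic-map keystream, fixed causal kernel over
# the pixel and its two already-filtered neighbours) of the pixel-level
# filtering idea in PFDD; the modulo-256 form keeps it invertible.
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.99):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 256).astype(np.int64)  # integer kernel weights, two per pixel

def pixel_filter(img, key):
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int64)
    k = 0
    for i in range(h):
        for j in range(w):
            up   = out[i - 1, j] if i > 0 else 0
            left = out[i, j - 1] if j > 0 else 0
            out[i, j] = (int(img[i, j]) + key[k] * up + key[k + 1] * left) % 256
            k += 2
    return out.astype(np.uint8)

img = np.random.default_rng(0).integers(0, 256, (16, 16), dtype=np.uint8)
key = logistic_keystream(2 * img.size)
print(pixel_filter(img, key)[:2, :4])
```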
Hyper-Chaotic Color Image Encryption Based on Transformed Zigzag Diffusion and RNA Operation
With the increasing use of digital multimedia and the Internet, protecting this digital information from cracking has become a hot topic in the communication field. As a way of protecting digital visual information, image encryption plays a crucial role in modern society. In this paper, a novel six-dimensional (6D) hyper-chaotic encryption scheme with three-dimensional (3D) transformed Zigzag diffusion and RNA operation (HCZRNA) is proposed for color images. The HCZRNA scheme comprises four phases. First, three pseudo-random matrices are generated from the 6D hyper-chaotic system. Second, the plaintext color image is permuted using the first pseudo-random matrix to obtain an initial cipher image. Third, the initial cipher image is placed on a cube for 3D transformed Zigzag diffusion using the second pseudo-random matrix. Finally, the diffused image is converted to an RNA codon array and updated through RNA codon tables, which are generated from the codons and the third pseudo-random matrix. After these four phases a cipher image is obtained, and the experimental results show that HCZRNA offers high resistance against well-known attacks and is superior to other schemes.
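As a small illustration of the RNA-level representation used in the final phase, the sketch below reads a byte stream two bits at a time as bases A/C/G/U and groups them into 3-base codons; the 2-bit-to-base rule is an assumed fixed mapping, whereas the paper derives its codon tables from the chaotic matrices.

```python
# Minimal sketch (assumption: one fixed 2-bit-to-base rule; the paper's codon
# tables are key-dependent): bytes -> RNA bases -> 3-base codons.
import numpy as np

BASES = np.array(list("ACGU"))  # assumed mapping: 00->A, 01->C, 10->G, 11->U

def bytes_to_codons(data):
    """Encode bytes as RNA codons: 2 bits per base, 3 bases per codon."""
    bits = np.unpackbits(np.frombuffer(bytes(data), dtype=np.uint8))
    pairs = bits[: len(bits) - len(bits) % 2].reshape(-1, 2)
    bases = BASES[pairs[:, 0] * 2 + pairs[:, 1]]
    usable = len(bases) - len(bases) % 3
    return ["".join(c) for c in bases[:usable].reshape(-1, 3)]

print(bytes_to_codons([0x1B, 0xC4, 0x3F]))  # -> ['ACG', 'UUA', 'CAA', 'UUU']
```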
- …