35 research outputs found
pH-Sensitive and Thermosensitive Hydrogels as Stem-Cell Carriers for Cardiac Therapy
Stem-cell therapy has the potential to regenerate damaged heart tissue after a heart attack. Injectable hydrogels may be used as stem-cell carriers to improve cell retention in the heart tissue. However, current hydrogels are not ideal to serve as cell carriers because most of them block blood vessels after solidification. In addition, these hydrogels have a relatively slow gelation rate (typically >60 s), which does not allow them to quickly solidify upon injection so as to efficiently hold cells in the heart tissue. As a result, the hydrogels and cells are squeezed out of the tissue, leading to low cell retention. To address these issues, we have developed hydrogels that can quickly solidify at the pH of an infarcted heart (6−7) at 37 °C but cannot solidify at the pH of blood (7.4) at 37 °C. These hydrogels are also clinically attractive because they can be injected through catheters commonly used for minimally invasive surgeries. The hydrogels were synthesized by free-radical polymerization of N-isopropylacrylamide, propylacrylic acid, hydroxyethyl methacrylate-co-oligo(trimethylene carbonate), and methacrylate poly(ethylene oxide) methoxy ester. Hydrogel solutions were injectable through 0.2-mm-diameter catheters at pH 8.0 at 37 °C, and they quickly formed solid gels at pH 6.5 at 37 °C. All of the hydrogels showed pH-dependent degradation and mechanical properties, with less mass loss and greater complex shear modulus at pH 6.5 than at pH 7.4. When cardiosphere-derived cells (CDCs) were encapsulated in the hydrogels, the cells were able to survive during a 7-day culture period. The surviving cells differentiated into cardiac cells, as evidenced by the expression of cardiac markers at both the gene and protein levels, such as cardiac troponin T, myosin heavy chain α, calcium channel CACNA1c, cardiac troponin I, and connexin 43. The gel integrity was found to largely affect CDC cardiac differentiation. These results suggest that the developed dual-sensitive hydrogels may be promising carriers for cardiac cell therapy.
Thermosensitive and Highly Flexible Hydrogels Capable of Stimulating Cardiac Differentiation
Cardiac stem cell therapy has been considered a promising strategy for heart tissue regeneration. Yet achieving cardiac differentiation after stem cell transplantation remains challenging, which compromises the efficacy of current stem cell therapy. Delivery of cells using matrices that stimulate cardiac differentiation may improve the degree of cardiac differentiation in the heart tissue. In this report, we investigated whether the elastic modulus of highly flexible poly(N-isopropylacrylamide) (PNIPAAm)-based hydrogels can be modulated to stimulate encapsulated cardiosphere-derived cells (CDCs) to differentiate into the cardiac lineage under a static condition and under dynamic stretching that mimics the beating heart. We have developed hydrogels whose moduli do not change under either dynamic stretching or static conditions for 14 days. The hydrogels had the same chemical structure but different elastic moduli (11, 21, and 40 kPa). CDCs were encapsulated into these hydrogels and cultured under either a native heart-mimicking dynamic stretching environment (12% strain and 1 Hz frequency) or a static culture condition. CDCs were able to grow in all three hydrogels. The greatest growth was found in the hydrogel with an elastic modulus of 40 kPa. The dynamic stretching condition stimulated CDC growth. The CDCs demonstrated elastic modulus-dependent cardiac differentiation under both static and dynamic stretching conditions, as evidenced by gene and protein expression of cardiac markers such as MYH6, CACNA1c, cTnI, and connexin 43. The highest differentiation was found in the 40 kPa hydrogel. These results suggest that delivery of CDCs with the 40 kPa hydrogel may enhance cardiac differentiation in infarcted hearts.
PESQ and STOI results for the VoiceBank dataset.
Long short-term memory (LSTM) networks have been used effectively to represent sequential data in recent years. However, LSTM still struggles to capture long-term temporal dependencies. In this paper, we propose an hourglass-shaped LSTM that captures long-term temporal correlations by reducing the feature resolution without data loss. We use skip connections between non-adjacent layers to avoid gradient decay. In addition, an attention process is incorporated into the skip connections to emphasize the essential spectral features and spectral regions. The proposed LSTM model is applied to speech enhancement and recognition. The model uses no future information, resulting in a causal system suitable for real-time processing. Combined spectral feature sets are used to train the model for improved performance. With the proposed model, the ideal ratio mask (IRM) is estimated as the training objective. Experimental evaluations using short-time objective intelligibility (STOI) and perceptual evaluation of speech quality (PESQ) demonstrate that the proposed model, with its robust feature representation, achieves higher speech intelligibility and perceptual quality. On the TIMIT, LibriSpeech, and VoiceBank datasets, the proposed model improved STOI by 16.21%, 16.41%, and 18.33% over noisy speech, while PESQ improved by 31.1%, 32.9%, and 32%. In both seen and unseen noise conditions, the proposed model outperformed existing deep neural networks (DNNs), including a baseline LSTM, a feedforward neural network (FDNN), a convolutional neural network (CNN), and a generative adversarial network (GAN). With the Kaldi toolkit for automatic speech recognition (ASR), the proposed model significantly reduced word error rates (WERs), reaching an average WER of 15.13% in noisy backgrounds.
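The IRM training objective mentioned above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the toy random spectrograms, the `beta=0.5` exponent, and the assumption that speech and noise powers add are all stated assumptions.

```python
import numpy as np

def ideal_ratio_mask(clean_power, noise_power, beta=0.5):
    """IRM(t, f) = (S(t, f) / (S(t, f) + N(t, f)))**beta on power spectrograms.

    beta = 0.5 is a common choice; a small epsilon guards against division by zero.
    """
    return (clean_power / (clean_power + noise_power + 1e-12)) ** beta

# Toy time-frequency power spectrograms (100 frames x 257 frequency bins);
# illustrative random data, not features from the paper.
rng = np.random.default_rng(0)
clean = rng.random((100, 257))
noise = rng.random((100, 257))

mask = ideal_ratio_mask(clean, noise)

# Applying the mask attenuates noise-dominated time-frequency bins;
# power additivity of speech and noise is assumed here for simplicity.
noisy = clean + noise
enhanced = mask * noisy
```

At inference time a trained network predicts `mask` from noisy features alone; the oracle computation above is only available during training, where it serves as the regression target.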
The proposed LSTM architecture.
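The attention process incorporated into the skip connections can be illustrated with a small NumPy sketch. The sigmoid gating form and the projection weights `w` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_skip(encoder_feat, decoder_feat, w):
    """Gate a skip connection between non-adjacent layers with attention weights.

    encoder_feat, decoder_feat: (T, F) feature maps.
    w: (2F, F) projection matrix; the concatenate-project-sigmoid gating
    here is an assumed, generic attention-gate form.
    """
    concat = np.concatenate([encoder_feat, decoder_feat], axis=-1)  # (T, 2F)
    gate = sigmoid(concat @ w)                                      # (T, F), in (0, 1)
    # The gate emphasizes informative spectral regions of the skipped features
    # before merging them back into the decoder path.
    return decoder_feat + gate * encoder_feat

# Toy features from two non-adjacent layers.
rng = np.random.default_rng(1)
T, F = 50, 64
enc = rng.standard_normal((T, F))
dec = rng.standard_normal((T, F))
w = 0.1 * rng.standard_normal((2 * F, F))
out = attention_skip(enc, dec, w)
```

Because the gate lies in (0, 1), the skip path can be suppressed where the encoder features carry little information, while gradients still flow through the additive `decoder_feat` term.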
The STOI improvements (STOIi) in background noises.
Visualization of spectral regions.
The underlying clean speech (a), the babble-noise-contaminated noisy speech (b), speech processed by LSTM-IBM (c), by LSTM-IRM (d), by LSTM-AttenSkips-IBM (e), and by LSTM-AttenSkips-IRM (f).
STOI scores in seen noise sources for IRM training-target.
The proposed speech enhancement framework.
PESQ and STOI results for the LibriSpeech dataset.
Brief comparison in terms of features, training objective, DNN type, and loss function.
