15 research outputs found

    Multiscale modeling for the heterogeneous strength of biodegradable polyesters

    This paper presents a coupled multiscale model for calculating the heterogeneous strength of medical polyesters such as polylactide (PLA), polyglycolide (PGA), and their copolymers during degradation by bulk erosion. The macroscopic device is discretized into an array of mesoscopic cells, each assumed to contain one polymer chain. As chain scission proceeds, the molecular weight, scission-induced chain recrystallization, and cavity formation due to cell collapse each play a different role in the composition of the polymer's mechanical strength. Three types of strength phase are therefore proposed to represent the heterogeneous strength structure and the distinct strength contributions: an amorphous phase, a crystalline phase, and a strength-vacancy phase. The strength of the amorphous phase is related to the molecular weight; the strength of the crystalline phase is related to both the molecular weight and the degree of crystallization; and the strength-vacancy phase has negligible strength. The vacancy phase includes not only cells in the cavity state but also cells that remain amorphous with a molecular weight below a threshold value. This heterogeneous strength model is coupled with microscale chain scission, chain recrystallization, and a macroscale oligomer diffusion equation to form a multiscale strength model that can simulate the evolution of strength phases and cell states, as well as the molecular weight, degree of crystallinity, weight loss, and device strength during degradation. Several example cases are used to verify the model, and the results show a good fit to experimental data.
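The three-phase strength rule described in the abstract can be sketched as follows. All function forms, coefficients, and the molecular-weight threshold are illustrative assumptions; the paper's actual constitutive relations are not reproduced here.

```python
M_THRESHOLD = 5_000  # hypothetical molecular-weight cutoff for the vacancy phase


def cell_strength(mw, crystallinity, has_cavity, k_a=1e-3, k_c=2e-3):
    """Strength contribution of one mesoscopic cell.

    mw            -- molecular weight of the chain assumed to occupy the cell
    crystallinity -- local degree of crystallinity in [0, 1]
    has_cavity    -- True if the cell has collapsed into a cavity
    k_a, k_c      -- hypothetical amorphous/crystalline strength coefficients
    """
    # Strength-vacancy phase: cavities, and amorphous cells whose molecular
    # weight has dropped below the threshold, carry negligible strength.
    if has_cavity or (crystallinity == 0 and mw < M_THRESHOLD):
        return 0.0
    if crystallinity > 0:
        # Crystalline phase: strength related to molecular weight AND the
        # degree of crystallization (a linear mixing rule is assumed here).
        return k_a * mw + k_c * mw * crystallinity
    # Amorphous phase: strength related to molecular weight only.
    return k_a * mw


def device_strength(cells):
    """Average cell strength over the discretized macroscopic device."""
    return sum(cell_strength(*c) for c in cells) / len(cells)
```

In a full simulation the inputs would evolve in time via the coupled chain-scission, recrystallization, and oligomer-diffusion equations; this sketch only shows how the three phases partition the strength contribution.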

    Macro-F1 results comparison of seven widely used deep learning models under seven combinations of preprocessing methods (TQ-tax question dataset; TC-THUCNews).


    The evaluation results based on four simple machine learning models for two datasets.


    Tax question dataset.

    Text preprocessing is an important component of Chinese text classification. At present, however, most studies on this topic focus on the influence of preprocessing methods on a few text classification algorithms applied to English text. In this paper we experimentally compare fifteen commonly used classifiers on two Chinese datasets using three widely used Chinese preprocessing methods: word segmentation, Chinese-specific stop-word removal, and Chinese-specific symbol removal. We then explore the influence of the preprocessing methods on the final classification results under various conditions, such as the evaluation metric, the combination of methods, and the choice of classifier. Finally, we conduct a battery of additional experiments and find that most classifiers improve after proper preprocessing is applied. Our general conclusion is that the systematic use of preprocessing methods (combining word segmentation with Chinese-specific stop-word and symbol removal, evaluated with metrics such as macro-F1 across both machine learning and deep learning models) has a positive impact on the classification of Chinese short text. The best macro-F1 scores for the two datasets are 92.13% and 91.99%, improvements of 0.3% and 2%, respectively, over the compared baselines.
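The three preprocessing steps the abstract names can be sketched with the standard library alone. The stop-word list and symbol set below are tiny illustrative stand-ins (real experiments would use a published Chinese stop-word lexicon), and the segmenter is passed in by the caller, since production pipelines typically rely on an external word-segmentation tool.

```python
import re

# Hypothetical mini stop-word list; a real pipeline would load a full lexicon.
STOP_WORDS = {"的", "了", "是", "在"}

# Chinese-specific full-width symbols; the paper's exact symbol set is not
# given, so this character class is an assumption.
CN_SYMBOLS = re.compile(r"[，。！？；：、（（）“”《》【】]")


def remove_symbols(text):
    """Chinese-specific symbol removal."""
    return CN_SYMBOLS.sub("", text)


def remove_stop_words(tokens):
    """Chinese-specific stop-word removal on an already-segmented token list."""
    return [t for t in tokens if t not in STOP_WORDS]


def preprocess(text, segment):
    """Apply the three steps in sequence: symbol removal, word segmentation,
    stop-word removal. `segment` maps a string to a token list; in practice a
    dedicated segmenter would be supplied."""
    return remove_stop_words(segment(remove_symbols(text)))
```

The pluggable `segment` argument mirrors the paper's setup, where the same removal steps are combined with different segmentation choices to form the seven method combinations compared in the tables.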

    The evaluation results based on seven deep learning models for two datasets.

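Macro-F1, the metric used throughout these comparisons, is the unweighted mean of the per-class F1 scores. A stdlib-only sketch (not the paper's evaluation code):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores over all observed labels."""
    labels = set(y_true) | set(y_pred)
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        # F1 is the harmonic mean of precision and recall.
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```

Because every class contributes equally regardless of its frequency, macro-F1 rewards classifiers that handle rare categories well, which is why it is a common choice for multi-class short-text benchmarks like the two datasets here.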

    Combinations of Chinese preprocessing methods.


    Macro-F1 results comparison of four widely used pre-training learning models under seven combinations of preprocessing methods (TQ-tax question dataset; TC-THUCNews).


    The evaluation results based on four pretraining language models for two datasets.



    The workflow of the proposed approach.
