65 research outputs found

    Mining of nutritional ingredients in food for disease analysis

    Suitable nutritional diets have been widely recognized as an important measure to prevent and control non-communicable diseases (NCDs). However, there has been little research on which nutritional ingredients in food benefit the rehabilitation of NCDs. In this paper, we analyze the relationship between nutritional ingredients and diseases in depth using data mining methods. First, we obtained more than 7,000 diseases and collected the recommended foods and taboo foods for each disease. Then, referring to the China Food Nutrition tables, we used noise intensity and information entropy to find out which nutritional ingredients can exert positive effects on diseases. Finally, we proposed an improved rough-set-based algorithm named CVNDA_Red to select the corresponding core ingredients from the positive nutritional ingredients. To the best of our knowledge, this is the first study in China to examine the relationship between nutritional ingredients in food and diseases through data mining based on rough set theory. Experiments on real-life data show that our data mining method improves performance over the traditional statistical approach, with a precision of 1.682. Additionally, for common diseases such as diabetes, hypertension and heart disease, our method correctly identifies the first two or three nutritional ingredients in food that can benefit the rehabilitation of those diseases. These experimental results demonstrate the effectiveness of applying data mining to the selection of nutritional ingredients in food for disease analysis.
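    The CVNDA_Red algorithm and the noise-intensity measure are not reproduced here; the following is a minimal sketch of the information-entropy step only, on invented toy data (the `foods` records, labels and ingredient names are all hypothetical). The idea: an ingredient whose recommended/taboo split across foods has low entropy gives a one-sided, and hence more informative, signal for a disease.

```python
import math
from collections import Counter

def shannon_entropy(labels):
    """Shannon entropy (in bits) of a label distribution."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def ingredient_score(foods, ingredient):
    """Entropy of the recommended/taboo split among foods containing the
    ingredient; lower entropy means a more one-sided (informative) split."""
    labels = [f["label"] for f in foods if ingredient in f["ingredients"]]
    if not labels:
        return None
    return shannon_entropy(labels)

# hypothetical food records for one disease
foods = [
    {"label": "recommended", "ingredients": {"fiber", "vitamin C"}},
    {"label": "recommended", "ingredients": {"fiber"}},
    {"label": "taboo",       "ingredients": {"sugar"}},
    {"label": "taboo",       "ingredients": {"sugar", "fiber"}},
]
print(ingredient_score(foods, "sugar"))  # 0.0 bits: appears only in taboo foods
print(ingredient_score(foods, "fiber"))  # ~0.918 bits: mixed evidence
```

In a rough-set reduction such as CVNDA_Red, a score like this would only be a pre-filter; the core-ingredient selection itself operates on attribute dependencies, which this sketch does not model.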

    Heterogeneous Data Alignment for Cross-Media Computing

    Massive data sets are generated every day, and knowledge increasingly spreads across diverse media types and information sources. This new situation raises several challenging research problems: (1) how to bridge the heterogeneity gap and mine the information shared among cross-view representations; (2) how to build semantic associations among heterogeneous information objects; and (3) how to fully exploit the complementary information underlying heterogeneous information objects to make decisions cooperatively. At the core of these problems lies the alignment of heterogeneous data. In this paper, we first give a short introduction to heterogeneous data alignment. We then discuss two ongoing works in our group on heterogeneous data alignment: consistent pattern mining and modality-dependent cross-media retrieval. Finally, we conclude the paper and discuss some potential applications of heterogeneous data alignment.
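    As a minimal illustration of the retrieval side of alignment (not the authors' consistent pattern mining or modality-dependent model), the sketch below ranks items of one modality against a query from another by cosine similarity, assuming both modalities have already been projected into a shared space, e.g. by a CCA-style mapping. All vectors and ids are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cross_media_retrieve(query_vec, gallery):
    """Rank items of another modality by similarity in the shared space."""
    return sorted(gallery, key=lambda item: cosine(query_vec, item["vec"]),
                  reverse=True)

# toy shared-space embeddings (assumed already aligned across modalities)
image_query = [0.9, 0.1, 0.0]
texts = [
    {"id": "t1", "vec": [1.0, 0.0, 0.0]},
    {"id": "t2", "vec": [0.0, 1.0, 0.0]},
    {"id": "t3", "vec": [0.5, 0.5, 0.0]},
]
ranked = cross_media_retrieve(image_query, texts)
print([t["id"] for t in ranked])  # ['t1', 't3', 't2']
```

The hard part of cross-media computing is learning the shared space in the first place; once it exists, retrieval reduces to nearest-neighbor search as above.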

    A novel data-driven robust framework based on machine learning and knowledge graph for disease classification

    As Noncommunicable Diseases (NCDs) are affected or controlled by diverse factors such as age, region, timeliness or seasonality, they are difficult to treat accurately, which impacts patients' daily life and work. Although a number of researchers have already made progress (clinical as well as computer-based) on certain diseases, the current situation calls for improvement through computer technologies such as data mining and deep learning. In addition, progress in NCD research has been hampered by the privacy of health and medical data. In this paper, a hierarchical approach is proposed to study the effects of various factors on diseases, and an extensible data-driven framework named d-DC is presented. d-DC is able to classify a disease according to occupation, given that the disease occurs in a certain region. For data collection, we combined personal or family medical records with traditional methods to build a data acquisition model. It not only realizes automatic collection and replenishment of data, but also effectively tackles the model's cold-start problem when relatively little data is available. The gathered information spans structured and unstructured data (such as plain text, images or videos), which helps improve classification accuracy and the acquisition of new knowledge. Apart from adopting machine learning methods, d-DC employs a knowledge graph (KG) to classify diseases for the first time. Vectorizing medical texts with knowledge embeddings is a novel approach in the classification of diseases. When results are inconsistent, a medical expert system resolves them through knowledge bases or online experts.
    The results of d-DC are displayed using a combination of the KG and traditional methods, which provides an intuitive, reasonable interpretation of the results (highly descriptive). Experiments show that d-DC achieves better accuracy than previous methods. In particular, a fusion method called RKRE, based on both ResNet and the expert system, attained an average correct proportion of 86.95%, a promising feasibility study in the field of disease classification.
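    The exact fusion rule of RKRE is not given in the abstract; the sketch below shows one generic way to fuse two per-class probability distributions (e.g. a ResNet-style classifier and a KG-based one) and to defer to expert review when the fused top-2 scores are too close. The weight, margin, deferral rule and class names are all hypothetical, not the paper's method.

```python
def fuse_predictions(p_model, p_kg, weight=0.6, margin=0.1):
    """Weighted fusion of two per-class probability dicts. Returns the top
    class, or a deferral marker when the top-2 fused scores are too close
    (a hypothetical stand-in for routing the case to an expert system)."""
    classes = set(p_model) | set(p_kg)
    fused = {c: weight * p_model.get(c, 0.0) + (1 - weight) * p_kg.get(c, 0.0)
             for c in classes}
    ranked = sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) > 1 and ranked[0][1] - ranked[1][1] < margin:
        return "needs_expert_review", fused
    return ranked[0][0], fused

# hypothetical outputs of the two classifiers
label, fused = fuse_predictions({"Diabetes": 0.7, "Hypertension": 0.3},
                                {"Diabetes": 0.6, "Hypertension": 0.4})
print(label)  # Diabetes
```

A margin-based deferral like this is one common way to combine an automatic classifier with a human-in-the-loop expert system: confident cases are decided automatically, ambiguous ones escalate.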

    LMS-SM3 and HSS-SM3: Instantiating Hash-based Post-Quantum Signature Schemes with SM3

    We instantiate the hash-based post-quantum stateful signature schemes LMS and HSS, described in RFC 8554 and NIST SP 800-208, with the SM3 hash function, and report the results of a preliminary performance test.
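    Schemes like these plug a different hash function into an otherwise fixed construction. The sketch below illustrates that idea with a Lamport one-time signature, which is far simpler than the WOTS+/Merkle-tree machinery of LMS and HSS in RFC 8554, parameterized by the hash algorithm name. SHA-256 stands in for SM3 here, since `hashlib` only exposes SM3 when the underlying OpenSSL build provides it.

```python
import hashlib
import os

def H(data, algo="sha256"):
    # SM3 is not available in every Python build; SHA-256 stands in.
    return hashlib.new(algo, data).digest()

def keygen(algo="sha256"):
    """One secret/public preimage pair per digest bit (256 for SHA-256)."""
    sk = [[os.urandom(32), os.urandom(32)] for _ in range(256)]
    pk = [[H(s0, algo), H(s1, algo)] for s0, s1 in sk]
    return sk, pk

def sign(msg, sk, algo="sha256"):
    """Reveal one secret preimage per bit of the message digest."""
    digest = H(msg, algo)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(msg, sig, pk, algo="sha256"):
    digest = H(msg, algo)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i], algo) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))  # True
print(verify(b"hell0", sig, pk))  # False
```

Because each secret-key half may be revealed only once, such keys must never sign two messages; LMS/HSS manage this statefulness with Merkle trees of many one-time keys, which is exactly why NIST SP 800-208 stresses careful state handling.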

    XMSS-SM3 and MT-XMSS-SM3: Instantiating Extended Merkle Signature Schemes with SM3

    We instantiate the hash-based post-quantum stateful signature scheme XMSS and its multi-tree version, described in RFC 8391 and NIST SP 800-208, with the SM3 hash function, and report the results of a preliminary performance test.

    Sciences for The 2.5-meter Wide Field Survey Telescope (WFST)

    The Wide Field Survey Telescope (WFST) is a dedicated photometric survey facility under construction jointly by the University of Science and Technology of China and Purple Mountain Observatory. It is equipped with a primary mirror 2.5 m in diameter, an active optical system, and a mosaic CCD camera of 0.73 Gpix on the main focal plane, achieving high-quality imaging over a field of view of 6.5 square degrees. The installation of WFST at the Lenghu observing site is planned for the summer of 2023, and operation is scheduled to commence within three months afterward. WFST will scan the northern sky in four optical bands (u, g, r and i) at cadences from hourly/daily to semi-weekly in the deep high-cadence survey (DHS) and the wide field survey (WFS) programs, respectively. The WFS reaches depths of 22.27, 23.32, 22.84 and 22.31 AB magnitudes in the four bands, respectively, in a nominal 30-second exposure during a photometric night, enabling searches for a tremendous number of transients in the low-z universe and systematic investigation of the variability of Galactic and extragalactic objects. Intra-night 90 s exposures as deep as 23 and 24 mag in the u and g bands via the DHS provide a unique opportunity to explore energetic transients that demand high sensitivity, including the electromagnetic counterparts of gravitational-wave events detected by the second/third-generation GW detectors, supernovae within a few hours of their explosions, tidal disruption events, and luminous fast optical transients even beyond a redshift of 1. Meanwhile, the final 6-year co-added images, anticipated to reach about g = 25.5 mag in the WFS, or even deeper by 1.5 mag in the DHS, will be of significant value to general Galactic and extragalactic sciences. The highly uniform legacy surveys of WFST will also serve as an indispensable complement to those of LSST, which monitors the southern sky.
    Comment: 46 pages, submitted to SCMP
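    The relation between the quoted single-visit and co-added depths can be sanity-checked with the standard background-limited scaling, where the point-source depth improves by 2.5·log10(√N) mag after co-adding N equal exposures. The N = 55 below is purely illustrative, chosen to match the quoted numbers; it is not a figure from the survey plan.

```python
import math

def coadd_depth(single_depth_mag, n_exposures):
    """Point-source depth after co-adding n equal exposures, assuming
    background-limited imaging (S/N grows as sqrt(n))."""
    return single_depth_mag + 2.5 * math.log10(math.sqrt(n_exposures))

# WFS single-visit g-band depth quoted above
g_single = 23.32
print(round(coadd_depth(g_single, 55), 2))  # 25.5, consistent with the final ~25.5 mag
```

Under this scaling, the quoted extra ~1.5 mag of DHS depth over WFS would correspond to roughly 10x more effective exposure time per field, which matches the higher cadence of that program in spirit.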

    A Statistical Model of Cleavage Fracture Toughness of Ferritic Steel DIN 22NiMoCr37 at Different Temperatures

    It is conventional practice to adopt Weibull statistics with a modulus of 4 to characterize the statistical distribution of the cleavage fracture toughness of ferritic steels, albeit on rather weak physical justification. In this study, a statistical model for the cleavage fracture toughness of ferritic steels is proposed based on a new local approach model. The model suggests that there exists a unique correlation among the cumulative failure probability, fracture toughness and yield strength. This correlation is validated against the Euro fracture toughness dataset for 1CT specimens at four different temperatures, and it deviates from the Weibull statistical model with a modulus of 4.
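    The conventional model the paper argues against is the three-parameter Weibull distribution with modulus m = 4, as in the ASTM E1921 master-curve form with threshold K_min = 20 MPa·√m. A minimal sketch of that baseline (the scale parameter K0 below is illustrative, not fitted to the Euro dataset):

```python
import math

def weibull_failure_probability(K, K0, K_min=20.0, m=4):
    """Cumulative failure probability for toughness K (MPa*sqrt(m)):
    P_f = 1 - exp(-((K - K_min) / (K0 - K_min))**m), zero below threshold."""
    if K <= K_min:
        return 0.0
    return 1.0 - math.exp(-((K - K_min) / (K0 - K_min)) ** m)

# at K = K0, failure probability is 1 - 1/e ~ 63.2% by construction
print(round(weibull_failure_probability(100.0, K0=100.0), 3))  # 0.632
```

The paper's point is that a model of this fixed-modulus form does not track the Euro dataset across temperatures, whereas its proposed correlation of failure probability, toughness and yield strength does.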