31 research outputs found

    Effect of Functional Oligosaccharides and Ordinary Dietary Fiber on Intestinal Microbiota Diversity

    Functional oligosaccharides, known as prebiotics, and ordinary dietary fiber play important roles in modulating the structure of the intestinal microbiota. To investigate their effects on the intestinal microecosystem, mice were fed for 3 weeks with three diets containing different prebiotics: GI (galacto-oligosaccharides and inulin), PF (polydextrose and insoluble dietary fiber from bran), and a GI/PF mixture (GI and PF, 1:1); 16S rRNA gene sequencing and metabolic analysis of mouse feces were then conducted. Compared to the control group, the different prebiotic diets had varying effects on the structure and diversity of the intestinal microbiota. GI and PF supplementation led to significant changes in the intestinal microbiota, including an increase in Bacteroides and a decrease in Alloprevotella in the GI-fed group; these changes were reversed in the PF-fed group. Intriguingly, in the GI/PF mixture-fed group, the intestinal microbiota had a structure similar to that of the control group, and flora diversity was upregulated. Fecal metabolic profiling showed that the diversity of the intestinal microbiota helped maintain the stability of fecal metabolites. Our results showed that a single type of oligosaccharide or dietary fiber reduced the number of bacterial species and selectively promoted the growth of Bacteroides or Alloprevotella, resulting in an increase in diamine oxidase (DAO) and/or trimethylamine N-oxide (TMAO) values that was detrimental to health. In contrast, flora diversity improved and DAO values decreased significantly with the addition of the nutritionally balanced GI/PF mixture. Thus, we suggest that maintaining microbiota diversity and the abundance of dominant bacteria in the intestine is extremely important for health, and that adding a combination of oligosaccharides and dietary fiber helps maintain the health of the intestinal microecosystem.
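The flora diversity reported from such 16S rRNA analyses is commonly summarized with an alpha-diversity index such as Shannon's H. A minimal sketch of that calculation, on hypothetical OTU count vectors (not the study's data):

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over taxon proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical OTU counts for two samples: an even community scores
# higher H than one dominated by a single genus.
even_sample = [25, 25, 25, 25]
skewed_sample = [97, 1, 1, 1]
print(shannon_index(even_sample))   # higher H
print(shannon_index(skewed_sample)) # lower H
```

A drop in H after a single-prebiotic diet would reflect the reduction in bacterial species the abstract describes.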

    Communication Optimization for Customizable Domain-Specific Computing

    This dissertation investigates communication optimization for customizable domain-specific computing at different levels of a customizable heterogeneous platform (CHP) to improve system performance and energy efficiency.
    Fabric-level optimization driven by emerging devices. Programmable fabrics (e.g., FPGAs) can improve domain-specific computing by >10x in energy efficiency over CPUs, since FPGAs can be customized to the application kernels in the target domain. But the programmable interconnects inside FPGAs occupy >50% of the FPGA's area, delay, and power. We propose a novel architecture of programmable interconnects based on resistive RAM (RRAM), a type of emerging device with high density and low power. We optimize the layout and the programming circuit of the new architecture, and we also extend RRAM benefits to routing buffers. Observing the high defect rate of emerging RRAM manufacturing, we further develop a defect-aware communication mechanism. Conventional defect avoidance leaves a large portion of the chip in the new architecture unusable, so we propose defect-utilization methodologies that treat stuck-closed defects as shorting constraints in the routing of signals. We develop a scalable algorithm to perform timing-driven routing under these extra constraints and successfully suppress the impact of defects.
    Chip-level optimization driven by accelerator-centric architectures. A chip can also be customized to an application domain by integrating a sea of accelerators designed for the frequently used kernels in the domain. The design of interconnects among customized accelerators and shared resources (e.g., shared memories) is a serious challenge in chip design: accelerators run 100x faster than CPUs and pose a high data demand on the communication infrastructure. To address this challenge, we develop a novel design of interconnects between accelerators and shared memories and exploit several optimization opportunities that emerge in accelerator-rich computing platforms. Experiments show that our design outperforms prior work optimized for CPU cores or signal routing. Another design challenge lies in the data-reuse optimization within an accelerator to minimize its off-chip accesses and on-chip buffer usage. Since a fully pipelined computation kernel consumes large amounts of data every clock cycle, and the data access pattern is the major difference among applications, existing accelerators use ad hoc data-reuse schemes that are carefully tuned per application to fit the data demand. To reduce the engineering cost of accelerator-rich architectures, we develop a data-reuse infrastructure that is generalized for the stencil computation domain and can be instantiated to the optimal design for any application in the domain. We demonstrate the robustness of our method over a set of real-life benchmarks.
    Server-level and cluster-level optimization driven by big data. In the era of big data, workloads can no longer fit into a single chip. Most data are stored on disks, and only a small part can be loaded into main memory during computation. Due to the low access speed of disks, our primary design goal becomes minimizing the data transfer between disks and main memory. We select a popular big-data application, the convolutional neural network (CNN), as a case study. We analyze the linear algebraic properties of CNNs and propose algorithmic modifications to reduce the total computational workload and disk accesses. Furthermore, when the application data become even larger, they need to be distributed among a cluster of server nodes, which motivates us to develop an accelerator-centric computing cluster. We test two machine learning applications, logistic regression and artificial neural networks (ANNs), on our prototype cluster and minimize the total data transfer incurred during computation in this cluster. We select distributed stochastic gradient descent (dSGD) as our training algorithm to eliminate inter-node communication within a training iteration, and we deploy an in-memory cluster computing infrastructure, Spark, to eliminate inter-node communication across training iterations. Baseline Spark supports only CPUs, so we develop a software layer that allows Spark tasks to offload their major computation to the accelerators with which each server node is equipped. During computation offloading, we group multiple tasks into a batch and transfer it to the target accelerator in one transaction to minimize the setup overhead of the data transfer between accelerators and host servers. We further implement accelerator data caching to eliminate unnecessary transfers of training data, based on the properties of iterative machine learning applications.
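The communication pattern that makes dSGD attractive here can be sketched in a single process: each simulated node runs SGD over its own data shard within an iteration (no communication), and only the small parameter vectors are exchanged and averaged between iterations. This is a minimal sketch on synthetic logistic-regression data; the actual cluster, Spark layer, and accelerator offload are not modeled:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable logistic-regression data,
# split row-wise across 4 simulated server nodes.
n, d, nodes = 400, 5, 4
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)
shards = list(zip(np.array_split(X, nodes), np.array_split(y, nodes)))

def local_sgd(w, Xs, ys, lr=0.5):
    """One pass of per-sample gradient steps on a node's local shard."""
    for xi, yi in zip(Xs, ys):
        p = 1.0 / (1.0 + np.exp(-xi @ w))  # sigmoid prediction
        w = w - lr * (p - yi) * xi          # logistic-loss gradient step
    return w

w = np.zeros(d)
for it in range(20):
    # Nodes train independently within the iteration; only the
    # d-dimensional parameter vectors are communicated and averaged.
    w = np.mean([local_sgd(w.copy(), Xs, ys) for Xs, ys in shards], axis=0)

acc = np.mean(((X @ w) > 0) == (y > 0.5))
print(f"training accuracy after parameter averaging: {acc:.2f}")
```

The per-iteration traffic is one d-dimensional vector per node, independent of shard size, which is the property the dissertation exploits.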

    FPGA-RR: A Novel FPGA Architecture with RRAM-Based Reconfigurable Interconnects

    In this paper we introduce a novel FPGA architecture with RRAM-based reconfiguration (FPGA-RR). The architecture focuses on redesigning the programmable interconnects, the dominant part of an FPGA. By renovating the FPGA routing structure with RRAMs, the architecture achieves significant benefits in area, performance, and energy consumption. FPGA-RR can be implemented with the existing CMOS-compatible RRAM fabrication process. A customized CAD flow is provided for FPGA-RR, with an advanced P&R tool named VPR-RR developed to handle its novel routing structure. We use the flow to verify the area, performance, and power benefits of FPGA-RR on the 20 largest MCNC benchmark circuits. Results show that FPGA-RR achieves 6.82x area savings, 3.09x speedup, and 4.33x energy savings.

    Minimizing Computation in Convolutional Neural Networks

    Abstract. Convolutional Neural Networks (CNNs) have been successfully used for many computer vision applications. These applications would benefit if the computational workload of CNNs could be reduced. In this work we analyze the linear algebraic properties of CNNs and propose an algorithmic modification to reduce their computational workload. A reduction of up to 47% can be achieved without any change in the image recognition results or the addition of any hardware accelerators.
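The linear-algebraic view of convolution that such analyses start from can be made concrete by lowering a 2-D convolution to a single matrix product (the standard im2col construction; a generic sketch, not the specific modification proposed in this paper):

```python
import numpy as np

def im2col(x, k):
    """Unroll every k-by-k patch of a 2-D input into one row of a matrix."""
    h, w = x.shape
    rows = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            rows.append(x[i:i + k, j:j + k].ravel())
    return np.array(rows)

def conv2d_via_matmul(x, kernel):
    """Valid 2-D cross-correlation expressed as a single matrix product."""
    k = kernel.shape[0]
    out_h, out_w = x.shape[0] - k + 1, x.shape[1] - k + 1
    return (im2col(x, k) @ kernel.ravel()).reshape(out_h, out_w)

x = np.arange(16.0).reshape(4, 4)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])
out = conv2d_via_matmul(x, kernel)
print(out)  # each entry is x[i, j] - x[i+1, j+1]
```

Once the convolutional layer is a matrix product, standard linear-algebra transformations can be applied to trade or eliminate arithmetic without changing the output.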

    Relationships between Risk Events, Personality Traits, and Risk Perception of Adolescent Athletes in Sports Training

    Personality traits are closely related to risky behaviors in various domains, including physical education, competition, and athletic training. Little is yet known, however, about how trait personality dimensions are associated with risk events, or how key factors such as risk perception affect the occurrence of risk events in adolescent athletes. The primary purpose of this study is to predict risk events by regression analysis on dimensions of personality, risk perception, and sport, and to examine the relations between risk events, risk perception, and the facets of the personality dimensions, using data collected from 664 adolescent athletes aged 13–18 years (364 male, 300 female). A secondary aim is to assess school-specific levels of training risk among sports schools, regular schools, and integrated sports-and-education schools. The results show that psychological events are most strongly predicted by personality traits, risk perception, and sport, followed by injury and nutrition events. Emotionality has the most significant positive correlation with risk events, while the other traits, except agreeableness, have significant negative correlations with risk events. The integrated schools are more conducive to the healthy development of adolescent athletes' personalities. Moreover, the research indicates that sports training can strengthen the development of different personality characteristics.
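Regressing risk events on personality and risk-perception scores follows the standard multiple-regression pattern. A minimal least-squares sketch with hypothetical variable names and synthetic standardized scores (not the study's data or its actual predictor set):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical predictor scores for 200 simulated athletes:
# columns = emotionality, agreeableness, risk_perception (standardized).
X = rng.normal(size=(200, 3))
# Synthetic outcome: event count rises with emotionality and
# falls with risk perception, plus noise.
y = 1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.3, size=200)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept and coefficients:", np.round(coef, 2))
```

The fitted signs (positive for emotionality, negative for risk perception) mirror the direction of the correlations the abstract reports.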

    Prediction of Multiple Organ Failure Complicated by Moderately Severe or Severe Acute Pancreatitis Based on Machine Learning: A Multicenter Cohort Study

    Background. Multiple organ failure (MOF) may increase the mortality rate of moderately severe (MSAP) or severe acute pancreatitis (SAP). This study aimed to use machine learning to predict the risk of MOF in the course of the disease. Methods. Clinical and laboratory features with significant differences between patients with and without MOF were screened out by univariate analysis. Prediction models were developed on the selected features with six machine learning methods. The models were internally validated with five-fold cross-validation, and a series of optimal feature subsets was generated for the corresponding models. A test set was used to evaluate the predictive performance of the six models. Results. 305 (68%) of 455 patients with MSAP or SAP developed MOF. Eighteen features with significant differences between the groups with and without MOF in the training and validation sets were used for modeling. Interleukin-6 level, creatinine level, and the kinetic time were the three most important features in the optimal feature subsets selected by K-fold cross-validation. The adaptive boosting algorithm (AdaBoost) showed the best predictive performance, with the highest AUC value (0.826; 95% confidence interval: 0.740 to 0.888). The sensitivity of AdaBoost (80.49%) and the specificity of logistic regression (93.33%) were the best scores among the six models on the test set. Conclusions. A model predicting MOF complicating MSAP or SAP was successfully developed based on machine learning. Its predictive performance was evaluated on a test set, on which AdaBoost showed satisfactory performance. The study is registered with the China Clinical Trial Registry (Identifier: ChiCTR1800016079).
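The modeling scheme described here (AdaBoost with five-fold cross-validated AUC) can be sketched with scikit-learn. Synthetic data stands in for the clinical feature table; the cohort size, feature count, and class balance are taken from the abstract, but the features themselves are simulated:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the cohort: 455 patients, 18 screened features,
# ~68% positive (MOF) class, as reported in the abstract.
X, y = make_classification(n_samples=455, n_features=18, n_informative=6,
                           weights=[0.32, 0.68], random_state=0)

model = AdaBoostClassifier(n_estimators=100, random_state=0)
# Five-fold cross-validated AUC, mirroring the internal validation scheme.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.3f}")
```

On the real data, the same loop would be repeated for each of the six learners and the selected feature subsets before the held-out test-set evaluation.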