
    Serum lipids, apoproteins and nutrient intake in rural Cretan boys consuming high-olive-oil diets

    A high intake of olive oil has produced high levels of high-density and low levels of low-density lipoprotein cholesterol in short-term dietary trials. To investigate long-term effects of olive oil we have studied the diet and serum lipids of boys in Crete, where a high olive oil consumption is the norm. Seventy-six healthy rural Cretan boys aged 7–9 years were studied. The diet was assessed by a 2-day dietary recall. Blood was collected according to a standardized protocol and sera were analyzed in a rigidly standardized laboratory. The mean daily intake of energy was 11.0 MJ (2629 kcal). The intake of fat (45.0% of energy) and oleic acid (27.2% of energy) was high, and that of saturated fat low (10.0% of energy), reflecting a high consumption of olive oil. The high consumption of olive oil was confirmed by a high proportion of oleic acid (27.1%) in serum cholesteryl fatty acids. Mean concentration of serum total cholesterol was 4.42 mmol l−1 (171 mg dl−1), of HDL-cholesterol 1.40 mmol l−1 (54 mg dl−1), of serum triglycerides 0.59 mmol l−1 (52 mg dl−1), of apo-A1 1210 mg l−1 and of LDL apo-B 798 mg l−1. The body mass index of the Cretan boys (18.2 kg m−2) was on average 2 kg m−2 higher than that of boys from other countries. Contrary to our expectation, the Cretan boys did not show a more favourable serum lipoprotein pattern than boys from more westernized countries studied previously using the same protocol. Our hypothesis that a typical, olive-oil-rich Cretan diet causes a relatively high HDL- to total cholesterol ratio is not supported by the present findings.
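    The abstract reports each lipid concentration in both mmol l−1 and mg dl−1. The two can be checked against each other with the standard conversion factors (cholesterol: 1 mmol l−1 ≈ 38.67 mg dl−1; triglycerides: 1 mmol l−1 ≈ 88.57 mg dl−1), and the HDL-to-total-cholesterol ratio the hypothesis concerns follows directly. A minimal sketch using only the values quoted above:

    ```python
    # Conversion factors (standard; not stated in the abstract itself)
    CHOL_MG_PER_MMOL = 38.67  # mg/dl per mmol/l, cholesterol
    TG_MG_PER_MMOL = 88.57    # mg/dl per mmol/l, triglycerides

    total_chol_mmol = 4.42    # serum total cholesterol, mmol/l
    hdl_mmol = 1.40           # HDL cholesterol, mmol/l
    tg_mmol = 0.59            # serum triglycerides, mmol/l

    print(round(total_chol_mmol * CHOL_MG_PER_MMOL))  # 171 mg/dl, as reported
    print(round(hdl_mmol * CHOL_MG_PER_MMOL))         # 54 mg/dl, as reported
    print(round(tg_mmol * TG_MG_PER_MMOL))            # 52 mg/dl, as reported

    # HDL-to-total-cholesterol ratio, the quantity the hypothesis concerns
    ratio = hdl_mmol / total_chol_mmol
    print(round(ratio, 2))  # 0.32
    ```

    All three converted values agree with the mg dl−1 figures given in parentheses in the abstract.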

    Erdős Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs

    Combinatorial optimization (CO) problems are notoriously challenging for neural networks, especially in the absence of labeled instances. This work proposes an unsupervised learning framework for CO problems on graphs that can provide integral solutions of certified quality. Inspired by Erdős' probabilistic method, we use a neural network to parametrize a probability distribution over sets. Crucially, we show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem. The probabilistic proof of existence is then derandomized to decode the desired solutions. We demonstrate the efficacy of this approach to obtain valid solutions to the maximum clique problem and to perform local graph clustering. Our method achieves competitive results on both real datasets and synthetic hard instances.
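    The derandomization step the abstract mentions is the classical method of conditional expectation: each node carries a Bernoulli probability (produced by the network in the paper), and nodes are greedily fixed one at a time to whichever value keeps the conditional expected objective at least as good. A toy sketch of that step, using a small max-cut objective for concreteness rather than the paper's clique loss, with hand-picked probabilities standing in for network outputs:

    ```python
    # Toy derandomization by the method of conditional expectation.
    # Hypothetical inputs: a 4-node graph and per-node Bernoulli probabilities
    # (in the paper these would come from a trained GNN).
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    p = [0.6, 0.4, 0.7, 0.5]

    def expected_cut(x, edges):
        # E[cut] under independent Bernoullis: edge (u, v) is cut with
        # probability x_u * (1 - x_v) + x_v * (1 - x_u)
        return sum(x[u] * (1 - x[v]) + x[v] * (1 - x[u]) for u, v in edges)

    x = list(p)
    for i in range(len(p)):
        # fix node i to whichever side keeps the conditional expectation highest
        x[i] = 1.0
        hi = expected_cut(x, edges)
        x[i] = 0.0
        lo = expected_cut(x, edges)
        x[i] = 1.0 if hi >= lo else 0.0

    # x is now fully integral, so the "expectation" is the actual cut value,
    # guaranteed to be at least the initial expected cut.
    cut = int(expected_cut(x, edges))
    print([int(v) for v in x], cut)
    ```

    Because each greedy choice never decreases the conditional expectation, the decoded integral solution is at least as good as the expectation under the learned distribution, which is the existence guarantee the abstract refers to.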

    Partition and Code: learning how to compress graphs

    Can we use machine learning to compress graph data? The absence of ordering in graphs poses a significant challenge to conventional compression algorithms, limiting their attainable gains as well as their ability to discover relevant patterns. On the other hand, most graph compression approaches rely on domain-dependent handcrafted representations and cannot adapt to different underlying graph distributions. This work aims to establish the necessary principles a lossless graph compression method should follow to approach the entropy storage lower bound. Instead of making rigid assumptions about the graph distribution, we formulate the compressor as a probabilistic model that can be learned from data and generalise to unseen instances. Our "Partition and Code" (PnC) framework entails three steps: first, a partitioning algorithm decomposes the graph into subgraphs, then these are mapped to the elements of a small dictionary on which we learn a probability distribution, and finally, an entropy encoder translates the representation into bits. All the components (partitioning, dictionary and distribution) are parametric and can be trained with gradient descent. We theoretically compare the compression quality of several graph encodings and prove, under mild conditions, that PnC achieves compression gains that grow either linearly or quadratically with the number of vertices. Empirically, PnC yields significant compression improvements on diverse real-world networks.
    Comment: Published at NeurIPS 202
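    The three-step pipeline ends with an entropy coder, whose cost is determined entirely by the learned probabilities: a partition encoded as a sequence of dictionary atoms costs −Σ log2 P(atom) bits. A minimal sketch of that final accounting step, with a hypothetical dictionary and partition standing in for the learned components:

    ```python
    import math

    # Hypothetical learned dictionary of frequent subgraphs with their
    # probabilities (in the paper these are trained with gradient descent).
    dictionary_probs = {
        "triangle": 0.5,
        "edge": 0.3,
        "star3": 0.2,
    }

    # Hypothetical output of the partitioning step: the graph decomposed
    # into a sequence of dictionary atoms.
    partition = ["triangle", "triangle", "edge", "star3", "edge"]

    # Ideal entropy-coding cost in bits: -sum log2 P(atom).
    bits = -sum(math.log2(dictionary_probs[a]) for a in partition)
    print(round(bits, 2))
    ```

    Making the dictionary probabilities match the empirical atom frequencies minimizes this quantity, which is why learning the distribution from data moves the code length toward the entropy lower bound the abstract targets.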