
    Review: dual benefits, compositions, recommended storage, and intake duration of mother's milk

    Breastfeeding benefits both infants and mothers. Nutrients in mother's milk help protect infants from multiple diseases, including infections, cancers, diabetes, and gastrointestinal and respiratory diseases. We performed literature mining on 31,496 mother's-milk-related abstracts from PubMed; the results suggest the need for individualized mother's milk fortification and appropriate maternal supplementation (e.g. probiotics, vitamin D), because the composition of mother's milk (e.g. fatty acids) varies with maternal diet and with responses to infection in mothers and/or infants. We review in detail the variability observed in mother's milk composition and its possible health effects in infants. We also review the effects of storage practices on the nutrients in mother's milk, recommended durations of mother's milk intake, and the associated health benefits. Comment: 70 pages, 1 Figure

    Study of acoustic emission due to vaporisation of superheated droplets at higher pressure

    Bubble nucleation in a superheated liquid can be controlled by adjusting the ambient pressure and temperature. At higher pressure the threshold energy for bubble nucleation increases, and we have observed that the amplitude of the acoustic emission during vaporisation of a superheated droplet decreases with increasing pressure at any given temperature. Other acoustic parameters, such as the primary harmonic frequency and the decay time constant of the acoustic signal, also decrease with increasing pressure. This behaviour is independent of the type of superheated liquid. The decrease in signal amplitude limits the detection of bubble nucleation at higher pressure. We explain this effect through the dynamics of microbubble growth in the superheated liquid. Comment: 11 pages, 9 figures
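
    For orientation, classical homogeneous-nucleation theory makes the pressure dependence of the threshold explicit; the following textbook relations (a sketch, not taken from the paper) give the critical bubble radius and the minimum work needed to form it:

    ```latex
    % Classical nucleation estimates: \sigma = surface tension,
    % p_v = vapour pressure of the superheated liquid, p_0 = ambient pressure.
    r_c = \frac{2\sigma}{p_v - p_0}, \qquad
    W_c = \frac{16\pi\sigma^3}{3\,(p_v - p_0)^2}
    ```

    As the ambient pressure p_0 rises toward p_v, the denominator shrinks, so the threshold work W_c grows, consistent with the suppression of nucleation (and of the accompanying acoustic emission) reported above.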

    Variational Inference of Disentangled Latent Concepts from Unlabeled Observations

    Disentangled representations, in which the higher-level generative factors of the data are reflected in disjoint latent dimensions, offer several benefits, such as ease of deriving invariant representations, transferability to other tasks, and interpretability. We consider the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and propose a variational-inference-based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over the observed data that encourages disentanglement. We also propose a new disentanglement metric which is better aligned with the qualitative disentanglement observed in the decoder's output. We empirically observe significant improvement over existing methods in terms of both disentanglement and data likelihood (reconstruction quality). Comment: ICLR 2018 version
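
    As a rough illustration of such a regularizer, the sketch below (PyTorch; the weights lambda_od and lambda_d are illustrative assumptions, not the paper's settings) penalizes the deviation of the covariance of the encoder means from the identity matrix, which decorrelates the latent dimensions and keeps them at unit scale:

    ```python
    # Hedged sketch of a covariance-matching disentanglement regularizer.
    import torch

    def dip_regularizer(mu, lambda_od=10.0, lambda_d=5.0):
        """mu: [batch, latent_dim] posterior means from the encoder.
        Penalizes deviation of Cov[mu] from the identity matrix."""
        mu_centered = mu - mu.mean(dim=0, keepdim=True)
        cov = mu_centered.t() @ mu_centered / mu.shape[0]   # [d, d]
        diag = torch.diagonal(cov)
        off_diag = cov - torch.diag(diag)
        return (lambda_od * (off_diag ** 2).sum()
                + lambda_d * ((diag - 1.0) ** 2).sum())

    # total_loss = recon_loss + kl_loss + dip_regularizer(mu)
    ```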

    A Deep Learning Approach to Data-driven Parameterizations for Statistical Parametric Speech Synthesis

    Nearly all statistical parametric speech synthesizers today use Mel Cepstral coefficients as the vocal tract parameterization of the speech signal. Mel Cepstral coefficients were never intended for a parametric speech synthesis framework, but as yet there has been little success in creating a better parameterization more suited to synthesis. In this paper, we use deep learning algorithms to investigate a data-driven parameterization technique designed for the specific requirements of synthesis. We create an invertible, low-dimensional, noise-robust encoding of the Mel Log Spectrum by training a tapered Stacked Denoising Autoencoder (SDA). This SDA is then unwrapped and used as the initialization for a Multi-Layer Perceptron (MLP). The MLP is fine-tuned by training it to reconstruct the input at the output layer, and is then split down the middle to form encoding and decoding networks. These networks produce a parameterization of the Mel Log Spectrum intended to better fulfill the requirements of synthesis. Results are reported for experiments conducted with the resulting parameterization and the ClusterGen speech synthesizer.
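
    A minimal sketch of the pipeline, assuming illustrative layer sizes and noise level (PyTorch; not the paper's exact architecture): a tapered bottleneck network is trained to reconstruct clean input from a corrupted copy, and the encoder half alone then serves as the parameterization.

    ```python
    # Hedged sketch of the tapered-SDA-to-MLP idea.
    import torch
    import torch.nn as nn

    sizes = [256, 128, 64, 32]      # tapered: each layer smaller than the last

    class BottleneckMLP(nn.Module):
        def __init__(self, sizes):
            super().__init__()
            enc, dec = [], []
            for a, b in zip(sizes, sizes[1:]):              # 256->128->64->32
                enc += [nn.Linear(a, b), nn.Sigmoid()]
            for a, b in zip(sizes[::-1], sizes[::-1][1:]):  # 32->64->128->256
                dec += [nn.Linear(a, b), nn.Sigmoid()]
            self.encoder, self.decoder = nn.Sequential(*enc), nn.Sequential(*dec)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = BottleneckMLP(sizes)
    x = torch.rand(8, 256)                    # stand-in for Mel Log Spectrum frames
    noisy = x + 0.1 * torch.randn_like(x)     # denoising-style corruption
    loss = nn.functional.mse_loss(model(noisy), x)  # reconstruct the clean input
    # After fine-tuning, model.encoder alone yields the low-dim parameterization
    # and model.decoder inverts it, which is what synthesis requires.
    ```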

    Minimizing Inputs for Strong Structural Controllability

    The notion of strong structural controllability (s-controllability) allows controllability properties of large linear time-invariant systems to be determined even when numerical values of the system parameters are not known a priori: s-controllability guarantees controllability for all numerical realizations of the system parameters. We address the optimization problem of minimal-cardinality input selection for s-controllability. Previous work shows that not only is the optimization problem NP-hard, but finding an approximate solution is also hard. We propose a randomized algorithm based on the notion of zero forcing sets that obtains an optimal solution with high probability. We compare the performance of the proposed algorithm with a known heuristic [1] on synthetic random systems and five real-world networks, viz. the IEEE 39-bus system, a re-tweet network, a protein-protein interaction network, the US airport network, and a network of physicians. Our algorithm performs much better than the heuristic in each of these cases.
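
    The zero-forcing rule at the heart of this approach is easy to state. A minimal sketch (pure Python; the graph encoding is an illustrative choice): a set of "black" input vertices is zero forcing if repeatedly applying the rule colours the whole graph black.

    ```python
    # Hedged sketch of the zero-forcing colouring rule.
    def is_zero_forcing(adj, seeds):
        """adj: dict vertex -> set of neighbours; seeds: initial black set.
        Rule: a black vertex with exactly one white neighbour forces that
        neighbour black. Return True iff every vertex ends up black."""
        black = set(seeds)
        changed = True
        while changed:
            changed = False
            for v in list(black):
                white = [u for u in adj[v] if u not in black]
                if len(white) == 1:          # the forcing rule applies
                    black.add(white[0])
                    changed = True
        return len(black) == len(adj)

    # Example: the path 0-1-2-3 is forced by the single seed vertex {0}.
    path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    assert is_zero_forcing(path, {0})
    ```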

    Recurrent Neural Network Postfilters for Statistical Parametric Speech Synthesis

    In the last two years, numerous papers have looked into using Deep Neural Networks to replace the acoustic model in traditional statistical parametric speech synthesis. However, far less attention has been paid to approaches such as DNN-based postfiltering, where DNNs work in conjunction with traditional acoustic models. In this paper, we investigate the use of Recurrent Neural Networks as a potential postfilter for synthesis. We explore the possibility of replacing existing postfilters, and highlight the ease with which arbitrary new features can be added as input to the postfilter. We also try a novel approach of jointly training the Classification And Regression Tree and the postfilter, rather than the traditional approach of training them independently.
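
    A minimal sketch of such a postfilter, assuming illustrative feature and hidden dimensions (PyTorch; not the paper's exact network): a bidirectional LSTM predicts a per-frame residual correction to the synthesized acoustic features.

    ```python
    # Hedged sketch of an RNN postfilter for synthesized acoustic features.
    import torch
    import torch.nn as nn

    class RNNPostfilter(nn.Module):
        def __init__(self, feat_dim=60, hidden=128):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True,
                               bidirectional=True)
            self.out = nn.Linear(2 * hidden, feat_dim)

        def forward(self, x):              # x: [batch, frames, feat_dim]
            h, _ = self.rnn(x)
            return x + self.out(h)         # residual correction per frame

    postfilter = RNNPostfilter()
    synthesized = torch.randn(4, 100, 60)  # stand-in for synthesizer output
    refined = postfilter(synthesized)
    # The training target would be natural-speech features for the same frames;
    # arbitrary extra inputs (e.g. linguistic features) can simply be
    # concatenated to x, which is the flexibility highlighted above.
    ```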

    A Learnable Distortion Correction Module for Modulation Recognition

    Modulation recognition is a challenging task when performing spectrum sensing in a cognitive radio setup. Recently, deep convolutional neural networks (CNNs) have been shown to achieve state-of-the-art accuracy for modulation recognition [survey]. However, a wireless channel distorts the signal, and CNNs are not explicitly designed to undo these artifacts. To improve the performance of CNN-based recognition schemes, we propose a signal distortion correction module (CM) and show that this CM+CNN scheme achieves better accuracy than existing schemes. The proposed CM is also based on a neural network: it estimates the random carrier frequency and phase offset introduced by the channel and feeds the estimate to a stage that undoes this distortion right before CNN-based modulation recognition. Its output is differentiable with respect to its weights, which allows it to be trained end-to-end with the modulation recognition CNN based on the received signal. For supervision, only the modulation scheme label is used; knowledge of the true frequency or phase offset is not required.
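
    A hedged sketch of the CM idea (PyTorch; shapes, layer sizes, and the estimator architecture are illustrative assumptions): a small network estimates the frequency and phase offsets and derotates the IQ samples, and every step is differentiable, so gradients from the recognition CNN flow back into the estimator.

    ```python
    # Hedged sketch of a differentiable distortion-correction module.
    import torch
    import torch.nn as nn

    class CorrectionModule(nn.Module):
        def __init__(self, n_samples=128):
            super().__init__()
            self.estimator = nn.Sequential(
                nn.Linear(2 * n_samples, 64), nn.ReLU(),
                nn.Linear(64, 2))          # -> (freq_offset, phase_offset)
            self.register_buffer("t", torch.arange(n_samples).float())

        def forward(self, iq):             # iq: [batch, 2, n_samples] (I and Q)
            est = self.estimator(iq.flatten(1))
            ang = 2 * torch.pi * est[:, :1] * self.t + est[:, 1:]  # [batch, n]
            c, s = torch.cos(-ang), torch.sin(-ang)
            i, q = iq[:, 0], iq[:, 1]
            # complex multiply (i + jq) * exp(-j*ang): undo the rotation
            return torch.stack((i * c - q * s, i * s + q * c), dim=1)

    # corrected = CorrectionModule()(received); logits = recognition_cnn(corrected)
    ```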

    Enhanced capillary pumping through evaporation assisted leaf-mimicking micropumps

    Pumping fluids without the aid of an external power source is desirable in a number of applications, ranging from cooling of microelectronic circuits to Micro Total Analysis Systems (micro-TAS). Although several microfluidic pumps exist, passive micropumps demonstrate better energy efficiency while providing better control over the pumping rate and its operation. The fluid pumping rate and easy maneuverability are critical in some applications; therefore, in the current work, we have developed a leaf-mimicking micropump that demonstrates a ~6-fold increase in volumetric pumping rate compared to micropumps with a single-capillary fluid delivery system. We describe a simple, scalable, yet inexpensive method to design and fabricate these leaf-mimicking micropumps. The microstructure of the micropumps was characterised by scanning electron microscopy, and their pumping performance (volumetric pumping rate and sustained pressure head) was assessed experimentally. The working principle of the proposed micropump is attributed to its structural elements: branched microchannels deliver the fluid, acting like the veins of a leaf, while the connected microporous support resembles the mesophyll cell matrix, instantaneously transferring the delivered fluid by capillary action to multiple pores that mimic the stomata for evaporation. Such a design will enable efficient delivery of the desired volume of fluid to any 2D/3D micro/nanofluidic device used in engineering and biological applications. Comment: 19 pages, including 6 figures and 1 table. The work was presented as an oral presentation at the National Conference on Convergence of Pharmaceutical Sciences and Biomedical Technology, National Institute of Pharmaceutical Education and Research, Ahmadabad, India (26th March - 28th March, 2018).
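
    The working principle can be made quantitative with the standard Young-Laplace estimate of the capillary driving pressure (a textbook relation, not taken from the paper):

    ```latex
    % Capillary driving pressure: \gamma = surface tension, \theta = contact
    % angle, r = effective pore radius of the microporous support.
    \Delta P = \frac{2\gamma\cos\theta}{r}
    ```

    Smaller pores in the mesophyll-like matrix thus sustain a larger pressure head, while the branched vein-like channels feed many evaporating stomata-like pores in parallel, which is a plausible source of the increased volumetric pumping rate.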

    Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference

    Semi-supervised learning methods using Generative Adversarial Networks (GANs) have shown promising empirical success recently. Most of these methods use a shared discriminator/classifier that discriminates real examples from fake ones while also predicting the class label. Motivated by the ability of the GAN generator to capture the data manifold well, we propose to estimate the tangent space to the data manifold using GANs and employ it to inject invariances into the classifier. In the process, we propose enhancements over existing methods for learning the inverse mapping (i.e., the encoder) which greatly improve the semantic similarity of the reconstructed sample to the input sample. We observe considerable empirical gains in semi-supervised learning over baselines, particularly when the number of labeled examples is low. We also provide insights into how fake examples influence the semi-supervised learning procedure. Comment: NIPS 2017 accepted version, including appendix
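
    One way to picture the tangent-space idea (a sketch assuming trained generator G, encoder E, and classifier modules; not the authors' code): columns of the Jacobian dG/dz at z = E(x) approximately span the tangent space of the data manifold at x, and a tangent-prop-style penalty makes the classifier locally invariant along such directions.

    ```python
    # Hedged sketch: penalize classifier sensitivity along manifold tangents
    # obtained as Jacobian-vector products of the generator.
    import torch

    def tangent_penalty(G, classifier, z, num_dirs=4, eps=1e-3):
        """z: latent codes, e.g. z = E(x). Random latent directions v map to
        tangent directions (dG/dz) v at the generated point x = G(z)."""
        penalty = 0.0
        for _ in range(num_dirs):
            v = torch.randn_like(z)        # random direction in latent space
            x, jvp = torch.autograd.functional.jvp(
                G, (z,), (v,), create_graph=True)  # jvp = (dG/dz) v
            diff = classifier(x + eps * jvp) - classifier(x)
            penalty = penalty + (diff ** 2).mean() / eps ** 2
        return penalty / num_dirs
    ```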

    An Incremental Slicing Method for Functional Programs

    Several applications of slicing require a program to be sliced with respect to more than one slicing criterion; program specialization, parallelization, and cohesion measurement are examples of such applications. These applications can benefit from an incremental static slicing method in which a significant part of the computation for slicing with respect to one criterion can be reused for another. In this paper, we consider the problem of incremental slicing of functional programs. We first present a non-incremental version of the slicing algorithm which performs a polyvariant analysis of functions. Since polyvariant analyses tend to be costly, we compute a compact context-independent summary of each function and then use this summary at the call sites of the function. The construction of the function summary is non-trivial and helps in the development of the incremental version. The incremental method, on the other hand, consists of a one-time pre-computation step that uses the non-incremental version to slice the program with respect to a fixed default slicing criterion and processes the results into a canonical form. Presented with an actual slicing criterion, the incremental step involves a low-cost computation that uses the results of the pre-computation to obtain the slice. We have implemented a prototype of the slicer for a pure subset of Scheme, with pairs and lists as the only algebraic data types. Our experiments show that the incremental step of the slicer runs orders of magnitude faster than the non-incremental version. We have also proved the correctness of our incremental algorithm with respect to the non-incremental version.
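
    A toy sketch of the incremental pattern (pure Python; the dependence relation below is illustrative, not the paper's analysis): a one-time pass computes a transitive-closure table, after which any slicing criterion is answered by a cheap lookup rather than a full re-analysis.

    ```python
    # Hedged toy sketch of "pre-compute once, answer many criteria cheaply".
    def precompute(deps):
        """deps: line -> set of lines it directly depends on.
        Returns the transitive closure (the canonical, reusable table)."""
        closure = {n: set(d) for n, d in deps.items()}
        changed = True
        while changed:
            changed = False
            for n in closure:
                extra = set().union(*(closure[m] for m in closure[n])) - closure[n]
                if extra:
                    closure[n] |= extra
                    changed = True
        return closure

    deps = {1: set(), 2: {1}, 3: {2}, 4: {1}}
    table = precompute(deps)             # one-time, relatively expensive step
    slice_for_line_3 = {3} | table[3]    # per-criterion step: just a lookup
    assert slice_for_line_3 == {1, 2, 3}
    ```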