    Impact of Spontaneous Fermentation and Inoculum with Natural Whey Starter on Peptidomic Profile and Biological Activities of Cheese Whey: A Comparative Study

    Fermentation is a promising solution to valorize cheese whey, the main by-product of the dairy industry. In Parmigiano Reggiano cheese production, natural whey starter (NWS), an undefined community of thermophilic lactic acid bacteria, is obtained from the previous day's residual whey through incubation at gradually decreasing temperature after curd cooking. The aim of this study was to investigate the effect of fermentation regime (spontaneous (S) and NWS-inoculated (I-NWS)) on biofunctionalities and the release of bioactive peptides during whey fermentation. In the S and I-NWS trials, proteolysis peaked after 24 h, which corresponded to the drop in pH and the maximum increase in lactic acid. Biological activities increased as a function of fermentation time. The NWS inoculum positively affected antioxidant activity, whilst S outperformed I-NWS in angiotensin-converting enzyme (ACE) and dipeptidyl peptidase IV (DPP-IV) inhibitory activities. Peptidomics revealed more than 400 peptides, mainly derived from β-casein, κ-casein, and α-lactalbumin. Among them, 49 were bioactive and 21 were ACE-inhibitors. Semi-quantitative analysis strongly correlated ACE-inhibitory activity with the summed abundance of ACE-inhibitory peptides. In both samples, the lactotripeptide isoleucine-proline-proline (IPP) was more abundant than valine-proline-proline (VPP), with the highest content in S after 24 h of fermentation. In conclusion, we demonstrated the ability of whey endogenous microbiota and NWS to extensively hydrolyze whey proteins, promoting the release of bioactive peptides and improving protein digestibility.

    Spiking Neural Networks for event-based action recognition: A new task to understand their advantage

    Spiking Neural Networks (SNNs) are characterised by their unique temporal dynamics, but the properties and advantages of such computations are still not well understood. To provide answers, in this work we demonstrate how spiking neurons can enable temporal feature extraction in feed-forward neural networks without the need for recurrent synapses, showing how their bio-inspired computing principles can be successfully exploited beyond energy-efficiency gains and highlighting their differences with respect to conventional neurons. We demonstrate this by proposing a new task, DVS-Gesture-Chain (DVS-GC), which allows, for the first time, the evaluation of the perception of temporal dependencies on a real event-based action recognition dataset. Our study shows that the widely used DVS Gesture benchmark can be solved by networks without temporal feature extraction, unlike the new DVS-GC, which demands an understanding of the ordering of the events. Furthermore, this setup allowed us to unveil the role of the leakage rate in spiking neurons for temporal processing tasks and demonstrated the benefits of "hard reset" mechanisms. Additionally, we show how time-dependent weights and normalization can enable the recognition of order by means of temporal attention.
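The leakage rate and "hard reset" discussed above can be sketched as a minimal leaky integrate-and-fire neuron; names and parameter values here are illustrative assumptions, not the paper's implementation:

```python
def lif_forward(spikes_in, weight=0.6, leak=0.8, threshold=1.0, hard_reset=True):
    """Minimal leaky integrate-and-fire sketch over a binary input train.

    `leak` is the membrane decay factor per time step; with a hard reset the
    potential is cleared to 0 after an output spike, with a soft reset the
    threshold is subtracted instead. Illustrative only.
    """
    v = 0.0
    spikes_out = []
    for s in spikes_in:
        v = leak * v + weight * s  # leaky integration of weighted input
        if v >= threshold:
            spikes_out.append(1)
            v = 0.0 if hard_reset else v - threshold
        else:
            spikes_out.append(0)
    return spikes_out

out = lif_forward([1, 1, 0, 1, 1, 0, 0, 1])  # → [0, 1, 0, 0, 1, 0, 0, 0]
```

Because the membrane potential decays between inputs, the neuron only fires on closely spaced input spikes, which is exactly the temporal sensitivity the leakage rate controls.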

    Simple and complex spiking neurons: perspectives and analysis in a simple STDP scenario

    Spiking neural networks (SNNs) are largely inspired by biology and neuroscience and leverage their ideas and theories to create fast and efficient learning systems. Spiking neuron models are adopted as core processing units in neuromorphic systems because they enable event-based processing. Integrate-and-fire (I&F) models are often adopted, with the simple Leaky I&F (LIF) being the most used, on grounds of efficiency and/or biological plausibility. Nevertheless, the adoption of LIF over other neuron models for use in artificial learning systems has not yet been rigorously justified. This work surveys neuron models in the literature and selects computational neuron models that are single-variable, efficient, and display different types of complexity. From this selection, we make a comparative study of three simple I&F neuron models, namely the LIF, the Quadratic I&F (QIF), and the Exponential I&F (EIF), to understand whether the use of more complex models increases the performance of the system and whether the choice of a neuron model can be guided by the task to be completed. The neuron models are tested within an SNN trained with Spike-Timing Dependent Plasticity (STDP) on classification tasks on the N-MNIST and DVS Gestures datasets. Experimental results reveal that more complex neurons match simpler ones in achieving high levels of accuracy on a simple dataset (N-MNIST), albeit requiring comparatively more hyper-parameter tuning. However, when the data possess richer spatio-temporal features, the QIF and EIF neuron models steadily achieve better results. This suggests that accurately selecting the model based on the richness of the feature spectrum of the data could improve the whole system's performance. Finally, the code implementing the spiking neurons in the SpykeTorch framework is made publicly available.
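The three single-variable models compared above differ only in their subthreshold membrane update. A minimal Euler-step sketch, with hypothetical function names and parameter values (spike emission and reset are handled separately and omitted here):

```python
import math

def step_lif(v, i_in, dt=1.0, tau=10.0, v_rest=0.0):
    # Leaky I&F: linear decay toward rest plus input current.
    return v + dt * (-(v - v_rest) / tau + i_in)

def step_qif(v, i_in, dt=1.0, a=0.04, v_rest=0.0, v_crit=1.0):
    # Quadratic I&F: quadratic nonlinearity between rest and critical voltage.
    return v + dt * (a * (v - v_rest) * (v - v_crit) + i_in)

def step_eif(v, i_in, dt=1.0, tau=10.0, v_rest=0.0, v_th=1.0, delta_t=0.5):
    # Exponential I&F: leak plus an exponential spike-initiation term.
    return v + dt * ((-(v - v_rest)
                      + delta_t * math.exp((v - v_th) / delta_t)) / tau + i_in)
```

The QIF and EIF nonlinearities give sharper spike initiation than the purely linear LIF leak, which is one reason the richer dynamics can matter on spatio-temporal data.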

    Frameworks for SNNs: a Review of Data Science-oriented Software and an Expansion of SpykeTorch

    Developing effective learning systems for Machine Learning (ML) applications in the Neuromorphic (NM) field requires extensive experimentation and simulation. Software frameworks aid and ease this process by providing a set of ready-to-use tools that researchers can leverage. The recent interest in NM technology has spurred the development of several new frameworks to this end, adding to the panorama of existing libraries from the neuroscience field. This work reviews nine frameworks for the development of Spiking Neural Networks (SNNs) that are specifically oriented towards data science applications. We emphasize the availability of spiking neuron models and learning rules, to more easily guide the choice of the most suitable framework for different types of research. Furthermore, we present an extension to the SpykeTorch framework that gives users access to a much broader choice of neuron models to embed in SNNs, and we make the code publicly available.

    Cultivable microbial diversity, peptide profiles, and bio-functional properties in Parmigiano Reggiano cheese

    Introduction: Lactic acid bacteria (LAB) communities shape the sensorial and functional properties of artisanal hard-cooked and long-ripened cheeses made with raw bovine milk like Parmigiano Reggiano (PR) cheese. While patterns of microbial evolution have been well studied in PR cheese, there is a lack of information about how this microbial diversity affects the metabolic and functional properties of PR cheese.
    Methods: To fill this information gap, we characterized the cultivable fraction of natural whey starter (NWS) and PR cheeses at different ripening times, both at the species and strain level, and investigated the possible correlation between microbial composition and the evolution of peptide profiles over cheese ripening.
    Results and discussion: The results showed that NWS was a complex community of several biotypes belonging to a few species, namely, Streptococcus thermophilus, Lactobacillus helveticus, and Lactobacillus delbrueckii subsp. lactis. A new species-specific PCR assay was successful in discriminating the cheese-associated species Lacticaseibacillus casei, Lacticaseibacillus paracasei, Lacticaseibacillus rhamnosus, and Lacticaseibacillus zeae. Based on the resolved patterns of species and biotype distribution, Lcb. paracasei and Lcb. zeae were most frequently isolated after 24 and 30 months of ripening, while the number of biotypes was inversely related to the ripening time. Peptidomics analysis revealed more than 520 peptides in cheese samples. To the best of our knowledge, this is the most comprehensive survey of peptides in PR cheese. Most of them were from β-caseins, which represent the best substrate for LAB cell-envelope proteases. The abundance of peptides from the β-casein 38–88 region continuously increased during ripening. Remarkably, this region contains precursors for the anti-hypertensive lactotripeptides VPP and IPP, as well as for β-casomorphins. We found that the ripening time strongly affects bioactive peptide profiles and that the occurrence of the Lcb. zeae species is positively linked to the incidence of eight anti-hypertensive peptides. This result highlights how the presence of specific LAB species is likely a pivotal factor in determining PR functional properties.

    Time series forecasting via derivative spike encoding and bespoke loss functions for SNNs

    The potential of neuromorphic (NM) solutions often lies in their low-SWaP (Size, Weight, and Power) capabilities, which drive their application to domains that benefit from such constraints. Nevertheless, spiking neural networks (SNNs), with their inherent time-based nature, are also an attractive option for areas where data features lie in the time dimension, such as time series forecasting. Time series data, characterized by seasonality and trends, can benefit from the unique processing capabilities of SNNs, which offer a novel approach to this type of task. Additionally, time series data can serve as a benchmark for evaluating SNN performance, providing a valuable alternative to traditional datasets. However, the challenge lies in the real-valued nature of time series data, which is not inherently suited to SNN processing. In this work, we propose a novel spike-encoding mechanism and two loss functions to address this challenge. Our encoding system, inspired by NM event-based sensors, converts the derivative of a signal into spikes, enhancing interoperability with NM technology and making the data suitable for SNN processing. Our loss functions then optimize the SNN's learning of subsequent spikes. We train a simple SNN using SLAYER as a learning rule and conduct experiments on two electricity load forecasting datasets. Our results demonstrate that SNNs can effectively learn from encoded data and that our proposed DecodingLoss function consistently outperforms SLAYER's SpikeTime loss function. This underscores the potential of SNNs for time series forecasting and sets the stage for further research in this promising area.
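A derivative-style encoding of the kind described, in the spirit of event-based sensors, can be sketched as follows; `delta_encode` and its threshold are hypothetical illustrations, not the paper's exact scheme:

```python
def delta_encode(signal, threshold=0.1):
    """Encode a real-valued series as ON/OFF spike trains from its changes.

    Mimics event-based sensors: emit an ON spike when the signal has risen
    by `threshold` since the last event, an OFF spike when it has fallen by
    the same amount, and no spike otherwise. Illustrative sketch only.
    """
    on, off = [], []
    ref = signal[0]  # reference level updated at each emitted event
    for x in signal:
        if x - ref >= threshold:
            on.append(1); off.append(0)
            ref = x
        elif ref - x >= threshold:
            on.append(0); off.append(1)
            ref = x
        else:
            on.append(0); off.append(0)
    return on, off

on, off = delta_encode([0.0, 0.05, 0.2, 0.15, 0.0], threshold=0.1)
# on  → [0, 0, 1, 0, 0]
# off → [0, 0, 0, 0, 1]
```

Encoding changes rather than absolute values keeps the spike trains sparse for slowly varying series such as electricity load, at the cost of losing the signal's absolute offset.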