
    Quantum inspired approach for early classification of time series

    Is it possible to apply some fundamental principles of quantum computing to time series classification algorithms? This is the initial spark that became the research question I chose to pursue at the very beginning of my PhD studies. The idea came accidentally, after reading a note on the ability of entanglement to express the correlation between two particles even when they are far apart. The test problem was also at hand, because I was investigating possible algorithms for real-time bot detection, a challenging problem today, by means of statistical approaches to sequential classification. The quantum-inspired algorithm presented in this thesis stemmed from an evolution of that statistical method: it is a novel approach to the binary and multinomial classification of an incoming data stream, inspired by the principles of quantum computing and designed to ensure the shortest decision time with high accuracy. The proposed approach exploits the analogy between the intrinsic correlation of two or more particles and the dependence of each item in a data stream on the preceding ones. Starting from the a-posteriori probability of each item belonging to a particular class, we can assign a qubit state representing a combination of these probabilities for all available observations of the time series. By leveraging superposition and entanglement on subsequences of growing length, it is possible to devise a measure of membership in each class, enabling the system to take a reliable decision once a sufficient level of confidence is met. To provide an extensive and thorough analysis of the problem, a well-fitting approach for bot detection was replicated on our dataset and then compared with the statistical algorithm to determine the better option. The winner was subsequently examined against the new quantum-inspired proposal, which showed superior capability in both binary and multinomial classification of data streams. The validation of the quantum-inspired approach on a synthetically generated use case completes the research framework and opens new perspectives in on-the-fly time series classification that we have just started to explore. To name a few, the algorithm is currently being tested, with encouraging results, in predictive maintenance and prognostics for automotive applications, in collaboration with the University of Bradford (UK), and in action recognition from video streams.
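The mapping from per-item posteriors to qubit-like amplitudes and an early stopping rule can be sketched roughly as follows. This is a loose illustration of the analogy described above, not the thesis algorithm: the `qubit_state` and `early_classify` helpers, the product-amplitude combination, and the 0.95 confidence threshold are all assumptions of this sketch.

```python
import math

def qubit_state(p1):
    """Map a posterior p1 = P(class 1 | item) to qubit-like amplitudes
    (alpha, beta) with alpha**2 + beta**2 = 1 (illustrative only)."""
    return math.sqrt(1.0 - p1), math.sqrt(p1)

def early_classify(posteriors, threshold=0.95):
    """Consume a stream of per-item posteriors and stop as soon as the
    normalized membership of either class exceeds `threshold`.
    Returns (predicted_class, items_consumed), or (None, n) if the
    stream ends before a confident decision."""
    amp0, amp1 = 1.0, 1.0  # product amplitudes of the two "aligned" basis states
    for t, p in enumerate(posteriors, start=1):
        a, b = qubit_state(p)
        amp0 *= a          # amplitude that every item so far is class 0
        amp1 *= b          # amplitude that every item so far is class 1
        m1 = amp1 ** 2 / (amp0 ** 2 + amp1 ** 2)  # class-1 membership
        if m1 >= threshold:
            return 1, t
        if 1.0 - m1 >= threshold:
            return 0, t
    return None, len(posteriors)
```

With three increasingly confident class-1 posteriors, the sketch decides for class 1 after the third item; a stream of uninformative 0.5 posteriors never triggers a decision.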

    The promises of large language models for protein design and modeling

    The recent breakthroughs of Large Language Models (LLMs) in natural language processing have opened the way to significant advances in protein research. Indeed, the relationships between human natural language and the “language of proteins” invite the application and adaptation of LLMs to protein modeling and design. Considering the impressive results of GPT-4 and other recently developed LLMs in processing, generating, and translating human languages, we anticipate analogous results with the language of proteins. Indeed, protein language models have already been trained to accurately predict protein properties and to generate novel, functionally characterized proteins, achieving state-of-the-art results. In this paper we discuss the promises and the open challenges raised by this novel and exciting research area, and we propose our perspective on how LLMs will affect protein modeling and design.

    The Role of Adsorption and pH of the Mobile Phase on the Chromatographic Behavior of a Therapeutic Peptide

    The impact of two different stationary phases and ion-pair reagents on the retention behavior of glucagon, a therapeutic peptide consisting of 29 amino acid residues, has been investigated under reversed-phase elution conditions. The retention of glucagon was studied under isocratic conditions by varying the fraction of the organic modifier in the range of 28–38% (v/v). The two stationary phases were characterized in terms of excess adsorption isotherms to understand the preferential adsorption of the eluent components on them. The results suggest that the ligand characteristics and the pH of the mobile phase play a pivotal role in retention.

    Measuring clustering model complexity

    The capacity of a clustering model can be defined as its ability to represent complex spatial data distributions. We introduce a method to quantify the capacity of an approximate spectral clustering model based on the eigenspectrum of the similarity matrix, making it possible to measure capacity directly and to estimate the most suitable model parameters. The method is tested on simple datasets and applied to a forged-banknote classification problem.
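A capacity measure built on the eigenspectrum of a similarity matrix can be sketched as follows. The Gaussian similarity, the `tol` cutoff, and counting eigenvalues above a fraction of the largest one are assumptions of this sketch, not the paper's exact measure.

```python
import numpy as np

def similarity_matrix(X, sigma=1.0):
    """Gaussian (RBF) similarity between all pairs of rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def eigenspectrum_capacity(X, sigma=1.0, tol=1e-2):
    """Illustrative capacity proxy: the number of eigenvalues of the
    similarity matrix exceeding `tol` times the largest eigenvalue."""
    S = similarity_matrix(X, sigma)
    w = np.linalg.eigvalsh(S)[::-1]   # eigenvalues, descending
    return int(np.sum(w > tol * w[0]))
```

On two tight, well-separated groups of points the similarity matrix is close to block-diagonal with two rank-one blocks, so the proxy reports a capacity of two.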

    An agent-based approach to simulate production, degradation, repair, replacement and preventive maintenance of manufacturing systems

    The capacity to reconfigure production systems is considered fundamental for today’s factories because of the increasing demand for a high level of customer service (in terms of lead time and price). For this reason, the ability to simulate the productivity of a specific production-line configuration can be of great assistance to the decision-making process. This paper presents a multi-agent model used to simulate the failure behavior of a complex production line. The approach offers a decentralized alternative for designing decision-making systems based on the simulation of distributed entities. The model is able to independently manage variations in production rates and the tendency to fail caused by machine degradation, repair actions, and replacements. In addition, random failures and preventive maintenance of a manufacturing system producing a single product were considered. The blackboard system and the contract net protocol inspired the coordination of the productivity of the different machines in the production line, so as to simulate the most feasible and balanced productivity for different states of the line.
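The ingredients named above (degradation, random failures, repair, preventive maintenance, and a line whose throughput is limited by its slowest machine) can be sketched as a minimal discrete-time agent simulation. All class names, rates, and intervals here are illustrative assumptions, not parameters from the paper, and no blackboard or contract-net coordination is modeled.

```python
import random

class MachineAgent:
    """Minimal machine agent: produces at a rate that degrades over time,
    may fail at random, and is restored by repair or preventive maintenance."""

    def __init__(self, rate=10.0, degradation=0.02, fail_prob=0.01,
                 repair_time=3, pm_interval=50, rng=None):
        self.base_rate = rate
        self.degradation = degradation   # wear added per productive step
        self.fail_prob = fail_prob       # base failure probability per step
        self.repair_time = repair_time   # steps needed to repair a failure
        self.pm_interval = pm_interval   # preventive-maintenance period
        self.wear = 0.0                  # accumulated degradation in [0, 1)
        self.downtime = 0                # remaining steps of repair/maintenance
        self.rng = rng or random.Random()

    def step(self, t):
        """Advance one time step; return units produced."""
        if self.downtime > 0:            # under repair or maintenance
            self.downtime -= 1
            return 0.0
        if t > 0 and t % self.pm_interval == 0:
            self.wear = 0.0              # preventive maintenance restores machine
            self.downtime = 1
            return 0.0
        if self.rng.random() < self.fail_prob * (1.0 + self.wear):
            self.wear = 0.0              # random failure, then replacement/repair
            self.downtime = self.repair_time
            return 0.0
        self.wear = min(0.99, self.wear + self.degradation)
        return self.base_rate * (1.0 - self.wear)

def simulate(machines, horizon=200):
    """Serial line: per-step throughput is limited by the slowest machine,
    so a stopped machine blocks the whole line."""
    total = 0.0
    for t in range(horizon):
        total += min(m.step(t) for m in machines)
    return total
```

Running `simulate([MachineAgent(rng=random.Random(i)) for i in range(3)], 100)` yields a total output strictly below the ideal 1000 units, reflecting degradation and downtime.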

    An agent-based approach to simulate production, degradation, repair, replacement and preventive maintenance of manufacturing systems

    The capacity to reconfigure production systems is considered fundamental for today’s manufacturing factories, due to the increasing demand for a high customer-service level (lead time and price). For this reason, simulating the productivity of a specific production-line configuration becomes a great assistance to the decision-making process. This paper presents the potential of a multi-agent model used to simulate the failure behavior of a complex production line. The approach offers a decentralized alternative for designing decision-making systems based on the simulation of distributed entities. In particular, the presented model is able to independently manage variations in production rates and the proneness to failure caused by machine degradation, repair actions, and replacements. In addition, random failures and preventive maintenance of a manufacturing system producing a single product were considered. The blackboard system and the contract net protocol inspired the way the productivity of the different machines in the production line is coordinated, in order to simulate the highest feasible and most balanced productivity in different states of the line.

    A Neural Network Approach to Find the Cumulative Failure Distribution: Modeling and Experimental Evidence

    The failure prediction of components plays an increasingly important role in manufacturing. In this context, new models have been proposed to address this problem, and among them artificial neural networks are emerging as effective. A first approach to these networks can be complex, but in this paper we show that even simple networks can approximate the cumulative failure distribution well. The results of the neural network approach are often better than those based on the most widely used probability distribution in reliability, the Weibull distribution. In this paper, the performance of basic multilayer feedforward networks is tested with different network configurations, changing parameters such as the number of nodes, the learning rate, and the momentum. We used a set of real-world component failure data and analyzed the approximation accuracy of the different neural networks compared with the least-squares method based on the Weibull distribution. The results show that the networks can satisfactorily approximate the cumulative failure distribution, very often better than the least-squares method, particularly when a small number of failure times is available.
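As a rough illustration of the comparison described above, the sketch below fits a two-parameter Weibull CDF by the common median-rank least-squares linearization and, as a stand-in for the paper's trained feedforward networks, a one-hidden-layer model whose fixed random tanh features are combined by a closed-form least-squares output layer (so the sketch stays deterministic; the paper instead trains full multilayer networks with a learning rate and momentum). All function names and parameter values here are illustrative assumptions.

```python
import numpy as np

def median_ranks(n):
    """Benard's approximation of median ranks for n ordered failure times."""
    i = np.arange(1, n + 1)
    return (i - 0.3) / (n + 0.4)

def weibull_lsq(times):
    """Least-squares fit of a two-parameter Weibull CDF to failure times via
    the usual linearization ln(-ln(1-F)) = beta*ln(t) - beta*ln(eta)."""
    t = np.sort(times)
    F = median_ranks(len(t))
    x, y = np.log(t), np.log(-np.log(1.0 - F))
    b, a = np.polyfit(x, y, 1)           # slope b = shape, intercept a
    return b, np.exp(-a / b)             # (shape beta, scale eta)

def train_rf_cdf(times, hidden=16, seed=0):
    """One hidden layer of fixed random tanh features; the output layer is
    fitted in closed form by least squares to the empirical CDF.
    Returns a callable CDF estimate clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    t = np.sort(times)
    mu, sd = t.mean(), t.std()
    x = (t - mu) / sd                    # normalized inputs
    y = median_ranks(len(t))             # empirical CDF targets
    W1 = rng.normal(0.0, 1.0, hidden)
    b1 = rng.normal(0.0, 1.0, hidden)
    H = np.tanh(np.outer(x, W1) + b1)
    H = np.hstack([H, np.ones((len(x), 1))])   # bias column
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    def cdf(q):
        xq = (np.asarray(q, float) - mu) / sd
        Hq = np.tanh(np.outer(xq, W1) + b1)
        Hq = np.hstack([Hq, np.ones((Hq.shape[0], 1))])
        return np.clip(Hq @ w, 0.0, 1.0)
    return cdf
```

On synthetic Weibull failure times, the linearized least-squares fit recovers the generating shape and scale to within a few percent, and the network-style estimator tracks the empirical CDF closely.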