
    Process development of shaped magnesium-lithium castings

    Casting process development for ternary magnesium-lithium-silicon alloys

    In vivo and in vitro role of cholecystokinin in nitric oxide

    Analysis of a consensus protocol for extending consistent subchains on the bitcoin blockchain

    Currently, an increasing number of third-party applications exploit the Bitcoin blockchain to store tamper-proof records of their executions. For this purpose, they leverage the few extra bytes available for encoding custom metadata in Bitcoin transactions. A sequence of records of the same application can thus be abstracted as a stand-alone subchain inside the Bitcoin blockchain. However, several existing approaches make no assumptions about the consistency of their subchains: they either (i) neglect the possibility that this sequence of records can be altered, mainly due to unhandled concurrency, network malfunctions, application bugs, or malicious users, or (ii) offer only weak security guarantees. To tackle this issue, in this paper we propose an improved version of a consensus protocol formalized in our previous work, built on top of the Bitcoin protocol, to incentivize third-party nodes to consistently extend their subchains. In addition, we perform an extensive analysis of this protocol, both defining its properties and presenting some real-world attack scenarios, to show how its specific design choices and parameter configurations can be crucial to prevent malicious practices.
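
    A minimal sketch (not the protocol formalized in the paper) of how application records could be packed into the few metadata bytes of a transaction and how a recovered subchain could be checked for consistency; the payload layout, the Record fields, and the 80-byte limit are illustrative assumptions.

```python
# Sketch: encoding application records as OP_RETURN-style metadata and
# checking that a recovered subchain is a consistent sequence.
# The payload layout (app tag + sequence number + message) is hypothetical.

from dataclasses import dataclass

OP_RETURN_LIMIT = 80  # typical relay limit for OP_RETURN payloads, in bytes

@dataclass
class Record:
    app_tag: bytes      # identifies the third-party application
    seq: int            # position of the record in the subchain
    message: bytes      # application-specific content

def encode_payload(rec: Record) -> bytes:
    """Pack a record into the few extra bytes available in a transaction."""
    payload = rec.app_tag + rec.seq.to_bytes(4, "big") + rec.message
    if len(payload) > OP_RETURN_LIMIT:
        raise ValueError("record does not fit in the metadata space")
    return payload

def is_consistent(subchain: list[Record]) -> bool:
    """Consistent here means: same application, strictly consecutive sequence."""
    return all(
        a.app_tag == b.app_tag and b.seq == a.seq + 1
        for a, b in zip(subchain, subchain[1:])
    )

# Example: two well-ordered records followed by a duplicated sequence number.
chain = [Record(b"APP1", 0, b"init"), Record(b"APP1", 1, b"update")]
print(is_consistent(chain))                                    # True
print(is_consistent(chain + [Record(b"APP1", 1, b"replay")]))  # False
```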

    Influencing brain waves by evoked potentials as biometric approach: taking stock of the last six years of research

    The scientific advances of recent years have made affordable hardware devices available to anyone, capable of something unthinkable until a few years ago: reading brain waves. This means that small wearable devices can perform an electroencephalography (EEG), albeit with less potential than high-cost professional devices. Such devices enable researchers to carry out a huge number of experiments that were once impossible in many areas due to the high cost of the necessary hardware. Many studies in the literature explore the use of EEG data as a biometric approach for people identification, but, unfortunately, this approach presents problems mainly related to the difficulty of extracting unique and stable patterns from users, despite the adoption of sophisticated techniques. One approach to this problem is based on evoked potentials (EPs): external stimuli applied during the EEG reading, a noninvasive technique that has been used for many years in clinical routine, in combination with other diagnostic tests, to evaluate the electrical activity of some areas of the brain and spinal cord and to diagnose neurological disorders. In consideration of the growing number of works in the literature that combine the EEG and EP approaches for biometric purposes, this work aims to evaluate the practical feasibility of such approaches as reliable biometric instruments for user identification by surveying the state of the art of the last six years, also providing an overview of the elements and concepts related to this research area.
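
    As a rough illustration of the EEG-as-biometric pipeline discussed in the surveyed works, the sketch below extracts per-channel band-power features from (synthetic) EEG epochs and trains a classifier to identify users; the band edges, sampling rate, and choice of an SVM are assumptions, not taken from any specific surveyed study.

```python
# Sketch: band-power features per channel, fed to a user-identification classifier.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 256  # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch: np.ndarray) -> np.ndarray:
    """epoch: (channels, samples) -> one mean power value per (channel, band)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = [
        psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
        for lo, hi in BANDS.values()
    ]
    return np.concatenate(feats)

# Synthetic stand-in for 2-second, 8-channel epochs from 5 users, 40 epochs each.
rng = np.random.default_rng(0)
X = np.array([band_power_features(rng.normal(size=(8, FS * 2)))
              for _ in range(200)])
y = np.repeat(np.arange(5), 40)  # user identity labels

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=5)
print("identification accuracy:", scores.mean())
```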

    Leveraging the Training Data Partitioning to Improve Events Characterization in Intrusion Detection Systems

    The ever-increasing use of services based on computer networks, even in crucial areas unthinkable until a few years ago, has made the security of these networks a crucial element for anyone, also in consideration of the increasingly sophisticated techniques and strategies available to attackers. In this context, Intrusion Detection Systems (IDSs) play a primary role, since they are responsible for analyzing and classifying each network activity as legitimate or illegitimate, allowing us to take the necessary countermeasures at the appropriate time. However, these systems are not infallible, for several reasons: the most important are the constant evolution of attacks (e.g., zero-day attacks) and the fact that many attacks behave similarly to legitimate activities and are therefore very hard to identify. This work relies on the hypothesis that subdividing the training data used to define the IDS classification model into a certain number of partitions, in terms of both events and features, can improve the characterization of network events and thus the system performance. The non-overlapping data partitions train independent classification models, and each event is classified according to a majority-voting rule. A series of experiments conducted on a benchmark real-world dataset supports the initial hypothesis, showing a performance improvement with respect to a canonical training approach.
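
    The following is a minimal sketch of the partition-and-vote idea under stated assumptions (synthetic data, decision trees as base learners, equal-sized random partitions of events and features); it is not the exact configuration evaluated in the paper.

```python
# Sketch: non-overlapping partitions of events and features train independent
# classifiers; predictions are combined by a majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_partitioned(X, y, n_parts=4, seed=0):
    rng = np.random.default_rng(seed)
    event_parts = np.array_split(rng.permutation(len(X)), n_parts)
    feat_parts = np.array_split(rng.permutation(X.shape[1]), n_parts)
    models = []
    for rows, cols in zip(event_parts, feat_parts):
        clf = DecisionTreeClassifier(random_state=seed)
        clf.fit(X[np.ix_(rows, cols)], y[rows])   # one partition per model
        models.append((clf, cols))
    return models

def predict_majority(models, X):
    votes = np.stack([clf.predict(X[:, cols]) for clf, cols in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Synthetic "network events": 1000 events, 20 features, binary labels.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 5] > 0).astype(int)
models = train_partitioned(X[:800], y[:800])
print("accuracy:", (predict_majority(models, X[800:]) == y[800:]).mean())
```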

    A local feature engineering strategy to improve network anomaly detection

    The dramatic increase in devices and services that has characterized modern societies in recent decades, boosted by the exponential growth of ever faster network connections and the predominant use of wireless connection technologies, has created a crucial security challenge. Anomaly-based intrusion detection systems, which have long represented some of the most effective solutions for detecting intrusion attempts on a network, have to face this new and more complicated scenario. Well-known problems, such as the difficulty of distinguishing legitimate activities from illegitimate ones due to their similar characteristics and their high degree of heterogeneity, have become even more complex today, given the increase in network activity. After providing an extensive overview of the scenario under consideration, this work proposes a Local Feature Engineering (LFE) strategy aimed at facing such problems through a data preprocessing step that reduces the number of possible network event patterns while increasing their characterization. Unlike canonical feature engineering approaches, which take into account the entire dataset, it operates locally in the feature space of each single event. The experiments conducted on real-world data show that this strategy, which is based on the introduction of new features and the discretization of their values, improves the performance of the canonical state-of-the-art solutions.
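
    A minimal sketch of what a local, per-event feature engineering step could look like under this idea: each event is transformed using only its own values, adding derived features and discretizing them into a few bins. The specific derived features and the bin count are illustrative assumptions, not the ones defined in the paper.

```python
# Sketch: per-event (local) feature engineering with discretization, which
# reduces the number of possible event patterns.
import numpy as np

def local_features(event: np.ndarray, n_bins: int = 5) -> np.ndarray:
    """event: 1-D vector of raw feature values for a single network event."""
    # Derived features computed locally, i.e. from this event only.
    span = event.max() - event.min() or 1.0
    scaled = (event - event.min()) / span                 # per-event min-max scaling
    ranks = np.argsort(np.argsort(event)) / (len(event) - 1)  # per-event value ranks
    derived = np.concatenate([scaled, ranks])
    # Discretize the derived values into a small number of bins.
    return np.digitize(derived, np.linspace(0, 1, n_bins + 1)[1:-1])

X = np.array([[2.0, 7.5, 3.1, 0.4],
              [9.0, 9.2, 8.7, 9.1]])
print(np.apply_along_axis(local_features, 1, X))
```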

    A Region-based Training Data Segmentation Strategy to Credit Scoring

    The rating of users requesting financial services is a growing task, especially in this historical period of the COVID-19 pandemic, characterized by a dramatic increase in online activities, mainly related to e-commerce. This kind of assessment was performed manually in the past, but today it needs to be carried out by automatic credit scoring systems, due to the enormous number of requests to process. It follows that such systems play a crucial role for financial operators, as their effectiveness is directly related to gains and losses of money. Despite the huge investments in terms of financial and human resources devoted to the development of such systems, state-of-the-art solutions are broadly affected by some well-known problems that make the development of credit scoring systems a challenging task, mainly related to the imbalance and heterogeneity of the involved data, compounded by the scarcity of public datasets. The Region-based Training Data Segmentation (RTDS) strategy proposed in this work revolves around a divide-and-conquer approach, where the user classification depends on the results of several sub-classifications. In more detail, the training data are divided into regions that bound different users and features, and these regions are used to train several classification models that lead to the final classification through a majority-voting rule. Such a strategy relies on the consideration that the independent analysis of different users and features can lead to a more accurate classification than that offered by a single evaluation model trained on the entire dataset. The validation process, carried out using three public real-world datasets with different numbers of features, samples, and degrees of data imbalance, demonstrates the effectiveness of the proposed strategy, which outperforms the canonical training one on all the datasets.
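
    A minimal sketch of a region-based segmentation in this spirit, under assumptions not taken from the paper: here regions are formed by clustering the training users (k-means), each region trains its own scoring model on a subset of the features, and a new request is classified by majority vote across regions.

```python
# Sketch: divide-and-conquer credit scoring with per-region models and voting.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def fit_regions(X, y, n_regions=3, seed=0):
    regions = KMeans(n_clusters=n_regions, random_state=seed, n_init=10).fit_predict(X)
    feat_groups = np.array_split(np.arange(X.shape[1]), n_regions)
    models = []
    for r, cols in zip(range(n_regions), feat_groups):
        mask = regions == r
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[mask][:, cols], y[mask])       # one model per region
        models.append((clf, cols))
    return models

def score(models, X):
    votes = np.stack([clf.predict(X[:, cols]) for clf, cols in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote

# Synthetic credit data: 600 applicants, 12 features, imbalanced labels.
rng = np.random.default_rng(2)
X = rng.normal(size=(600, 12))
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)   # minority "bad" class
models = fit_regions(X[:500], y[:500])
print("accuracy:", (score(models, X[500:]) == y[500:]).mean())
```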

    A holistic auto-configurable ensemble machine learning strategy for financial trading

    Financial markets forecasting represents a challenging task for a series of reasons, such as the irregularity, high fluctuation, and noise of the involved data, and the peculiar high unpredictability of the financial domain. Moreover, the literature does not offer a proper methodology to systematically identify the intrinsic parameters, hyper-parameters, input features, and base algorithms of a forecasting strategy so that it can automatically adapt to the chosen market. To tackle these issues, this paper introduces a fully automated optimized ensemble approach, where an optimized feature selection process is combined with an automatic ensemble machine learning strategy, created by a set of classifiers whose intrinsic parameters and hyper-parameters are learned in each market under consideration. A series of experiments performed on different real-world futures markets demonstrates the effectiveness of such an approach with regard to both the Buy and Hold baseline strategy and several canonical state-of-the-art solutions.
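
    A minimal sketch of a per-market auto-configuration loop in this spirit: a feature-selection step and the hyper-parameters of each base classifier are searched on the data of a single market, and the tuned classifiers then vote on the trading signal. The pipeline components, parameter grids, and synthetic data are assumptions, not the configuration used in the paper.

```python
# Sketch: per-market feature selection + hyper-parameter search + voting ensemble.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def build_for_market(X, y):
    """Return an ensemble whose configuration is learned on this market only."""
    bases = {
        "rf": (RandomForestClassifier(random_state=0),
               {"clf__n_estimators": [100, 300], "sel__k": [5, 10]}),
        "svc": (SVC(probability=True, random_state=0),
                {"clf__C": [0.1, 1.0, 10.0], "sel__k": [5, 10]}),
    }
    tuned = []
    for name, (clf, grid) in bases.items():
        pipe = Pipeline([("sel", SelectKBest(f_classif)), ("clf", clf)])
        search = GridSearchCV(pipe, grid, cv=3).fit(X, y)
        tuned.append((name, search.best_estimator_))
    return VotingClassifier(tuned, voting="soft").fit(X, y)

# Synthetic daily features for one futures market: 500 days, 15 features,
# label = next-day direction (up/down).
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 15))
y = (X[:, 2] - X[:, 7] > 0).astype(int)
ensemble = build_for_market(X[:400], y[:400])
print("hit rate:", (ensemble.predict(X[400:]) == y[400:]).mean())
```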

    Popularity prediction of Instagram posts

    Predicting the popularity of posts on social networks has taken on significant importance in recent years, and several social media management tools now offer solutions to improve and optimize the quality of published content and to enhance the attractiveness of companies and organizations. Scientific research has recently moved in this direction, with the aim of exploiting advanced techniques such as machine learning, deep learning, and natural language processing to support such tools. In light of the above, in this work we aim to address the challenge of predicting the popularity of a future post on Instagram, by defining the problem as a classification task and by proposing an original approach based on Gradient Boosting and feature engineering, which led us to promising experimental results. The proposed approach exploits big data technologies for scalability and efficiency, and it is general enough to be applied to other social media as well.
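
    A minimal sketch of the classification framing, with illustrative (assumed) features and class thresholds rather than those used in the paper: a post is mapped to hand-engineered features and a gradient-boosting classifier predicts a popularity class.

```python
# Sketch: popularity prediction as a classification task with gradient boosting.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def post_features(caption: str, hashtags: int, hour: int, followers: int):
    """Hand-engineered features of a post; the feature set is illustrative."""
    return [len(caption), caption.count("@"), hashtags, hour, np.log1p(followers)]

def popularity_class(likes: int, followers: int) -> int:
    """0 = low, 1 = medium, 2 = high engagement relative to audience size."""
    ratio = likes / max(followers, 1)
    return 0 if ratio < 0.01 else (1 if ratio < 0.05 else 2)

# Synthetic training posts: (caption, hashtags, hour, followers, likes).
rng = np.random.default_rng(4)
posts = [("a" * int(rng.integers(5, 200)), int(rng.integers(0, 15)),
          int(rng.integers(0, 24)), int(rng.integers(100, 100000)),
          int(rng.integers(0, 5000))) for _ in range(500)]
X = np.array([post_features(c, h, t, f) for c, h, t, f, _ in posts])
y = np.array([popularity_class(l, f) for _, _, _, f, l in posts])

model = GradientBoostingClassifier().fit(X, y)
print(model.predict([post_features("New product launch!", 5, 18, 20000)]))
```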

    A comparison of audio-based deep learning methods for detecting anomalous road events

    Road surveillance systems play an important role in monitoring roads and safeguarding their users. Many of these systems are based on video streams acquired from urban video surveillance infrastructures, from which it is possible to reconstruct the dynamics of accidents and detect other events. However, such systems may lack accuracy in adverse environmental settings: for instance, poor lighting, weather conditions, and occlusions can reduce the effectiveness of automatic detection and consequently increase the rate of false or missed alarms. These issues can be mitigated by integrating such solutions with audio analysis modules, which can improve the ability to recognize distinctive events such as car crashes. For this purpose, in this work we propose a preliminary analysis of solutions based on Deep Learning techniques for the automatic identification of hazardous events through the analysis of audio spectrograms.
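
    A minimal sketch of the kind of model such an analysis compares: a small convolutional network that classifies fixed-size audio spectrograms as hazardous event versus background. The architecture, input size, and class set are assumptions, not the networks evaluated in the work.

```python
# Sketch: a small CNN over mel-spectrograms for anomalous road event detection.
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):          # x: (batch, 1, n_mels, time_frames)
        return self.classifier(self.features(x))

# One training step on a synthetic batch of 64x128 mel-spectrograms.
model = SpectrogramCNN()
spectrograms = torch.randn(8, 1, 64, 128)
labels = torch.randint(0, 2, (8,))             # 0 = background, 1 = hazard
loss = nn.CrossEntropyLoss()(model(spectrograms), labels)
loss.backward()
print("loss:", loss.item())
```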