
    A Trust Management Framework for Vehicular Ad Hoc Networks

    The inception of Vehicular Ad Hoc Networks (VANETs) provides an opportunity for road users and public infrastructure to share information that improves the operation of roads and the driver experience. However, such systems can be vulnerable to malicious external entities and legitimate users. Trust management is used to address attacks from legitimate users in accordance with a user’s trust score. Trust models evaluate messages to assign rewards or punishments, which can be used to influence a driver’s future behaviour or, in extremis, block the driver. Receiver-side schemes evaluate trust using various methods, including reputation computation, neighbour recommendations, and stored historical information; however, they incur overhead and add delay when deciding whether to accept or reject messages. In this thesis, we propose a novel Tamper-Proof Device (TPD) based trust framework that manages the trust of multiple drivers at the sender-side vehicle: it updates trust scores and stores and protects information from malicious tampering. The TPD also regulates, rewards, and punishes each specific driver, as required. Furthermore, the trust score determines the classes of message that a driver can access. Dissemination of feedback is only required when there is an attack (conflicting information). A Road-Side Unit (RSU) rules on a dispute, using either the sum of products of trust and feedback or official vehicle data if available. These “untrue attacks” are resolved by an RSU through collaboration, after which a fixed amount of reward or punishment is applied, as appropriate. Repeated attacks are addressed by incremental punishments and, when conditions are met, by blocking the driver’s access. The lack of sophistication in this fixed RSU assessment scheme is then addressed by a novel fuzzy logic-based RSU approach, which determines a fairer level of reward and punishment based on the severity of the incident, the driver’s past behaviour, and RSU confidence. The fuzzy RSU controller assesses judgements in such a way as to encourage drivers to improve their behaviour. Although any driver can lie in any situation, we believe that trustworthy drivers are more likely to remain so, and vice versa. We capture this behaviour in a Markov chain model of sender and reporter driver behaviour, in which a driver’s truthfulness is influenced by their trust score and trust state; for each trust state, the driver’s likelihood of lying or honesty is set by a state-specific probability distribution. The framework is analysed in Veins using various classes of vehicles under different traffic conditions. Results confirm that the framework operates effectively in the presence of untrue and inconsistent attacks; correct functioning is confirmed by the system appropriately classifying incidents when clarifier vehicles send truthful feedback. The framework is also evaluated against a centralized reputation scheme, and the results demonstrate that it outperforms the reputation approach in terms of reduced communication overhead and shorter response time. Next, we perform a set of experiments to evaluate the performance of the fuzzy assessment in Veins. The fuzzy and fixed RSU assessment schemes are compared, and the results show that the fuzzy scheme produces better overall driver behaviour. The Markov chain driver behaviour model is also examined when changing the initial trust score of all drivers.
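    To make the RSU dispute rule concrete, the sketch below implements a sum-of-products decision over reporter feedback. It is a minimal illustration under our own assumptions: the function names, the ±1 feedback encoding, and the override by official vehicle data are illustrative choices, not details taken from the thesis.

```python
# Illustrative sketch of an RSU-style dispute resolution: each reporter's
# feedback (+1 = "event is true", -1 = "event is false") is weighted by the
# reporter's trust score, and the sign of the weighted sum decides the dispute.
# All names and the +/-1 encoding are assumptions for illustration only.

def resolve_dispute(reports, official_data=None):
    """reports: list of (trust_score, feedback) pairs, feedback in {+1, -1}.
    official_data: if available (True/False), it overrides the vote."""
    if official_data is not None:
        return official_data
    weighted_sum = sum(trust * feedback for trust, feedback in reports)
    return weighted_sum > 0  # True: the reported event is judged genuine

# Example: two trusted reporters confirm the event, one low-trust reporter denies it.
reports = [(0.9, +1), (0.8, +1), (0.2, -1)]
print(resolve_dispute(reports))  # True -> reward the confirmers, punish the denier
```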

    Specialized translation at work for a small, expanding business: my experience internationalizing Bioretics© S.r.l. into Chinese

    Global markets are currently immersed in two all-encompassing and unstoppable processes: internationalization and globalization. While the former pushes companies to look beyond the borders of their country of origin to forge relationships with foreign trading partners, the latter fosters standardization across countries by reducing spatiotemporal distances and breaking down geographical, political, economic, and socio-cultural barriers. In recent decades, another domain has emerged to propel these unifying drives: Artificial Intelligence, with its technologies aiming to implement human cognitive abilities in machinery. The “Language Toolkit – Le lingue straniere al servizio dell’internazionalizzazione dell’impresa” project, promoted by the Department of Interpreting and Translation (Forlì Campus) in collaboration with the Romagna Chamber of Commerce (Forlì-Cesena and Rimini), seeks to help Italian SMEs make their way into the global market. It is precisely within this project that this dissertation was conceived. Its purpose is to present the translation and localization project, from English into Chinese, of a series of texts produced by Bioretics© S.r.l.: an investor deck, the company website, and part of the installation and use manual of the Aliquis© framework software, the company’s flagship product. This dissertation is structured as follows: Chapter 1 presents the project and the company in detail; Chapter 2 outlines the internationalization and globalization processes and the Artificial Intelligence market in both Italy and China; Chapter 3 provides the theoretical foundations for every aspect of specialized translation, including website localization; Chapter 4 describes the resources and tools used to perform the translations; Chapter 5 proposes an analysis of the source texts; and Chapter 6 is a commentary on translation strategies and choices.

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are chronologically ordered. The first part of the book presents theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the fourth volume appeared in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision-making, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes’ theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
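    For reference, the improved PCR5 and PCR6 rules mentioned above build on the classical PCR5 combination. A minimal statement of the standard two-source PCR5 rule from the DSmT literature (not a new result of this volume), for X ≠ ∅ and with terms having zero denominators discarded, is:

```latex
m_{\mathrm{PCR5}}(X) = m_{12}(X)
  + \sum_{\substack{Y \in 2^{\Theta} \\ X \cap Y = \emptyset}}
    \left[
      \frac{m_1(X)^2\, m_2(Y)}{m_1(X) + m_2(Y)}
      + \frac{m_2(X)^2\, m_1(Y)}{m_2(X) + m_1(Y)}
    \right],
\qquad
m_{12}(X) = \sum_{X_1 \cap X_2 = X} m_1(X_1)\, m_2(X_2).
```

    Here m_12 is the conjunctive consensus, and each partial conflict is redistributed only to the two focal elements that generated it, proportionally to their masses.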

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the semiconductor landscape of the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has appeared in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.
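    To make the idea concrete, one of the simplest software approximation techniques covered by surveys of this kind is loop perforation, which skips a fraction of loop iterations to trade accuracy for speed. The sketch below is a generic illustration under our own assumptions, not code from the article:

```python
# Loop perforation: a classic software approximation technique. Visiting only
# every k-th element trades result accuracy for execution time.
def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=3):
    # Visit only every `stride`-th element, then average the sample: an
    # approximation whose error shrinks as the data becomes more uniform.
    sample = xs[::stride]
    return sum(sample) / len(sample)

data = [float(i % 100) for i in range(1_000_000)]
print(mean_exact(data), mean_perforated(data))  # close results, ~3x less work
```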

    Runway Safety Improvements Through a Data Driven Approach for Risk Flight Prediction and Simulation

    Runway overrun is one of the most frequently occurring flight accident types threatening the safety of aviation. Sensors have improved with recent technological advancements and allow data collection during flights, and the recorded data help to better identify the characteristics of runway overruns. The improved technological capabilities and the growing air traffic have led to increased momentum for reducing flight risk using artificial intelligence, and discussions on incorporating artificial intelligence to enhance flight safety are timely and critical. Using artificial intelligence, we may be able to develop the tools we need to better identify runway overrun risk and increase awareness of runway overruns. This work seeks to increase attitude, skill, and knowledge (ASK) of runway overrun risks by predicting the flight states near touchdown and simulating flights exposed to runway overrun precursors. To achieve this, the methodology develops a prediction model and a simulation model. During the flight training process, the prediction model is used in flight to identify potential risks, and the simulation model is used post-flight to review the flight behaviour. The prediction model identifies potential risks by predicting the flight parameters that best characterize the landing performance during the final approach phase. The predicted flight parameters are used to alert the pilots to any runway overrun precursors that may pose a threat; the predictions and alerts are made when thresholds of various flight parameters are exceeded. The flight simulation model simulates the final approach trajectory with an emphasis on capturing the effect wind has on the aircraft. The focus is on wind because wind is a relatively significant factor during the final approach, when the aircraft is typically stabilized. The flight simulation is used to quickly assess the differences between flight patterns that have triggered overrun precursors and normal flights with no abnormalities; these differences are crucial for learning how to mitigate adverse flight conditions. Both models are built with neural networks. The main challenges of developing a neural network model are that each model design space is unique to its problem and cannot accommodate multiple problems, and that a design space can be significantly large depending on the depth of the model. Therefore, a hyperparameter optimization algorithm is investigated and used to design the data and model structures that best characterize the aircraft behaviour during the final approach. A series of experiments is performed to observe how the model accuracy changes with different data pre-processing methods for the prediction model and different neural network models for the simulation model. The data pre-processing methods include indexing the data by different frequencies, indexing by different window sizes, and data clustering. The neural network models include simple Recurrent Neural Networks, Gated Recurrent Units, Long Short-Term Memory, and Neural Network Autoregressive with Exogenous Input. Another series of experiments is performed to evaluate the robustness of these models to adverse wind and flare, because different wind conditions and flares represent controls that the models need to map to the predicted flight states.
The most robust models are then used to identify significant features for the prediction model and the feasible control space for the simulation model. The outcomes of the most robust models are also mapped to the required landing distance metric so that the results of the prediction and simulation are easily interpreted. Then, the methodology is demonstrated with a sample flight exposed to an overrun precursor (high approach speed) to show how the models can potentially increase attitude, skill, and knowledge of runway overrun risk. The main contribution of this work is the evaluation of the accuracy and robustness of prediction and simulation models trained using Flight Operational Quality Assurance (FOQA) data. Unlike many studies that focused on optimizing only the model structures, this work optimized both the data and model structures to ensure that the data well capture the dynamics of the aircraft they represent. To achieve this, this work introduced a hybrid genetic algorithm that combines the benefits of conventional and quantum-inspired genetic algorithms to converge quickly to an optimal configuration while exploring the design space. With the optimized model, this work identified the data features from the final approach with the highest contribution to predicting airspeed, vertical speed, and pitch angle near touchdown: the top contributing features are altitude, angle of attack, core rpm, and airspeeds. For both the prediction and simulation models, this study examines the impact of various data preprocessing methods on model accuracy; the results may help future studies identify the right preprocessing methods for their work. Another contribution of this work is the evaluation of how flight control and wind affect both models, achieved by mapping model accuracy at various levels of control surface deflection, wind speed, and wind direction change. The results showed fairly consistent prediction and simulation accuracy across these levels, indicating that neural network-based models are effective in creating robust prediction and simulation models of aircraft during the final approach. The results also showed that data frequency has a significant impact on prediction and simulation accuracy, so it is important to have sufficient data to train the models in the conditions in which they will be used. The final contribution of this work is a demonstration of how the prediction and simulation models can be used to increase awareness of runway overrun.
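    The data-indexing experiments described above can be pictured with a simple sliding-window transform, where the resampling stride ("frequency") and window size are the tunable knobs. The sketch below is our own generic illustration, with assumed names and shapes rather than the study's code:

```python
import numpy as np

# Build (window -> next state) training pairs from a multivariate flight
# time series, indexed by a resampling stride ("frequency") and window size.
def make_windows(series, window=32, stride=2):
    """series: (T, n_features) array sampled at the raw recorder rate."""
    sub = series[::stride]                     # re-index at a lower frequency
    X = np.stack([sub[i:i + window]            # input: `window` past samples
                  for i in range(len(sub) - window)])
    y = sub[window:]                           # target: the next flight state
    return X, y

T, n_features = 5000, 6          # e.g. airspeed, vertical speed, pitch, ...
flight = np.random.randn(T, n_features)        # stand-in for FOQA data
X, y = make_windows(flight, window=32, stride=2)
print(X.shape, y.shape)          # (2468, 32, 6) (2468, 6)
```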

    The Active CryoCubeSat Technology: Active Thermal Control for Small Satellites

    Modern CubeSats and Small Satellites have advanced in capability to tackle science and technology missions that would usually be reserved for larger, traditional satellites. However, this rapid growth in capability is only possible through the fast-to-production, low-cost, advanced-technology approach used by modern small-satellite engineers. Advances in power generation, energy storage, and high-power-density electronics have naturally led to a thermal bottleneck, where CubeSats and Small Satellites can generate more power than they can easily reject. The Active CryoCubeSat (ACCS) is an advanced active thermal control (ATC) technology for Small Satellites and CubeSats designed to help solve this thermal problem. The ACCS technology is based on a two-stage design: an integrated miniature cryocooler forms the first stage, and a single-phase mechanically pumped fluid-loop heat exchanger the second. The ACCS leverages advanced 3D manufacturing techniques to integrate the ATC directly into the satellite structure, which improves performance while simultaneously miniaturizing and simplifying the system. The ACCS system can easily be scaled to mission requirements and can control zonal temperature, bulk thermal rejection, and dynamic heat transfer within a satellite structure. The integrated cryocooler supports cryogenic science payloads such as advanced LWIR electro-optical detectors. The ACCS aims to enable future advanced CubeSat and Small Satellite missions in Earth science, heliophysics, and deep-space operations. This dissertation details the design, development, and testing of the ACCS system technology.
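    As a back-of-envelope illustration of the thermal bottleneck described above (the numbers are our own assumptions, not ACCS design values), the power a small passive radiator can reject follows the Stefan-Boltzmann law:

```python
# Back-of-envelope radiator rejection via the Stefan-Boltzmann law.
# All numbers are illustrative assumptions, not values from the dissertation.
SIGMA = 5.670e-8           # W m^-2 K^-4, Stefan-Boltzmann constant
area = 0.03                # m^2, roughly one long face of a 3U CubeSat
emissivity = 0.85          # typical thermal-coating emissivity
T_rad, T_env = 293.0, 4.0  # K: radiator near room temperature, deep-space sink

q = emissivity * SIGMA * area * (T_rad**4 - T_env**4)
print(f"{q:.1f} W")        # ~11 W: why high-power payloads hit a thermal wall
```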

    Contributions to improve the technologies supporting unmanned aircraft operations

    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors; the models that interpret the information and/or define behaviours are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system. These systems provide the engine propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller comprises a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the controller's ecosystem, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSMs) are developed based on a set of hypotheses for modeling the world, among them that models of the world must be linear and Markovian and that their error must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to be tractable. Moreover, many systems are not Markovian; that is, their states do not depend only on the previous state, and there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer-vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach; finally, classification with deep learning over trajectories is studied. The first case of the collection studies the consequences of error propagation in a machine-vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data.
This work thus covers the two main fields in deep learning, regression and classification, where the error is treated as a probability of class membership.
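    Since the state-space assumptions above (linear, Markovian, Gaussian) are central to the thesis's framing, a minimal Kalman filter makes them concrete. The model below, a 1D constant-velocity target with position-only measurements and assumed noise levels, is generic textbook material, not the thesis's code:

```python
import numpy as np

# Minimal linear-Gaussian Kalman filter: 1D position/velocity state,
# position-only measurements. Illustrative parameters throughout.
dt = 0.1
F = np.array([[1, dt], [0, 1]])      # constant-velocity dynamics (Markovian)
H = np.array([[1.0, 0.0]])           # we only observe position
Q = 1e-3 * np.eye(2)                 # process noise covariance (Gaussian)
R = np.array([[0.25]])               # measurement noise covariance

x = np.zeros((2, 1))                 # state estimate [position, velocity]
P = np.eye(2)                        # estimate covariance

def kf_step(x, P, z):
    # Predict: propagate the estimate and covariance through the linear model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with measurement z via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for t in range(50):                  # track a target moving at 1 m/s
    z = np.array([[t * dt * 1.0 + np.random.randn() * 0.5]])
    x, P = kf_step(x, P, z)
print(x.ravel())                     # estimate approaches [4.9, 1.0]
```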

    Dynamic simulation and power control of a hybrid solar-wind-fuel cell residential microgrid

    Hydrogen-based hybrid renewable energy systems (HRES) are rapidly advancing, since they use green technologies both for primary power generation and for backup power generation to satisfy the increasing energy demand with minimal greenhouse gas emissions. Combining current renewable energy technologies with energy storage systems can promote energy security, decentralize the electrification process, and expand access to electricity in remote and/or localized areas. Unlike battery-based HRES, hydrogen-based HRES are advantageous because they have a faster energy response and do not require periodic charging and discharging. The goal of this project was to explore and evaluate the ability of a hydrogen-based HRES, consisting of a solar energy system, a wind energy system, and a proton-exchange membrane fuel cell (PEMFC) stack, to meet the local energy demand in Fukuoka, Japan under different weather conditions, with and without a battery storage system. The feasibility of hydrogen storage was also studied for the PEMFC-stack-and-battery backup HRES. MATLAB and Simulink were used to model the performance of this HRES using the PEMFC stack as the backup generator, with and without the battery storage system. The system was evaluated using meteorological data from four weather scenarios, each spanning 72 hours, that occurred in Fukuoka, Japan: 1) clear days, 2) high-wind-speed days, 3) cloudy days, and 4) rainy days. The results indicated that connecting the PEMFC stack to the HRES with the battery storage system could satisfy the load demand in all four weather scenarios, with the added advantage of a consistent supply of output power from the PEMFC stack provided the supply of hydrogen fuel to the stack was maintained. Additionally, the battery helped this configuration reach a reference voltage of 12 V, which is important for the HRES to function optimally. The PEMFC stack also positively impacted the state of charge (SOC) of the battery. These results imply that a dual energy storage configuration consisting of a PEMFC stack and a battery could potentially reduce the required battery size due to reduced power demands on the battery. Lastly, an HRES with just the PEMFC stack as backup showed large fluctuations in the DC bus voltage that prevented the system from responding accurately to the various energy generation sources. In summary, real-time data validation helped to confirm whether the PEMFC stack is a viable and reasonable backup storage option for HRES.
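    The dispatch logic of such a system can be sketched in a few lines. The rule-based illustration below (our own assumed ratings, step size, and sign conventions; the study itself modelled the system in MATLAB/Simulink) shows the fuel cell covering the residual demand first, with the battery absorbing whatever remains:

```python
# Illustrative rule-based dispatch for a solar-wind-PEMFC-battery microgrid.
# Component ratings and names are assumptions for illustration only.
FC_MAX = 3.0          # kW, PEMFC stack rating
BATT_KWH = 5.0        # kWh, battery capacity
DT = 1.0              # h, simulation step

def dispatch(load, solar, wind, soc):
    """Return (fc_power, batt_power, new_soc); batt_power > 0 = discharge."""
    net = load - (solar + wind)            # residual demand after renewables
    fc = min(max(net, 0.0), FC_MAX)        # fuel cell covers the deficit first
    batt = net - fc                        # battery absorbs what remains
    soc = min(1.0, max(0.0, soc - batt * DT / BATT_KWH))
    return fc, batt, soc

soc = 0.5
for load, solar, wind in [(2.0, 1.5, 0.5), (4.5, 0.2, 0.3), (1.0, 2.0, 1.0)]:
    fc, batt, soc = dispatch(load, solar, wind, soc)
    print(f"fc={fc:.1f} kW  batt={batt:+.1f} kW  soc={soc:.2f}")
```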

    Emerging Power Electronics Technologies for Sustainable Energy Conversion

    This Special Issue summarizes, in a single reference, timely emerging topics in power electronics for sustainable energy conversion. At the same time, it provides the reader with valuable information on open research opportunities.

    A Combined Numerical and Experimental Approach for Rolling Bearing Modelling and Prognostics

    Rolling-element bearings are widely employed components which play a major role in the NVH behaviour of the mechanical systems in which they are inserted. Therefore, it is crucial to thoroughly understand their fundamental properties and accurately quantify their most relevant parameters. Moreover, their inevitable failure due to contact fatigue makes it necessary to correctly describe their dynamic behaviour: such descriptions permit the development of diagnostic and prognostic schemes, which are heavily requested in today's industrial scenario due to their increasingly important role in the development of efficient maintenance strategies. As a result, throughout the years several techniques have been developed by researchers to address different challenges related to the modelling of these components. Within this context, this thesis aims at improving the available methods and at proposing novel approaches for modelling rolling-element bearings in both static and dynamic simulations. In particular, the dissertation is divided into three major topics: the estimation of bearing radial stiffness through the finite-element method, the lumped-parameter modelling of defective bearings, and the development of physics-based prognostic models. The first part of the thesis deals with finite-element simulations of rolling-element bearings. In particular, the investigation aims at providing an efficient procedure for the generation of load-dependent meshes. The method is developed with the primary objective of determining the radial stiffness of the examined components; in this regard, the main contributions are the definition of mesh element dimensions on the basis of analytical formulae and the proposed methodology for the estimation of bearing stiffness. The second part describes a multi-objective optimization technique for the estimation of unknown parameters in lumped-parameter models of defective bearings. It was observed that several parameters commonly inserted in these models are hardly measurable or carry a high degree of uncertainty. On this basis, an optimization procedure aimed at minimizing the difference between experimental and numerical results is proposed; the novelty of the technique lies in the approach developed to tackle the problem and in its implementation in the context of bearing lumped-parameter models. Lastly, the final part of the dissertation is devoted to the development of physics-based prognostic models. Specifically, two models are detailed, both based on a novel degradation-related parameter, the Equivalent Damaged Volume (EDV). An algorithm capable of extracting this quantity from experimental data is detailed, and EDV values are then used as input parameters for the two prognostic models. The first aims at predicting the bearing vibration under operative conditions different from a given reference deterioration history, while the second predicts the time until a certain threshold on the equivalent damaged volume is crossed, regardless of the applied load and shaft rotation speed. The original aspect of this latter part is therefore the development of prognostic models based on a novel indicator specifically introduced in this work.
Results obtained from all proposed models are validated through analytical methods retrieved from the literature and by comparison with data acquired on a dedicated test bench. To this end, a test rig set up at the Engineering Department of the University of Ferrara was used to perform two types of tests: stationary tests on bearings with artificial defects and run-to-failure tests on initially healthy bearings. The characteristics of the acceleration signals acquired during both tests are extensively discussed.
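    Defective-bearing vibration models of the kind discussed above are anchored to the standard kinematic defect frequencies of a bearing. The sketch below computes them from textbook formulas; the geometry values are an assumed example, not the thesis's test bearing:

```python
import math

# Standard kinematic defect frequencies for a rolling-element bearing;
# these textbook formulas underpin defective-bearing vibration models.
def defect_frequencies(fs, n_balls, d_ball, d_pitch, contact_deg=0.0):
    """fs: shaft speed [Hz]; d_ball, d_pitch: ball and pitch diameters [mm]."""
    r = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    return {
        "FTF":  0.5 * fs * (1 - r),                          # cage frequency
        "BPFO": 0.5 * n_balls * fs * (1 - r),                # outer-race defect
        "BPFI": 0.5 * n_balls * fs * (1 + r),                # inner-race defect
        "BSF":  0.5 * (d_pitch / d_ball) * fs * (1 - r**2),  # ball spin
    }

# Assumed example geometry (not from the thesis): 9 balls, 30 Hz shaft speed.
print(defect_frequencies(fs=30.0, n_balls=9, d_ball=7.9, d_pitch=34.5))
```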