
    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have proven highly successful in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling wireless-network applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in understanding the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios of future wireless networks.

    Energy management system optimization based on an LSTM deep learning model using vehicle speed prediction

    The energy management of a Hybrid Electric Vehicle (HEV) is a global optimization problem, and its optimal solution inevitably entails knowing the entire mission profile. The exploitation of Vehicle-to-Everything (V2X) connectivity can pave the way for reliable short-term vehicle speed predictions. As a result, the capabilities of conventional energy management strategies can be enhanced by integrating the predicted vehicle speed into the powertrain control strategy. Therefore, this paper proposes an innovative Adaptation algorithm that uses the predicted speed profile within an Equivalent Consumption Minimization Strategy (A-V2X-ECMS). Driving pattern identification is employed to adapt the equivalence factor of the ECMS when a change in the driving pattern occurs, or when the State of Charge (SoC) deviates strongly from the target value. A Principal Component Analysis (PCA) was performed on several energetic indices to select the ones that predominate in characterizing the different driving patterns. Long Short-Term Memory (LSTM) deep neural networks were trained to choose the optimal value of the equivalence factor for a specific sequence of data (i.e., speed, acceleration, power, and initial SoC). The potential of the innovative A-V2X-ECMS was assessed, through numerical simulation, on a diesel Plug-in Hybrid Electric Vehicle (PHEV) available on the European market. A virtual test rig of the investigated vehicle was built in the GT-SUITE software environment and validated against a wide database of experimental data. The simulations proved that the proposed approach achieves results much closer to the optimum than the conventional energy management strategies taken as a reference.
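    As a rough illustration of the learning step described above (not the authors' implementation), the sketch below trains a small LSTM in PyTorch to map a driving window of speed, acceleration, power and initial SoC to a scalar equivalence factor; the feature layout, window length, network sizes and value ranges are all assumed.

```python
# Minimal sketch (not the paper's code): an LSTM that maps a short driving
# sequence to an ECMS equivalence factor. Feature layout and sizes are assumed.
import torch
import torch.nn as nn

class EquivalenceFactorLSTM(nn.Module):
    def __init__(self, n_features=4, hidden_size=64, num_layers=2):
        super().__init__()
        # Each time step carries [speed, acceleration, power, initial SoC]
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # scalar equivalence factor

    def forward(self, x):                       # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # regress from the last hidden state

model = EquivalenceFactorLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch: 32 driving windows of 60 samples each (placeholder data).
x = torch.randn(32, 60, 4)
target = torch.rand(32, 1) * 2 + 2              # assumed equivalence-factor range

optimizer.zero_grad()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()
```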

    Computational strategies for a system-level understanding of metabolism

    Cell metabolism is the biochemical machinery that provides energy and building blocks to sustain life. Understanding its fine regulation is of pivotal relevance in several fields, from metabolic engineering applications to the treatment of metabolic disorders and cancer. Sophisticated computational approaches are needed to unravel the complexity of metabolism. To this end, a plethora of methods have been developed, yet it is generally hard to identify which computational strategy is most suited for the investigation of a specific aspect of metabolism. This review provides an up-to-date description of the computational methods available for the analysis of metabolic pathways, discussing their main advantages and drawbacks. In particular, attention is devoted to the identification of the appropriate scale and level of accuracy in the reconstruction of metabolic networks, and to the inference of model structure and parameters, especially when dealing with a shortage of experimental measurements. The choice of the proper computational methods to derive in silico data is then addressed, including topological analyses, constraint-based modeling and simulation of the system dynamics. A description of some computational approaches to gain new biological knowledge or to formulate hypotheses is finally provided.
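    Among the approaches surveyed, constraint-based modeling lends itself to a compact illustration. The sketch below runs a minimal flux balance analysis on a toy three-metabolite network with SciPy; the stoichiometric matrix, flux bounds and objective are invented for the example and are not taken from the review.

```python
# Minimal flux balance analysis sketch on a toy 3-metabolite, 4-reaction network.
# The stoichiometric matrix and bounds are illustrative only.
import numpy as np
from scipy.optimize import linprog

# Rows: metabolites A, B, C; columns: reactions v1..v4
S = np.array([
    [ 1, -1,  0,  0],   # A: produced by v1 (uptake), consumed by v2
    [ 0,  1, -1,  0],   # B: produced by v2, consumed by v3
    [ 0,  0,  1, -1],   # C: produced by v3, exported by v4
])

bounds = [(0, 10)] * 4        # flux bounds per reaction
c = np.array([0, 0, 0, -1])   # maximize v4 (export flux) -> minimize -v4

# Steady-state constraint S v = 0 plus bounds defines the feasible flux space
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)
print("maximal export flux:", -res.fun)
```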

    Multi-Antenna Techniques for Next Generation Cellular Communications

    Future cellular communications are expected to offer substantial improvements over pre-existing mobile services, with higher data rates and lower latency, as well as to pioneer new types of applications that must comply with strict demands from a wider range of user types. All of these tasks require utmost efficiency in the use of spectral resources. Deploying multiple antennas introduces an additional signal dimension to wireless data transmissions, which provides a significant alternative solution to the plateauing capacity issue of the limited available spectrum. Multi-antenna techniques and the associated key enabling technologies possess unquestionable potential to play a key role in the evolution of next generation cellular systems. Spectral efficiency can be improved on the downlink by concurrently serving multiple users with high-rate data connections on shared resources. In this thesis, optimized multi-user multi-input multi-output (MIMO) transmissions are investigated on the downlink from both the filter design and the resource allocation/assignment points of view. Regarding filter design, a joint baseband processing method is proposed specifically for high signal-to-noise ratio (SNR) conditions, where the necessary signaling overhead can be compensated for. Regarding resource scheduling, greedy- and genetic-based algorithms are proposed that demand lower complexity for a large number of resource blocks relative to prior implementations. Channel estimation techniques are investigated for massive MIMO technology. In the case of channel reciprocity, this thesis proposes an overhead reduction scheme for the signaling of user channel state information (CSI) feedback during relative antenna calibration. In addition, a multi-cell coordination method is proposed for subspace-based blind estimators on the uplink, which can be implicitly translated to downlink CSI in the presence of ideal reciprocity. Regarding non-reciprocal channels, a novel estimation technique is proposed based on reconstructing the full downlink CSI from a select number of dominant propagation paths. The proposed method offers drastic compression of user feedback reports and requires much simpler downlink training processes. Full-duplex technology can provide up to twice the spectral efficiency of conventional resource divisions. This thesis considers a full-duplex two-hop link with a MIMO relay and investigates mitigation techniques against the inherent loop interference. Spatial-domain suppression schemes are developed for the optimization of full-duplex MIMO relaying in a downlink coverage extension scenario. The proposed methods are demonstrated to generate data rates that closely approximate their global bounds.
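    As a generic, self-contained illustration of multi-user MIMO downlink precoding (not the joint filter design proposed in the thesis), the sketch below builds a zero-forcing precoder for a four-antenna base station serving four single-antenna users; the channel model, dimensions and transmit power are assumptions made for the example.

```python
# Generic zero-forcing precoding illustration (not the thesis's proposed filters):
# 4 base-station antennas, 4 single-antenna users, i.i.d. Rayleigh channel.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_users = 4, 4

# H[k, :] is user k's channel row vector
H = (rng.standard_normal((n_users, n_tx))
     + 1j * rng.standard_normal((n_users, n_tx))) / np.sqrt(2)

# Zero-forcing precoder: pseudo-inverse of H with unit-power columns
W = np.linalg.pinv(H)
W /= np.linalg.norm(W, axis=0, keepdims=True)

# Effective channel H @ W is diagonal: inter-user interference is nulled
effective = H @ W
print(np.round(np.abs(effective), 3))

# Per-user SNR for transmit power P and unit noise variance (illustrative)
P = 10.0
snr = P * np.abs(np.diag(effective)) ** 2
print("per-user SNR [dB]:", np.round(10 * np.log10(snr), 1))
```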

    Logic-based Technologies for Intelligent Systems: State of the Art and Perspectives

    Together with the disruptive development of modern sub-symbolic approaches to artificial intelligence (AI), symbolic approaches to classical AI are re-gaining momentum, as more and more researchers exploit their potential to make AI more comprehensible, explainable, and therefore trustworthy. Since logic-based approaches lie at the core of symbolic AI, summarizing their state of the art is of paramount importance now more than ever, in order to identify trends, benefits, key features, gaps, and limitations of the techniques proposed so far, as well as to identify promising research perspectives. Along this line, this paper provides an overview of logic-based approaches and technologies by sketching their evolution and pointing out their main application areas. Future perspectives for the exploitation of logic-based technologies are discussed as well, in order to identify those research fields that deserve more attention, considering both the areas that already exploit logic-based approaches and those that are more likely to adopt them in the future.

    Computational design and designability of gene regulatory networks

    Our knowledge of molecular interactions has led us today towards an engineering perspective, in which designs and implementations of artificial regulatory systems attempt to provide fundamental instructions for cellular reprogramming. Here we address the design of gene networks as a way of deepening our understanding of natural regulation. We also address the problem of designability given a library of compatible elements. To this end, we apply heuristic optimization methods that implement routines for solving inverse problems, as well as mathematical analysis tools to study the dynamics of gene expression. Because the engineering of transcriptional networks has mainly relied on assembling a few regulatory elements using rational design principles, we developed a computational design framework to exploit this approach. Models associated with libraries were examined to uncover the genotypic space associated with a given phenotype. In addition, we developed a fully automated procedure to design non-coding RNA molecules with regulatory capacity, based on a physicochemical model and exploiting allosteric regulation. The resulting RNA circuits implemented a post-transcriptional control mechanism for protein expression that could be combined with transcriptional elements. We also applied the heuristic methods to analyze the designability of metabolic pathways. Indeed, computational design methods can at the same time learn from natural mechanisms in order to exploit their fundamental principles, and the study of these systems thus allows us to deepen our genetic engineering capabilities. Notably, integral control and incoherent regulation are general strategies employed by organisms, and we analyze them here. Rodrigo Tarrega, G. (2011). Computational design and designability of gene regulatory networks [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1417
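    To give a flavor of the heuristic inverse-design routines mentioned in the abstract (the thesis's actual models and objectives are not reproduced here), the sketch below uses a simulated-annealing style search to tune two parameters of a toy mutual-repression circuit towards a target steady-state expression level; the ODE model, parameter ranges and target are illustrative assumptions.

```python
# Simulated-annealing sketch for inverse design of a toy two-gene
# mutual-repression circuit; model and target are illustrative only.
import numpy as np
from scipy.integrate import odeint

def toggle(state, t, alpha, K, n=2.0):
    x, y = state
    dx = alpha / (1.0 + (y / K) ** n) - x   # gene X repressed by Y
    dy = alpha / (1.0 + (x / K) ** n) - y   # gene Y repressed by X
    return [dx, dy]

def cost(params, target_x=4.0):
    alpha, K = params
    traj = odeint(toggle, [1.0, 0.1], np.linspace(0, 50, 200), args=(alpha, K))
    return (traj[-1, 0] - target_x) ** 2    # distance of X's steady state from target

rng = np.random.default_rng(1)
current = np.array([2.0, 1.0])              # initial guess for (alpha, K)
current_cost = cost(current)
best, best_cost = current.copy(), current_cost
T = 1.0                                      # annealing temperature

for step in range(500):
    candidate = np.clip(current + rng.normal(scale=0.2, size=2), 0.1, 10.0)
    cand_cost = cost(candidate)
    # accept downhill moves always, uphill moves with temperature-dependent probability
    if cand_cost < current_cost or rng.random() < np.exp(-(cand_cost - current_cost) / T):
        current, current_cost = candidate, cand_cost
    if current_cost < best_cost:
        best, best_cost = current.copy(), current_cost
    T *= 0.99                                # cool down

print("best (alpha, K):", np.round(best, 3), "residual cost:", round(best_cost, 4))
```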

    Ensemble deep learning: A review

    Ensemble learning combines several individual models to obtain better generalization performance. Currently, deep learning models with multilayer processing architectures are showing better performance than shallow or traditional classification models. Deep ensemble learning models combine the advantages of both deep learning and ensemble learning, such that the final model has better generalization performance. This paper reviews the state-of-the-art deep ensemble models and hence serves as an extensive summary for researchers. The models are broadly categorized into bagging, boosting and stacking ensembles; negative-correlation-based deep ensemble models; explicit/implicit ensembles; homogeneous/heterogeneous ensembles; decision fusion strategies; and unsupervised, semi-supervised, reinforcement learning, online/incremental and multilabel-based deep ensemble models. The application of deep ensemble models in different domains is also briefly discussed. Finally, we conclude this paper with some future recommendations and research directions.
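    As a small, self-contained illustration of two of the reviewed categories, bagging and stacking, the sketch below builds both ensembles with scikit-learn on a synthetic dataset; the base learners, hyperparameters and data are placeholders rather than any specific model discussed in the review.

```python
# Minimal bagging and stacking ensembles on a synthetic dataset (placeholders only).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: many trees trained on bootstrap resamples, predictions aggregated
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)

# Stacking: a meta-learner combines the outputs of heterogeneous base models
stacking = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
)

for name, model in [("bagging", bagging), ("stacking", stacking)]:
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```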