897 research outputs found

    Neuromorphic hardware for somatosensory neuroprostheses

In individuals with sensory-motor impairments, missing limb functions can be restored using neuroprosthetic devices that directly interface with the nervous system. However, restoring the natural tactile experience through electrical neural stimulation requires complex encoding strategies, which are presently limited by bandwidth constraints in how effectively they can convey or restore tactile sensations. Neuromorphic technology, which mimics the natural behavior of neurons and synapses, holds promise for replicating the encoding of natural touch, potentially informing neurostimulation design. In this perspective, we propose that incorporating neuromorphic technologies into neuroprostheses could be an effective approach for developing more natural human-machine interfaces, potentially leading to advancements in device performance, acceptability, and embeddability. We also highlight ongoing challenges and the actions required to facilitate the future integration of these advanced technologies
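The spike-based encoding that neuromorphic hardware performs can be illustrated with a leaky integrate-and-fire (LIF) neuron, the simplest spiking model. This is a generic sketch with illustrative parameters, not the encoding scheme proposed in the paper:

```python
import numpy as np

def lif_encode(signal, dt=1e-3, tau=0.02, threshold=1.0):
    """Encode an analog signal into spike times with a leaky
    integrate-and-fire neuron (membrane resets after each spike)."""
    v = 0.0
    spikes = []
    for i, x in enumerate(signal):
        v += dt * (-v / tau + x)   # leaky integration of the input
        if v >= threshold:
            spikes.append(i * dt)  # record spike time
            v = 0.0                # reset membrane potential
    return spikes

# A stronger "touch" produces more spikes: a crude rate code.
t = np.arange(0, 0.5, 1e-3)
weak = lif_encode(np.full_like(t, 60.0))
strong = lif_encode(np.full_like(t, 120.0))
print(len(weak), len(strong))
```

The point of the sketch is that stimulus intensity maps naturally onto spike rate, which is the kind of biomimetic code that could inform neurostimulation design.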

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language processing, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To explore the current state of deep learning, it is necessary to investigate its latest developments and applications in these disciplines. However, the literature is lacking in exploring the applications of deep learning across all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, together with the associated benefits and challenges. As evidenced in the literature, DL exhibits high accuracy in prediction and analysis, making it a powerful computational tool with the ability to self-optimize, and it can be effective even without task-specific prior training; at the same time, it requires massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge amounts of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, the neural network needs neurons shared across all tasks as well as neurons specialized for particular tasks. Comment: 64 pages, 3 figures, 3 tables
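The gated architectures mentioned above (LSTMs, GRUs) all share one idea: learned gates decide how much past state to keep at each step. A minimal NumPy sketch of a single GRU step, with random illustrative weights rather than a trained model:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: the update gate z blends old state with a
    candidate state, which lets the network retain long-range context."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # blend old and new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# Weight shapes alternate (d_h, d_in) and (d_h, d_h): Wz,Uz,Wr,Ur,Wh,Uh.
params = [rng.normal(scale=0.1, size=(d_h, d)) for d in (d_in, d_h) * 3]
h = np.zeros(d_h)
for _ in range(10):                # run over a short random sequence
    h = gru_step(rng.normal(size=d_in), h, *params)
print(h.shape)
```

Because the new state is always a gate-weighted average of the old state and a tanh candidate, the hidden state stays bounded, which is what makes gated cells more stable than plain RNNs on long sequences.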

    Sustainable Reservoir Management Approaches under Impacts of Climate Change - A Case Study of Mangla Reservoir, Pakistan

    Reservoir sedimentation is a major issue for water resource management around the world. It has serious economic, environmental, and social consequences, such as reduced water storage capacity, increased flooding risk, decreased hydropower generation, and deteriorated water quality. Increased rainfall intensity, higher temperatures, and more extreme weather events due to climate change are expected to exacerbate the problem of reservoir sedimentation. As a result, sedimentation must be managed to ensure the long-term viability of reservoirs and their associated infrastructure. Effective reservoir sedimentation management in the face of climate change necessitates an understanding of the sedimentation process and the factors that influence it, such as land use practices, erosion, and climate. Monitoring and modelling sedimentation rates are also useful tools for forecasting future impacts and making management decisions. The goal of this research is to create long-term reservoir management strategies in the face of climate change by simulating the effects of various reservoir-operating strategies on reservoir sedimentation and sediment delta movement at Mangla Reservoir in Pakistan (the second-largest dam in the country). In order to assess the impact of the Mangla Reservoir's sedimentation and reservoir life, a framework was developed. This framework incorporates both hydrological and morphodynamic models and various soft computing models. In addition to taking climate change uncertainty into consideration, the proposed framework also incorporates sediment source, sediment delivery, and reservoir morphology changes. Furthermore, the purpose of this study is to provide a practical methodology based on the limited data available. 
In the first phase of this study, it was investigated how to accurately quantify missing suspended sediment load (SSL) data in rivers by utilizing various techniques, such as sediment rating curves (SRC) and soft computing models (SCMs), including local linear regression (LLR), artificial neural networks (ANN), and wavelet-cum-ANN (WANN). Further, the Gamma test and M-test were performed to select the best input variables and the appropriate data length for SCM development. Based on an evaluation of the outcomes of all leading models for SSL estimation, it can be concluded that SCMs are more effective than SRC approaches. Additionally, the results indicated that the WANN model was the most accurate for reconstructing the SSL time series because it is capable of identifying the salient characteristics in a data series. The second phase of this study examined the feasibility of using four satellite precipitation datasets (SPDs), namely GPM, PERSIANN_CDR, CHIRPS, and CMORPH, to predict streamflow and sediment loads (SL) within a poorly gauged mountainous catchment, by employing the SWAT hydrological model as well as SWAT coupled with soft computing models (SCMs) such as artificial neural networks (SWAT-ANN), random forests (SWAT-RF), and support vector regression (SWAT-SVR). SCMs were developed using the outputs of uncalibrated SWAT hydrological models to improve the predictions. The results indicate that, over the entire simulation, GPM shows the best performance in both schemes, while PERSIANN_CDR and CHIRPS also perform well, whereas CMORPH predicts streamflow for the Upper Jhelum River Basin (UJRB) with relatively poor performance. Among the best GPM-based models, SWAT-RF offered the best performance for simulating the entire streamflow, while SWAT-ANN excelled at simulating the SL.
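A sediment rating curve is commonly expressed as the power law SSL = a·Q^b and fitted by least squares in log-log space. The following sketch uses synthetic data and illustrative coefficients, not the Mangla records:

```python
import numpy as np

def fit_rating_curve(Q, S):
    """Fit the classic sediment rating curve S = a * Q**b by
    ordinary least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(Q), np.log(S), 1)
    return np.exp(log_a), b

# Synthetic discharge/sediment pairs generated from a known curve.
rng = np.random.default_rng(1)
Q = rng.uniform(50, 500, size=200)                     # discharge (m^3/s)
S = 0.02 * Q**1.6 * np.exp(rng.normal(0, 0.05, 200))   # noisy SSL
a, b = fit_rating_curve(Q, S)
print(round(a, 3), round(b, 2))
```

The soft computing models compared in the study (LLR, ANN, WANN) replace this fixed power-law form with learned, data-driven mappings, which is why they can outperform the SRC on complex series.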
Hence, hydrological models coupled with SCMs and based on SPDs could be an effective technique for simulating streamflow and SL, particularly in complex terrain where gauge network density is low or uneven. The third and last phase of this study investigated the impact of different reservoir operating strategies on Mangla reservoir sedimentation using a 1D sediment transport model. To improve the accuracy of the model, more accurate boundary conditions for flow and sediment load (derived from the first and second phases of this study) were incorporated into the numerical model so that the successive morphodynamic model could precisely predict bed level changes under given climate conditions. Further, in order to assess the long-term effect of a changing climate, a Global Climate Model (GCM) under Representative Concentration Pathway (RCP) scenarios 4.5 and 8.5 for the 21st century was used. The long-term modelling results showed that a gradual increase in the reservoir minimum operating level (MOL) slows down the delta movement rate and the rise of the bed level close to the dam. However, it may compromise the downstream irrigation demand during periods of high water demand. The findings may help reservoir managers to improve reservoir operation rules and ultimately support the objective of sustainable reservoir use for societal benefit. In summary, this study provides comprehensive insights into reservoir sedimentation phenomena and recommends an operational strategy that is both feasible and sustainable over the long term under the impact of climate change, especially in cases where data are scarce. In particular, it is very important to improve the accuracy of sediment load estimates, which are essential in the design and operation of reservoir structures and operating plans in response to incoming sediment loads, ensuring accurate reservoir lifespan predictions.
Furthermore, the production of highly accurate streamflow forecasts, particularly when on-site data are limited, is important and can be achieved by the use of satellite-based precipitation data in conjunction with hydrological and soft computing models. Ultimately, the use of soft computing methods produces significantly improved input data for sediment load and discharge, enabling the application of one-dimensional hydro-morphodynamic numerical models to evaluate sediment dynamics and reservoir useful life under the influence of climate change at various operating conditions.
Chapter 1: Introduction
Chapter 2: Reconstruction of Sediment Load Data in Rivers
Chapter 3: Assessment of the Hydrological and Coupled Soft Computing Models, Based on Different Satellite Precipitation Datasets, to Simulate Streamflow and Sediment Load in a Mountainous Catchment
Chapter 4: Simulating the Impact of Climate Change with Different Reservoir Operating Strategies on Sedimentation of the Mangla Reservoir, Northern Pakistan
Chapter 5: Conclusions and Recommendations

    Tradition and Innovation in Construction Project Management

    This book is a reprint of the Special Issue 'Tradition and Innovation in Construction Project Management' that was published in the journal Buildings

    Contributions to improve the technologies supporting unmanned aircraft operations

International Mention in the doctoral degree (Mención Internacional en el título de doctor).
Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system. These systems provide the motors' propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller is made up of a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential. On the other hand, among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSM) are developed based on a set of hypotheses for modeling the world. Among the assumptions are that the models of the world must be linear, Markovian, and that the error of their models must be Gaussian.
In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian, but it is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian, i.e., their states do not depend only on the previous state; there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer-vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach; finally, classification concepts with deep learning over trajectories are studied. The first case of the collection studies the consequences of error propagation in a machine-vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system, and ultimately to reduce the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where error is a function to be minimized by learning. The last case of the collection deals with a trajectory classification problem with real data. This work covers the two main fields of deep learning, regression and classification, where the error is treated as a probability of class membership.
I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated with the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.
Programa de Doctorado en Ciencia y Tecnología Informática por la Universidad Carlos III de Madrid. Committee: Chair, Antonio Berlanga de Jesús; Secretary, Daniel Arias Medina; Member, Alejandro Martínez Cav
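The covariance-minimizing update of the Kalman filter discussed above can be shown in a minimal scalar (random-walk) sketch; the noise parameters here are illustrative, not values from the thesis:

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state:
    predict with process noise q, correct with measurement noise r."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        p = p + q             # predict: covariance grows
        k = p / (p + r)       # Kalman gain
        x = x + k * (z - x)   # correct with the innovation
        p = (1 - k) * p       # covariance shrinks after the update
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(2)
truth = 5.0
zs = truth + rng.normal(0, 0.7, size=200)   # noisy sensor readings
est = kalman_1d(zs)
err_raw = np.mean((zs - truth) ** 2)
err_kf = np.mean((est[50:] - truth) ** 2)   # after convergence
print(err_kf < err_raw)
```

The filtered estimate has a much smaller mean squared error than the raw measurements, which is exactly the covariance-minimization property the thesis builds on (and which its deep learning estimators aim to extend beyond the linear-Gaussian-Markov assumptions).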

    Discovering Causal Relations and Equations from Data

    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws and principles that are invariant, robust and causal explanations of the world has been fundamental in physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies in the system under study. With the advent of big data and the use of data-driven methods, causal and equation discovery fields have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of Physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised with the efficient exploitation of observational data, modern machine learning algorithms and the interaction with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.Comment: 137 page
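One concrete flavor of data-driven equation discovery surveyed in this area is sparse regression over a library of candidate terms (in the style of SINDy). A toy sketch on synthetic data, not an example from the paper:

```python
import numpy as np

def discover_equation(x, y, threshold=0.1):
    """Regress y on a library of candidate terms, then zero out
    small coefficients to recover a sparse, interpretable equation."""
    library = np.column_stack([np.ones_like(x), x, x**2, x**3])
    coefs, *_ = np.linalg.lstsq(library, y, rcond=None)
    coefs[np.abs(coefs) < threshold] = 0.0   # sparsify
    return coefs  # coefficients of [1, x, x^2, x^3]

# Data generated from y = 3x^2 - 2x plus small noise.
rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, size=300)
y = 3 * x**2 - 2 * x + rng.normal(0, 0.01, size=300)
coefs = discover_equation(x, y)
print(coefs)
```

The recovered coefficient vector is sparse: only the x and x² terms survive the threshold, so the symbolic form of the generating law is read directly off the nonzero entries.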

    (b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

(b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

    WEIGH-IN-MOTION DATA-DRIVEN PAVEMENT PERFORMANCE PREDICTION MODELS

The effective functioning of pavements as a critical component of the transportation system necessitates the implementation of ongoing maintenance programs to safeguard this significant and valuable infrastructure and guarantee its optimal performance. The maintenance, rehabilitation, and reconstruction (MRR) program of the pavement structure is dependent on a multidimensional decision-making process, which considers the existing pavement structural condition and the anticipated future performance. Pavement Performance Prediction Models (PPPMs) have become indispensable tools for the efficient implementation of the MRR program and the minimization of associated costs by providing precise predictions of distress and roughness based on inventory and monitoring data concerning the pavement structure's state, traffic load, and climatic conditions. The integration of PPPMs has become a vital component of Pavement Management Systems (PMSs), facilitating the optimization, prioritization, scheduling, and selection of maintenance strategies. Researchers have developed several PPPMs with differing objectives, and each PPPM has demonstrated distinct strengths and weaknesses regarding its applicability, implementation process, and data requirements for development. Traditional statistical models, such as linear regression, are inadequate in handling complex nonlinear relationships between variables and often generate less precise results. Machine Learning (ML)-based models have become increasingly popular due to their ability to manage vast amounts of data and identify meaningful relationships between them to generate informative insights for better predictions. To create ML models for pavement performance prediction, it is necessary to gather a significant amount of historical data on pavement and traffic loading conditions.
The Long-Term Pavement Performance Program (LTPP), initiated by the Federal Highway Administration (FHWA), offers a comprehensive repository of data on the environment, traffic, inventory, monitoring, maintenance, and rehabilitation works that can be utilized to develop PPPMs. The LTPP also includes Weigh-In-Motion (WIM) data that provide information on traffic, such as truck traffic, total traffic, directional distribution, and the number of different axle types of vehicles. High-quality traffic loading data can play an essential role in improving the performance of PPPMs, as the Mechanistic-Empirical Pavement Design Guide (MEPDG) considers vehicle types and axle load characteristics to be critical inputs for pavement design. Collecting such high-quality traffic loading data has historically been a challenge in developing PPPMs, and the WIM system, which comprises WIM scales, has emerged as an innovative solution to address this issue. By leveraging computer vision and machine learning techniques, WIM systems can collect accurate data on vehicle type and axle load characteristics, which are critical factors affecting the performance of flexible pavements; excessive dynamic loading caused by heavy vehicles can result in the early disintegration of the pavement structure. The incorporation of the comprehensive WIM data available in the LTPP therefore has the potential to significantly improve the accuracy and effectiveness of PPPMs in predicting future pavement behavior and tolerance.
To develop artificial neural network (ANN) based pavement performance prediction models (PPPMs) for seven distinct performance indicators, including IRI, longitudinal crack, transverse crack, fatigue crack, potholes, polished aggregate, and patch failure, a total of 300 pavement sections with WIM data were selected from the United States of America. Data collection spanned 20 years, from 2001 to 2020, and included information on pavement age, material properties, climatic properties, structural properties, and traffic-related characteristics. The primary dataset was then divided into two distinct subsets: one which included WIM-generated traffic data and another which excluded WIM-generated traffic data. Data cleaning and normalization were meticulously performed using the Z-score normalization method. Each subset was further divided into two separate groups: the first containing 15 years of data for model training and the latter containing 5 years of data for testing purposes. Principal Component Analysis (PCA) was then employed to reduce the number of input variables for the model. Based on a cumulative Proportion of Variation (PoV) of 96%, 12 input variables were selected. Subsequently, a single hidden layer ANN model with 12 neurons was generated for each performance indicator. The study's results indicate that incorporating Weigh-In-Motion (WIM)-generated traffic loading data can significantly enhance the accuracy and efficacy of pavement performance prediction models (PPPMs). This improvement further supports the suitability of optimized pavement maintenance scheduling with minimal costs, while also ensuring timely repairs to promote acceptable serviceability and structural stability of the pavement.
The contributions of this research are twofold: first, it provides an enhanced understanding of the positive impacts that high-quality traffic loading data has on pavement conditions; and second, it explores potential applications of WIM data within the Pavement Management System (PMS)
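The preprocessing pipeline described above (Z-score normalization, then PCA keeping enough components to reach a cumulative proportion of variance target) can be sketched as follows. The 20-feature synthetic data here is a stand-in for the real LTPP variables, and the 96% target mirrors the study's PoV criterion:

```python
import numpy as np

def zscore(X):
    """Z-score normalization: zero mean, unit variance per feature."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pca_by_variance(X, target=0.96):
    """Keep the fewest principal components whose cumulative
    proportion of variance reaches the target."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    cum = np.cumsum(vals) / vals.sum()
    k = int(np.searchsorted(cum, target) + 1)
    return Xc @ vecs[:, :k], k

rng = np.random.default_rng(4)
# 100 samples, 20 correlated features driven by 5 latent factors.
latent = rng.normal(size=(100, 5))
X = latent @ rng.normal(size=(5, 20)) + rng.normal(0, 0.05, (100, 20))
scores, k = pca_by_variance(zscore(X))
print(k)
```

Because the synthetic features are driven by only five latent factors, a handful of components captures nearly all the variance, illustrating how the study could reduce its inputs to 12 variables before training the ANN.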

    Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology

    The great behavioral heterogeneity observed between individuals with the same psychiatric disorder and even within one individual over time complicates both clinical practice and biomedical research. However, modern technologies are an exciting opportunity to improve behavioral characterization. Existing psychiatry methods that are qualitative or unscalable, such as patient surveys or clinical interviews, can now be collected at a greater capacity and analyzed to produce new quantitative measures. Furthermore, recent capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometer, open avenues of novel questioning that were previously entirely unrealistic. Their temporally dense nature enables a cohesive study of real-time neural and behavioral signals. To develop comprehensive neurobiological models of psychiatric disease, it will be critical to first develop strong methods for behavioral quantification. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge -- one that will necessitate new data processing tools, new machine learning techniques, and ultimately a shift in how interdisciplinary work is conducted. In my thesis, I detail research projects that take different perspectives on digital psychiatry, subsequently tying ideas together with a concluding discussion on the future of the field. I also provide software infrastructure where relevant, with extensive documentation. Major contributions include scientific arguments and proof of concept results for daily free-form audio journals as an underappreciated psychiatry research datatype, as well as novel stability theorems and pilot empirical success for a proposed multi-area recurrent neural network architecture.Comment: PhD thesis cop