    Virtual Runtime Application Partitions for Resource Management in Massively Parallel Architectures

    This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, in which power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most work on resource management treats only the physical components (i.e., computation, communication, and memory blocks) as resources and manipulates the component-to-application mapping to optimize various parameters (e.g., energy efficiency). To further enhance the optimization potential, we propose to manipulate abstract resources (i.e., the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture) in addition to the physical ones. The proposed framework (i.e., VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. Several novel architectural enhancements, algorithms, and policies are presented to realize virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach, specifically the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
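
    The abstract-resource idea is easiest to see for the voltage/frequency operating point: pick the cheapest point that still meets an application's deadline. The sketch below is a toy illustration under an assumed operating-point table and a linear cycles/frequency task model, not the thesis's actual POE algorithm.

        # Toy sketch of deadline-aware voltage/frequency point selection, in
        # the spirit of VRAP's Private Operating Environment (POE). The table
        # and the task model are illustrative assumptions.

        # (voltage [V], frequency [MHz], dynamic power [mW]) -- hypothetical
        OPERATING_POINTS = [
            (0.8, 200, 40),
            (0.9, 400, 95),
            (1.0, 600, 180),
            (1.1, 800, 300),
        ]

        def pick_operating_point(cycles: int, deadline_ms: float):
            """Return the lowest-power V/F point that still meets the deadline."""
            feasible = [
                (v, f, p) for (v, f, p) in OPERATING_POINTS
                if cycles / (f * 1e3) <= deadline_ms  # f MHz -> f*1e3 cycles/ms
            ]
            if not feasible:
                raise ValueError("No operating point meets the deadline")
            return min(feasible, key=lambda vfp: vfp[2])  # minimize power

        if __name__ == "__main__":
            # A task needing 3e5 cycles with a 1 ms deadline fits at 400 MHz.
            print(pick_operating_point(300_000, 1.0))  # -> (0.9, 400, 95)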

    Design and development of a generic spatial decision support system, based on artificial intelligence and multicriteria decision analysis

    A new integrated and generic Spatial Decision Support System (SDSS) is presented, based on a combination of Artificial Intelligence and Multicriteria Decision Analysis techniques. The proposed approach addresses the commonly faced spatial decision problems of site selection, site ranking, impact assessment, and spatial knowledge discovery within one system. The site selection module utilises a theme-based Analytical Hierarchy Process. Two novel site ranking techniques are introduced: the first is based on a systematic neighbourhood comparison of sites with respect to key datasets (criteria); the second utilises the multivariate ordering capability of one-dimensional Self-Organizing Maps. The site impact assessment module utilises a new spatially enabled Rapid Impact Assessment Matrix. A spatial variant of General Regression Neural Networks is developed for Geographically Weighted Regression (GWR) and prediction analysis. The developed system is proposed as a useful modern tool that facilitates quantitative, evidence-based decision making in a multicriteria decision environment. The intended users are decision makers in government organisations, in particular those involved in planning and development who must take into account socio-economic, environmental, and public health issues.
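
    The Analytical Hierarchy Process underlying the site selection module derives criterion weights from pairwise comparisons. A minimal sketch of the standard principal-eigenvector derivation follows; the comparison matrix and the three example criteria are hypothetical, and the paper's theme-based variant organizes such judgments per theme.

        import numpy as np

        # Minimal sketch of the standard AHP weight derivation used in site
        # selection. The pairwise comparison matrix below is hypothetical.

        def ahp_weights(pairwise: np.ndarray):
            """Principal-eigenvector criterion weights plus a consistency ratio."""
            eigvals, eigvecs = np.linalg.eig(pairwise)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            n = pairwise.shape[0]
            lam_max = eigvals[k].real
            ci = (lam_max - n) / (n - 1)          # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
            return w, ci / ri                      # weights, consistency ratio

        # Hypothetical comparisons for three criteria (e.g. land cost,
        # accessibility, environmental impact) on Saaty's 1-9 scale.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 3.0],
                      [1/5, 1/3, 1.0]])
        w, cr = ahp_weights(A)
        print(w, cr)   # a CR below 0.1 is conventionally acceptable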

    Load forecasting on the user‐side by means of computational intelligence algorithms

    Nowadays, it would be very difficult to deny the need to prioritize sustainable development through energy efficiency at all consumption levels. In this context, an energy management system (EMS) is a suitable option for continuously improving energy efficiency, particularly on the user side. An EMS is a set of technological tools that manages energy consumption information and allows its analysis. EMSs combined with information technologies have given rise to intelligent EMSs (iEMSs), which, in addition to supporting monitoring and reporting functions as an EMS does, can model, forecast, control, and diagnose energy consumption in a predictive way. The main objective of an iEMS is to improve energy efficiency continuously (on-line) and as automatically as possible. The core of an iEMS is its load modeling and forecasting system (LMFS). It takes advantage of historical information on energy consumption and energy-related variables in order to model and forecast load profiles and, if available, generator profiles. These models and forecasts are the main information used by iEMS applications for control and diagnosis. That is why this thesis focuses on the study, analysis, and development of LMFSs on the user side.

    An LMFS applied on the user side to support an iEMS requires specific characteristics that are not needed in other areas of load forecasting. First, user-side load profiles (LPs) behave more randomly than those found, for example, in power system distribution or generation, which makes modeling and forecasting more difficult. Second, on the user side (for example, an industrial user) there is a large number and variety of loads that can be monitored, modeled, and forecasted, differing in provenance and nature. An LMFS therefore requires a high degree of autonomy, to generate the demanded models automatically or autonomously, and a high level of adaptability, to be able to model and forecast different types of loads and different types of energy. The LMFSs addressed here are thus those that pursue not only accuracy but also adaptability and autonomy.

    Seeking to achieve these objectives, this thesis proposes three novel LMFS schemes based on hybrid algorithms from computational intelligence, signal processing, and statistical theory. The first aims to improve adaptability while preserving accuracy and autonomy. Called the evolutionary training algorithm (ETA), it is based on an adaptive-network-based fuzzy inference system (ANFIS) that is trained by a multi-objective genetic algorithm (MOGA) instead of its traditional training algorithm. This hybrid improves generalization capacity (avoiding overfitting) and yields a training algorithm that is easily adapted to new adaptive networks based on traditional ANFIS. The second scheme addresses LMFS autonomy, in order to build models of multiple loads automatically. As in the previous proposal, an ANFIS and a MOGA are used, but here the MOGA searches for a near-optimal configuration of the ANFIS instead of training it. The LMFS relies on this configuration to work properly and to maintain accuracy and generalization capability. Real data from an industrial scenario were used to test the proposed scheme, and the multi-site modeling and self-configuration results were satisfactory. Furthermore, other algorithms were successfully designed and tested for processing raw data, including outlier detection and gap padding. The last of the proposed approaches seeks to improve accuracy while keeping autonomy and adaptability. It takes advantage of dominant patterns (DPs) that have a lower time resolution than the target LP and are therefore easier to model and forecast. The Hilbert-Huang transform and Hilbert spectral analysis are used to detect and select the DPs. The selected DPs feed a proposed scheme of partial models (PMs), based on parallel ANFIS or artificial neural network (ANN) models, which extract their information and pass it to the main PM. This improves LMFS accuracy and mitigates the noise problem of user-side LPs. Additionally, to compensate for the added complexity, self-configured sub-LMFSs were used for each PM. This point is fundamental: the better the configuration, the better the accuracy of each partial model, and hence the better the information provided to the main partial model. Finally, to close this thesis, an outlook on iEMS trends and an outline of several hybrid algorithms pending study and testing are presented.
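
    To make the partial-models scheme concrete, the sketch below decomposes a synthetic load profile with EMD, trains one small sub-model per component, and lets a main model combine the partial forecasts. It substitutes scikit-learn MLPs for the thesis's ANFIS/ANN partial models and omits both the MOGA self-configuration and the Hilbert-spectral DP selection; the load series, window length, and network sizes are assumptions.

        import numpy as np
        from PyEMD import EMD                      # pip install EMD-signal
        from sklearn.neural_network import MLPRegressor

        # Sketch of the partial-models idea: EMD components as "dominant
        # patterns", one sub-model per component, a main model on top.

        rng = np.random.default_rng(0)
        t = np.arange(2000)
        load = (10 + 3*np.sin(2*np.pi*t/96) + np.sin(2*np.pi*t/672)
                + rng.normal(0, 0.3, t.size))      # synthetic load profile

        imfs = EMD()(load)                         # IMFs plus residue

        def lagged(x, lags=24):
            """Windows of the previous `lags` samples predicting the next."""
            X = np.column_stack([x[i:i-lags] for i in range(lags)])
            return X, x[lags:]

        feats = []
        for imf in imfs:                           # one partial model per IMF
            X, y = lagged(imf)
            m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                             random_state=0).fit(X, y)
            feats.append(m.predict(X))             # partial forecasts as features

        # Main model combines the partial forecasts (in-sample here; a real
        # setup would feed it held-out sub-model forecasts).
        X_main = np.column_stack(feats)
        _, y_main = lagged(load)
        main = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                            random_state=0).fit(X_main, y_main)
        print("train MAE:", np.abs(main.predict(X_main) - y_main).mean())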

    Extraction of single-trial cortical beta oscillatory activities in EEG signals using empirical mode decomposition

    Background: Brain oscillatory activities are stochastic and non-linearly dynamic, due to their non-phase-locked nature and inter-trial variability. Non-phase-locked rhythmic signals can vary from trial to trial depending on variations in a subject's performance and state, which may be linked to fluctuations in expectation, attention, arousal, and task strategy. Therefore, a method that permits the extraction of the oscillatory signal on a single-trial basis is important for the study of subtle brain dynamics, which can be used as probes to study neurophysiology in the normal brain and pathophysiology in the diseased brain. Methods: This paper presents an empirical mode decomposition (EMD)-based spatiotemporal approach to extract neural oscillatory activities from multi-channel electroencephalograph (EEG) data. The efficacy of this approach is demonstrated by extracting single-trial post-movement beta activities during a right index-finger lifting task. In each single trial, an EEG epoch recorded at the channel of interest (CI) was first separated into a number of intrinsic mode functions (IMFs). Sensorimotor-related oscillatory activities were reconstructed from the sensorimotor-related IMFs, chosen by a spatial-map matching process. Post-movement beta activities were acquired by band-pass filtering the sensorimotor-related oscillatory activities within a trial-specific beta band. Signal envelopes of the post-movement beta activities were detected using an amplitude modulation (AM) method to obtain the post-movement beta event-related synchronization (PM-bERS). The mean amplitude of the reference period was subtracted from the maximum amplitude of the PM-bERS within the post-movement period to find the single-trial beta rebound (BR). Results: Single-trial BRs computed by the current method were significantly higher than those obtained from the conventional averaging method (P < 0.01; matched-pair Wilcoxon test). The proposed method provides a high signal-to-noise ratio (SNR) through an EMD-based decomposition and reconstruction process, which enables event-related oscillatory activities to be examined on a single-trial basis. Conclusions: The EMD-based method is effective for artefact removal and for extracting reliable neural features of non-phase-locked oscillatory activities in multi-channel EEG data. The high extraction rate of the proposed method enables the trial-by-trial variability of oscillatory activities to be examined, providing a basis for future in-depth study of subtle brain dynamics.
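
    A single-channel sketch of the processing chain might look as follows. The IMF selection is a placeholder for the paper's spatial-map matching, a Hilbert envelope stands in for the amplitude-modulation method, and the sampling rate, synthetic epoch, period boundaries, and fixed 18-24 Hz band (the paper uses a trial-specific band) are all assumptions.

        import numpy as np
        from PyEMD import EMD                       # pip install EMD-signal
        from scipy.signal import butter, filtfilt, hilbert

        # Sketch: EMD an EEG epoch into IMFs, rebuild an oscillatory signal
        # from selected IMFs, band-pass it in a beta band, take the envelope,
        # and compute a single-trial beta rebound.

        fs = 250.0                                  # sampling rate (Hz), assumed
        rng = np.random.default_rng(1)
        epoch = rng.normal(size=int(4 * fs))        # stand-in for one EEG epoch

        imfs = EMD()(epoch)
        selected = imfs[1:3]                        # placeholder for map matching
        oscillatory = selected.sum(axis=0)

        b, a = butter(4, [18/(fs/2), 24/(fs/2)], btype="band")
        beta = filtfilt(b, a, oscillatory)
        envelope = np.abs(hilbert(beta))            # PM-bERS proxy

        ref = envelope[:int(fs)]                    # assumed 1 s reference period
        post = envelope[int(2 * fs):]               # assumed post-movement period
        beta_rebound = post.max() - ref.mean()      # single-trial BR
        print(beta_rebound)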

    A data-driven approach with uncertainty quantification for predicting future capacities and remaining useful life of lithium-ion battery

    Predicting future capacities and remaining useful life (RUL) with uncertainty quantification is a key but challenging issue in battery health diagnosis and management. This paper applies advanced machine-learning techniques to achieve effective future capacity and RUL prediction for lithium-ion batteries with reliable uncertainty management. Specifically, after applying the empirical mode decomposition (EMD) method, the original battery capacity data is decomposed into several intrinsic mode functions (IMFs) and a residual. A long short-term memory (LSTM) sub-model is then applied to estimate the residual, while a Gaussian process regression (GPR) sub-model fits the IMFs together with the associated uncertainty level. Consequently, both the long-term dependence of the capacity and the uncertainty caused by capacity regenerations can be captured directly and simultaneously. Experimental aging data from different batteries are used to evaluate the performance of the proposed LSTM+GPR model against solo GPR, solo LSTM, GPR+EMD, and LSTM+EMD models. The results demonstrate that the combined LSTM+GPR model outperforms its counterparts and achieves accurate results for both one-step and multi-step-ahead capacity predictions. Even when predicting the RUL at an early battery cycle stage, the proposed data-driven approach still presents good adaptability and reliable uncertainty quantification for battery health diagnosis.
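
    A minimal sketch of the hybrid, under assumptions about the capacity curve, GPR kernel, and network size (the paper's exact choices are not reproduced here): EMD splits the capacity series, GPR with predictive variance models the oscillatory IMF part, a small LSTM models the slow residual trend, and the two forecasts are summed.

        import numpy as np
        import torch
        import torch.nn as nn
        from PyEMD import EMD                                  # pip install EMD-signal
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        cycle = np.arange(200, dtype=float)
        # Synthetic fade curve with regeneration-like wiggles (illustrative).
        capacity = (1.1 - 0.0015*cycle + 0.01*np.sin(cycle/7)
                    + rng.normal(0, 0.002, 200))

        decomp = EMD()(capacity)
        imf_sum = decomp[:-1].sum(axis=0)          # oscillatory IMF part
        residual = decomp[-1]                      # slow trend (residue)

        # GPR on the IMF part, with uncertainty (kernel is an assumption).
        X = cycle.reshape(-1, 1)
        gpr = GaussianProcessRegressor(RBF(10.0) + WhiteKernel(1e-5)).fit(X, imf_sum)
        imf_pred, imf_std = gpr.predict(X, return_std=True)

        # Tiny LSTM on the residual trend, windowed one-step-ahead.
        win = 10
        seqs = np.lib.stride_tricks.sliding_window_view(residual, win + 1)
        x = torch.tensor(seqs[:, :win, None], dtype=torch.float32)
        y = torch.tensor(seqs[:, -1:], dtype=torch.float32)

        class TrendLSTM(nn.Module):
            def __init__(self):
                super().__init__()
                self.lstm = nn.LSTM(1, 16, batch_first=True)
                self.head = nn.Linear(16, 1)
            def forward(self, s):
                out, _ = self.lstm(s)
                return self.head(out[:, -1])

        model = TrendLSTM()
        opt = torch.optim.Adam(model.parameters(), lr=1e-2)
        for _ in range(200):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()

        trend_pred = model(x).detach().numpy().ravel()
        combined = trend_pred + imf_pred[win:]     # align to the window offset
        print("MAE:", np.abs(combined - capacity[win:]).mean(),
              "mean GPR std:", imf_std.mean())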

    Techniques for the realization of ultra-reliable spaceborne computer, Final report

    A bibliography and new techniques for the use of error correction and redundancy to improve the reliability of spaceborne computers.
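
    Two textbook instances of the techniques the report covers, redundancy and error correction, are sketched below: a triple-modular-redundancy majority voter and a Hamming(7,4) single-error-correcting decoder. These are generic constructions, not the report's specific designs.

        # Triple modular redundancy: bitwise 2-of-3 majority vote over three
        # redundant module outputs, masking any single faulty module.
        def tmr_vote(a: int, b: int, c: int) -> int:
            return (a & b) | (a & c) | (b & c)

        # Hamming(7,4): correct a single-bit error in a codeword laid out as
        # [p1, p2, d1, p3, d2, d3, d4] and return the 4 data bits.
        def hamming74_decode(codeword: list[int]) -> list[int]:
            c = codeword[:]
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]        # parity over positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]        # parity over positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]        # parity over positions 4,5,6,7
            syndrome = s1 + 2*s2 + 4*s3           # 1-based position of the bad bit
            if syndrome:
                c[syndrome - 1] ^= 1              # flip the erroneous bit
            return [c[2], c[4], c[5], c[6]]       # data bits d1..d4

        assert tmr_vote(0b1010, 0b1010, 0b0010) == 0b1010  # faulty module masked
        word = [0, 1, 1, 0, 0, 1, 1]              # encodes data bits 1,0,1,1
        word[4] ^= 1                              # inject a single bit flip
        assert hamming74_decode(word) == [1, 0, 1, 1]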

    Neural upscaling from residue-level protein structure networks to atomistic structures

    Coarse-graining is a powerful tool for extending the reach of dynamic models of proteins and other biological macromolecules. Topological coarse-graining, in which biomolecules or sets thereof are represented via graph structures, is a particularly useful way of obtaining highly compressed representations of molecular structures, and simulations operating via such representations can achieve substantial computational savings. A drawback of coarse-graining, however, is the loss of atomistic detail, an effect that is especially acute for topological representations such as protein structure networks (PSNs). Here, we introduce an approach based on a combination of machine learning and physically guided refinement for inferring atomic coordinates from PSNs. This "neural upscaling" procedure exploits the constraints implied by PSNs on possible configurations, as well as differences in the likelihood of observing different configurations with the same PSN. Using a 1 µs atomistic molecular dynamics trajectory of Aβ1-40, we show that neural upscaling is able to effectively recapitulate detailed structural information for intrinsically disordered proteins, being particularly successful in recovering features such as transient secondary structure. These results suggest that scalable network-based models for protein structure and dynamics may be used in settings where atomistic detail is desired, with upscaling employed to impute atomic coordinates from PSNs.
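
    As a toy illustration of the regression setup (not the paper's model, which infers full atomistic detail and applies physically guided refinement), one can flatten a residue-level contact map and fit a neural regressor to coordinates on synthetic chains. Note a caveat the paper's refinement stage addresses: a PSN determines coordinates only up to rigid-body motion, which this sketch sidesteps by centering and otherwise ignores.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        n_res, cutoff = 10, 1.5                    # assumed chain size and cutoff

        def random_chain():
            """Random walk 'backbone' with unit-length steps."""
            steps = rng.normal(size=(n_res, 3))
            steps /= np.linalg.norm(steps, axis=1, keepdims=True)
            return np.cumsum(steps, axis=0)

        def psn(coords):
            """Binary contact map: the residue-level structure network."""
            d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
            return (d < cutoff).astype(float)

        chains = [random_chain() for _ in range(500)]
        X = np.array([psn(c).ravel() for c in chains])               # PSN features
        Y = np.array([(c - c.mean(axis=0)).ravel() for c in chains]) # centered coords

        model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                             random_state=0).fit(X, Y)
        recon = model.predict(X[:1]).reshape(n_res, 3)
        target = chains[0] - chains[0].mean(axis=0)
        print("per-residue error:",
              np.linalg.norm(recon - target, axis=1).mean())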

    The Fifth NASA Symposium on VLSI Design

    The fifth annual NASA Symposium on VLSI Design comprised 13 sessions, including Radiation Effects, Architectures, Mixed Signal, Design Techniques, Fault Testing, Synthesis, Signal Processing, and other featured presentations. The symposium provides insights into developments in VLSI and digital systems that can be used to increase data system performance. The presentations share insights into next-generation advances that will serve as a basis for future VLSI design.

    CATHe: Detection of remote homologues for CATH superfamilies using embeddings from protein language models

    MOTIVATION: CATH is a protein domain classification resource that exploits an automated workflow of structure and sequence comparison, alongside expert manual curation, to construct a hierarchical classification of evolutionary and structural relationships. The aim of this study was to develop algorithms for detecting remote homologues missed by state-of-the-art HMM-based approaches. The method developed (CATHe) combines a neural network with sequence representations obtained from protein language models. It was assessed using a dataset of remote homologues having less than 20% sequence identity to any domain in the training set. RESULTS: The CATHe models trained on the 1773 largest and the 50 largest CATH superfamilies achieved accuracies of 85.6 ± 0.4% and 98.2 ± 0.3%, respectively. As a further test of the power of CATHe to detect remote homologues missed by HMMs derived from CATH domains, we used a dataset consisting of protein domains that had annotations in Pfam but not in CATH. By using highly reliable CATHe predictions (expected error rate <0.5%), we were able to provide CATH annotations for 4.62 million Pfam domains. For a subset of these domains from Homo sapiens, we structurally validated 90.86% of the predictions by comparing their corresponding AlphaFold 2 structures with structures from the CATH superfamilies to which they were assigned. AVAILABILITY AND IMPLEMENTATION: The code for the developed models is available at https://github.com/vam-sin/CATHe. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
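
    The classification setup can be sketched as a neural network over fixed-length embeddings, one class per superfamily. The random "embeddings" and three classes below are placeholders for real protein language model representations and superfamily labels; the actual CATHe architecture and data live in the linked repository.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_per_class, dim, n_classes = 200, 1024, 3

        # Synthetic stand-ins: each "superfamily" gets its own embedding centroid.
        centroids = rng.normal(size=(n_classes, dim))
        X = np.vstack([centroids[c] + rng.normal(0, 1.0, (n_per_class, dim))
                       for c in range(n_classes)])
        y = np.repeat(np.arange(n_classes), n_per_class)

        clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=300,
                            random_state=0).fit(X, y)
        proba = clf.predict_proba(X[:5])
        # Thresholding the top class probability mimics keeping only "highly
        # reliable" predictions, as done for the Pfam annotation transfer.
        confident = proba.max(axis=1) > 0.95
        print(confident, proba.argmax(axis=1))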