
    Architecture and Circuit Design Optimization for Compute-In-Memory

    Get PDF
    The objective of the proposed research is to optimize computing-in-memory (CIM) design for accelerating Deep Neural Network (DNN) algorithms. As compute peripherals such as the analog-to-digital converter (ADC) introduce significant overhead in CIM inference designs, the research first focuses on circuit optimization for inference acceleration and proposes a resistive random access memory (RRAM) based ADC-free in-memory compute scheme. We comprehensively explore the trade-offs among different types of ADCs and investigate a new ADC design especially suited for CIM, which performs the analog shift-add for multiple weight significance bits, improving throughput and energy efficiency under similar area constraints. Furthermore, we prototype an ADC-free CIM inference chip with fully analog data processing between sub-arrays, which significantly improves hardware performance over conventional CIM designs and achieves near-software classification accuracy on the ImageNet and CIFAR-10/-100 datasets. Secondly, the research focuses on hardware support for CIM on-chip training. To maximize hardware reuse of the CIM weight-stationary dataflow, we propose CIM training architectures with a transpose weight mapping strategy. The cell design and periphery circuitry are modified to efficiently support bi-directional compute. A novel solution for signed-number multiplication is also proposed to handle the negative inputs in backpropagation. Finally, we propose an SRAM-based CIM training architecture and comprehensively explore the system-level hardware performance of DNN on-chip training based on silicon measurement results. (Ph.D.)
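
    A minimal numerical sketch (not the thesis's circuit) of the operation the abstract refers to: a multi-bit weight dot-product decomposed into per-bit partial sums that are recombined by shift-add, which the proposed design performs in the analog domain. All values and the bit width are illustrative assumptions.

```python
import numpy as np

def bit_sliced_dot(x, w, n_bits=4):
    """Dot product of activations x with signed integer weights w, computed one
    weight bit-plane at a time and recombined by shift-add."""
    offset = 1 << (n_bits - 1)
    w_u = w + offset                                  # map signed weights to unsigned
    acc = 0.0
    for b in range(n_bits):                           # one column slice per weight bit
        bit_plane = (w_u >> b) & 1                    # 0/1 weights stored in that slice
        acc += np.dot(x, bit_plane) * (1 << b)        # shift-add by bit significance
    return acc - offset * np.sum(x)                   # undo the signed-to-unsigned offset

x = np.array([0.5, 1.0, 0.25])
w = np.array([3, -2, 1])
print(bit_sliced_dot(x, w), np.dot(x, w))             # both print -0.25
```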

    Optimization of neural networks for deep learning and applications to CT image segmentation

    Full text link
    During the last few years, AI development in deep learning has been moving so fast that even prominent researchers, politicians, and entrepreneurs are signing petitions to try to slow it down. The newest methods for natural language processing and image generation are achieving results so remarkable that people are seriously starting to think they could be dangerous for society. In reality, they are not dangerous (at the moment), even if we have to admit we have reached a point where we no longer control the flux of data inside deep networks. It is impossible to open a modern deep neural network and interpret how it processes information or, in many cases, explain how or why it returns a particular result. One of the goals of this doctoral work has been to study the behavior of weights in convolutional neural networks and in transformers. We present a work that demonstrates how to invert 3x3 convolutions after training a neural network able to learn how to classify images, with the future aim of having precisely invertible convolutional neural networks. We demonstrate that a simple network can learn to classify images on an open-source dataset without loss in accuracy with respect to a non-invertible one, while being able to reconstruct the original image without detectable error (on 8-bit images) in up to 20 convolutions stacked in a row. We present a thorough comparison between our method and the standard one. We tested the performance of the five most used transformers for image classification on an open-source dataset. Studying the embedded matrices, we have been able to provide two criteria that can help transformers learn with a training-time reduction of up to 30% and with no impact on classification accuracy. The evolution of deep learning techniques is also touching the field of digital health. With tens of thousands of new start-ups and more than $1B of investment in the last year alone, this field is growing rapidly and promises to revolutionize healthcare. In this thesis, we present several neural networks for the segmentation of lungs, lung nodules, and areas affected by pneumonia induced by COVID-19 in chest CT scans. The architectures we used are all residual convolutional neural networks inspired by UNet and Inception. We customized them with novel loss functions and layers designed to achieve high performance on these particular applications. The errors on the surface of the nodule segmentation masks do not exceed 1 mm in more than 99% of the cases. Our algorithm for COVID-19 lesion detection has a specificity of 100% and an overall accuracy of 97.1%. In general, it surpasses the state of the art in all the considered statistics, using UNet as a benchmark. Combining these with other algorithms able to detect and predict lung cancer, the whole work was presented in a European innovation program and judged of high interest by worldwide experts. With this work, we set the basis for the future development of better AI tools in healthcare and for scientific investigation into the fundamentals of deep learning.
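
    A minimal sketch under stated assumptions (circular padding and a kernel whose Fourier transfer function has no zeros); this is not the thesis's training-based method, only an illustration of the underlying fact that a 3x3 convolution can be inverted exactly, so the input image is recoverable from the output with no detectable error.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))
kernel = rng.random((3, 3))
kernel[1, 1] += 10.0                       # dominant centre tap keeps |H| > 0 everywhere

H = np.fft.fft2(kernel, s=img.shape)       # transfer function of the convolution
out = np.real(np.fft.ifft2(np.fft.fft2(img) * H))   # circular 3x3 convolution
rec = np.real(np.fft.ifft2(np.fft.fft2(out) / H))   # exact inverse (deconvolution)

print(np.max(np.abs(rec - img)))           # ~1e-15, far below 8-bit quantisation
```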

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    Full text link
    The rapid growth of demanding applications in domains involving multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus the typical computing paradigms of embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing-systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of state-of-the-art software and hardware approximation techniques. (Comment: Under Review at ACM Computing Surveys)
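
    A hedged sketch of one classic software-level approximation of the kind such surveys classify, loop perforation: skip a fraction of loop iterations and accept an approximate result in exchange for proportionally less work. The example itself (a perforated mean) is our own illustration, not taken from the article.

```python
def mean_exact(values):
    return sum(values) / len(values)

def mean_perforated(values, skip=4):
    sampled = values[::skip]               # perforated loop: ~1/skip of the iterations
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(100_000)]
exact, approx = mean_exact(data), mean_perforated(data, skip=4)
print(exact, approx, f"relative error {abs(exact - approx) / exact:.2%}")
```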

    Approximate Computing Survey, Part II: Application-Specific & Architectural Approximation Techniques and Applications

    Full text link
    The challenging deployment of compute-intensive applications from domains such as Artificial Intelligence (AI) and Digital Signal Processing (DSP) forces the computing-systems community to explore new design approaches. Approximate Computing appears as an emerging solution, allowing the quality of results to be tuned in the design of a system in order to improve energy efficiency and/or performance. This radical paradigm shift has attracted interest from both academia and industry, resulting in significant research on approximation techniques and methodologies at different design layers (from the system level down to integrated circuits). Motivated by the wide appeal of Approximate Computing over the last 10 years, we conduct a two-part survey to cover key aspects (e.g., terminology and applications) and review the state-of-the-art approximation techniques from all layers of the traditional computing stack. In Part II of our survey, we classify and present the technical details of application-specific and architectural approximation techniques, both of which target the design of resource-efficient processors/accelerators and systems. Moreover, we present a detailed analysis of the application spectrum of Approximate Computing and discuss open challenges and future directions. (Comment: Under Review at ACM Computing Surveys)

    A Survey on Approximate Multiplier Designs for Energy Efficiency: From Algorithms to Circuits

    Full text link
    Given the stringent energy-efficiency requirements of Internet-of-Things edge devices, approximate multipliers, as a basic component of many processors and accelerators, have been constantly proposed and studied for decades, especially for error-resilient applications. The computation error and energy efficiency largely depend on how and where the approximation is introduced into a design. Thus, this article aims to provide a comprehensive review of the approximation techniques in multiplier designs, ranging from algorithms and architectures to circuits. We have implemented representative approximate multiplier designs in each category to understand the impact of the design techniques on accuracy and efficiency. The designs can then be effectively deployed in high-level applications, such as machine learning, to gain energy efficiency at the cost of a slight accuracy loss. (Comment: 38 pages, 37 figures)
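
    A hedged sketch of one classic algorithm-level approximation from this literature, Mitchell's logarithmic multiplier (not a specific design evaluated in the survey): approximate a*b via log2(a*b) ≈ log2(a) + log2(b), so only a leading-one detector, shifts, and additions are needed instead of a full multiplier array.

```python
def mitchell_multiply(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    ka, kb = a.bit_length() - 1, b.bit_length() - 1   # leading-one positions
    xa = (a - (1 << ka)) / (1 << ka)                  # fractional parts in [0, 1)
    xb = (b - (1 << kb)) / (1 << kb)
    if xa + xb < 1:                                   # Mitchell's piecewise rule
        return round((1 << (ka + kb)) * (1 + xa + xb))
    return round((1 << (ka + kb + 1)) * (xa + xb))

for a, b in [(13, 9), (100, 200), (255, 255)]:
    exact, approx = a * b, mitchell_multiply(a, b)
    print(a, b, exact, approx, f"{100 * (exact - approx) / exact:.1f}% error")
```

    The worst-case error of this scheme is bounded (around 11%), which is why log-domain multipliers are a recurring trade-off point between accuracy and energy in error-resilient workloads such as neural network inference.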

    A variational autoencoder application for real-time anomaly detection at CMS

    Get PDF
    Despite providing invaluable data in the field of High Energy Physics, towards higher-luminosity runs the Large Hadron Collider (LHC) will face challenges in discovering interesting results through the conventional methods used in previous run periods. Among the proposed approaches, the one we focus on in this thesis work, in collaboration with CERN teams, involves the use of a joint variational autoencoder (JointVAE) machine learning model, trained on known physics processes to identify anomalous events that correspond to previously unidentified physics signatures. By doing so, this method does not rely on any specific new-physics signature and can detect anomalous events in an unsupervised manner, complementing the traditional LHC search tactics that rely on model-dependent hypothesis testing. The algorithm produces a list of anomalous events, which experimental collaborations will examine and eventually confirm as new physics phenomena. Furthermore, repetitive event topologies in the dataset can inspire new physics model building and experimental searches. Implementing this algorithm in the trigger system of LHC experiments can catch previously unnoticed anomalous events, thus broadening the discovery potential of the LHC. This thesis presents a method for implementing the JointVAE model for real-time anomaly detection in the Compact Muon Solenoid (CMS) experiment. Among the challenges of implementing machine learning models in fast applications, such as the trigger system of the LHC experiments, low latency and reduced resource consumption are essential. Therefore, the JointVAE model has been studied for its implementation feasibility in Field-Programmable Gate Arrays (FPGAs), using a tool based on High-Level Synthesis (HLS) named HLS4ML. The tool, combined with quantization of the neural network, will reduce the model size, latency, and energy consumption.
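
    A generic sketch of VAE-style anomaly scoring (not the JointVAE architecture, its trigger inputs, or its HLS implementation): events whose reconstruction error plus KL term is large are flagged as candidate anomalies. The tiny linear encoder/decoder and the feature dimensions below are stand-ins for the trained networks.

```python
import numpy as np

def anomaly_score(x, encode, decode):
    """encode(x) -> (mu, log_var) of the latent posterior; decode(z) -> reconstruction."""
    mu, log_var = encode(x)
    recon = decode(mu)                                       # deterministic pass for scoring
    mse = np.mean((x - recon) ** 2, axis=-1)                 # reconstruction term
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)  # KL(q || N(0, 1))
    return mse + kl

rng = np.random.default_rng(1)
W = rng.normal(size=(20, 4)) / 20                            # toy 20-feature events, 4-d latent
encode = lambda x: (x @ W, np.zeros(x.shape[:-1] + (4,)))
decode = lambda z: z @ W.T

events = rng.normal(size=(5, 20))
print(anomaly_score(events, encode, decode))                 # larger score -> more anomalous
```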

    Application of Business Analytics Approaches to Address Climate-Change-Related Challenges

    Get PDF
    Climate change is an existential threat facing humanity, civilization, and the natural world. It poses many multi-layered challenges that call for enhanced data-driven decision-support methods to help inform society of ways to address the deep uncertainty and incomplete knowledge surrounding climate change issues. This research primarily aims to apply management, decision, information, and data science theories and techniques to propose, build, and evaluate novel data-driven methodologies that improve understanding of climate-change-related challenges. Given that we pursue this work in the College of Management, each essay applies one or more of the three distinct business analytics approaches (i.e., descriptive, prescriptive, and predictive analysis) to aid in developing decision-support capabilities. Given the rapid growth in data availability, we evaluate important data characteristics for each analysis, focusing on data source, granularity, volume, structure, and quality. A final consideration for each analysis is how the various model outputs are coalesced into understandable visualizations, tables, and takeaways. We pursue three distinct business analytics challenges. First, we use natural language processing to gain insight into the evolving climate change adaptation discussion in the scientific literature. We then create a stochastic network optimization model with recourse to provide coastal decision-makers with a cost-benefit analysis tool for simultaneously assessing the risks and costs of protecting their communities against rising seas. Finally, we create a decision-support tool for helping organizations reduce greenhouse gas emissions through strategic sustainable energy purchasing. Although the three essays vary in their specific business analytics approaches, they share a common theme of applying business analytics techniques to analyze, evaluate, visualize, and understand different facets of the climate change threat.

    Heterogeneous growth and death of small bacterial populations in microfluidic droplets

    Get PDF
    Antibiotic resistance is a major global health challenge, and there is still much to learn about how antibiotics work to inhibit the growth of bacterial populations. In many real infections, bacteria grow in small populations where stochastic effects can be important, especially because even a single surviving bacterium can lead to regrowth of an infection. Microfluidic droplets offer an opportunity to study this heterogeneity under well-controlled experimental conditions. Creating numerous, monodisperse microenvironments from the same initial bacterial suspension gives multiple micro-experiments which run in parallel, allowing the study of individual bacterial growth and response to stress (for example, antibiotics). This approach results in a rich data set which can be compared with predictions from both deterministic and probabilistic theoretical models, producing insight into the growth dynamics and antibiotic response of small populations, which are often hidden in conventional large-scale experiments. In this thesis I present a study of small populations of bacteria using microfluidic droplets and theoretical modelling. Chapter 1 provides motivation for the study of small bacterial populations and background on β-lactam antibiotics (the class of antibiotics investigated in Chapters 5–6) and β-lactamase enzymes. Chapter 2 outlines the experimental methodology and the image analysis procedure. Principally, this involves encapsulating bacteria into picolitre volumes of growth media and imaging using fluorescence and bright-field microscopy for 4–7 hours. A MATLAB workflow is used to count the number of bacteria in each droplet over the course of an experiment. Chapter 3 explores the heterogeneous growth dynamics by comparing hundreds to thousands of growth trajectories of clonal populations of E. coli. Deterministic and probabilistic models were developed to understand the response of small populations of β-lactam-resistant bacteria to β-lactam antibiotics, as described in Chapter 4. The effect of stochastic bacterial loading into droplets as well as stochastic growth are compared to the deterministic case in Chapter 5, in which the survival of bacterial populations under a range of antibiotic concentrations, with different initial numbers of bacteria, is explored. These simulations predict a range of concentrations of antibiotic where stochastic effects lead to the survival of a proportion of the population, while a deterministic mean-field theory would predict success of the antibiotic treatment. In Chapter 6 these predictions are tested experimentally and it is found that, in droplets, some populations of E. coli survive at concentrations of ampicillin beyond the bulk MIC determined by equivalent plate reader experiments. Dormant cells are visible in droplets but not in plate reader experiments, and we propose that some of the growth observed in bulk plate reader experiments might be biomass (filamentous) growth rather than (healthy) division. This implies that bulk experiments may not reveal the whole picture and need to be interpreted with care. Finally, in Chapter 7 a model is used to investigate the possibility of cooperative behaviour in mixtures of resistant and sensitive bacteria in droplets. This explores the extent to which bacteria with no intrinsic resistance could survive exposure to antibiotics when in the presence of bacteria which produce β-lactamase enzymes, a phenomenon which is of rising ecological interest and clinical concern.
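
    A minimal stochastic birth-death (Gillespie) sketch, not the thesis's antibiotic model: droplets seeded with the same few cells can either go extinct or regrow, a heterogeneity that the deterministic mean-field equation dn/dt = (birth - death)·n cannot capture. All rates and numbers below are illustrative assumptions.

```python
import numpy as np

def gillespie_birth_death(n0, birth, death, t_max, rng):
    n, t = n0, 0.0
    while n > 0 and t < t_max:
        total_rate = (birth + death) * n            # rate of the next division or death
        t += rng.exponential(1.0 / total_rate)      # exponential waiting time
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

rng = np.random.default_rng(0)
final = [gillespie_birth_death(n0=3, birth=1.0, death=0.8, t_max=10.0, rng=rng)
         for _ in range(1000)]
extinct = np.mean([n == 0 for n in final])
# the deterministic equation predicts growth in every droplet, yet roughly
# (death/birth)**n0 = 0.8**3 ≈ 0.51 of the stochastic droplets die out
print(f"fraction of droplets extinct: {extinct:.2f}")
```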

    Electronic Devices for the Combination of Electrically Controlled Drug Release, Electrostimulation, and Optogenetic Stimulation for Nerve Tissue Regeneration

    Full text link
    The ability of stem cells to proliferate and form different specialized cells gives them the potential to serve as the basis for effective therapies for pathologies whose treatment was unimaginable until just two decades ago. However, this capacity is mediated by specific and complex physiological, chemical, and electrical stimuli, which complicates their translation to clinical routine. For this reason, stem cells represent a field of study in which the scientific community is investing a great deal of effort. In the field of nerve regeneration, pharmacological treatment, electrostimulation, and optogenetic stimulation are techniques that are achieving promising results in modulating stem cell development and differentiation. For this reason, in this thesis we have developed a set of electronic systems to allow the combined application of these techniques in vitro, with a view to their application in vivo. We have designed a novel technology for the electrically controlled release of drugs, based on mesoporous silica nanoparticles and bipyridine-heparin molecular gates. The molecular gates are electrochemically reactive; they entrap the drugs inside the nanoparticles and release them upon an electrical stimulus. We have characterized this technology and validated it through the controlled release of rhodamine in HeLa cell cultures. For combining electrostimulation and controlled drug release, we have developed devices that allow the electrical stimuli to be applied in a configurable way from a graphical user interface. In addition, we have designed an expansion module that multiplexes the electrical signals to different cell cultures. We have also designed an optogenetic stimulation device. This type of stimulation consists of genetically modifying cells to make them sensitive to light of a specific wavelength. In tissue regeneration using neural precursor cells, it is of interest to induce calcium waves, favoring their differentiation into neurons and the formation of synaptic circuits. The designed device enables real-time confocal-microscopy imaging of the transient responses of cells upon irradiation, and it has been validated by irradiating modified neurons with 100 ms pulsed light. We have also designed a complementary electronic irradiance-measurement device, both to allow calibration of the irradiation equipment and to measure irradiance in real time during in vitro experiments. The results of using bioactuators in complex and dynamic processes, such as nerve tissue regeneration, are limited in open loop. One of the main aspects analyzed is therefore the development of biosensors that would allow specific biomolecules to be quantified so that the delivered stimulation can be adjusted in real time. For instance, serotonin secretion is an identified response during neural precursor cell elongation, and other biomolecules are also of interest for the implementation of closed-loop control. Among state-of-the-art technologies, biosensors based on field-effect transistors (FETs) functionalized with aptamers are particularly promising for this application. However, this technology did not allow the simultaneous measurement of more than one target biomolecule in a small volume, due to interference between the different FETs, whose terminals are immersed in the same solution. We have therefore developed electronic instrumentation capable of simultaneously measuring several of these biosensors, and we have validated it through simultaneous pH measurement and the preliminary detection of serotonin and glutamate. Monreal Trigo, J. (2023). Electronic Devices for the Combination of Electrically Controlled Drug Release, Electrostimulation, and Optogenetic Stimulation for Nerve Tissue Regeneration [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/19384

    Modelling, Dimensioning and Optimization of 5G Communication Networks, Resources and Services

    Get PDF
    This reprint collects state-of-the-art research contributions that address challenges in the design, dimensioning, and optimization of emerging 5G networks. The design, dimensioning, and optimization of communication network resources and services have always been an inseparable part of telecom network development. Such networks must convey a large volume of traffic, serving traffic streams with highly differentiated requirements in terms of bit rate, service time, and required quality-of-service and quality-of-experience parameters. Such a communication infrastructure presents many important challenges, including the study of the necessary multi-layer cooperation, new protocols, performance evaluation of different network parts, low-layer network design, network management and security issues, and new technologies in general, which are discussed in this book.