23 research outputs found

    Rf Power Amplifier And Oscillator Design For Reliability And Variability

    CMOS RF circuit design has been a long-standing research field. It has attracted much attention because RF circuits offer high mobility and wide-band efficiency, while CMOS technology provides low cost and a high capability for integration. At the same time, IC feature sizes have continued to scale down over recent decades, and reliability issues in RF circuits have become more and more severe with device scaling: reliability effects such as gate oxide breakdown, hot carrier injection, and negative bias temperature instability are amplified as the device size shrinks. Process variability also becomes more prominent as the feature size decreases. Given these trends, reliability and variability evaluations of typical RF circuits, together with possible compensation techniques, are highly desirable. In this work, a class E power amplifier was designed and laid out using TSMC 0.18 µm RF technology, and the chip was fabricated. Oxide stress and hot electron tests were carried out at elevated supply voltage, and fresh measurement results were compared with results measured after 10 hours under different stress conditions. Test results matched mixed-mode circuit simulations very well, showing that hot carrier effects degrade PA performance metrics such as output power and power efficiency. Self-heating effects were examined on a class AB power amplifier, since a PA operates at high power. Device temperature simulation was performed at both the DC and mixed-mode levels. Different gate biasing techniques were analyzed and their abilities to compensate the output power were compared. A simple gate biasing circuit proved efficient at compensating self-heating effects under different localized heating conditions. Process variation was studied on a classic Colpitts oscillator using Monte Carlo simulation. Phase noise was examined since it is a key oscillator parameter.
Phase noise was modeled using analytical equations, supported by a good match between MATLAB results and ADS simulation. An adaptive body biasing circuit was proposed to mitigate process variation. Probability density function simulation results demonstrated its capability to relieve the effect of process variation on phase noise: the standard deviation of phase noise with adaptive body bias is much smaller than without compensation. Finally, a robust, adaptive design technique using a PLL as an on-chip sensor to reduce process, voltage, and temperature (PVT) variations and other aging effects on an RF PA was evaluated. In a PLL, the frequency and phase of the ring oscillator must track the frequency and phase of the input no matter how the operating conditions vary. As a result, the control signal of the ring oscillator fluctuates with the operating conditions, reflecting the PVT changes. RF circuits suffer from similar PVT variations, so the PLL control signal is fed to the RF circuits and converted into an adaptive tuning voltage for the substrate bias. Simulation results show that the PA output power under different variations is much flatter than without compensation, and the analytical equations support these observations.
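The variability claim above (a much smaller phase-noise spread with adaptive body bias) can be illustrated with a toy Monte Carlo experiment. This is a sketch only: the linearised phase-noise model, the threshold-voltage spread, and the fraction of the deviation the bias loop cancels are all illustrative assumptions, not values from the thesis.

```python
import random
import statistics

def phase_noise_dbc(vth, vth_nom=0.45, pn_nom=-120.0, sens=80.0):
    # Toy linearised model: phase noise shifts in proportion to the
    # threshold-voltage deviation. Coefficients are illustrative.
    return pn_nom + sens * (vth - vth_nom)

def body_bias_correction(vth, vth_nom=0.45, gain=0.9):
    # Adaptive body bias pulls Vth back toward nominal; 'gain' is the
    # (assumed) fraction of the deviation the bias loop cancels.
    return vth_nom + (1.0 - gain) * (vth - vth_nom)

random.seed(1)
samples = [random.gauss(0.45, 0.02) for _ in range(5000)]  # process spread

pn_raw  = [phase_noise_dbc(v) for v in samples]
pn_comp = [phase_noise_dbc(body_bias_correction(v)) for v in samples]

sigma_raw  = statistics.stdev(pn_raw)
sigma_comp = statistics.stdev(pn_comp)
print(f"sigma without compensation: {sigma_raw:.2f} dB")
print(f"sigma with body bias:       {sigma_comp:.2f} dB")
```

With these assumptions the compensated spread shrinks by roughly the loop gain, mirroring the qualitative result reported above.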

    Deep in-memory computing

    There is much interest in embedding data analytics into sensor-rich platforms such as wearables, biomedical devices, autonomous vehicles, robots, and the Internet of Things to provide these with decision-making capabilities. Such platforms often need to implement machine learning (ML) algorithms under the stringent energy constraints of battery-powered electronics. In particular, energy consumption in the memory subsystem dominates such a system's energy efficiency, and memory access latency is a major bottleneck for overall system throughput. To address these issues in memory-intensive inference applications, this dissertation proposes the deep in-memory accelerator (DIMA), which deeply embeds computation into the memory array, employing two key principles: (1) accessing and processing multiple rows of the memory array at a time, and (2) embedding pitch-matched low-swing analog processing at the periphery of the bitcell array. The signal-to-noise ratio (SNR) is budgeted by employing low-swing operations in both memory read and processing, exploiting application-level error immunity for aggressive energy efficiency. This dissertation first describes the system rationale underlying DIMA's processing stages by identifying the common functional flow across a diverse set of inference algorithms. Based on this analysis, it presents a multi-functional DIMA supporting four algorithms: support vector machine (SVM), template matching (TM), k-nearest neighbor (k-NN), and matched filter. Circuit- and architecture-level design techniques and guidelines are provided to address the challenges in achieving multi-functionality. A prototype integrated circuit (IC) of a multi-functional DIMA was fabricated with a 16 KB SRAM array in a 65 nm CMOS process.
Measurement results show up to 5.6X and 5.8X energy and delay reductions, leading to a 31X energy-delay product (EDP) reduction with negligible (<1%) accuracy degradation as compared to a conventional 8-b fixed-point digital implementation optimally designed for each algorithm. DIMA has also been applied to more complex algorithms: (1) the convolutional neural network (CNN), (2) sparse distributed memory (SDM), and (3) the random forest (RF). System-level simulations of CNN using circuit behavioral models in a 45 nm SOI CMOS process demonstrate that a high probability (>0.99) of handwritten digit recognition can be achieved on the MNIST database, along with a 24.5X reduced EDP, a 5.0X reduced energy, and a 4.9X higher throughput as compared to the conventional system. The DIMA-based SDM architecture also achieves up to 25X and 12X delay and energy reductions, respectively, over conventional SDM with negligible accuracy degradation (within 0.4%) for 16x16 binary-pixel image classification. A DIMA-based RF was realized as a prototype IC with a 16 KB SRAM array in a 65 nm process; to the best of our knowledge, this is the first IC realization of an RF algorithm. The measurement results show that the prototype achieves a 6.8X lower EDP compared to a conventional design at the same accuracy (94%) for an eight-class traffic sign recognition problem. The multi-functional DIMA and its extension to other algorithms naturally motivated a programmable DIMA instruction set architecture (ISA), namely MATI. This dissertation explores a synergistic combination of instruction set, architecture, and circuit design to achieve programmability without losing DIMA's energy and throughput benefits. Employing silicon-validated energy, delay, and behavioral models of the deep in-memory components, we demonstrate that MATI is able to realize nine ML benchmarks while incurring negligible overhead in energy (<0.1%), area (4.5%), and throughput over a fixed four-function DIMA.
In this process, MATI simultaneously achieves enhancements in both energy (2.5X to 5.5X) and throughput (1.4X to 3.4X), for an overall EDP improvement of up to 12.6X over fixed-function digital architectures.
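The EDP figures quoted above follow from the definition EDP = energy × delay, so the reduction factors multiply. A one-line sanity check (the 5.6X and 5.8X inputs are the measured reductions quoted in the abstract; the product lands near the reported 31X once rounding of the individual factors is accounted for):

```python
def edp_reduction(energy_x, delay_x):
    # EDP = energy * delay, so independent reduction factors multiply.
    return energy_x * delay_x

print(edp_reduction(5.6, 5.8))  # ~32.5; the thesis reports 31X from unrounded data
```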

    GigaHertz Symposium 2010


    Hardware Considerations for Signal Processing Systems: A Step Toward the Unconventional.

    As we progress into the future, signal processing algorithms are becoming more computationally intensive and power hungry, while the desire for mobile products and low-power devices is also increasing. An integrated ASIC solution is one of the primary ways chip developers can improve performance and add functionality while keeping the power budget low. This work discusses ASIC hardware for both conventional and unconventional signal processing systems, and how integration, error resilience, emerging devices, and new algorithms can be leveraged by signal processing systems to further improve performance and enable new applications. Specifically, this work presents three case studies: 1) a conventional and highly parallel mixed-signal cross-correlator ASIC for a weather satellite performing real-time synthetic aperture imaging, 2) an unconventional native stochastic computing architecture enabled by memristors, and 3) two unconventional sparse neural network ASICs for feature extraction and object classification. As improvements from technology scaling alone slow down, and the demand for energy-efficient mobile electronics increases, such optimization techniques at the device, circuit, and system level will become more critical to advance signal processing capabilities in the future.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/116685/1/knagphil_1.pd
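The operation that the cross-correlator ASIC parallelises in hardware, one multiply-accumulate lane per lag, can be illustrated in software. A NumPy sketch (signal length and delay are arbitrary choices, not from the dissertation):

```python
import numpy as np

def cross_correlate(x, y, max_lag):
    # Direct-form cross-correlation: one accumulated dot product per lag,
    # which a hardware correlator array evaluates in parallel across lags.
    n = len(x)
    return [float(np.dot(x[:n - lag], y[lag:])) for lag in range(max_lag + 1)]

rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
delayed = np.concatenate([np.zeros(3), sig[:-3]])  # y = x delayed by 3 samples

lags = cross_correlate(sig, delayed, 8)
print(int(np.argmax(lags)))  # the correlation peak recovers the delay of 3
```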

    Dynamic optimization of supply voltage and operating frequency in digital electronic systems

    As CMOS integrated-circuit technology is subject to miniaturization, several problems arise concerning reliability and performance. Effects such as BTI (Bias Temperature Instability), TDDB (Time Dependent Dielectric Breakdown), HCI (Hot Carrier Injection), and EM (Electromigration) degrade the physical parameters of CMOS transistors and in turn alter their electrical properties over time. This deterioration is called aging; these effects are cumulative and have a great impact on circuit performance, especially when other parametric variations occur, such as variations in process, temperature, and supply voltage. These variations are known as PVTA variations (variations in the integrated-circuit fabrication Process [P], in the supply Voltage [V], in the Temperature [T], and variations caused by circuit Aging [A]) and can trigger timing errors during the lifetime of the product (a digital integrated circuit). The primary objective of the work presented in this dissertation is the development of a system that dynamically optimizes the operation of synchronous digital integrated circuits throughout their lifetime. This system allows circuits to be optimized according to their needs: (i) reducing power dissipation, by lowering the supply voltage to the lowest value that guarantees error-free operation; or (ii) increasing performance, by raising the operating frequency up to the maximum limit at which no errors occur. Dynamic lifetime optimization of synchronous digital integrated circuits is achieved by means of a controller, a block of global sensors, and several local sensors located in selected flip-flops of the circuit.
The new solution uses the two types of sensors mentioned above, global and local, to enable more effective prediction of performance errors, allowing the activation of mechanisms that prevent errors from occurring during the circuit's useful lifetime and thereby constantly optimizing its operation. It thus becomes feasible to develop circuits that operate at the limit of their timing capabilities, without failures, using small error margins to accommodate the performance variations caused by variations in the fabrication process, supply voltage, temperature, or aging. A control system was also developed which, after the detection of a potential error, triggers a process to decrease the frequency of the system clock, or to increase the supply voltage, preventing the error from occurring. Although other techniques exist for the dynamic control of integrated-circuit operation, such as DVS (Dynamic Voltage Scaling), DFS (Dynamic Frequency Scaling), or both (DVFS, Dynamic Voltage and Frequency Scaling), these techniques are either very complex to implement or exhibit large safety margins, leading to solutions in which circuit operation is not optimized. The solution developed in this work, which uses local and global predictive sensors sensitive to the long-term aging of circuits, constitutes a novelty in the state of the art with respect to the control of DVS and/or DFS systems. Another important aspect is that this work developed a method for adjusting the supply voltage or the frequency which is sensitive to the long-term aging of the circuits, using local and global sensors.
The controller enables performance optimization by increasing the operating frequency up to the maximum limit that still avoids errors, and energy-consumption optimization by reducing the supply voltage (VDD) to the minimum value that still prevents errors. Through an aging-prediction analysis, the critical paths are identified, as well as the paths that age fastest and will become critical as the circuit ages. Once the critical paths are identified, local sensors are inserted by replacing the flip-flops that terminate the identified critical paths with new flip-flops that include performance and aging sensors. Note that these sensors are predictive, i.e., they signal performance errors early, before they occur in the flip-flops that capture the critical paths. The architecture of the proposed sensors is such that the PVTA variations acting on them increase their ability to predict errors; that is, the sensors adapt over their lifetime, increasing their sensitivity. The role of the local sensors is to calibrate the global sensors and to constantly monitor the delays in the longest paths of the circuit whenever those paths are activated. The role of the global sensors is to monitor the delays in the digital circuit periodically or on demand. Both types of sensors, global and local, can trigger adjustments to the frequency or supply voltage.
The global sensor block consists of a global-sensor control unit, which receives commands from the system controller to start the circuit performance analysis and generates the control signals for the global performance-analysis operation, and two gate chains (one of NOR gates and one of NAND gates) with propagation times longer than the critical paths expected in the circuit during its lifetime. Both chains will presumably age more than the circuit's critical paths when subjected to the BTI effect (which strongly degrades the transistors' Vth [NBTI/NORs and PBTI/NANDs]). Along the two chains, signals at the outputs of selected NOR and NAND gates are connected to global sensor cells, creating several dummy paths with different propagation times. The sensor outputs of the two chains form the two data outputs of the global sensor. To optimize circuit performance, sensor calibration tests are performed, in which some critical paths in the circuit are stimulated (through a deterministic test) while the global sensor unit simultaneously analyzes the performance. This procedure defines the maximum (minimum) limit for the frequency (supply voltage) without the local sensors being triggered. This frequency (voltage) information is stored in a controller register (the V/F register) and corresponds to the normal operating frequency (voltage). The test also determines which dummy paths in the two chains have propagation times similar to the circuit's critical paths. This information is stored in two further registers in the system controller (the GSOsafe registers), which indicate the state of the global sensor outputs for optimized circuit operation.
During the circuit's lifetime, the optimization-system controller automatically adjusts the circuit's frequency (or supply voltage) whenever the global-sensor controller detects a deviation from the correct operation stored in memory, updating the contents of the register that holds the working frequency (voltage). If a local sensor is triggered while the global sensors signal no change in performance, the circuit may have aged more than the dummy paths of the global sensors; in that case the operating frequency (supply voltage) must be changed, and the registers storing the correct global-sensor outputs must also be updated. Note that if the dummy paths age more than the circuit, the existing safety margins (time slack) increase over the circuit's lifetime, which errs on the safe side. If, on the other hand, the circuit paths can age more than the dummy paths, the local sensors monitoring the circuit's performance at all times ensure that the system can learn from their signaling and adapt to the new operating conditions throughout the circuit's useful life. While the monitoring performed by the global sensor block provides a coarse assessment of the circuit's operating state, the monitoring performed by the local sensors, when activated, provides a fine assessment of the circuit performance required to avoid functional errors. The novelties presented in this work lie in the control mechanism that enables dynamic optimization of the voltage or the frequency, and in the architecture and operation of the global sensor to be inserted in the circuit.
Regarding the control mechanism of the dynamic optimization system, the novelties are: (i) the joint use of local and global sensors to guarantee high optimization levels; (ii) the use of predictive sensors (global and local) that prevent errors from occurring; and (iii) the use of sensors sensitive to circuit aging over its useful life. Regarding the global sensor for monitoring PVTA variations, the novelty consists of (iv) presenting sensors for degradation in PMOS transistors and sensors for degradation in NMOS transistors. This optimization method and the presented topologies can be developed and used with other types of flip-flops, other types of sensors, or other dummy paths in the global sensors, without prejudice to the overall optimization method, which combines the two types of sensors, global and local, to optimize the supply voltage and operating frequency. A new architecture is proposed for a flip-flop with delay-error correction (DFC-FF / AEPDFC-FF), with and without adaptive error prediction, to perform on-line monitoring and correction of the long-term performance loss of digital CMOS systems, regardless of its cause. The DFC-FF integrates a TG-MSFF (Transmission Gate Master Slave Flip-Flop) and an error-correction sensor (CES), for which two proposals are presented. The AEPDFC-FF is composed of a DFC-FF and an aging sensor. Variability has become the main cause of failure of digital circuits as technology has scaled into the nanometer range.
The reduced physical dimensions of the new transistors and the increased complexity of integrated circuits have made new circuits more susceptible to variations in the fabrication process and in operating conditions, resulting in more fragile devices, more likely to fail in their first months of life, and with expected useful lifetimes shorter than those of previous technologies. Compared with other proposals, one of the main advantages of the DFC-FF is that the sensor's own performance loss improves its error-correction capability. The effects of aging, temperature increase, and supply-voltage decrease (VTA) widen the correction window, allowing the DFC-FF to remain always on without compromising its operation. The concept, studied and developed in 65 nm technology, can later be ported to more recent nanotechnologies using smaller MOSFETs, since the sensor architecture applies across CMOS technology.
Universidade do Algarve, Instituto Superior de Engenhari
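The controller's decision rule, back off when a predictive sensor flags, push toward the optimization target otherwise, can be sketched as a simple loop step. This is only an illustration of the policy described in the abstract: the step sizes, mode names, and flag arguments are hypothetical, not taken from the dissertation.

```python
F_STEP, V_STEP = 10e6, 0.025  # assumed adjustment granularity (Hz, V)

def adjust(freq, vdd, local_flag, global_changed, mode="performance"):
    """One control-loop step: a predictive sensor flag triggers a corrective
    action *before* a timing error occurs; otherwise the controller keeps
    optimizing toward its target (speed in 'performance' mode, energy otherwise)."""
    if local_flag or global_changed:
        if mode == "performance":
            vdd += V_STEP      # restore timing slack by raising VDD
        else:
            freq -= F_STEP     # or by slowing the clock
    else:
        if mode == "performance":
            freq += F_STEP     # no warnings: push the frequency up
        else:
            vdd -= V_STEP      # no warnings: shave the supply voltage
    return freq, vdd

f, v = 500e6, 1.0
f, v = adjust(f, v, local_flag=False, global_changed=False)  # speeds up
f, v = adjust(f, v, local_flag=True, global_changed=False)   # backs off via VDD
print(f, v)
```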

    Characterisation and modelling of graphene FET detectors for flexible terahertz electronics

    Low-cost electronics for future high-speed wireless communication and non-invasive inspection at terahertz frequencies require new materials with advanced mechanical and electronic properties. Graphene, with its unique combination of flexibility and high carrier velocity, can provide new opportunities for terahertz electronics. In particular, several types of power sensors based on graphene have been demonstrated and found suitable as fast and sensitive detectors over a wide part of the electromagnetic spectrum. Nevertheless, the underlying physics of signal detection is not well understood due to the lack of accurate characterisation methods, which hampers further improvement and optimisation of graphene-based power sensors. In this thesis, progress on the modelling, design, fabrication and characterisation of terahertz graphene field-effect transistor (GFET) detectors is presented. A major part is devoted to the first steps towards flexible terahertz electronics. The characterisation and modelling of terahertz GFET detectors from 1 GHz to 1.1 THz are presented. The bias dependence, the scattering parameters and the detector voltage response were accessed simultaneously. It is shown that the voltage responsivity can be accurately described using a combination of a quasi-static equivalent circuit model and the second-order series expansion terms of the nonlinear dc I-V characteristic. The video bandwidth, or IF bandwidth, of GFET detectors is estimated from heterodyne measurements. Moreover, the low-frequency noise of GFET detectors between 1 Hz and 1 MHz is investigated. From this, the room-temperature Hooge parameter of the fabricated GFETs is extracted to be around 2*10^{-3}.
It is found that thermal noise dominates above 100 Hz, which sets the necessary switching time to reduce the effect of 1/f noise. A state-of-the-art GFET detector at 400 GHz, with a maximum measured optical responsivity of 74 V/W and a minimum noise-equivalent power of 130 pW/Hz^{0.5}, is demonstrated. It is shown that the detector performance is affected by the quality of the graphene film and adjacent layers, indicating the need to improve the fabrication process of GFETs. As a proof of concept, a bendable GFET terahertz detector on a plastic substrate is demonstrated. The effects of bending strain on the dc I-V characteristics, responsivity and sensitivity are investigated. The detector exhibits robust performance for tensile strain of more than 1%, corresponding to a bending radius of 7 mm. Finally, a linear array of terahertz GFET detectors on a flexible substrate for imaging applications is fabricated and tested. The results show the possibility of realising bendable and curved focal plane arrays. In summary, the combination of improved device models and more accurate characterisation techniques for terahertz GFET detectors developed in this work will allow for further optimisation. It is shown that graphene can open up flexible terahertz electronics for future niche applications, such as wearable smart electronics and curved focal plane imaging.
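The detector figures of merit quoted above are related by NEP = (voltage noise density) / (voltage responsivity), and the square-law responsivity itself is set by the second-order term of the I-V expansion. A sketch of both relations (the conductance derivatives and the ~9.6 nV/Hz^0.5 noise density are assumed values chosen to be consistent with the 74 V/W and 130 pW/Hz^0.5 reported above, not measurements from the thesis):

```python
def responsivity_vw(g1, g2, match=1.0):
    # Square-law detection: open-circuit voltage response per watt scales
    # as (1/2) * I''(V) / I'(V); 'match' lumps coupling losses (assumed ideal).
    return 0.5 * (g2 / g1) * match

def nep_w_per_rthz(v_noise, resp):
    # Noise-equivalent power: input-referred voltage noise density
    # divided by the voltage responsivity.
    return v_noise / resp

resp_toy = responsivity_vw(g1=2e-3, g2=0.3)  # toy derivatives -> 75 V/W
resp_meas = 74.0                             # V/W, measured (from the abstract)
nep = nep_w_per_rthz(9.6e-9, resp_meas)      # assumed ~9.6 nV/Hz^0.5 noise
print(f"{nep * 1e12:.0f} pW/Hz^0.5")         # ~130, matching the reported NEP
```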

    Design of a CMOS power amplifier and built-in sensors for variability monitoring and compensation

    This research thesis aims to develop a system composed of a CMOS power amplifier and built-in sensors for variability monitoring and compensation. The integration of monitoring systems with high-frequency analog circuits is commonly used for performance optimization and control. In addition, built-in sensors are used in production testing, improving yield by detecting circuit faults during fabrication. Typically, most built-in sensors are electrically connected to a node of the circuit under test, affecting its performance. In tuned power amplifiers, for instance, a small load variation can degrade the output power and efficiency. Hence, the interface between the circuit under test and the monitoring block must be carefully designed. These loading effects can be avoided using non-invasive solutions such as temperature sensors. An integrated circuit composed of a CMOS power amplifier, two amplitude detectors, and a temperature sensor is implemented in this work. The degradation of the power amplifier performance due to variability effects is accelerated by increasing its supply voltage. A feedback loop is added to control and adjust the system operation: stress the amplifier to accelerate its degradation, monitor the amplifier performance using the sensors, and compensate the observed degradation. The design of each of the main parts of the system is presented in this work, explaining their theoretical basis and validating their operation with simulation results. Finally, all the parts are integrated together, and a feedback loop with a control algorithm is proposed to monitor and compensate the variability effects of the DUT.
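One iteration of such a monitor-and-compensate loop can be sketched as: read the built-in detector, and if the reported output power has degraded below target, nudge the bias up within a safe range. The target power, bias step, and limit below are hypothetical values for illustration, not the thesis design.

```python
def compensate(v_gate, measured_pout_dbm, target_dbm=20.0,
               step=0.01, v_max=0.9):
    # One loop iteration: when the built-in amplitude detector reports
    # degraded output power, raise the gate bias by one step, capped at
    # v_max. All numeric values here are illustrative assumptions.
    if measured_pout_dbm < target_dbm and v_gate + step <= v_max:
        v_gate += step
    return v_gate

vg = 0.60
for pout in [20.1, 19.8, 19.7, 20.0]:  # detector readings over stress time
    vg = compensate(vg, pout)
print(round(vg, 2))  # bias was raised on the two degraded readings
```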

    RF Circuit Designs for Reliability and Process Variability Resilience

    Complementary metal oxide semiconductor (CMOS) radio frequency (RF) circuit design has been a long-standing research field. It has gained much attention because RF circuits offer high mobility and wide-band efficiency, while CMOS technology provides the advantage of low cost and high integration capability. At the same time, CMOS device sizes continue to scale into the nanometer regime, and reliability issues in RF circuits have become more challenging than ever before. Reliability mechanisms such as gate oxide breakdown, hot carrier injection, and negative bias temperature instability are amplified as the device size shrinks. In addition, process variability has become a new design paradigm in modern RF circuits. In this Ph.D. work, a class F power amplifier (PA) was designed and analyzed using TSMC 180 nm process technology, and its pre-layout and post-layout performances were compared. Post-layout parasitic effects decrease the output power and power-added efficiency. Physical insight into hot electron impact ionization and device self-heating was obtained using mixed-mode device and circuit simulation to mimic the circuit operating environment. The hot electron effect increases the threshold voltage and decreases the electron mobility of an n-channel transistor, which in turn decreases the output power and power-added efficiency of the power amplifier, as evidenced by the RF circuit simulation results. Device self-heating also reduces the output power and power-added efficiency of the PA. The process, voltage, and temperature (PVT) effects on a class AB power amplifier were studied, and a PVT compensation technique using a current source as an on-chip sensor was developed.
The adaptive body bias design with the current-sensing technique makes the output power and power-added efficiency much less sensitive to process variability, supply voltage variation, and temperature fluctuation, as predicted by our derived analytical equations and verified by Agilent Advanced Design System (ADS) circuit simulation. Process variation and hot electron reliability effects on mixer performance were also evaluated using different process corner models. The conversion gain and noise figure were modeled using analytical equations, supported by ADS circuit simulation results. A process-invariant current source circuit was developed to eliminate the effect of process variation on circuit performance. The resulting conversion gain, noise figure, and output power show robust performance against PVT variations compared to those of a traditional design without the current sensor, as evidenced by Monte Carlo statistical simulation. Finally, semiconductor process variation and hot electron reliability effects on LC voltage-controlled oscillator (VCO) performance were evaluated using different process models. In the newly designed VCO, the phase noise and power consumption are resilient against process variation thanks to on-chip current sensing and compensation. Monte Carlo simulation and analysis demonstrate that the standard deviation of phase noise in the new VCO design is about five times smaller than that of the conventional design.
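The adaptive body bias techniques discussed in this and the earlier RF abstracts rest on the standard body-effect relation, by which a body-to-source bias shifts the threshold voltage. A small worked example (the Vth0, gamma, and phi_F values are generic textbook-style numbers, not parameters from the thesis):

```python
import math

def vth_body_effect(vth0=0.45, gamma=0.35, phi_f=0.4, vsb=0.0):
    # Classic body-effect relation:
    #   Vth = Vth0 + gamma * (sqrt(2*phi_f + Vsb) - sqrt(2*phi_f))
    # A forward body bias (negative Vsb for an NMOS) lowers Vth, which a
    # compensation loop can use to offset a process- or aging-induced
    # Vth increase. All parameter values are illustrative.
    return vth0 + gamma * (math.sqrt(2 * phi_f + vsb) - math.sqrt(2 * phi_f))

print(round(vth_body_effect(vsb=0.0), 3))   # nominal threshold
print(round(vth_body_effect(vsb=-0.3), 3))  # forward body bias lowers Vth
```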

    A portable metabolomics-on-CMOS platform for point-of-care testing

    Metabolomics is the study of metabolites, the small molecules produced during metabolism. Metabolite levels mirror the health status of an individual and therefore have enormous potential in medical point-of-care (POC) applications. POC platforms are miniaturised and portable systems integrating all the steps of a medical test, from sample collection to result. POC devices offer the possibility to reduce diagnostic costs, shorten testing time and, ultimately, save lives in several applications. The glucose meter, arguably the most successful example of a metabolomics POC platform, has already demonstrated the dramatic impact that such platforms can have on society. Nevertheless, other relevant metabolomic tests are still relegated to centralised laboratories and bulky equipment. In this work, a metabolomics POC platform for multi-metabolite quantification was developed. The platform aims to unlock metabolomics for the general population. As case studies, the platform was designed and evaluated for prostate cancer and ischemic stroke. For prostate cancer, new affordable diagnostic tools to be used in conjunction with the current clinical standard are needed to reduce the medical costs due to overdiagnosis and to increase the survival rate. Thus, a novel potential metabolic test based on the L-type amino acid (LAA) profile and the glutamate, choline, and sarcosine blood concentrations was developed. For ischemic stroke, where a portable and rapid test can make the difference between life and death, lactate and creatinine blood levels were chosen as potential biomarkers. All the target metabolites were quantified using an optical method (colorimetry). The platform is composed of three units: the cartridge, the reader, and the graphical user interface (GUI). The cartridge is the core of the platform. It integrates a CMOS 16x16 array of photodiodes, capillary microfluidics, and biological receptors onto the same ceramic package.
To measure multiple metabolites, a novel method combining replica moulding and injection moulding was developed for the monolithic integration of microfluidics onto integrated chips. The reader is composed of a custom PCB and a microcontroller board; it is used for addressing, data digitisation and data transfer to the GUI. The GUI, a software application running on a portable electronic device, is used to interface with the system and to visualise, acquire, process and store the data. Analysis of the microfluidic structures showed successful integration. The specific chemistry selected for detecting the analytes of interest was demonstrated to be suitable for the performance of the sensors. Quick and reliable capillary flow of human plasma, serum and blood was demonstrated. On-chip quantification of the target metabolites was demonstrated in diluted human serum and human plasma. Calibration curves, kinetic parameters and other relevant metrics were determined. For all the metabolites, the limits of detection were lower than the physiological range, demonstrating the capability of the platform to be used in the target applications. Multi-metabolite testing capability was also demonstrated using commercially and clinically sourced human plasma. For multiplexed assays, reagents were preloaded in the microfluidic channel and lyophilised; lyophilisation also improved the shelf-life of the reagents. Alternative configurations, involving the use of paper microfluidics, the integration of a passive blood filter and the use of whole blood, were investigated. The characterisation of the platform culminated in a clinical evaluation for both target applications. The same platform, with minimal modification of the cartridge, was able to provide clinically relevant information for both distinct applications, highlighting its versatility for POC determination of metabolic biomarkers.
For prostate cancer, the platform was used for the quantification of the potential metabolic biomarkers in 10 healthy samples and 16 samples from patients affected by prostate cancer. The LAA, glutamate and choline average concentrations were elevated in the cancer group with respect to the control group and were therefore regarded as metabolic biomarkers in this population. The metabolomic profiles were used to train a classifier algorithm, which improved on the performance of the current clinical blood test for this population. For ischemic stroke, lactate determination was performed in clinically sourced samples: the clinical evaluation was performed using 10 samples from people diagnosed with ischemic stroke. The results showed that the developed platform provided results comparable to an NHS-based gold-standard method in this population, demonstrating the potential of the platform for on-the-spot use. The developed platform has the potential to lead the way to a new generation of low-cost and rapid POC devices for the early and improved diagnosis of deadly diseases.
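The calibration curves and limits of detection mentioned above come from fitting detector signal against known concentrations; a common LOD convention is 3.3 times the blank standard deviation divided by the calibration slope. A sketch with made-up data (concentrations, signals, and blank noise are illustrative, not from the thesis):

```python
import numpy as np

# Toy colorimetric calibration: photodiode signal vs metabolite
# concentration. All values are illustrative.
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])         # mM
signal = np.array([0.02, 0.27, 0.52, 1.01, 2.03])  # a.u.

slope, intercept = np.polyfit(conc, signal, 1)     # linear calibration fit
sigma_blank = 0.01                                 # std of blank replicates (assumed)
lod = 3.3 * sigma_blank / slope                    # common LOD definition

print(f"slope = {slope:.3f} a.u./mM, LOD = {lod:.3f} mM")
```

For a POC assay, the computed LOD would then be checked against the low end of the physiological range, as the abstract describes.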