10 research outputs found

    A single-chip CMOS pulse oximeter with on-chip lock-in detection

    Get PDF
    Pulse oximetry is a noninvasive and continuous method for monitoring the blood oxygen saturation level. This paper presents the design and testing of a single-chip pulse oximeter fabricated in a 0.35 µm CMOS process. The chip includes a photodiode, a transimpedance amplifier, analogue band-pass filters, analogue-to-digital converters, a digital signal processor and LED timing control. The experimentally measured AC and DC characteristics of the individual circuits, including the DC output voltage and transimpedance gain of the transimpedance amplifier and the centre frequency and bandwidth of the analogue band-pass filters, match the circuit simulations to within 1%. With a modulated light source and integrated lock-in detection, the sensor effectively suppresses interference from ambient light and 1/f noise. In a breath-hold-and-release experiment the single-chip sensor demonstrates performance consistent with commercial pulse oximetry devices, with a mean difference of 1.2%. The single-chip sensor enables a compact and robust design that offers a route towards wearable devices for health monitoring.
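
    As a rough illustration of the lock-in principle mentioned above (not the paper's actual implementation; the sample rate, modulation frequency and amplitudes are made-up assumptions), the sketch below modulates the LED, multiplies the photodiode samples by a synchronous ±1 reference and low-pass filters the product, so that ambient light and slow drift average towards zero while the pulse waveform survives.

```cpp
// Minimal sketch of digital lock-in (synchronous) detection; all numbers are illustrative.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double PI = 3.141592653589793;
    const double fs = 10000.0;                        // sampling rate [Hz] (assumed)
    const double fmod = 1000.0;                       // LED modulation frequency [Hz] (assumed)
    const int period = static_cast<int>(fs / fmod);   // samples per modulation period
    const int N = 10000;                              // one second of samples

    std::vector<double> y(N);                         // photodiode samples multiplied by the reference
    for (int n = 0; n < N; ++n) {
        double t = n / fs;
        double ppg = 1.0 + 0.02 * std::sin(2 * PI * 1.2 * t);      // slow pulse waveform (arbitrary units)
        bool led_on = (n % period) < period / 2;                   // LED on/off square wave at fmod
        double ambient = 3.0 + 0.5 * std::sin(2 * PI * 50.0 * t);  // DC ambient light + mains flicker
        double pd = (led_on ? ppg : 0.0) + ambient;                // simulated photodiode signal
        double ref = led_on ? 1.0 : -1.0;                          // synchronous +/-1 reference
        y[n] = pd * ref;
    }

    // Low-pass filter with a moving average spanning many modulation periods:
    // terms not synchronous with the reference (ambient light, slow drift) cancel,
    // leaving roughly ppg/2.
    const int win = 20 * period;                      // 20 ms averaging window
    double acc = 0.0;
    std::vector<double> demod(N, 0.0);
    for (int n = 0; n < N; ++n) {
        acc += y[n];
        if (n >= win) acc -= y[n - win];
        demod[n] = acc / win;
    }

    std::printf("recovered level at t = 0.5 s: %.3f (expect about 0.5)\n", demod[N / 2]);
    return 0;
}
```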

    The inverse electromagnetic problem in high frequency tomography

    No full text
    The aim of this thesis is to solve the inverse electromagnetic problem of high-frequency tomography. The work focuses on developing a prototype algorithm that reconstructs both the conductivity distribution and the dielectric-constant distribution of an object from high-frequency and microwave measurements on its surface. To collect the measurements, an array of electrodes is placed on the surface of the object; one pair is driven by a constant-current source, the voltages developed across the remaining electrode pairs are measured, and this is repeated for all possible driving pairs (projection angles). The object is modelled on the computer with the finite element method, assuming an initial (usually homogeneous) conductivity distribution, and, mimicking the measurement technique, the generalized Laplace equation (the forward problem) is solved to calculate the corresponding electrode voltages. The implemented algorithm (MPM) uses the sensitivities as weighting factors to determine an improved conductivity distribution. The first objective of the thesis was the extension to high frequencies in the MHz band, in particular up to approximately 10 MHz. This technique offers both σ and εr imaging while retaining the relative simplicity of low-frequency tomography, since the measurements are again carried out through an array of electrodes. The forward problem is described by a quasistatic approximation and leads to a generalized Laplace equation for a complex potential involving the complex relative permittivity distribution εr* = εr − jσ/(ωε0), through which the source frequency (ω = 2πf) enters. The second objective of the thesis was the extension of MPM to microwave frequencies, primarily to improve the spatial resolution. The object under study is surrounded by an antenna array, is illuminated by activating each antenna in turn, and the electric field is measured at the remaining antennas. Correspondingly, the forward problem is governed by the vector Helmholtz equation, which is solved by FEM. The required Jacobian matrix is calculated from a closed-form expression for the sensitivity, which is determined by the Adjoint Network technique in conjunction with the reciprocity theorem of electromagnetism. On this basis, the dissertation establishes MPM at microwave frequencies and confirms its operation.
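
    For reference, the quasistatic forward problem described above can be written as a generalized Laplace equation for the complex potential; the notation below is a reconstruction from the abstract, not a quotation from the thesis.

```latex
% Quasistatic forward problem (reconstruction from the abstract):
% generalized Laplace equation for the complex potential \tilde{\phi},
% with the complex relative permittivity carrying both \varepsilon_r and \sigma.
\nabla \cdot \bigl( \varepsilon_r^{*}(\mathbf{r}) \, \nabla \tilde{\phi}(\mathbf{r}) \bigr) = 0,
\qquad
\varepsilon_r^{*} = \varepsilon_r - j\,\frac{\sigma}{\omega \varepsilon_0},
\qquad
\omega = 2\pi f .
```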

    Simulation/Co-Simulation Alternatives for Multi-Domain, Multi-Level, Multi-Language Design of Smart Sensor: The SMAC Methodology

    No full text
    Smart Systems represent a broad class of intelligent, miniaturized systems incorporating functionalities such as sensing, actuation and control. Their heterogeneity in terms of domains requires handling different levels of abstraction and different communication and synchronization styles, together with different languages and frameworks. This heavily impacts simulation: directly simulating such a degree of heterogeneity in a single environment is impossible, while co-simulation may lead to integration errors and to a loss of accuracy due to the joint adoption of heterogeneous domains. Nonetheless, simulation is crucial in the design and verification loop, as well as for respecting budget constraints. This work proposes the SMAC design flow for smart systems. It formalizes the typical abstraction levels and design domains of a smart system, and the proposed taxonomy assigns a precise role in the design flow to the different simulation and co-simulation scenarios, highlighting the impact of heterogeneous and homogeneous models of computation on simulation performance. Finally, this work proposes a methodology to move from co-simulated heterogeneity to a homogeneous, simulatable C++ representation of the entire smart system, whose effectiveness is proven on a complex industrial case study.
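
    To make the final step concrete, the sketch below shows one way a homogeneous, simulatable C++ representation of heterogeneous parts could look: each domain-specific model is wrapped behind a common stepping interface and advanced by a single scheduler. The class and method names are illustrative assumptions, not the SMAC API.

```cpp
// Illustrative only: analogue and digital parts of a smart system behind one
// common C++ interface, advanced by a single discrete-time scheduler.
#include <cmath>
#include <cstdio>
#include <vector>

struct Component {                       // common interface for every domain-specific model
    virtual void step(double dt) = 0;    // advance the model by dt seconds
    virtual ~Component() = default;
};

struct AnalogueSensor : Component {      // continuous-time front-end, discretised with forward Euler
    double t = 0.0;                      // local time [s]
    double v = 0.0;                      // low-pass-filtered sensor output
    void step(double dt) override {
        double u = std::sin(2.0 * 3.141592653589793 * 5.0 * t);  // 5 Hz physical stimulus (made up)
        v += dt * (u - v) / 1e-2;                                // first-order response, tau = 10 ms
        t += dt;
    }
};

struct DigitalController : Component {   // clocked logic sampling the analogue output each step
    const AnalogueSensor& in;
    int alarm = 0;
    explicit DigitalController(const AnalogueSensor& s) : in(s) {}
    void step(double) override { alarm = (in.v > 0.5) ? 1 : 0; }
};

int main() {
    AnalogueSensor sensor;
    DigitalController ctrl(sensor);
    std::vector<Component*> models = {&sensor, &ctrl};   // the whole system, one representation

    const double dt = 1e-4;              // single global time step shared by every domain
    for (int k = 0; k < 10000; ++k)      // simulate one second
        for (Component* c : models) c->step(dt);

    std::printf("final sensor value %.3f, alarm flag %d\n", sensor.v, ctrl.alarm);
    return 0;
}
```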

    Longley-Rice model prediction inaccuracies in the UHF and VHF TV bands in mountainous terrain

    No full text
    Coverage prediction is of prime importance for TV broadcasting. A classic model used for TV coverage prediction is the Longley-Rice ITM (Irregular Terrain Model). Other well-known multiple knife-edge diffraction models are the Epstein-Peterson, Deygout and Giovaneli methods. In this paper, comparisons are presented between accurate field-strength measurements, taken with a Rohde & Schwarz FSH-3 portable spectrum analyzer and precision calibrated antennas, and results calculated with the Longley-Rice model and the multiple knife-edge models in conjunction with the 3-arc-second SRTM (Shuttle Radar Topography Mission) terrain data. Calculations are limited to the two main knife-edges of the propagation path. The Longley-Rice model predicts the received field strength accurately in most cases, even in mountainous terrain with multiple diffracting obstacles, in the VHF and UHF TV bands. However, in some long-distance fringe-reception areas the field strength is underestimated by the Longley-Rice model, while it is accurately calculated by the multiple knife-edge diffraction models.
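
    For context, the sketch below implements the standard single knife-edge approximation of ITU-R P.526 and combines two edges in the Epstein-Peterson manner, i.e. each edge is evaluated on the sub-path between its neighbouring points and the losses are summed in dB; the frequency and path geometry are made-up examples, not data from the paper.

```cpp
// Two-edge Epstein-Peterson diffraction estimate with the ITU-R P.526 single
// knife-edge approximation; geometry values below are illustrative only.
#include <cmath>
#include <cstdio>

// Fresnel-Kirchhoff diffraction parameter for an edge h metres above the straight
// line between two points d1 and d2 metres away, at wavelength lam metres.
double fresnel_nu(double h, double d1, double d2, double lam) {
    return h * std::sqrt(2.0 * (d1 + d2) / (lam * d1 * d2));
}

// Approximate single knife-edge loss in dB (ITU-R P.526), valid for nu > -0.78.
double knife_edge_loss_dB(double nu) {
    if (nu <= -0.78) return 0.0;
    return 6.9 + 20.0 * std::log10(std::sqrt((nu - 0.1) * (nu - 0.1) + 1.0) + nu - 0.1);
}

int main() {
    const double f = 600e6;              // example UHF TV frequency, 600 MHz
    const double lam = 3.0e8 / f;        // wavelength [m]

    // Made-up path Tx -- edge1 -- edge2 -- Rx (distances in m; each edge height is
    // taken above the straight line joining its two neighbouring points).
    double d_tx_e1 = 8000.0, d_e1_e2 = 12000.0, d_e2_rx = 15000.0;
    double h1 = 60.0, h2 = 45.0;

    // Epstein-Peterson: edge 1 evaluated on the Tx--edge2 sub-path, edge 2 on the
    // edge1--Rx sub-path, and the two losses added in dB.
    double L1 = knife_edge_loss_dB(fresnel_nu(h1, d_tx_e1, d_e1_e2, lam));
    double L2 = knife_edge_loss_dB(fresnel_nu(h2, d_e1_e2, d_e2_rx, lam));
    std::printf("edge 1: %.1f dB, edge 2: %.1f dB, total diffraction loss: %.1f dB\n",
                L1, L2, L1 + L2);
    return 0;
}
```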