
    An Embedded Auto-Calibrated Offset Current Compensation Technique for PPG/fNIRS System

    Usually, the photodiode current proportional to oxygenated blood in photoplethysmography (PPG) and functional near-infrared spectroscopy (fNIRS) based recording systems is small compared to the offset current. The offset current is the combination of the dark current of the photodiode, the current due to ambient light, and the current due to light reflected from fat and skull. The relatively large offset current limits the amplification of the signal current and degrades the overall performance of PPG/fNIRS recording systems. In this paper, we present a mixed-signal auto-calibrated offset current compensation technique for PPG and fNIRS recording systems. The system auto-calibrates the offset current, compensates for it using a dual discrete loop technique, and amplifies the signal current. Thanks to this amplification, the system provides better sensitivity. A prototype of the system was built and tested for PPG signal recording. The prototype was developed for a 3.3 V single supply. The results show that the proposed system effectively compensates for the offset current.
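    A minimal sketch of what such an auto-calibration could look like in firmware. This is illustrative only: the `read_current` interface, the 10-bit DAC resolution, and the binary-search convergence loop are assumptions for the example, not the paper's dual discrete loop design.

```python
# Hedged sketch: find a compensation-DAC code whose subtraction current
# cancels the measured offset, so the small residual (signal) current can
# be amplified without saturating the front end.
def calibrate_offset(read_current, bits=10):
    """Binary-search the smallest DAC code that drives the residual
    current to (or below) zero. read_current(code) returns the residual
    current after the DAC subtracts code * 1 LSB of current."""
    lo, hi = 0, (1 << bits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if read_current(mid) > 0:   # offset not yet cancelled
            lo = mid + 1
        else:
            hi = mid
    return lo

# Toy model: 412 nA offset, DAC subtracts 1 nA per code step.
residual = lambda code: 412e-9 - code * 1e-9
print(calibrate_offset(residual))  # converges to code 412
```

    Once the offset is nulled, the remaining photodiode current is dominated by the pulsatile signal and can be amplified with much higher gain.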

    HATSouth: a global network of fully automated identical wide-field telescopes

    HATSouth is the world's first network of automated and homogeneous telescopes that is capable of year-round 24-hour monitoring of positions over an entire hemisphere of the sky. The primary scientific goal of the network is to discover and characterize a large number of transiting extrasolar planets, reaching out to long periods and down to small planetary radii. HATSouth achieves this by monitoring extended areas on the sky, deriving high precision light curves for a large number of stars, searching for the signature of planetary transits, and confirming planetary candidates with larger telescopes. HATSouth employs 6 telescope units spread over 3 locations with large longitude separation in the southern hemisphere (Las Campanas Observatory, Chile; HESS site, Namibia; Siding Spring Observatory, Australia). Each of the HATSouth units holds four 0.18m diameter f/2.8 focal ratio telescope tubes on a common mount producing an 8.2x8.2 arcdeg field, imaged using four 4Kx4K CCD cameras and Sloan r filters, to give a pixel scale of 3.7 arcsec/pixel. The HATSouth network is capable of continuously monitoring 128 square arc-degrees. We present the technical details of the network, summarize operations, and present weather statistics for the 3 sites. On average each of the 6 HATSouth units has conducted observations on ~500 nights over a 2-year time period, yielding a total of more than 1 million science frames at 4 minute integration time, and observing ~10.65 hours per day on average. We describe the scheme of our data transfer and reduction from raw pixel images to trend-filtered light curves and transiting planet candidates. Photometric precision reaches ~6 mmag at 4-minute cadence for the brightest non-saturated stars at r~10.5. We present detailed transit recovery simulations to determine the expected yield of transiting planets from HATSouth. (abridged) Comment: 25 pages, 11 figures, 1 table, submitted to PAS
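    The quoted pixel scale follows from the quoted optics: four 4Kx4K CCDs tile the 8.2x8.2 degree mosaic, so each axis spans 8.2 degrees over roughly 2 x 4096 pixels. A quick arithmetic check (assuming a 2x2 camera mosaic, which the text implies but does not state outright):

```python
# Sanity check of the HATSouth pixel scale from the numbers in the abstract.
field_deg = 8.2          # full mosaic field of one unit, per axis
pixels = 2 * 4096        # assumed 2x2 mosaic of 4Kx4K CCDs, per axis
arcsec_per_pixel = field_deg * 3600 / pixels
print(round(arcsec_per_pixel, 2))  # ~3.6, consistent with the quoted 3.7"/px
```

    The small difference from the quoted 3.7 arcsec/pixel is plausibly due to overlap between camera fields and optical distortion at the field edges.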

    Pre-Flight Calibration of the Mars 2020 Rover Mastcam Zoom (Mastcam-Z) Multispectral, Stereoscopic Imager

    The NASA Perseverance rover Mast Camera Zoom (Mastcam-Z) system is a pair of zoomable, focusable, multi-spectral, and color charge-coupled device (CCD) cameras mounted on top of a 1.7 m Remote Sensing Mast, along with associated electronics and two calibration targets. The cameras contain identical optical assemblies that can range in focal length from 26 mm (25.5∘×19.1∘ FOV) to 110 mm (6.2∘×4.2∘ FOV) and will acquire data at pixel scales of 148-540 μm at a range of 2 m and 7.4-27 cm at 1 km. The cameras are mounted on the rover’s mast with a stereo baseline of 24.3±0.1 cm and a toe-in angle of 1.17±0.03∘ (per camera). Each camera uses a Kodak KAI-2020 CCD with 1600×1200 active pixels and an 8 position filter wheel that contains an IR-cutoff filter for color imaging through the detectors’ Bayer-pattern filters, a neutral density (ND) solar filter for imaging the sun, and 6 narrow-band geology filters (16 total filters). An associated Digital Electronics Assembly provides command data interfaces to the rover, 11-to-8 bit companding, and JPEG compression capabilities. Herein, we describe pre-flight calibration of the Mastcam-Z instrument and characterize its radiometric and geometric behavior. Between April 26th and May 9th, 2019, ∼45,000 images were acquired during stand-alone calibration at Malin Space Science Systems (MSSS) in San Diego, CA. Additional data were acquired during Assembly Test and Launch Operations (ATLO) at the Jet Propulsion Laboratory and Kennedy Space Center. Results of the radiometric calibration validate a 5% absolute radiometric accuracy when using camera state parameters investigated during testing. When observing using camera state parameters not interrogated during calibration (e.g., non-canonical zoom positions), we conservatively estimate the absolute uncertainty to be within the 0.2 design requirement.
We discuss lessons learned from calibration and suggest tactical strategies that will optimize the quality of science data acquired during operation at Mars. While most results matched expectations, some surprises were discovered, such as a strong wavelength and temperature dependence of the radiometric coefficients and a scene-dependent dynamic component to the zero-exposure bias frames. Calibration results and derived accuracies were validated using a Geoboard target consisting of well-characterized geologic samples.
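    The quoted fields of view are consistent with simple thin-lens geometry. A quick check, assuming the KAI-2020's 7.4 μm pixel pitch (from the detector's public specification, not stated in the abstract):

```python
import math

# Hedged geometry check: horizontal FOV from focal length and sensor width.
pixel_pitch_um = 7.4                          # assumed KAI-2020 pixel pitch
sensor_w_mm = 1600 * pixel_pitch_um / 1000    # 11.84 mm active width

def hfov_deg(focal_mm):
    """Horizontal field of view (degrees) for a thin-lens camera model."""
    return math.degrees(2 * math.atan(sensor_w_mm / (2 * focal_mm)))

print(round(hfov_deg(26), 1))   # ~25.7 deg, matching the quoted 25.5
print(round(hfov_deg(110), 1))  # ~6.2 deg, matching the quoted 6.2
```

    The slight deviation at the wide end is expected, since the real zoom optics are not an ideal thin lens across the full focal range.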

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing.
It comprises *capturing* of sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementation of grasp-sensitive surfaces is still hard, researchers often are not aware of related work from other disciplines, and intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing. TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. 
    Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
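    The capture/identify/interpret workflow can be sketched in a few lines. This is a hypothetical illustration only: the sensor layout, grasp templates, and action mapping are invented for the example, not taken from the dissertation.

```python
# Hedged sketch of the generic grasp-sensing workflow:
# capture sensor values -> identify the grasp -> interpret its meaning.
from typing import Callable, Dict, List

def capture(read_sensor: Callable[[int], float], n_sensors: int) -> List[float]:
    """Capture: sample raw values from each grasp sensor."""
    return [read_sensor(i) for i in range(n_sensors)]

def identify(values: List[float], templates: Dict[str, List[float]]) -> str:
    """Identify: nearest-template classification of the grasp pattern."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: dist(values, templates[name]))

def interpret(grasp: str, context: Dict[str, str]) -> str:
    """Interpret: combine the grasp with other usage context, since the
    grasp alone is ambiguous (the GRASP model's point)."""
    if grasp == "camera_grip" and context.get("app") == "home_screen":
        return "launch_camera"
    return "no_action"

templates = {"camera_grip": [0.9, 0.1, 0.8, 0.2],
             "call_grip":   [0.5, 0.5, 0.5, 0.5]}
reading = capture(lambda i: [0.85, 0.15, 0.75, 0.25][i], 4)
print(interpret(identify(reading, templates), {"app": "home_screen"}))
```

    Note how the interpretation step consults context beyond the sensor data, reflecting the argument above that grasp information should not be used in isolation.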

    The role of cerebellar nuclear GABAergic neurotransmission in eyeblink motor control

    One of the best understood models of motor learning is the eyeblink classical conditioning paradigm in rabbits. Eyeblink conditioning relies on cerebellar circuits for the generation and expression of conditioned responses (CRs). Although these circuits have been studied extensively, their specific function is unknown and highly debated. To this end, a series of experiments were conducted to gain insight into the role of the intermediate cerebellum in the timing and retention of CRs. The first objective of our research was to develop an accurate method to record rabbit eyeblinks. We developed an infrared, frequency modulated, non-invasive sensor with a wide field of view. The sensor was tested against previous opto-electric, electromechanical, and video recording systems and, as a result of its accuracy in eyeblink detection, was used in all subsequent experiments. In the first group of neuropharmacological experiments, we examined the effects of inactivating the cerebellar cortical GABAergic Purkinje cell projection to the interposed nuclei (IN) on CR expression. This study successfully reconciled a long-standing controversy by documenting the dose-dependency of behavioral effects. While low doses of GABAergic blockers shortened CR latencies (short-latency responses, SLRs), high doses of these drugs abolished CRs. In addition, low doses of GABA blockers facilitated the expression of unconditioned eyeblinks and increased eyelid closure. These data indicate that CR timing is altered only during an incomplete block of the cortical projections to the IN and that the intermediate cerebellum controls non-associative components of blinking. The next group of experiments examined whether SLRs are triggered by cerebellum-mediated sensory information. To address this question, we inactivated the conditioned stimulus (CS)-carrying axons in the middle cerebellar peduncle (MCP) in rabbits producing SLRs.
We found that blocking CS information from entering the intermediate cerebellum does in fact abolish both CRs and SLRs. This finding suggests that SLRs are cerebellum-dependent responses that are evoked by residual CS information entering the cerebellum via incompletely blocked cortical projections to the nuclei. In the last group of experiments we tested whether the behavioral effects of MCP inactivation could be attributed to a tonic malfunction of cerebellar circuits. Classically conditioned rabbits were injected with the sodium channel blocker tetrodotoxin (TTX) in the MCP while recording from cells in the IN. This treatment abolished CRs and elevated the spontaneous activity of IN neurons. Surprisingly, the CS-related modulation was not blocked, and in some cases it increased. These observations suggest that normal functioning of the MCP is critical for CR expression, and the persistence of CS-related IN activity indicates that a significant portion of CS information reaches the cerebellum through pathways other than the ipsilateral MCP.

    Ambient RF energy harvesting and efficient DC-load inductive power transfer

    This thesis analyses in detail the technology required for wireless power transfer via radio frequency (RF) ambient energy harvesting and an inductive power transfer (IPT) system. Radio frequency harvesting circuits have been demonstrated for more than fifty years, but only a few have been able to harvest energy from freely available ambient (i.e. non-dedicated) RF sources. To explore the potential for ambient RF energy harvesting, a city-wide RF spectral survey was undertaken in London. Using the results from this survey, various harvesters were designed to cover four frequency bands from the largest RF contributors within the ultra-high frequency (0.3 to 3 GHz) part of the spectrum. Prototypes were designed, fabricated and tested for each band, showing that approximately half of the London Underground stations are suitable locations for harvesting ambient RF energy with these prototypes. Inductive power transfer systems for transmitting tens to hundreds of watts have been reported for almost a decade. Most of this work has concentrated on optimizing the link efficiency and has not taken into account the efficiency of the driver and rectifier. Class-E amplifiers and rectifiers have been identified as ideal drivers for IPT applications, but their power handling capability at tens of MHz has been a crucial limiting factor, since the load and inductor characteristics are set by the requirements of the resonant inductive system. The frequency limitation of the driver restricts the unloaded Q-factor of the coils and thus the link efficiency. The system presented in this work avoids heavy and expensive field-shaping techniques by presenting an efficient IPT system capable of transmitting energy with high dc-to-load efficiency at 6 MHz across a distance of 30 cm.
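    The dependence of link efficiency on coil Q can be illustrated with the standard coupled-resonator result for an inductive link (a textbook relation, not a figure from this thesis): the maximum link efficiency is determined by the figure of merit k²Q₁Q₂, where k is the coupling coefficient and Q₁, Q₂ are the coils' unloaded quality factors.

```python
import math

# Standard maximum link efficiency of a resonant inductive link:
# eta_max = k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2
def max_link_efficiency(k, q1, q2):
    kq = k * k * q1 * q2          # figure of merit of the link
    return kq / (1 + math.sqrt(1 + kq)) ** 2

# Loosely coupled coils (k = 0.05, e.g. a 30 cm gap) can still reach high
# efficiency if the unloaded Q is large -- which is why the driver's
# frequency limit, which caps the usable Q, matters so much.
print(round(max_link_efficiency(0.05, 300, 300), 3))  # 0.875
```

    Halving the Q of both coils (k²Q₁Q₂ dropping by 4x) costs roughly ten points of efficiency at this coupling, illustrating the thesis's point about the driver restricting link efficiency.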

    Rapid 3D Modeling and Parts Recognition on Automotive Vehicles Using a Network of RGB-D Sensors for Robot Guidance

    This paper presents an approach for the automatic detection and fast 3D profiling of lateral body panels of vehicles. The work introduces a method to integrate raw streams from depth sensors in the task of 3D profiling and reconstruction and a methodology for the extrinsic calibration of a network of Kinect sensors. This sensing framework is intended to rapidly provide a robot with enough spatial information to interact with automobile panels using various tools. When a vehicle is positioned inside the defined scanning area, a collection of reference parts on the bodywork is automatically recognized from a mosaic of color images collected by a network of Kinect sensors distributed around the vehicle, and a global frame of reference is set up. Sections of the depth information on one side of the vehicle are then collected, aligned, and merged into a global RGB-D model. Finally, a 3D triangular mesh modelling the body panels of the vehicle is automatically built. The approach has applications in the intelligent transportation industry, automated vehicle inspection, quality control, automatic car wash systems, automotive production lines, and scan alignment and interpretation.
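    The core of extrinsic calibration between two depth sensors is estimating the rigid transform that maps points seen by one sensor into the other's frame. A generic sketch using the Kabsch/SVD least-squares alignment (a standard technique; the paper's exact calibration procedure may differ), given matched 3D reference points observed by both sensors:

```python
import numpy as np

def estimate_extrinsics(src, dst):
    """Kabsch: return R (3x3), t (3,) minimising ||R @ src_i + t - dst_i||."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))       # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, c_dst - r @ c_src

# Synthetic check: recover a known 30-degree yaw and translation.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (20, 3))                # reference points, frame A
a = np.deg2rad(30)
r_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
obs = pts @ r_true.T + np.array([0.5, -0.2, 1.0])  # same points, frame B
r_est, t_est = estimate_extrinsics(pts, obs)
print(np.allclose(r_est, r_true), np.allclose(t_est, [0.5, -0.2, 1.0]))
```

    Chaining such pairwise transforms across the sensor network yields the global frame of reference into which the per-sensor depth sections are merged.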

    NaRPA: Navigation and Rendering Pipeline for Astronautics

    This paper presents the Navigation and Rendering Pipeline for Astronautics (NaRPA) - a novel ray-tracing-based computer graphics engine to model and simulate light transport for space-borne imaging. NaRPA incorporates lighting models with attention to atmospheric and shading effects for the synthesis of space-to-space and ground-to-space virtual observations. In addition to image rendering, the engine also possesses point cloud, depth, and contour map generation capabilities to simulate passive and active vision-based sensors and to facilitate the design, testing, and verification of visual navigation algorithms. The physically based rendering capabilities of NaRPA and the efficacy of the proposed rendering algorithm are demonstrated using applications in representative space-based environments. A key demonstration includes NaRPA as a tool for generating stereo imagery and its application to 3D coordinate estimation using triangulation. Another prominent application is a novel differentiable rendering approach for image-based attitude estimation, proposed to highlight the efficacy of the NaRPA engine for simulating vision-based navigation and guidance operations. Comment: 49 pages, 22 figures
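    For the stereo-triangulation demonstration, the underlying geometry is the textbook rectified-stereo relation (NaRPA's actual pipeline may be more elaborate): depth is the product of focal length and baseline divided by disparity.

```python
# Hedged sketch: depth from disparity for a rectified stereo pair with
# baseline b (metres) and focal length f (pixels); values are illustrative.
def triangulate_depth(f_px, baseline_m, x_left_px, x_right_px):
    disparity = x_left_px - x_right_px   # pixels; positive for finite depth
    return f_px * baseline_m / disparity

# A feature at pixel 640 in the left render and 600 in the right,
# with f = 800 px and a 0.5 m baseline:
print(triangulate_depth(800, 0.5, 640, 600))  # 10.0 (metres)
```

    The same relation explains why depth precision degrades quadratically with range: at large depths the disparity shrinks, so a one-pixel matching error maps to a much larger depth error.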

    Induction Assisted Thermography for Inspection of Micro Defects on Sheet Metals

    The work focuses on induction-assisted thermography as a non-contact, non-destructive method of inspecting micro defects on sheet metals used for making automotive body panels. The main objective of the study is induction heating as a source of excitation to elevate the temperature of sheet metals, providing uniform heating and improving the detectability of defects. Experiments are performed on sheet metal samples with defects using excitation techniques such as pulse and electromagnetic induction. The thermal images obtained from the infrared camera are used to quantitatively analyze the detectability of defects on sheet metals. The limitations of the pulse technique and the advantages of the electromagnetic induction technique for these kinds of defects are discussed. The spatial distribution of temperature for various experimental conditions is also discussed to optimize induction heating requirements.
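    A key parameter when optimizing induction heating of sheet metal is the electromagnetic skin depth, since eddy-current power is deposited within roughly one skin depth of the surface. A quick illustration using the standard relation δ = √(2ρ/(ωμ)) (a textbook formula, not a result from this work; the material values are assumed):

```python
import math

def skin_depth_mm(freq_hz, resistivity_ohm_m, mu_r):
    """Electromagnetic skin depth delta = sqrt(2*rho / (omega * mu)), in mm."""
    mu0 = 4e-7 * math.pi                 # vacuum permeability (H/m)
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * resistivity_ohm_m / (omega * mu_r * mu0)) * 1000

# Assumed low-carbon steel values: rho ~ 1.7e-7 ohm-m, mu_r ~ 100.
# At a 10 kHz excitation the heating is confined to ~0.2 mm of the surface,
# comparable to automotive sheet-metal thicknesses.
print(round(skin_depth_mm(1e4, 1.7e-7, 100), 3))
```

    Because δ scales as 1/√f, the excitation frequency gives a direct handle on whether the sheet heats through its thickness or only at the surface, which is the trade-off behind optimizing the induction heating requirements.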