
    Doctor of Philosophy

    Fingernail imaging is a method of sensing finger force using the color patterns on the nail and surrounding skin. These patterns form as the underlying tissue is compressed and blood pools in the surrounding vessels. Photos of the finger and surrounding skin may be correlated with the magnitude and direction of force on the fingerpad. An automated calibration routine is developed to improve the data-collection process. This includes a novel hybrid force/position controller that manages the interaction between the fingerpad and a flat surface, implemented on a Magnetic Levitation Haptic Device. The kinematic and dynamic parameters of the system are characterized in order to appropriately design a nonlinear compensator. The controller settles within 0.13 s with less than 30% overshoot. A new registration technique, based on Active Appearance Models, is presented. Since this method accounts for the variation inherent in the finger, it reduces registration and force prediction errors while removing the need to tune registration parameters or reject unregistered images. Modifications to the standard model are also investigated. The number of landmark points is reduced to 25 with no loss of accuracy, while the use of the green channel is found to have no significant effect on either registration or force prediction accuracy. Several force prediction models are characterized, and the EigenNail Magnitude Model, a Principal Component Regression model on the gray-level intensity, is shown to fit the data most accurately. The mean force prediction error using this prediction and modeling method is 0.55 N. White LEDs and green LEDs are shown to have no statistically significant effect on registration or force prediction. Finally, two different calibration grid designs are compared and found to have no significant effect. Together, these improvements prepare the way for fingernail imaging to be used in less controlled situations. With a wider range of calibration data and a more robust registration method, a larger range of force data may be predicted. Potential applications for this technology include human-computer interaction and measuring finger interaction forces during grasping experiments.
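
    The Principal Component Regression behind an "EigenNail"-style model can be sketched in a few lines. The following is an illustrative toy version only, not the dissertation's implementation; the image size, component count, and synthetic data are assumptions:

```python
# Minimal Principal Component Regression sketch for image-based force
# prediction (illustrative only; not the dissertation's implementation).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 registered grayscale nail images (32x32 pixels)
# and the corresponding fingerpad force magnitudes in newtons.
X = rng.normal(size=(200, 32 * 32))          # each row: flattened image
y = X @ rng.normal(size=32 * 32) * 0.01 + 5  # fake linear force signal

# Center the data, then project onto the leading principal components.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10                        # number of "EigenNail" components (assumed)
scores = Xc @ Vt[:k].T        # low-dimensional image representation

# Ordinary least squares from PCA scores to force magnitude.
coef, *_ = np.linalg.lstsq(
    np.column_stack([scores, np.ones(len(scores))]), y, rcond=None)

def predict_force(image_row):
    """Predict force magnitude for one flattened, registered image."""
    s = (image_row - mean) @ Vt[:k].T
    return s @ coef[:-1] + coef[-1]

print(abs(predict_force(X[0]) - y[0]))  # small residual on training data
```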

    Force-Aware Interface via Electromyography for Natural VR/AR Interaction

    While tremendous advances in visual and auditory realism have been made for virtual and augmented reality (VR/AR), introducing a plausible sense of physicality into the virtual world remains challenging. Closing the gap between real-world physicality and immersive virtual experience requires a closed interaction loop: applying user-exerted physical forces to the virtual environment and generating haptic sensations back to the users. However, existing VR/AR solutions either completely ignore the force inputs from the users or rely on obtrusive sensing devices that compromise user experience. By identifying users' muscle activation patterns while engaging in VR/AR, we design a learning-based neural interface for natural and intuitive force inputs. Specifically, we show that lightweight electromyography sensors, resting non-invasively on users' forearm skin, inform and establish a robust understanding of their complex hand activities. Fuelled by a neural-network-based model, our interface can decode finger-wise forces in real time with 3.3% mean error, and generalize to new users with little calibration. Through an interactive psychophysical study, we show that human perception of virtual objects' physical properties, such as stiffness, can be significantly enhanced by our interface. We further demonstrate that our interface enables ubiquitous control via finger tapping. Ultimately, we envision our findings pushing forward research towards more realistic physicality in future VR/AR. (Comment: ACM Transactions on Graphics, SIGGRAPH Asia 2022.)
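
    As a rough illustration of the decoding task only (the paper's actual architecture is not detailed here), the sketch below trains a small multilayer perceptron to map multi-channel EMG windows to per-finger forces; the channel count, window length, network shape, and data are all assumptions:

```python
# Sketch of a neural decoder from surface-EMG windows to per-finger
# forces (channel count, window length, architecture, and data are
# assumptions; the paper's model is not reproduced here).
import torch
from torch import nn

N_CHANNELS, WIN, N_FINGERS = 8, 64, 5     # assumed sensor/window layout

model = nn.Sequential(                    # small MLP over a flattened window
    nn.Flatten(),
    nn.Linear(N_CHANNELS * WIN, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_FINGERS),             # one force estimate per finger
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in batch: EMG windows and ground-truth finger forces.
emg = torch.randn(32, N_CHANNELS, WIN)
force = torch.rand(32, N_FINGERS) * 10.0  # newtons

for _ in range(200):                      # toy training loop
    opt.zero_grad()
    loss = loss_fn(model(emg), force)
    loss.backward()
    opt.step()

err = (model(emg) - force).abs().mean().item()
print(f"mean absolute error on training batch: {err:.3f} N")
```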

    FingerTac -- An Interchangeable and Wearable Tactile Sensor for the Fingertips of Human and Robot Hands

    Skill transfer from humans to robots is challenging. Presently, many researchers focus on capturing only position or joint-angle data from humans to teach robots. Even though this approach has yielded impressive results for grasping applications, reconstructing motion for object handling or fine manipulation from a human hand to a robot hand has been sparsely explored. Humans use tactile feedback to adjust their motion to various objects, but capturing and reproducing the applied forces is an open research question. In this paper we introduce a wearable fingertip tactile sensor, which captures the distributed 3-axis force vectors on the fingertip. The sensor is interchangeable between the human hand and the robot hand, meaning that it can also be assembled to fit on a robot hand such as the Allegro hand. This paper presents the structural aspects of the sensor as well as the methodology and approach used to design, manufacture, and calibrate it. The sensor measures forces accurately, with mean absolute errors of 0.21, 0.16, and 0.44 newtons in the X, Y, and Z directions, respectively.
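
    Calibrating such a sensor typically amounts to fitting a map from raw channel readings to reference forces and reporting per-axis error. A minimal least-squares sketch, with an assumed channel count and synthetic data standing in for the real calibration rig:

```python
# Least-squares calibration sketch: map raw tactile-channel readings to
# 3-axis fingertip forces against a reference sensor (the real channel
# count and calibration rig are assumptions).
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels = 500, 12          # assumed taxel/channel count

true_map = rng.normal(size=(n_channels, 3))
raw = rng.normal(size=(n_samples, n_channels))        # raw readings
f_ref = raw @ true_map + rng.normal(scale=0.2, size=(n_samples, 3))

# Fit a linear calibration matrix (with bias) by ordinary least squares.
A = np.column_stack([raw, np.ones(n_samples)])
C, *_ = np.linalg.lstsq(A, f_ref, rcond=None)

f_est = A @ C
mae = np.abs(f_est - f_ref).mean(axis=0)
print("per-axis MAE (N):", np.round(mae, 3))   # cf. 0.21 / 0.16 / 0.44 N
```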

    On the use of fingernail images as transient biometric identifiers

    The significant advantages that biometric recognition technologies offer are in danger of being left aside in everyday life due to concerns over the misuse of such data. The biometric characteristics employed so far have been chosen for their permanence. A concept known as ‘the right to be forgotten’ is gaining momentum in international law, and this should further hamper the adoption of permanent biometric recognition technologies. However, a multitude of common applications are short-term, and therefore non-permanent biometric characteristics would suffice for them. In this paper we discuss ‘transient biometrics’, i.e. recognition via biometric characteristics that will change in the short term, and show that images of the fingernail plate can be used as a transient biometric with a useful life-span of less than 6 months. A direct approach is proposed that requires no training, and a relevant evaluation dataset is made publicly available.
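
    One simple training-free comparison, offered here purely as an illustration rather than as the paper's actual method, is normalized cross-correlation between aligned nail-plate crops, thresholded into a match decision:

```python
# Illustrative training-free matcher for nail-plate images: normalized
# cross-correlation between two aligned grayscale crops (a stand-in for
# whatever direct comparison the paper actually uses).
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation in [-1, 1] for same-size images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

rng = np.random.default_rng(2)
enrolled = rng.random((64, 64))                    # enrollment image
probe_same = enrolled + rng.normal(scale=0.1, size=(64, 64))
probe_other = rng.random((64, 64))

THRESHOLD = 0.5   # assumed decision threshold
for name, probe in [("same nail", probe_same), ("other nail", probe_other)]:
    score = ncc(enrolled, probe)
    print(f"{name}: score={score:.2f}  match={score > THRESHOLD}")
```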

    Accuracy of novel image acquisition and processing device in automatic segmentation of atopic dermatitis

    Atopic Dermatitis (AD), a chronic inflammatory skin disease causing lesions, often decreases quality of life (Kapur, 2018). Segmentation, a method of delineating lesioned from non-lesioned areas of interest (AOIs), has been the primary method by which AD has been studied (Ranteke & Jain, 2013). Manual segmentation is prone to subjectivity (Ning et al., 2014), and automatic segmentation, while reliable and efficient, poses challenges such as light reflections and color variations (Lu et al., 2013). Yet AD can be classified from color and texture (Hanifin et al., 2001; Nisar et al., 2013), as well as through machine learning methods. The purpose of this study was to determine the optimal method for segmenting images of atopic dermatitis on subjects' arms, acquired both in a novel standardized photography lightbox (Lightbox) and in subjects' self-acquired at-home photos. The goals were to determine the accuracy and reliability of photo acquisition in the lightbox compared to photo acquisition by subjects at home, and to determine the accuracy and reliability of automated segmentation of AD lesions by combined color-based segmentation and a U-Net CNN.
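
    Combining a color-based mask with a CNN (e.g. U-Net) mask can be illustrated with a toy fusion rule; the logical-OR fusion, the synthetic masks, and the Dice scoring below are assumptions for the sketch, not the study's protocol:

```python
# Sketch of fusing a color-threshold mask with a CNN segmentation mask
# and scoring agreement with a manual segmentation via Dice overlap.
import numpy as np

rng = np.random.default_rng(3)
manual = rng.random((128, 128)) > 0.7                  # stand-in expert mask
color_mask = manual & (rng.random((128, 128)) > 0.2)   # misses some pixels
cnn_mask = manual & (rng.random((128, 128)) > 0.3)     # misses others

combined = color_mask | cnn_mask                       # simple fusion rule

def dice(pred, truth):
    """Dice coefficient between two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2 * inter / (pred.sum() + truth.sum() + 1e-9)

for name, m in [("color", color_mask), ("cnn", cnn_mask),
                ("combined", combined)]:
    print(f"{name:8s} Dice = {dice(m, manual):.3f}")
```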

    Optical Methods in Sensing and Imaging for Medical and Biological Applications

    The recent advances in optical sources and detectors have opened up new opportunities for sensing and imaging techniques that can be successfully used in biomedical and healthcare applications. This book, entitled ‘Optical Methods in Sensing and Imaging for Medical and Biological Applications’, focuses on various aspects of the research and development related to these areas. It will be a valuable source of information on recent advances in optical methods and novel techniques, as well as their applications in the fields of biomedicine and healthcare, for anyone interested in this subject.

    Development of a magnet for electron paramagnetic resonance measurement of cumulative radiation dose in teeth, and its application to in vivo dose assessment

    Ph.D. dissertation, Graduate School of Convergence Science and Technology, Seoul National University (Radiation Convergence Biomedical Science major), August 2022; advisor Sung-Joon Ye. For triage in a large-scale radiation accident, in vivo electron paramagnetic resonance (EPR) tooth dosimetry is a unique and useful tool: it can rapidly distinguish irradiated individuals from unirradiated ones. In accident response, the ability to move the instrument to the accident site is also an important factor. To this end, a new, lighter EPR magnet with a design optimized for in vivo measurement was developed in this thesis, as part of a broader project to develop a complete EPR spectrometer. The second part of the thesis describes in vivo tooth dosimetry. Even with a dose-response curve acquired from extracted teeth, dose-response data from in vivo measurements are required, because dosimetric sensitivity differs under in vivo conditions; this difference is represented by the Q factor. Measurements of volunteers' teeth in the oral cavity also showed that the Q factor differs between individuals. To reflect this individual variation, a new method is proposed in which a newly developed pseudo-in-vivo phantom plays a key role: it allows the Q factor to be varied deliberately within the range observed in vivo. Throughout the thesis, the performance of the developed magnet was verified in three steps. First, the magnetic flux density was measured and compared with a finite element method (FEM) simulation. Second, as a preliminary test, EPR spectra were acquired from irradiated teeth: two intact human incisors irradiated to 5 and 30 Gy with 220 kVp X-rays. As the final test, EPR spectra were measured in vivo from post-radiotherapy patients and the absorbed doses in their teeth were assessed. For this, dose-response curves for various Q factors were acquired beforehand using the pseudo-in-vivo phantom, with four intact human incisors providing the dose-response data. From these data, the relationships between the Q factor and both the dosimetric sensitivity and the background signal were obtained, and a patient-adapted dose-response curve was generated from each patient's specific Q factor. With this method, the doses delivered during treatment were estimated for two post-TBI patients.
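
    The patient-adapted dose reconstruction can be sketched numerically: fit signal = a(Q)·dose + b(Q) on phantom calibration data at several Q factors, interpolate a and b to the patient's Q, and invert. Every number below is invented for illustration; only the fitting structure mirrors the description above:

```python
# Numerical sketch of Q-factor-adapted dose reconstruction.
import numpy as np

doses = np.array([0.0, 2.0, 5.0, 10.0, 30.0])    # calibration doses (Gy)
q_cal = np.array([200.0, 400.0, 800.0])          # phantom Q factors

# Fake calibration signals: sensitivity and background vary with Q.
a_true = 0.002 * q_cal                 # assumed signal per Gy at each Q
b_true = 0.1 + 1e-4 * q_cal            # assumed background at each Q
signals = a_true[:, None] * doses[None, :] + b_true[:, None]

# One line fit per Q factor, then linear trends of slope/intercept in Q.
fits = np.array([np.polyfit(doses, s, 1) for s in signals])
a_of_q = np.polyfit(q_cal, fits[:, 0], 1)        # sensitivity vs Q
b_of_q = np.polyfit(q_cal, fits[:, 1], 1)        # background vs Q

def estimate_dose(signal: float, q_patient: float) -> float:
    """Invert the patient-adapted dose-response line."""
    a = np.polyval(a_of_q, q_patient)
    b = np.polyval(b_of_q, q_patient)
    return (signal - b) / a

q_p, true_dose = 550.0, 12.0                     # made-up patient case
s_patient = 0.002 * q_p * true_dose + (0.1 + 1e-4 * q_p)
print(f"reconstructed dose: {estimate_dose(s_patient, q_p):.2f} Gy")
```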

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting the precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementing grasp-sensitive surfaces is still hard, that researchers are often not aware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. Each mitigates one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that determines the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, to touch and grasp sensing.
TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges in making sense of raw grasp information and categorize possible interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
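
    The capture/identify/interpret workflow formalized above can be illustrated with a toy pipeline; the sensor layout, grasp templates, and grasp-to-action mapping below are assumptions made for the sketch, not the dissertation's system:

```python
# Minimal capture -> identify -> interpret pipeline in the spirit of the
# grasp-sensing workflow described above (all values illustrative).
import numpy as np

rng = np.random.default_rng(4)

# Capture: four capacitive channels, as in the HandSense prototype.
def capture() -> np.ndarray:
    return rng.random(4)                  # stand-in sensor read-out

# Identify: nearest-centroid classifier over known grasp templates.
templates = {
    "photo_grip": np.array([0.9, 0.1, 0.8, 0.1]),
    "call_grip":  np.array([0.2, 0.9, 0.2, 0.9]),
}

def identify(reading: np.ndarray) -> str:
    return min(templates, key=lambda g: np.linalg.norm(templates[g] - reading))

# Interpret: map grasp plus other context to an action (the dissertation
# argues grasp data should be combined with further context information).
def interpret(grasp: str, screen_on: bool) -> str:
    if not screen_on:
        return "ignore"                   # context vetoes the shortcut
    return {"photo_grip": "open camera", "call_grip": "open dialer"}[grasp]

reading = capture()
print(interpret(identify(reading), screen_on=True))
```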