
    Developing a mixed reality assistance system based on projection mapping technology for manual operations at assembly workstations.

    ABSTRACT Manual tasks play an important role in socially sustainable manufacturing enterprises. Manual operations are commonly used for low-volume production, but are not limited to it. Operational models in manufacturing systems based on x-to-order paradigms (e.g. assembly-to-order) may require manual operations to speed up the ramp-up time of new product configuration assemblies. Introducing manual operations into a production line can make any manufacturing or assembly process more susceptible to human error, which translates into delays, defects and/or poor product quality. In this scenario, virtual and augmented realities can offer significant advantages in supporting the human operator in manual operations. This research work presents the development of a mixed (virtual and augmented) reality assistance system that permits real-time support of manual operations. A review of mixed reality techniques and technologies was conducted, from which a projection mapping solution was selected for the proposed assistance system. Hardware and software components were chosen according to the specific requirements of the demonstration environment. The developed mixed reality assistance system was able to guide a user without any prior knowledge through the successful completion of the specific assembly task.
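
    The abstract does not spell out an implementation, but a core step in any projection-mapping setup of this kind is registering the projector to the work surface. Below is a minimal sketch of that step, assuming an OpenCV-based pipeline; the point correspondences, overlay content, and projector resolution are illustrative only, not taken from the paper.

```python
# Illustrative projector registration for projection-mapped work instructions:
# map workstation coordinates (mm) to projector pixels with a homography,
# then warp the current instruction overlay onto the work surface.
import cv2
import numpy as np

# Four reference points on the work surface (e.g. fiducials), in millimetres ...
workstation_pts = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])
# ... and where those points must appear in the projector image, in pixels
# (values here are made up for illustration).
projector_pts = np.float32([[112, 96], [1180, 102], [1174, 870], [118, 864]])

# Homography from workstation coordinates to projector pixels.
H, _ = cv2.findHomography(workstation_pts, projector_pts)

# Draw a hypothetical instruction overlay in workstation coordinates (1 px = 1 mm).
instruction = np.zeros((300, 400, 3), np.uint8)
cv2.putText(instruction, "Place part A here", (20, 150),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
cv2.circle(instruction, (320, 150), 40, (0, 255, 0), 3)

# Warp the overlay so it lands on the right spot of the table and show it
# on the projector output (assumed 1280x720 here).
frame = cv2.warpPerspective(instruction, H, (1280, 720))
cv2.imshow("projector", frame)
cv2.waitKey(0)
```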

    User-oriented markerless augmented reality framework based on 3D reconstruction and loop closure detection

    An augmented reality (AR) system needs to track the user view to perform accurate augmentation registration. The present research proposes a conceptual markerless, natural-feature-based AR framework whose process is divided into two stages - an offline database training session for the application developers, and an online AR tracking and display session for the final users. In the offline session, two types of 3D reconstruction application, RGBD-SLAM and SfM, are integrated into the development framework for building the reference template of a target environment. The performance and applicable conditions of these two methods are presented in this thesis, and application developers can choose which method to apply according to their development demands. A general development user interface is provided to the developer for interaction, including a simple GUI tool for augmentation configuration. The proposal also applies a Bag of Words strategy to enable rapid "loop-closure detection" in the online session, for efficiently querying the application user view against the trained database to locate the user pose. The rendering and display of augmentations is currently implemented within an OpenGL window, an aspect of the research that warrants further detailed investigation and development.
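
    The abstract names a Bag of Words strategy for the online loop-closure query but gives no implementation details. The sketch below shows one plausible realization, assuming ORB features quantized into a k-means visual vocabulary; all function names, parameters, and the database format are illustrative only.

```python
# Illustrative Bag-of-Words loop-closure query: find the trained keyframe that
# best matches the current user view. Not the thesis's actual implementation.
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create(nfeatures=500)

def describe(image):
    """Extract ORB descriptors from a grayscale image."""
    _, desc = orb.detectAndCompute(image, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def train_vocabulary(db_images, k=64):
    """Cluster all database descriptors into k visual words (offline session)."""
    all_desc = np.vstack([describe(img) for img in db_images]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(all_desc)

def bow_histogram(image, vocab):
    """Quantize an image's descriptors into a normalized visual-word histogram."""
    words = vocab.predict(describe(image).astype(np.float32))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (np.linalg.norm(hist) + 1e-9)

def query(user_view, db_histograms, vocab):
    """Return the index of the most similar database keyframe (online session)."""
    q = bow_histogram(user_view, vocab)
    return int(np.argmax([q @ h for h in db_histograms]))
```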

    UAVs for the Environmental Sciences

    This book gives an overview of the use of UAVs in the environmental sciences, covering technical basics, data acquisition with different sensors, and data processing schemes, and illustrating a variety of example applications.

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, and management. As data are required to build machine learning networks, sensors are one of the most important enabling technologies. In addition, machine learning networks can contribute to improvements in sensor performance and to the creation of new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Mechanical Control of Sensory Hair-Bundle Function

    Hair bundles detect sound in the auditory system, head position and rotation in the vestibular system, and fluid flow in the lateral-line system. To do so, bundles respond to periodic, static, and hydrodynamic forces contingent upon the receptor organs in which they are situated. As the mechanosensory function of a hair bundle varies, so too do the mechanical properties of the bundle and its microenvironment. Hair bundles range in height from 1 µm to 100 µm and in stiffness from 100 µN·m⁻¹ to 10,000 µN·m⁻¹. They are composed of actin-filled, hypertrophic microvilli - stereocilia - that number from fewer than 20 to more than 300 per bundle. In addition, bundles may or may not possess one true cilium, the kinocilium. Hair bundles differ in shape across organs and organisms: they may be isodiametric, fan-shaped, or V-shaped. Depending on the organ in which they occur, bundles may be free-standing or they may be coupled to a tectorial membrane, otolithic membrane, cupula, or sallet. Because all hair bundles are composed of similar molecular components, their distinct mechanosensory functions may instead be regulated by their mechanical loads. Dynamical-systems analysis provides mathematical predictions of hair-bundle behavior. One such model captures the effects of mechanical loading on bundle function in a state diagram. A mechanical-load clamp permits exploration of this state diagram by robustly controlling the loads - constant force, load stiffness, virtual drag, and virtual mass - imposed on a hair bundle. Upon changes in these mechanical parameters, the bundle's response characteristics alter. Subjected to particular control parameters, a bundle may oscillate spontaneously or remain quiescent. It may respond nonlinearly to periodic stimuli with high sensitivity, sharp frequency tuning, and easy entrainment; or it may respond linearly with low sensitivity, broad tuning, and reluctant entrainment. The bundle's response to a force pulse may resemble that of an edge-detection system or a low-pass filter. Finally, a bundle from an amphibian vestibular organ can operate in a manner qualitatively similar to that from a mammalian auditory organ, implying an essential similarity between hair bundles. The bifurcation near which a bundle's operating point resides controls its function: the state diagram provides a functional map of mechanosensory modalities. Auditory function is best tuned near a supercritical Hopf bifurcation, whereas vestibular function is captured by a subcritical Hopf bifurcation and a cusp bifurcation. Within the proposed region of vestibular responsiveness, a hair bundle exhibits mechanical excitability analogous to the electrical excitability of neurons. This behavior implies for the first time a direct relationship between the mechanical behaviors of sensory organelles and the electrical behaviors of afferent neurons. Man-made detectors function in limited capacities, each designed for a unique purpose. A single hair bundle, on the other hand, evolved to serve multiple purposes with the requirement of only two functional traits: adaptation and nonlinear channel gating. The remarkable conservation of these capabilities thus provides unique insight into the evolution of sensory systems.
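
    The abstract invokes supercritical and subcritical Hopf bifurcations without stating the underlying equations. As a point of reference (standard dynamical-systems material, not taken from this work), the supercritical Hopf normal form commonly used in such analyses is:

```latex
% Supercritical Hopf normal form; \mu is the control parameter set by the
% mechanical load, \omega_0 the characteristic frequency.
\[
  \dot{z} = (\mu + i\omega_0)\,z - |z|^{2}z, \qquad z \in \mathbb{C}.
\]
% For \mu < 0 the oscillator is quiescent (z = 0 is stable); for \mu > 0 it
% oscillates spontaneously with amplitude |z| = \sqrt{\mu}. Poised near
% \mu = 0 and driven at \omega \approx \omega_0, its response grows as
% F^{1/3}, giving the high sensitivity and sharp tuning described above.
```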

    Research and technology, 1992

    Selected research and technology activities at Ames Research Center, including the Moffett Field site and the Dryden Flight Research Facility, are summarized. These activities exemplify the Center's varied and productive research efforts for 1992.

    The Development of a Performance Assessment Methodology for Activity Based Intelligence: A Study of Spatial, Temporal, and Multimodal Considerations

    Activity Based Intelligence (ABI) is the derivation of information from a series of individual actions, interactions, and transactions recorded over a period of time, usually in motion imagery and/or full-motion video. Due to the growth of unmanned aerial systems technology and the preponderance of mobile video devices, interest has grown in analyzing people's actions and interactions in these video streams. Currently, only visually subjective quality metrics exist for determining the utility of these data in detecting specific activities. One common misconception is that ABI boils down to a simple resolution problem: more pixels and higher frame rates are better. Increasing resolution simply provides more data, not necessarily more information. As part of this research, an experiment was designed and performed to address this assumption. Nine sensors spanning four modalities were placed on top of the Chester F. Carlson Center for Imaging Science in order to record a group of participants executing a scripted set of activities. The modalities include data from the visible, long-wave infrared, multispectral, and polarimetric regimes. The scripted activities covered a wide range of spatial and temporal interactions (e.g. walking, jogging, and a group sporting event). As with any large data acquisition, only a subset of the data was analyzed for this research: specifically, a walking object-exchange scenario and a simulated RPG event. Several preparation steps were required to analyze these data: the data were spatially and temporally registered; the individual modalities were fused; a tracking algorithm was implemented; and an activity detection algorithm was applied. To develop a performance assessment for these activities, a series of spatial and temporal degradations was performed. Upon completion of this work, the ground-truth ABI dataset will be released to the community for further analysis.
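
    The abstract does not describe how the spatial and temporal degradations were implemented. The sketch below shows one simple way such degradations could be applied to a source video before re-running detection, assuming an OpenCV pipeline; file names and parameter values are illustrative only.

```python
# Illustrative spatial/temporal degradation of a video for performance assessment:
# spatial down-sampling (fewer pixels on target) and frame dropping (lower
# effective frame rate), written out as a new video file.
import cv2

def degrade(in_path, out_path, spatial_scale=0.5, keep_every_nth=2):
    """Write a spatially and temporally degraded copy of a video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS) / keep_every_nth
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) * spatial_scale)
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) * spatial_scale)
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % keep_every_nth == 0:              # temporal degradation
            out.write(cv2.resize(frame, (w, h)))   # spatial degradation
        idx += 1
    cap.release()
    out.release()

# e.g. degrade("walk_exchange.mp4", "walk_exchange_degraded.mp4")  # hypothetical files
```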

    Grasp-sensitive surfaces

    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* of sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementation of grasp-sensitive surfaces is still hard, researchers often are not aware of related work from other disciplines, and intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing.
TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
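
    The abstract does not give the TDR math, but the basic relation behind locating a touch along a wire is simple: a touch changes the local impedance, and the round-trip delay of the resulting reflection encodes its position. A minimal sketch, with an assumed velocity factor and delay chosen purely for illustration:

```python
# Illustrative TDR touch localization: convert a reflection's round-trip delay
# into a distance along the wire. Parameter values are assumptions, not taken
# from the dissertation.
C = 299_792_458.0  # speed of light in vacuum, m/s

def touch_position(round_trip_delay_s, velocity_factor=0.7):
    """Distance from the cable end to the reflecting (touched) point, in metres."""
    v = velocity_factor * C            # signal propagation speed in the cable
    return v * round_trip_delay_s / 2  # halve for the one-way distance

# e.g. a reflection arriving 4.8 ns after the pulse lies roughly 0.5 m along the wire
print(f"{touch_position(4.8e-9):.2f} m")
```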