
    An optically actuated surface scanning probe

    We demonstrate the use of an extended, optically trapped probe that is capable of imaging surface topography with nanometre precision whilst applying ultra-low, femtonewton-scale forces. This degree of precision and sensitivity is achieved through three distinct strategies. First, the probe itself is shaped so as to soften the trap along the sensing axis and stiffen it in the transverse directions. Next, these characteristics are enhanced by selectively position-clamping independent motions of the probe. Finally, force clamping is used to refine the surface contact response. Detailed analyses are presented for each of these mechanisms. To test our sensor, we scan it laterally over a calibration sample consisting of a series of graduated steps and demonstrate a height resolution of ∼11 nm. Using the equipartition theorem, we estimate that an average force of only ∼140 fN is exerted on the sample during the scan, making this technique ideal for the investigation of delicate biological samples.
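The equipartition estimate mentioned in the abstract can be sketched as follows: thermal position fluctuations of the trapped probe give the trap stiffness via (1/2)k⟨x²⟩ = (1/2)k_B·T, and the contact force follows from Hooke's law. The fluctuation amplitude and indentation below are illustrative numbers, not values from the paper.

```python
import numpy as np

# Boltzmann constant (J/K) and an assumed bath temperature
KB = 1.380649e-23
T = 300.0

def trap_stiffness(positions):
    """Estimate trap stiffness k (N/m) from thermal position
    fluctuations via the equipartition theorem:
    (1/2) k <x^2> = (1/2) kB T  =>  k = kB T / var(x)."""
    return KB * T / np.var(positions)

def mean_contact_force(k, displacement):
    """Hookean force (N) for a probe displaced from the trap centre."""
    return k * displacement

# Illustrative numbers (not from the paper): a soft axial trap with
# ~50 nm rms fluctuations and a ~10 nm mean displacement at contact.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 50e-9, 100_000)   # simulated thermal positions (m)
k = trap_stiffness(x)                  # ~1.7e-6 N/m for 50 nm rms
f = mean_contact_force(k, 10e-9)       # force in the femtonewton range
print(f"k = {k:.2e} N/m, F = {f / 1e-15:.1f} fN")
```

A softer trap (larger ⟨x²⟩) directly lowers the force applied to the sample, which is why the probe is shaped to soften the trap along the sensing axis.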

    Modeless Pointing with Low-Precision Wrist Movements

    Wrist movements are physically constrained and take place within a small range around the hand's rest position. We explore pointing techniques that deal with the physical constraints of the wrist and extend the range of its input without using explicit mode-switching mechanisms. Taking into account the elastic properties of human joints, we investigate designs based on rate control. In addition to pure rate control, we examine a hybrid technique that combines position and rate control, and a technique that applies non-uniform position-control mappings. Our experimental results suggest that rate control is particularly effective under low-precision input and long target distances. Hybrid and non-uniform position-control mappings, on the other hand, result in higher precision and become more effective as input precision increases.
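One possible shape of such a hybrid mapping is sketched below: small wrist deflections move the cursor under position control, while deflections past a threshold drive the cursor at a proportional rate. All gains and thresholds here are illustrative assumptions, not the parameters studied in the paper.

```python
def hybrid_transfer(deflection, dt, cursor, pos_gain=2.0,
                    rate_threshold=0.6, rate_gain=8.0):
    """One possible hybrid position/rate mapping (parameters are
    illustrative). `deflection` is the wrist angle normalised to
    [-1, 1] around the rest position; `cursor` is the current cursor
    position. Small deflections map to an absolute cursor position;
    past the threshold, the residual deflection sets a velocity."""
    if abs(deflection) <= rate_threshold:
        # Position control: cursor follows the wrist directly.
        return pos_gain * deflection
    # Rate control: deflection beyond the threshold sets a velocity.
    overshoot = deflection - rate_threshold * (1 if deflection > 0 else -1)
    return cursor + rate_gain * overshoot * dt

cursor = 0.0
for d in [0.2, 0.4, 0.9, 0.9, 0.9]:   # wrist held past the threshold
    cursor = hybrid_transfer(d, dt=0.1, cursor=cursor)
print(f"{cursor:.2f}")  # 1.52: the cursor keeps drifting under rate control
```

The appeal of this design is that no explicit mode switch is needed: the wrist angle itself selects the regime.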

    Mobile Pointing Task in the Physical World: Balancing Focus and Performance while Disambiguating

    We address the problem of mobile distal selection of physical objects when pointing at them in augmented environments. We focus on the disambiguation step needed when several objects are selected by a rough pointing gesture. A usual disambiguation technique forces users to switch their focus from the physical world to a list displayed on a handheld device's screen. In this paper, we explore the balance between users' change of focus and performance. We present two novel interaction techniques that allow users to maintain their focus in the physical world. Both use a cycling mechanism, performed with a wrist-rolling gesture for P2Roll or a finger-sliding gesture for P2Slide. A user experiment showed that keeping users' focus in the physical world outperforms techniques that require switching focus to a digital representation distant from the physical objects when disambiguating up to 8 objects.
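The shared cycling mechanism behind both techniques can be reduced to stepping through the set of objects captured by the rough gesture. How wrist-roll angle or slide distance is quantised into steps is an implementation detail the abstract does not specify; the mapping below is a placeholder.

```python
def cycle(candidates, steps):
    """Cycle the highlighted object among those captured by a rough
    pointing gesture. Each increment (a wrist-roll notch for P2Roll,
    a slide notch for P2Slide) advances the selection by one; the
    gesture-to-step quantisation here is a placeholder assumption."""
    return candidates[steps % len(candidates)]

# Hypothetical objects captured by one rough pointing gesture.
objs = ["lamp", "speaker", "fan", "printer"]
print(cycle(objs, 3))  # printer
```

Because highlighting happens on the physical objects themselves (e.g. via projection or AR overlay), the user never has to look away to a list on the handheld screen.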

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With steadily increasing display resolution, more accurate tracking, and falling prices, virtual reality (VR) systems are on the verge of establishing themselves in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction. Moreover, the limited functionality of existing software forces users to fall back on conventional PC- or touch-based systems. Collaborating with other users at the same location poses challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is further affected by latency and connection losses. Finally, users have different requirements for the visualisation of content within the virtual worlds, e.g. size, orientation, colour, or contrast. Strictly replicating real environments in VR wastes potential and will not make it possible to account for users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and functionality of existing applications in VR. Virtual stand-ins for physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world.
Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the user's real surroundings, the relevance of full-body avatar visualisation for collision avoidance and remote collaboration is demonstrated. In addition, personalised spatial and temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from these personal adaptations are compensated for by avatar redirection methods. Finally, some of the methods and findings are integrated into an example application to illustrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
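Calibrating co-located tracking systems means finding the rigid transform that maps one tracker's coordinate frame onto the other's. Whether the thesis uses this exact estimator is an assumption; the Kabsch/SVD method below is the standard least-squares solution when the same physical points have been measured in both systems.

```python
import numpy as np

def align_tracking_spaces(pts_a, pts_b):
    """Rigid alignment (rotation R, translation t) mapping points
    measured in tracking system A onto the same physical points
    measured in system B, via the Kabsch/SVD method."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known rotation about z plus an offset.
rng = np.random.default_rng(1)
a = rng.normal(size=(10, 3))                   # points in frame A
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
b = a @ R_true.T + np.array([0.5, -0.2, 1.0])  # same points in frame B
R, t = align_tracking_spaces(a, b)
print(np.allclose(a @ R.T + t, b))  # True
```

In practice the corresponding points would come from a tracked object visible to both systems, and noise in the measurements averages out over the least-squares fit.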

    An expandable walking in place platform

    Controlling locomotion in 3D virtual environments should be an ordinary task from the user's point of view. Several navigation metaphors have been explored to control locomotion naturally, such as real walking, the use of simulators, and walking in place. These have shown that the more natural the approach used to control locomotion, the more immersed the user feels inside the virtual environment. To overcome the high cost and complexity of most approaches in the field, we introduce a walking-in-place platform that can identify the orientation, displacement speed, and lateral steps of a person mimicking a walking pattern. This information is detected without additional sensors attached to the user's body. Our device is simple to mount, inexpensive, and allows almost natural use with lazy steps, leaving the hands free for other tasks. We also explore and test a passive tactile surface for safe use of our platform. The platform was conceived as an interface for controlling navigation in virtual environments and augmented reality. Extending our device and techniques, we elaborated a redirected walking metaphor for use together with a cave automatic virtual environment (CAVE). Another metaphor allowed our technique to be used for navigating point clouds for data tagging. We tested our technique with two navigation modes: human walking and vehicle driving. In the human walking mode, the virtual orientation inhibits displacement when the user makes sharp turns. In vehicle mode, virtual orientation and displacement occur together, similar to driving a vehicle. We ran tests with 52 subjects to determine navigation-mode preferences and the ability to use our device, and identified a preference for the vehicle driving mode. Statistical analysis revealed that users easily learned to use our technique for navigation. Users walked faster in vehicle mode, but human mode allowed more precise walking in the virtual test environment. The tactile platform proved to allow safe use of our device, being an effective and simple solution for the field. More than 200 people tested our device: at UFRGS Portas Abertas in 2013 and 2014, an event that presents the university's academic work to the local community, and during 3DUI 2014, where our work was used together with a tool for point cloud manipulation. The main contributions of our work are a new approach to walking-in-place detection that allows simple use with natural movements, is expandable to large areas (such as public spaces), and efficiently supplies orientation and speed for use in virtual environments or augmented reality with inexpensive hardware.
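The difference between the two navigation modes can be sketched as a small kinematic update: in human mode, displacement is inhibited as the turn sharpens, so the user can rotate in place; in vehicle mode, turning and displacement happen together. The constants below are assumptions for illustration, not values measured on the platform.

```python
import math

def step(x, y, heading, speed, turn_rate, dt, mode):
    """Illustrative per-frame update for the two navigation modes
    (constants are assumptions, not taken from the platform).
    'human': sharp turns inhibit displacement, allowing in-place
    rotation. 'vehicle': turning and moving occur simultaneously,
    as when steering a car."""
    if mode == "human":
        # Scale speed down to zero as the turn sharpens (90 deg/s cap).
        sharpness = min(abs(turn_rate) / math.radians(90), 1.0)
        speed *= (1.0 - sharpness)
    heading += turn_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# A sharp turn in human mode rotates in place (no displacement)...
x, y, h = step(0.0, 0.0, 0.0, 1.0, math.radians(90), 0.1, "human")
# ...while vehicle mode moves and turns simultaneously.
x2, y2, h2 = step(0.0, 0.0, 0.0, 1.0, math.radians(90), 0.1, "vehicle")
print(x, y, round(x2, 3))
```

This matches the reported trade-off: vehicle mode covers ground faster, while human mode makes precise positioning easier because turning does not drag the user forward.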

    Agenator: An open source computer-controlled dry aging system for beef

    Dry aging of beef is a process in which beef is exposed to a controlled environment with the ultimate goal of drying the beef to improve its quality and value. Comprehensive investigations into the effects of various environmental conditions on dry aging are crucial for understanding and optimizing the process, but the lack of affordable equipment focused on data collection makes this difficult. The Agenator was thus developed as an open source system with a suite of features for investigating dry aging: measuring and recording relative humidity, temperature, mass, air velocity, and fan rotational speed; control of relative humidity to within 1% and fan speed to within 50 rpm; robust signal integrity preservation and data recovery features; a modular design for easy addition and removal of individual chamber units; and non-permanent fixtures that allow easy adaptation of the system for other applications, such as investigating dehydration of food products. The system comes with user-friendly computer software for interfacing with it and creating sophisticated environmental control programs. The Agenator is available to the public at https://osf.io/87nck/
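The control tolerances quoted (1% relative humidity, 50 rpm fan speed) could be met with very simple loops; the dead-band and proportional controllers below are sketches under that assumption, not the Agenator's actual control laws.

```python
def humidity_command(rh_measured, rh_target, band=1.0):
    """Dead-band humidity command consistent with the 1% RH control
    figure in the abstract (the actual Agenator control law may
    differ). Returns the actuator to engage, or 'hold'."""
    if rh_measured > rh_target + band:
        return "dehumidify"
    if rh_measured < rh_target - band:
        return "humidify"
    return "hold"

def fan_pwm_update(pwm, rpm_measured, rpm_target, kp=0.002):
    """Proportional correction of the fan PWM duty cycle toward a
    target rpm, within the ~50 rpm tolerance mentioned; the gain is
    illustrative. Duty cycle is clamped to [0, 1]."""
    return min(max(pwm + kp * (rpm_target - rpm_measured), 0.0), 1.0)

print(humidity_command(62.3, 60.0))          # dehumidify
print(fan_pwm_update(0.5, 1000, 1100))       # duty rises toward target
```

Keeping the humidity loop as a dead-band controller avoids rapid actuator cycling, which matters when the dehumidifier is a compressor-based device.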

    Robot-Assisted Drilling on Curved Surfaces with Haptic Guidance under Adaptive Admittance Control

    This work was supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under contract number EEEAG-117E645. Drilling a hole on a curved surface at a desired angle is prone to failure when done manually, due to the difficulties of drill alignment and the inherent instability of the task, potentially causing injury and fatigue to workers. On the other hand, it can be impractical to fully automate such a task in real manufacturing environments, because the parts arriving at an assembly line can have various complex shapes in which drill point locations are not easily accessible, making automated path planning difficult. In this work, an adaptive admittance controller with 6 degrees of freedom is developed and deployed on a KUKA LBR iiwa 7 cobot so that the operator can comfortably manipulate a drill mounted on the robot with one hand and open holes in a curved surface with haptic guidance from the cobot and visual guidance provided through an AR interface. Real-time adaptation of the admittance damping provides more transparency when driving the robot in free space while ensuring stability during drilling. After the user brings the drill sufficiently close to the target and roughly aligns it with the desired drilling angle, the haptic guidance module first fine-tunes the alignment and then constrains the user's movement to the drilling axis only, after which the operator simply pushes the drill into the workpiece with minimal effort. Two sets of experiments were conducted to investigate the potential benefits of the haptic guidance module quantitatively (Experiment I) and the practical value of the proposed pHRI system for real manufacturing settings based on the subjective opinions of the participants (Experiment II). The results of Experiment I, conducted with 3 naive participants, show that haptic guidance improves task completion time by 26% while decreasing human effort by 16% and muscle activation levels by 27% compared to the no-guidance condition. The results of Experiment II, conducted with 3 experienced industrial workers, show that the proposed system is perceived to be easy to use, safe, and helpful in carrying out the drilling task.
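An admittance controller turns measured interaction forces into commanded motion through a virtual mass-damper model, M·dv/dt + B·v = f_ext. The one-dimensional sketch below illustrates the idea, including a damping schedule in the spirit of the paper's real-time adaptation; the numerical values and the particular schedule are assumptions, not the gains used on the KUKA iiwa.

```python
def admittance_step(v, f_ext, dt, m=6.0, b=20.0):
    """One explicit-Euler step of a 1-DoF admittance model
    M*dv/dt + B*v = f_ext: the measured hand force f_ext (N) is
    converted into a commanded tool velocity v (m/s). The virtual
    mass m and damping b are placeholder values."""
    return v + dt * (f_ext - b * v) / m

def adapted_damping(speed, b_min=5.0, b_max=60.0, v_ref=0.2):
    """Simple damping schedule: fast free-space motion gets low
    damping (transparency), slow careful motion near the workpiece
    gets high damping (stability). The paper adapts damping in real
    time; this particular schedule is an assumption."""
    w = min(abs(speed) / v_ref, 1.0)
    return b_max - w * (b_max - b_min)

# A 10 N push from rest produces a small commanded velocity...
v = admittance_step(0.0, 10.0, dt=0.01)
# ...and damping drops as the operator moves faster in free space.
print(round(v, 4), adapted_damping(0.0), adapted_damping(0.5))
```

High damping near contact is what keeps the coupled human-robot system stable when the drill meets the stiff workpiece, at the cost of the operator feeling more resistance.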