Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI
This paper introduces Sketched Reality, an approach that combines AR
sketching and actuated tangible user interfaces (TUI) for bidirectional
sketching interaction. Bi-directional sketching enables virtual sketches and
physical objects to "affect" each other through physical actuation and digital
computation. In existing AR sketching, the relationship between virtual and
physical worlds is only one-directional -- while physical interaction can
affect virtual sketches, virtual sketches have no return effect on the physical
objects or environment. In contrast, bi-directional sketching interaction
allows the seamless coupling between sketches and actuated TUIs. In this paper,
we employ tabletop-size small robots (Sony Toio) and an iPad-based AR sketching
tool to demonstrate the concept. In our system, virtual sketches drawn and
simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move,
actuate, collide, and constrain physical Toio robots, as if virtual sketches
and the physical objects exist in the same space through seamless coupling
between AR and robot motion. This paper contributes a set of novel interactions
and a design space of bi-directional AR sketching. We demonstrate a series of
potential applications, such as tangible physics education, explorable
mechanism, tangible gaming for children, and in-situ robot programming via
sketching.

Comment: UIST 202
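The coupling the abstract describes can be pictured with a minimal sketch (hypothetical, not the paper's code): a virtual spring anchored at a sketched point is simulated digitally, and each simulation step yields a target position that a physical robot such as a Toio could be driven toward. The spring constant, damping value, and time step below are illustrative assumptions.

```python
def step_spring(pos, vel, anchor, k=4.0, damping=0.8, dt=0.02):
    """One Euler step of a damped 2D spring; returns updated (pos, vel).

    The returned position would serve as the robot's motion target,
    closing the loop from virtual sketch to physical actuation.
    """
    ax = -k * (pos[0] - anchor[0]) - damping * vel[0]
    ay = -k * (pos[1] - anchor[1]) - damping * vel[1]
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# A robot displaced to (1, 0) is pulled toward a spring anchored at the origin.
pos, vel = (1.0, 0.0), (0.0, 0.0)
for _ in range(1000):
    pos, vel = step_spring(pos, vel, anchor=(0.0, 0.0))
# After many damped oscillations the target settles near the anchor.
print(abs(pos[0]) < 0.01 and abs(pos[1]) < 0.01)
```

In the real system the update would run on the iPad and the resulting target would be streamed to the robot's motor controller each frame.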
Interactive form creation: exploring the creation and manipulation of free form through the use of interactive multiple input interface
Most current CAD systems support only the two most common input devices: a mouse and a keyboard, which imposes a limit on the degree of interaction that a user can have with the system. However, it is not uncommon for users to work together on the same computer during a collaborative task. Besides that, people tend to use both hands to manipulate 3D objects: one hand orients the object while the other performs some operation on it. The same approach can be applied to computer modelling in the conceptual phase of the design process. A designer can rotate and position an object with one hand, and manipulate its shape (deform it) with the other. Accordingly, the 3D object can be easily and intuitively changed through interactive manipulation with both hands. The research investigates the manipulation and creation of free-form geometries through interactive interfaces with multiple input devices. First the creation of the 3D model is discussed and several different types of models are illustrated. Furthermore, different tools that allow the user to control the 3D model interactively are presented. Three experiments were conducted using different interactive interfaces, in which two bi-manual techniques were compared with the conventional one-handed approach. Finally it is demonstrated that the use of new and multiple input devices can offer many opportunities for form creation. The problem is that few, if any, systems make it easy for the user or the programmer to use new input devices
The simultaneity of complementary conditions: re-integrating and balancing analogue and digital matter(s) in basic architectural education
The currently globally established, general digital procedures in basic architectural education, producing well-behaved, seemingly attractive, up-to-date projects, spaces and first general research on all scale levels, apparently present a certain growing number of deficiencies. These limitations surface only gradually, as the state of things is on the whole generally deemed satisfactory. Some skills, such as "old-fashioned" analogue drawing, are gradually eased out of undergraduate curricula and the overall modus operandi, due to their apparent slow inefficiencies compared with various digital media's rapid readiness, malleability and unproblematic, quotidian availability. While this state of things is understandable, it nevertheless presents a definite challenge: the challenge of questioning how the assessment of conditions, and especially their representation, is conducted prior to contextual architectural action(s) of any kind
Physical sketching tools and techniques for customized sensate surfaces
Sensate surfaces are a promising avenue for enhancing human interaction with digital systems due to their inherent intuitiveness and natural user interface. Recent technological advancements have enabled sensate surfaces to surpass the constraints of conventional touchscreens by integrating them into everyday objects, creating interactive interfaces that can detect various inputs such as touch, pressure, and gestures. This allows for more natural and intuitive control of digital systems. However, prototyping interactive surfaces that are customized to users' requirements using conventional techniques remains technically challenging due to limitations in accommodating complex geometric shapes and varying sizes. Furthermore, it is crucial to consider the context in which customized surfaces are utilized, as relocating them to fabrication labs may lead to the loss of their original design context. Additionally, prototyping high-resolution sensate surfaces presents challenges due to the complex signal processing requirements involved. This thesis investigates the design and fabrication of customized sensate surfaces that meet the diverse requirements of different users and contexts. The research aims to develop novel tools and techniques that overcome the technical limitations of current methods and enable the creation of sensate surfaces that enhance human interaction with digital systems
Physically Interacting With Four Dimensions
Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009

People have long been fascinated with understanding the fourth
dimension. While making pictures of 4D objects by projecting them to 3D can help reveal basic geometric features, 3D graphics images by themselves are of limited value. For example, just as 2D shadows of 3D curves may have lines crossing one another in the shadow, 3D graphics projections of smooth 4D topological surfaces can be interrupted where one surface intersects another.
The research presented here creates physically realistic models for
simple interactions with objects and materials in a virtual 4D world.
We provide methods for the construction, multimodal exploration, and interactive manipulation of a wide variety of 4D objects. One basic achievement of this research is to exploit the free motion of a
computer-based haptic probe to support a continuous motion that
follows the "local continuity" of a 4D surface, allowing collision-free exploration in the 3D projection. In 3D, this interactive probe follows the full local continuity of the surface as though we were in fact physically touching the actual static 4D object.
Our next contribution is to support dynamic 4D objects that can move, deform, and collide with other objects as well as with themselves. By combining graphics, haptics, and collision-sensing physical modeling, we can thus enhance our 4D visualization experience. Since we cannot actually place interaction devices in 4D, we develop fluid methods for interacting with a 4D object in its 3D shadow image using adapted reduced-dimension 3D tools for manipulating objects embedded in 4D. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D interactive or haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the real-world experience accessible to human beings
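The "3D shadow image" the abstract works with comes from projecting 4D geometry down one dimension, analogous to casting a 2D shadow of a 3D curve. The sketch below is illustrative, not the thesis code: a perspective projection of a 4D point onto the w = 0 hyperplane from an assumed eye position on the w-axis.

```python
def project_4d_to_3d(p, eye_w=3.0):
    """Perspective-project (x, y, z, w) onto w = 0 from an eye at w = eye_w.

    Points nearer the eye in the fourth dimension are scaled up, which is
    how 4D depth becomes visible in the 3D shadow.
    """
    x, y, z, w = p
    scale = eye_w / (eye_w - w)
    return (x * scale, y * scale, z * scale)

# Two tesseract vertices that differ only in w land at different 3D sizes.
print(project_4d_to_3d((1.0, 1.0, 1.0, 1.0)))   # (1.5, 1.5, 1.5)
print(project_4d_to_3d((1.0, 1.0, 1.0, -1.0)))  # (0.75, 0.75, 0.75)
```

Self-intersections in such projections, like crossing lines in a 2D shadow, are exactly the artifacts the haptic probe described above lets the user disambiguate by touch.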