1,335 research outputs found

    A Survey of Sketch Based Modeling Systems


    Physical sketching tools and techniques for customized sensate surfaces

    Sensate surfaces are a promising avenue for enhancing human interaction with digital systems due to their inherent intuitiveness and natural user interface. Recent technological advancements have enabled sensate surfaces to surpass the constraints of conventional touchscreens by integrating them into everyday objects, creating interactive interfaces that can detect various inputs such as touch, pressure, and gestures. This allows for more natural and intuitive control of digital systems. However, prototyping interactive surfaces that are customized to users' requirements using conventional techniques remains technically challenging due to limitations in accommodating complex geometric shapes and varying sizes. Furthermore, it is crucial to consider the context in which customized surfaces are utilized, as relocating them to fabrication labs may lead to the loss of their original design context. Additionally, prototyping high-resolution sensate surfaces presents challenges due to the complex signal processing requirements involved. This thesis investigates the design and fabrication of customized sensate surfaces that meet the diverse requirements of different users and contexts. The research aims to develop novel tools and techniques that overcome the technical limitations of current methods and enable the creation of sensate surfaces that enhance human interaction with digital systems.
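    As a purely illustrative aside, the signal-processing step for a high-resolution sensate surface typically begins by scanning a grid of capacitive cells and thresholding the readings. The sketch below is an assumption, not taken from the thesis; the grid size, threshold, and read_cell() stub are placeholders for real hardware.

```python
# Minimal sketch (assumed, not from the thesis): scan a row/column capacitive
# grid and locate touches by thresholding each cell's normalized reading.
import random

ROWS, COLS = 8, 8
TOUCH_THRESHOLD = 0.5  # normalized capacitance change that counts as a touch

def read_cell(row: int, col: int) -> float:
    """Stand-in for an ADC read of one row/column intersection (0..1)."""
    return random.random() * 0.3 + (0.7 if (row, col) == (3, 5) else 0.0)

def scan_frame() -> list:
    """Read every intersection once; real hardware would multiplex the rows."""
    return [[read_cell(r, c) for c in range(COLS)] for r in range(ROWS)]

def detect_touches(frame) -> list:
    """Return grid coordinates whose reading exceeds the touch threshold."""
    return [(r, c) for r in range(ROWS) for c in range(COLS)
            if frame[r][c] > TOUCH_THRESHOLD]

if __name__ == "__main__":
    print(detect_touches(scan_frame()))  # e.g. [(3, 5)]
```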

    An approach on 3D digital design: free hand form generation

    To sketch is to translate a concept from the mind into its first representation. Conventionally, a three-dimensional idea is sketched on paper or captured by building a physical model, and only then translated into digital form. The thesis hypothesizes that architects employ tangible interactions to assist design-thinking tasks in the early design phases. This thesis suggests another approach to 3D digital design, as a complementary resource for expressing a concept, hence enriching the creative process. A proposal for a new CAD paradigm based on freehand form generation is detailed here, as well as the development and testing completed during the course of the research. This work describes the required characteristics of this kind of system and discusses the possibilities afforded by this new medium of expression, pointing out its strengths and current limitations. The fundamental guidelines for this research were: (1) non-intrusiveness of the input and visualization devices, (2) wireless freehand drawing in 3D space, (3) an instinctive interface, and (4) exporting capabilities to other CAD systems. In conclusion, this work argues that 3D design based on freehand form generation allows for an enhancement of the traditional creative process through spontaneous and immediate translation of a concept into 3D digital form.
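    A minimal sketch of the export idea mentioned in guideline (4) above, assuming a generic wireless tracker rather than the thesis's actual pipeline: sample hand positions as a 3D stroke and write it to an OBJ polyline that other CAD systems can import. The get_hand_position() stub and the file name are illustrative placeholders.

```python
# Minimal sketch (an assumption, not the thesis implementation): turn a stream
# of tracked hand positions into an OBJ polyline importable by CAD tools.
import math

def get_hand_position(t: float):
    """Illustrative stand-in for a wireless hand-tracker sample at time t."""
    return (math.cos(t), math.sin(t), 0.1 * t)  # a simple helix

def export_stroke_obj(points, path="stroke.obj"):
    """Write the sampled stroke as OBJ vertices joined by line segments."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")
        for i in range(1, len(points)):
            f.write(f"l {i} {i + 1}\n")  # OBJ indices are 1-based

if __name__ == "__main__":
    stroke = [get_hand_position(t * 0.1) for t in range(100)]
    export_stroke_obj(stroke)
```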

    Toward semantic model generation from sketch and multi-touch interactions

    Designers usually start their design process by exploring and evolving their ideas rapidly through sketching since this helps them to make numerous attempts at creating, practicing, simulating, and representing ideas. Creativity inherent in solving ill-defined problems (Eastman, 1969) often emerges when designers explore potential solutions while sketching in the design process (Schön, 1992). When using computer programs such as CAD or Building Information Modeling (BIM) tools, designers often preplan the tasks prior to executing commands instead of engaging in the process of designing. Researchers argue that these programs force designers to focus on how to use a tool (i.e. how to execute a series of commands) rather than how to explore a design, and thus hinder creativity in the early stages of the design process (Goel, 1995; Dorta, 2007). Since recent design and documentation work has been computer-generated using BIM software, transitions between ideas in sketches and those in digital CAD systems have become necessary. By employing sketch interactions, we argue that a computer system can provide a rapid, flexible, and iterative method to create 3D models with sufficient data for facilitating smooth transitions between designers’ early sketches and BIM programs. This dissertation begins by describing modern design workflows and discussing the necessary data to be exchanged in the early stage of design. It then briefly introduces modern cognitive theories, including embodiment (Varela, Rosch, & Thompson, 1992), situated action (Suchman, 1986), and distributed cognition (Hutchins, 1995). It continues by identifying problems in current CAD programs used in the early stage of the design process, using these theories as lenses. After reviewing modern attempts, including sketch tools and design automation tools, we describe the design and implementation of a sketch and multi-touch program, SolidSketch, to facilitate and augment our abilities to work on ill-defined problems in the early stage of design. SolidSketch is a parametric modeling program that enables users to construct 3D parametric models rapidly through sketch and multi-touch interactions. It combines the benefits of traditional design tools, such as physical models and pencil sketches (i.e. rapid, low-cost, and flexible methods), with the computational power offered by digital modeling tools, such as CAD. To close the gap between modern BIM and traditional sketch tools, the models created with SolidSketch can be read by other BIM programs. We then evaluate the program through comparisons with commercial CAD programs and other sketch programs. We also report a case study in which participants used the system for their design explorations. Finally, we conclude with the potential impacts of this new technology and the next steps for ultimately bringing greater computational power to the early stages of design.
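    A minimal sketch of the parametric-modeling idea, under assumptions that are not SolidSketch's actual data model: a sketched 2D footprint plus an editable height parameter from which the solid is regenerated on demand, i.e. the kind of structured, re-evaluable data a BIM program could consume rather than a frozen mesh.

```python
# Minimal sketch (assumed, not SolidSketch's data model): a sketched footprint
# kept as a parametric extrusion whose geometry is regenerated from parameters.
from dataclasses import dataclass

@dataclass
class Extrusion:
    footprint: list   # sketched 2D outline as (x, y) pairs
    height: float     # editable parameter

    def solid_vertices(self) -> list:
        """Re-evaluate the solid from its parameters: bottom ring, then top ring."""
        bottom = [(x, y, 0.0) for x, y in self.footprint]
        top = [(x, y, self.height) for x, y in self.footprint]
        return bottom + top

if __name__ == "__main__":
    wall = Extrusion(footprint=[(0, 0), (4, 0), (4, 0.2), (0, 0.2)], height=3.0)
    wall.height = 2.7  # editing the parameter regenerates the geometry
    print(wall.solid_vertices())
```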

    A new modeling interface for the pen-input displays

    Sketch interactions based on interpreting multiple pen markings into a 3D shape are easy to design but not to use. First, it is difficult for the user to memorize a complete set of pen markings for a certain 3D shape. Second, the system must wait for the user to complete the sequence of pen markings, which often causes mode errors. To address these problems, we present a novel interaction framework suitable for interpreting single-stroke markings on pen-input displays; within this framework, 3D shape modeling operations are designed to create appropriate communication protocols.
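    A minimal sketch of single-stroke interpretation, under assumptions not stated in the abstract: a stroke whose endpoints nearly meet is read as a profile to extrude, while an open stroke is read as a cut, so no multi-marking sequence (and no pending mode) is required.

```python
# Minimal sketch (illustrative assumption, not the paper's protocol): map one
# pen stroke to a modeling operation based on whether it forms a closed loop.
import math

def stroke_length(stroke) -> float:
    return sum(math.dist(a, b) for a, b in zip(stroke, stroke[1:]))

def interpret_stroke(stroke) -> str:
    """Closed loops become extrusion profiles; open strokes become cut lines."""
    if len(stroke) < 3:
        return "ignore"
    closed = math.dist(stroke[0], stroke[-1]) < 0.1 * stroke_length(stroke)
    return "extrude_profile" if closed else "cut_along_line"

if __name__ == "__main__":
    square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.02, 0.02)]
    print(interpret_stroke(square))  # -> extrude_profile
```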

    Embodied Interactions for Spatial Design Ideation: Symbolic, Geometric, and Tangible Approaches

    Computer interfaces are evolving from mere aids for number crunching into active partners in creative processes such as art and design. This is, to a great extent, the result of the mass availability of new interaction technologies such as depth sensing, sensor integration in mobile devices, and increasing computational power. We are now witnessing the emergence of a maker culture that can elevate art and design beyond the purview of enterprises and professionals such as trained engineers and artists. Materializing this transformation is not trivial; everyone has ideas, but only a select few can bring them to reality. The challenge is the recognition and subsequent interpretation of human actions into design intent.

    Automotive gestures recognition based on capacitive sensing

    Integrated master's dissertation in Industrial Electronics and Computers Engineering. Driven by technological advancements, vehicles have steadily increased in sophistication, especially in the way drivers and passengers interact with them. For example, the driver-controlled systems of the BMW 7 Series contain over 700 functions. While this makes it easier to navigate streets, talk on the phone, and more, it may also lead to visual distraction, since paying attention to a task unrelated to driving makes the brain focus on that activity. According to studies, such distraction is the third leading cause of accidents, surpassed only by speeding and drunk driving. Driver distraction is stressed as the main concern by regulators, in particular the National Highway Traffic Safety Administration (NHTSA), which is developing recommended limits for the amount of time a driver needs to spend glancing away from the road to operate in-car features. Diverting attention from driving can be fatal; therefore, automakers have been challenged to design safer and more comfortable human-machine interfaces (HMIs) without forgoing the latest technological achievements. This dissertation aims to mitigate driver distraction by developing a gesture recognition system that offers the user a more comfortable and intuitive experience while driving; it outlines the algorithms developed to recognize gestures using capacitive sensing technology. This work has been financially supported by the Portugal Incentive System for Research and Technological Development, in the scope of the projects in co-promotion number 036265/2013 (HMIExcel 2013-2015), number 002814/2015 (iFACTORY 2015-2018), and number 002797/2015 (INNOVCAR 2015-2018).
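    A minimal sketch of capacitive gesture classification, assuming a simple row of electrodes; this is not the dissertation's algorithm. A swipe is inferred from how the peak-reading electrode moves across successive frames, and a stationary peak is read as a tap.

```python
# Minimal sketch (assumed, not the dissertation's algorithm): classify a swipe
# over a linear array of capacitive electrodes by tracking the peak electrode.
def peak_electrode(frame) -> int:
    return max(range(len(frame)), key=lambda i: frame[i])

def classify_swipe(frames, min_travel: int = 2) -> str:
    """frames: one capacitance reading per electrode, per time sample."""
    peaks = [peak_electrode(f) for f in frames if max(f) > 0.5]  # ignore no-touch frames
    if len(peaks) < 2:
        return "none"
    travel = peaks[-1] - peaks[0]
    if travel >= min_travel:
        return "swipe_right"
    if travel <= -min_travel:
        return "swipe_left"
    return "tap"

if __name__ == "__main__":
    frames = [[0.9, 0.2, 0.1, 0.1],
              [0.3, 0.8, 0.2, 0.1],
              [0.1, 0.3, 0.9, 0.2],
              [0.1, 0.1, 0.4, 0.9]]
    print(classify_swipe(frames))  # -> swipe_right
```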

    Interacting "Through the Display"

    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are readily at hand and have been proposed as interaction devices for external screens. However, only their input mechanism has been taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality); second, screens in the environment may be re-arranged (flexibility); and third, displays may be out of the user’s reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use on various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interaction at variable distances. In this work we propose a new interaction model called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen. MobileVue, on the other hand, enables users to interact with an external screen at a distance. In each of these prototypes we analyzed their effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that allows the detection of screens purely based on their visual content. Users aim their personal device’s camera at the target display, which then appears as live video in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user’s point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses, and thus large fields of view, of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction). And above all, users can interact with external displays at variable distances, regardless of their actual size, without any loss of accuracy.
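    A minimal sketch of the content-based screen detection idea behind Shoot & Copy, assuming OpenCV and placeholder image files rather than the original implementation: ORB features match the camera photo against the screen's current content, and a homography maps the photo centre to screen coordinates, which a gateway could then use to identify the targeted item.

```python
# Minimal sketch (assumed, not the Shoot & Copy implementation): locate the
# screen region a camera photo is aimed at by matching the photo against an
# image of the screen's current content. File names are placeholders.
import cv2
import numpy as np

def targeted_point(photo_path: str, screen_path: str):
    photo = cv2.imread(photo_path, cv2.IMREAD_GRAYSCALE)
    screen = cv2.imread(screen_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(1000)
    kp_p, des_p = orb.detectAndCompute(photo, None)
    kp_s, des_s = orb.detectAndCompute(screen, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_p, des_s)
    src = np.float32([kp_p[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # photo -> screen mapping
    h, w = photo.shape
    centre = np.float32([[[w / 2, h / 2]]])
    x, y = cv2.perspectiveTransform(centre, H)[0][0]
    return float(x), float(y)  # pixel position on the remote screen

if __name__ == "__main__":
    print(targeted_point("camera_photo.png", "screen_content.png"))
```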