Paper and pen: A 3D sketching system
This paper proposes a method that resembles a natural pen-and-paper interface for creating curve-based 3D sketches. The system is particularly useful for representing initial 3D design ideas without much effort. Users interact with the system using a pressure-sensitive pen tablet. The users' input strokes are projected onto a drawing plane, which serves as a sheet of paper that they can place anywhere in the 3D scene. The resulting 3D sketch is visualized in a way that emphasizes depth perception. Our evaluation involving several naive users suggests that the system enables a broad range of users to easily express their ideas in 3D. We further analyze the system with the help of an architect to demonstrate its expressive capabilities. © 2013 Springer-Verlag London
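The core mapping, projecting 2D tablet samples onto a user-placed 3D drawing plane, can be sketched as follows. The plane parameterization (an origin plus two orthonormal in-plane axes) and the function name are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def project_stroke(points_2d, plane_origin, u_axis, v_axis):
    """Map 2D tablet stroke samples (u, v) onto a 3D drawing plane.

    The plane is assumed to be given by an origin and two orthonormal
    in-plane axes; each sample becomes origin + u*u_axis + v*v_axis.
    """
    points_2d = np.asarray(points_2d, dtype=float)
    return (plane_origin
            + points_2d[:, :1] * u_axis   # scale u_axis by each u coordinate
            + points_2d[:, 1:2] * v_axis) # scale v_axis by each v coordinate

# A plane through (0, 0, 1) spanned by the world x and y axes.
origin = np.array([0.0, 0.0, 1.0])
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
stroke = [(0.0, 0.0), (1.0, 2.0)]
print(project_stroke(stroke, origin, u, v))
```

Because the plane can be repositioned freely in the scene, the same 2D input machinery yields curves anywhere in 3D.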
A novel shape descriptor based on salient keypoints detection for binary image matching and retrieval
We introduce a shape descriptor that extracts keypoints from binary images and automatically detects the salient ones among them. The proposed descriptor operates as follows: First, the contours of the image are detected and an image transformation is used to generate background information. Next, pixels of the transformed image that have specific characteristics in their local areas are used to extract keypoints. Afterwards, the most salient keypoints are automatically detected by filtering out redundant and sensitive ones. Finally, a feature vector is calculated for each keypoint using the distribution of contour points in its local area. The proposed descriptor is evaluated on public datasets of silhouette images, handwritten math expressions, hand-drawn diagram sketches, and noisy scanned logos. Experimental results show that the proposed descriptor compares favorably against state-of-the-art methods and remains reliable on challenging images such as fluctuating handwriting and noisy scans. Furthermore, we integrate our descriptor
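The final step, building a feature vector from the distribution of contour points around a keypoint, could look roughly like the minimal sketch below. The angular-histogram formulation, the radius, and the bin count are assumptions for illustration, not the descriptor's actual definition:

```python
import numpy as np

def local_contour_histogram(keypoint, contour_points, radius=20.0, n_bins=8):
    """Hypothetical keypoint feature: an angular histogram of the contour
    points falling inside a disc around the keypoint, normalized so the
    feature does not depend on the number of nearby points."""
    kp = np.asarray(keypoint, dtype=float)
    pts = np.asarray(contour_points, dtype=float)
    offsets = pts - kp
    dists = np.linalg.norm(offsets, axis=1)
    near = offsets[(dists > 0) & (dists <= radius)]  # keep local neighbors
    if len(near) == 0:
        return np.zeros(n_bins)
    angles = np.arctan2(near[:, 1], near[:, 0])      # in (-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / hist.sum()

# Four contour points, one per quadrant direction: mass spreads evenly.
contour = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
print(local_contour_histogram((0.0, 0.0), contour, radius=2.0, n_bins=4))
```

Comparing such histograms (e.g., with an L2 or chi-squared distance) would then give a keypoint-to-keypoint matching score.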
A ShortStraw-based algorithm for corner finding in sketch-based interfaces
We present IStraw, a corner-finding technique based on the ShortStraw algorithm. This new algorithm addresses deficiencies in ShortStraw while maintaining its simplicity and efficiency. We also develop an extension for ink strokes containing curves and arcs. We compare our algorithm against ShortStraw and two other state-of-the-art corner-finding approaches, MergeCF and Sezgin's scale-space algorithm. Based on an all-or-nothing accuracy metric, IStraw shows significant improvements over these algorithms for ink strokes with and without curves. (C) 2010 Elsevier Ltd. All rights reserved.
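ShortStraw's core idea, which IStraw refines, is simple enough to sketch: resample the stroke to equidistant points, measure each point's "straw" (the chord length across a fixed window), and report a corner wherever the straw dips below a threshold derived from the median straw length. The code below is a minimal illustration with assumed parameter values, not the published implementation:

```python
import math

def resample(points, n=64):
    """Resample a polyline stroke to roughly n equidistant points."""
    seg = [math.dist(points[i], points[i + 1]) for i in range(len(points) - 1)]
    step = sum(seg) / (n - 1)
    out, consumed, i, target = [points[0]], 0.0, 0, step
    while len(out) < n - 1 and i < len(seg):
        if consumed + seg[i] >= target:
            # interpolate the next resampled point inside segment i
            t = (target - consumed) / seg[i]
            p0, p1 = points[i], points[i + 1]
            out.append((p0[0] + t * (p1[0] - p0[0]),
                        p0[1] + t * (p1[1] - p0[1])))
            target += step
        else:
            consumed += seg[i]
            i += 1
    out.append(points[-1])
    return out

def shortstraw_corners(points, n=64, w=3, median_factor=0.95):
    """Corners = the minimum-straw point of each run of resampled points
    whose straw (chord across a +/-w window) falls below the threshold."""
    pts = resample(points, n)
    straws = {i: math.dist(pts[i - w], pts[i + w])
              for i in range(w, len(pts) - w)}
    ordered = sorted(straws.values())
    threshold = median_factor * ordered[len(ordered) // 2]
    corners, run = [], []
    for i in sorted(straws):  # group below-threshold indices into runs
        if straws[i] < threshold:
            run.append(i)
        elif run:
            corners.append(pts[min(run, key=lambda j: straws[j])])
            run = []
    if run:
        corners.append(pts[min(run, key=lambda j: straws[j])])
    return corners

# An L-shaped stroke: one corner expected near (10, 0).
stroke = [(float(x), 0.0) for x in range(11)] + \
         [(10.0, float(y)) for y in range(1, 11)]
print(shortstraw_corners(stroke))
```

A straight run has a straw of twice the window's arc length, so only genuine direction changes fall below the median-based threshold; IStraw's refinements target cases (curves, arcs) where that assumption breaks down.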
Automatic 3D human modeling: an initial stage towards 2-way inside interaction in mixed reality
3D human models play an important role in computer graphics applications from a wide range of domains, including education, entertainment, medical care simulation, and military training. In many situations, we want the 3D model to have a visual appearance that matches that of a specific living person and to be controllable by that person in a natural manner. Among other uses, this approach supports the notion of human surrogacy, where the virtual counterpart provides a remote presence for the human who controls the virtual character's behavior. In this dissertation, a human modeling pipeline is proposed for the problem of creating a 3D digital model of a real person. Our solution involves reshaping a 3D human template with a 2D contour of the participant and then mapping the captured texture of that person to the generated mesh. Our method produces an initial contour of a participant by extracting the user's image from a natural background. One particularly novel contribution in our approach is the manner in which we improve the initial vertex estimate. We do so through a variant of the ShortStraw corner-finding algorithm commonly used in sketch-based systems. Here, we develop improvements to ShortStraw, presenting an algorithm called IStraw, and then introduce adaptations of this improved version to create a corner-based contour segmentation algorithm. This algorithm provides significant improvements in contour matching over previously developed systems, and does so with low computational complexity. The system presented here advances the state of the art in the following aspects. First, the human modeling process is triggered automatically by matching the participant's pose with an initial pose through a tracking device and software. In our case, the pose capture and skeletal model are provided by the Microsoft Kinect and its associated SDK.
Second, color image, depth data, and human-tracking information from the Kinect and its SDK are used to automatically extract the contour of the participant and then generate a 3D human model with a skeleton. Third, using the pose and the skeletal model, we segment the contour into eight parts and then match the contour points on each segment to a corresponding anchor set associated with a 3D human template. Finally, we map the color image of the person to the 3D model as its texture map. The whole modeling process takes only a few seconds, and the resulting model looks like the real person. The geometry of the 3D model matches the contour of the real person, and the model has a photorealistic texture. Furthermore, the mesh of the human model is attached to the skeleton provided in the template, so the model can support programmed animations or be controlled by real people. This human control is commonly done through a literal mapping (motion capture) or a gesture-based puppetry system. Our ultimate goal is to create a mixed reality (MR) system in which the participants can manipulate virtual objects, and in which these virtual objects can affect the participant, e.g., by restricting their mobility. This MR system prototype design motivated the work of this dissertation, since a realistic 3D human model of the participant is an essential part of implementing this vision.
Integration of sketch-based ideation and 3D modeling with CAD systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis is concerned with how sketch-based systems can be improved to enhance the idea-generation process in the conceptual design stage. It is also concerned with achieving integration between sketch-based systems and CAD systems to complete the digitization of the design process, as the sketching phase is still not integrated with the other phases due to its different nature and its own incomplete digitization. Previous studies identified three main related issues: the sketching process, sketch-based modeling, and the integration between the digitized design phases. The thesis is motivated by the desire to improve sketch-based modeling to support idea generation, but unlike previous studies that focused only on the technical or drawing part of sketching, it concentrates more on the mental part of the sketching process, which plays a key role in developing ideas in design. Another motivation is to achieve integration between sketch-based systems and CAD systems so that 3D models produced by sketching can be edited in the detailed design stage. Two main contributions are made in this thesis. The first is a new approach to designing sketch-based systems that better supports idea generation by separating thinking and developing ideas from the 3D modeling process. This separation allows designers to think freely and concentrate on their ideas rather than on 3D modeling. The second is integration between sketch-based systems and CAD systems, achieved by using an IGES file to exchange data between systems and by a new method of organizing data within the file in an order that is more readily understood by the feature recognition embedded in commercial CAD systems. This study is funded by the Ministry of Higher Education of Egypt.
Envisioning sketch recognition : a local feature based approach to recognizing informal sketches
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 89-94). Hand-drawn sketches are an important part of the early design process and a key aspect of creative design. They are used in many fields, including electrical engineering, software engineering, and web design. Recognizing shapes in these sketches is a challenging task due to the imprecision with which they are drawn. We tackle this challenge with a visual approach to recognition. The approach is based on a representation of a sketched shape in terms of the visual parts it is made of. By taking this part-based visual approach we are able to recognize shapes that are extremely difficult to recognize with current sketch recognition systems that focus on the individual strokes. by Michael Oltmans. Ph.D.
Music, mind and health : how community change, diagnosis, and neuro-rehabilitation can be targeted during creative tasks.
Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. [127]-145). As a culture, we have the capacity to lead creative lives. Part of that capacity lies in how something like music can touch on just about every aspect of human thinking and experience. If music is such a pervasive phenomenon, what does it mean for the way we consider our lives in health? There are three problems with connecting the richness of music to scientifically valid clinical interventions. First, it is unclear how to provide access to something as seemingly complex as music to a diverse group of subjects with various cognitive and physical deficits. Second, it is necessary to quantify what takes place in music interactions so that causality can be attributed to what is unique to the music experience compared to motivation or attention. Finally, one must provide the structure to facilitate clinical change without losing the communicative and expressive power of music. This thesis will demonstrate how new music technologies are the ideal interfaces to address the issues of scale, assessment, and structured intervention that plague the ability to introduce creative work into healthcare environments. Additionally, we describe the first neural interface for multisensory-based physical rehabilitation, with implications for new interventions in diverse settings. This thesis demonstrates the design and implementation of devices that structure music interaction from the neural basis of rehabilitation. At the conclusion of this research, it is possible to envision an area where users are empowered during scientifically based creative tasks to compose neurological change. by Adam Boulanger. Ph.D.