
    MeasureIt-ARCH: A Tool for Facilitating Architectural Design in the Open Source Software Blender

    This thesis discusses the design and synthesis of MeasureIt-ARCH, a GNU GPL licensed software add-on developed by the author in order to add functionality to the Open Source 3D modeling software Blender that facilitates the creation of architectural drawings. MeasureIt-ARCH adds to Blender simple tools to dimension and annotate 3D models, as well as basic support for the definition and drawing of line work. These tools for the creation of dimensions, annotations and line work are designed to be used in tandem with Blender's existing modelling and rendering tool set. While the drawings that MeasureIt-ARCH produces are fundamentally conventional, as are the majority of the techniques that MeasureIt-ARCH employs to create them, MeasureIt-ARCH does provide two simple and relatively novel methods in its drawing systems. MeasureIt-ARCH provides a new method for the placement of dimension elements in 3D space that draws on the dimension's three-dimensional context and surrounding geometry in order to determine a placement that optimizes legibility. This dimension placement method does not depend on a 2D work plane, a convention that is common in industry-standard Computer Aided Design software. MeasureIt-ARCH also implements a new approach for drawing silhouette lines that operates by transforming the silhouetted model's geometry in 4D 'Clip Space'. The hope of this work is that MeasureIt-ARCH might be a small step towards creating an Open Source design pipeline for Architects: a step towards creating architectural drawings that can be shared, read, and modified by anyone, within a platform that is itself free to be changed and improved. The creation of MeasureIt-ARCH is motivated by two goals. First, the work aims to create a basic functioning Open Source platform for the creation of architectural drawings within Blender that is publicly and freely available for use.
Second, MeasureIt-ARCH's development served as an opportunity to engage in an interdisciplinary act of craft, providing the author an opportunity to explore the act of digital tool making and gain a basic competency in this intersection between Architecture and Computer Science. To achieve these goals, MeasureIt-ARCH's development draws on references from the history of line drawing and dimensioning within Architecture and Computer Science. On the Architectural side, we make use of the history of architectural drawing and dimensioning conventions as described by Mario Carpo, Alberto Pérez-Gómez and others, as well as more contemporary frameworks for the classification of architectural software, such as Mark Bew and Mervyn Richards' BIM Levels framework, in order to help determine the scope of MeasureIt-ARCH's feature set. When crafting MeasureIt-ARCH, precedent works from the field of Computer Science that implement methods for producing line drawings from 3D models helped inform the author's approach to line drawing. In particular, this work draws on the overview of line drawing methods produced by Pierre Bénard and Aaron Hertzmann, Arthur Appel's method for line drawing using 'Quantitative Invisibility', and the techniques employed in the Freestyle line drawing system created by Grabli et al., as well as others, to help inform MeasureIt-ARCH's simple drawing tools. Beyond discussing MeasureIt-ARCH's development and its motivations, this thesis also provides three small speculative discussions about the implications that an Open Source design tool might have on the architectural profession. We investigate MeasureIt-ARCH's use for small scale architectural projects in a practical setting, using its tool set to produce conceptual design and renovation drawings for cottages at the Lodge at Pine Cove.
We provide a demonstration of how MeasureIt-ARCH and Blender can integrate with external systems and other Blender add-ons to produce a proof-of-concept, dynamic data visualization of the Noosphere installation at the Futurium center in Berlin by the Living Architecture Systems Group. Finally, we discuss the tool's potential to facilitate greater engagement with the Open Source Architecture (OSArc) movement by illustrating a case study of the work done by Alastair Parvin and Clayton Prest on the WikiHouse project, and by highlighting the challenges that face OSArc projects as they try to produce Open Source Architecture without Open Source design software.
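The clip-space silhouette idea mentioned in this abstract can be pictured with a small sketch. This is not MeasureIt-ARCH's actual implementation; the function names and the 2D facing test are illustrative assumptions. Vertices are carried into 4D homogeneous clip space by the model-view-projection matrix, and silhouette edges are those shared by one front-facing and one back-facing triangle:

```python
import numpy as np

def to_clip_space(vertices, mvp):
    """Transform Nx3 object-space vertices into 4D homogeneous clip space."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # Nx4 homogeneous
    return homo @ mvp.T                                        # Nx4 clip coords

def silhouette_edges(vertices_clip, faces):
    """Return edges shared by a front-facing and a back-facing triangle.
    Facing is decided by the 2D signed area after the perspective divide
    (a simplified stand-in for a full silhouette test)."""
    ndc = vertices_clip[:, :2] / vertices_clip[:, 3:4]  # perspective divide (x, y)
    facing, edge_faces = {}, {}
    for fi, (a, b, c) in enumerate(faces):
        pa, pb, pc = ndc[a], ndc[b], ndc[c]
        # positive signed area -> counter-clockwise -> front-facing
        area = (pb[0]-pa[0])*(pc[1]-pa[1]) - (pc[0]-pa[0])*(pb[1]-pa[1])
        facing[fi] = area > 0
        for e in [(a, b), (b, c), (c, a)]:
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```

Working in clip space rather than on a 2D work plane keeps the test independent of any particular drawing plane, which is the convention the abstract says MeasureIt-ARCH avoids.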

    Thinking interactively with visualization

    Interaction is becoming an integral part of using visualization for analysis. When interaction is tightly and appropriately coupled with visualization, it can transform the visualization from displaying static imagery to assisting comprehensive analysis of data at all scales. In this relationship, a deeper understanding of the role of interaction, its effects, and how visualization relates to interaction is necessary for designing systems in which the two components complement each other. This thesis approaches interaction in visualization from three different perspectives. First, it considers the cost of maintaining interaction in manipulating visualization of large datasets. Namely, large datasets often require a simplification process for the visualization to maintain interactivity, and this thesis examines how simplification affects the resulting visualization. Secondly, example interactive visual analytical systems are presented to demonstrate how interactivity could be applied in visualization. Specifically, four fully developed systems for four distinct problem domains are discussed to determine the common role of interactivity in these visualizations that makes the systems successful. Lastly, this thesis presents evidence that interactions are important for analytical tasks using visualizations. Interaction logs of financial analysts using a visualization were collected, coded, and examined to determine the extent of the analysis strategies contained within the interaction logs. The finding supports the benefits of high interactivity in analytical tasks when using a visualization. The example visualizations used to support these three perspectives are diverse in their goals and features. However, they all share similar design guidelines and visualization principles.
Based on their characteristics, this thesis groups these visualizations into urban visualization, visual analytical systems, and interaction capturing, and discusses them separately in terms of lessons learned and future directions.

    Digitally reconstructing the Great Parchment Book: 3D recovery of fire-damaged historical documents

    The Great Parchment Book of the Honourable the Irish Society is a major surviving historical record of the estates of the county of Londonderry (in modern day Northern Ireland). It contains key data about landholding and population in the Irish province of Ulster and the city of Londonderry and its environs in the mid-17th century, at a time of social, religious, and political upheaval. Compiled in 1639, it was severely damaged in a fire in 1786, and due to the fragile state of the parchment, its contents have been mostly inaccessible since. We describe here a long-term, interdisciplinary, international partnership involving conservators, archivists, computer scientists, and digital humanists that developed a low-cost pipeline for conserving, digitizing, 3D-reconstructing, and virtually flattening the fire-damaged, buckled parchment, enabling new readings and understanding of the text to be created. For the first time, this article presents a complete overview of the project, detailing the conservation, digital acquisition, and digital reconstruction methods used, resulting in a new transcription and digital edition of the text in time for the 400th anniversary celebrations of the building of Londonderry's city walls in 2013. We concentrate on the digital reconstruction pipeline that will be of interest to custodians of similarly fire-damaged historical parchment, whilst highlighting how working together on this project has produced an online resource that has focussed community reflection upon an important, but previously inaccessible, historical text.

    Digital Restoration of Damaged Historical Parchment

    In this thesis we describe the development of a pipeline for digitally restoring damaged historical parchment. The work was carried out in collaboration with London Metropolitan Archives (LMA), who are in possession of an extremely valuable 17th century document called The Great Parchment Book. This book served as the focus of our project and throughout this thesis we demonstrate our methods on its folios. Our aim was to expose the content of the book in a legible form so that it can be properly catalogued and studied. Our approach begins by acquiring an accurate digitisation of the pages. We have developed our own 3D reconstruction pipeline detailed in Chapter 5, in which each parchment is imaged using a hand-held digital-SLR camera, and the resulting image set is used to generate a high-resolution textured 3D reconstruction of each parchment. Investigation into methods for flattening the parchments demonstrated an analogy with surface parametrization. Flattening the entire parchment globally with various existing parametrization algorithms is problematic, as discussed in Chapters 4, 6, and 7, since this approach is blind to the distortion undergone by the parchment. We propose two complementary approaches to deal with this issue. Firstly, exploiting the fact that a reader will only ever inspect a small area of the folio at a given time, we propose a method for performing local undistortion of the parchments inside an interactive viewer application. The application, described in Chapter 6, allows a user to browse a parchment folio as the application un-distorts in real-time the area of the parchment currently under inspection. It also allows the user to refer back to the original image set of the parchment to help with resolving ambiguities in the reconstruction and to deal with issues of provenance. Secondly, we propose a method for estimating the actual deformation undergone by each parchment when it was damaged by using cues in the text.
Since the text was originally written in straight lines and in a roughly uniform script size, we can detect the variation in text orientation and size and use this information to estimate the deformation. In Chapter 7 we then show how this deformation can be inverted by posing the problem as a Poisson mesh deformation, and solving it in a way that guarantees local injectivity, to generate a globally flattened and undistorted image of each folio. We also show how these images can optionally be colour corrected to remove the shading cues baked into the reconstruction texture, and the discolourations in the parchment itself, to further improve legibility and give a more complete impression that the parchment has been restored. The methods we have developed have been very well received by London Metropolitan Archives, as well as the larger archival community. We have used the methods to digitise the entire Great Parchment Book, and have demonstrated our global flattening method on eight folios. As of the time of writing of this thesis, our methods are being used to virtually restore all of the remaining folios of the Great Parchment Book. Staff at LMA are also investigating potential future directions by experimenting with other interesting documents in their collections, and are exploring the possibility of setting up a service which would give access to our methods to other archival institutions with similarly damaged documents.
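The text-cue idea above can be given a rough illustration. This is a simplified stand-in, not the thesis's actual Poisson-based pipeline; the similarity-transform model and the function name are assumptions. If the script was originally horizontal at a known nominal size, then the observed baseline direction and script height in a small region determine a local rotation and scale that undo the deformation there:

```python
import numpy as np

def local_undistortion(baseline_vec, nominal_height, observed_height):
    """Estimate a local 2D similarity transform from text cues: rotate the
    observed baseline back to horizontal and rescale the script to its
    nominal size. Illustrative simplification only."""
    angle = np.arctan2(baseline_vec[1], baseline_vec[0])  # baseline tilt
    scale = nominal_height / observed_height              # local size distortion
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s],                              # rotation undoing the tilt
                    [s,  c]])
    return scale * rot
```

In the thesis, per-region estimates like this would then be stitched into a single globally consistent, locally injective flattening; here the point is only how orientation and size cues constrain the local deformation.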

    Mobile Augmented Reality: User Interfaces, Frameworks, and Intelligence

    Mobile Augmented Reality (MAR) integrates computer-generated virtual objects with physical environments for mobile devices. MAR systems enable users to interact with MAR devices, such as smartphones and head-worn wearables, and perform seamless transitions from the physical world to a mixed world with digital entities. These MAR systems support user experiences using MAR devices to provide universal access to digital content. Over the past 20 years, several MAR systems have been developed; however, the studies and design of MAR frameworks have not yet been systematically reviewed from the perspective of user-centric design. This article presents the first effort of surveying existing MAR frameworks (count: 37) and further discusses the latest studies on MAR through a top-down approach: (1) MAR applications; (2) MAR visualisation techniques adaptive to user mobility and contexts; (3) systematic evaluation of MAR frameworks, including supported platforms and corresponding features such as tracking, feature extraction, and sensing capabilities; and (4) underlying machine learning approaches supporting intelligent operations within MAR systems. Finally, we summarise the development of emerging research fields and the current state-of-the-art, and discuss the important open challenges and possible theoretical and technical directions. This survey aims to benefit both researchers and MAR system developers alike.

    Hardware Accelerated Text Display

    Web browsers and e-book readers are some of the most dominant applications on mobile devices today. They spend a significant amount of time handling text in these documents. Based on the experimental results from different commercial web browsers, the majority of the time spent to display text is dedicated to layout design and painting the bitmaps of the character glyphs on the screen; the time needed to rasterize the bitmaps of these glyphs is negligible. Many efforts have been made in software to improve the performance of text layout and display, and very few are trying to come up with parallel processing schemes for System-On-Chip (SoC) designs to better handle this graphic processing. This work introduces a novel hardware-software hybrid algorithm which performs the layout design of text and displays it faster by using a small piece of hardware which can easily be added to the SoCs of today's mobile devices. This work also introduces a novel method for applying kerning to the layout design process. The performance of the algorithms is compared to WebKit, the most widely used web rendering framework, resulting in 29X and 192X performance increases in layout design when kerning is used and not used, respectively.
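What the kerning step in layout involves can be sketched generically. The glyph metrics and kerning pairs below are made up for illustration; this is not the thesis's hardware algorithm. Each glyph advances the pen by its advance width, and a kerning table nudges the pen for specific glyph pairs before the next glyph is placed:

```python
# Hypothetical per-glyph advance widths, in pixels.
ADVANCE = {"A": 10, "V": 10, "T": 9, "o": 8}
# Hypothetical kerning pairs; negative values pull the pair closer together.
KERNING = {("A", "V"): -2, ("T", "o"): -1}

def layout(text, use_kerning=True):
    """Return the x position of each glyph origin (the pen positions)."""
    x, positions, prev = 0, [], None
    for ch in text:
        if use_kerning and prev is not None:
            x += KERNING.get((prev, ch), 0)  # pair-specific adjustment
        positions.append(x)
        x += ADVANCE[ch]                     # advance the pen
        prev = ch
    return positions
```

The pair lookup on every glyph is exactly the extra per-character work that kerning adds to the layout loop, which is consistent with the thesis reporting a smaller speedup (29X vs. 192X) when kerning is enabled.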

    Aesthetic Animism: Digital Poetry as Ontological Probe

    This thesis is about the poetic edge of language and technology. It inter-relates both computational creation and poetic reception by analysing typographic animation softwares and meditating (speculatively) on a future malleable language that possesses the quality of being (and is implicitly perceived as) alive. As such it is a composite document: a philosophical and practice-based exploration of how computers are transforming literature, an ontological meditation on life and language, and a contribution to software studies. Digital poetry introduces animation, dimensionality and metadata into literary discourse. This necessitates new terminology; an acronym for Textual Audio-Visual Interactivity is proposed: Tavit. Tavits (malleable digital text) are tactile and responsive in ways that emulate living entities. They can possess dimensionality, memory, flocking, kinematics, surface reflectivity, collision detection, and responsiveness to touch, etc. Life-like tactile tavits involve information that is not only semantic or syntactic, but also audible, imagistic and interactive. Reading mediated language-art requires an expanded set of critical, practical and discourse tools, and an awareness of the historical continuum that anticipates this expansion. The ontological and temporal design implications of tavits are supported with case studies of two commercial typographic-animation softwares and one custom software (Mr Softie, created at OBX Labs, Concordia) used during a research-creation process.

    Scalable visualization of spatial data in 3D terrain

    Designing visualizations of spatial data in 3D terrain is challenging because various heterogeneous data aspects need to be considered, including the terrain itself, multiple data attributes, and data uncertainty. It is hardly possible to visualize these data at full detail in a single image. Therefore, this thesis devises a scalable visualization approach in which relevant information is emphasized while less-relevant information is attenuated. In this context, a novel concept for visualizing spatial data in 3D terrain and several software and hardware solutions are proposed.
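The emphasize/attenuate principle can be pictured as a simple degree-of-interest mapping. This is illustrative only and does not reproduce the thesis's techniques; the function, its parameter, and the opacity floor are assumptions. A relevance score drives each data item's visual prominence, with a floor so that context never disappears entirely:

```python
def opacity(relevance, gamma=2.0):
    """Map a relevance score in [0, 1] to display opacity: high-relevance
    data stays fully visible, low-relevance data is attenuated but kept
    faintly visible as context (illustrative degree-of-interest mapping)."""
    return max(0.1, relevance ** gamma)  # floor of 0.1 preserves context
```

A nonlinear falloff like this concentrates the viewer's attention on the most relevant items while still conveying the surrounding terrain and attributes at reduced detail.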

    Service Abstractions for Scalable Deep Learning Inference at the Edge

    Deep learning driven intelligent edge has already become a reality, where millions of mobile, wearable, and IoT devices analyze real-time data and transform those into actionable insights on-device. Typical approaches for optimizing deep learning inference mostly focus on accelerating the execution of individual inference tasks, without considering the contextual correlation unique to edge environments and the statistical nature of learning-based computation. Specifically, they treat inference workloads as individual black boxes and apply canonical system optimization techniques, developed over the last few decades, to handle them as yet another type of computation-intensive application. As a result, deep learning inference on edge devices still faces the ever-increasing challenges of customization to edge device heterogeneity, fuzzy computation redundancy between inference tasks, and end-to-end deployment at scale. In this thesis, we propose the first framework that automates and scales the end-to-end process of deploying efficient deep learning inference from the cloud to heterogeneous edge devices. The framework consists of a series of service abstractions that handle DNN model tailoring, model indexing and query, and computation reuse for runtime inference respectively. Together, these services bridge the gap between deep learning training and inference, eliminate computation redundancy during inference execution, and further lower the barrier for deep learning algorithm and system co-optimization. To build efficient and scalable services, we take a unique algorithmic approach of harnessing the semantic correlation between the learning-based computation.
Rather than viewing individual tasks as isolated black boxes, we optimize them collectively in a white box approach, proposing primitives to formulate the semantics of the deep learning workloads, algorithms to assess their hidden correlation (in terms of the input data, the neural network models, and the deployment trials), and merge common processing steps to minimize redundancy.
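One way to picture the computation-reuse idea is a toy sketch: the class, the hashing key, and the lambda standing in for a backbone network are all illustrative assumptions, not the thesis's service design. When several task heads share a common backbone, caching the backbone's output per input frame eliminates the redundant common processing step:

```python
import hashlib

class FeatureCache:
    """Toy sketch of computation reuse: cache the output of a shared
    backbone so that multiple task heads operating on the same input
    do not recompute their common processing steps."""
    def __init__(self, backbone):
        self.backbone = backbone
        self.cache = {}
        self.hits = 0

    def features(self, frame: bytes):
        key = hashlib.sha256(frame).hexdigest()  # identify the input frame
        if key in self.cache:
            self.hits += 1                       # reuse, skip the backbone
        else:
            self.cache[key] = self.backbone(frame)
        return self.cache[key]

def detector_head(feats):   return f"detect({feats})"
def classifier_head(feats): return f"classify({feats})"

cache = FeatureCache(backbone=lambda f: f"feat:{len(f)}")  # stand-in backbone
frame = b"camera-frame-0"
d = detector_head(cache.features(frame))    # backbone runs once here
c = classifier_head(cache.features(frame))  # cached features reused here
```

A white-box approach as described in the abstract would go further, matching semantically similar (not just identical) inputs and models, but exact-match caching already shows where the redundancy between co-located inference tasks lies.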