
    Quantum ESPRESSO: a modular and open-source software project for quantum simulations of materials

    Quantum ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). Quantum ESPRESSO stands for "opEn-Source Package for Research in Electronic Structure, Simulation, and Optimization". It is freely available to researchers around the world under the terms of the GNU General Public License. Quantum ESPRESSO builds upon newly restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied over the last twenty years by some of the leading materials-modeling groups worldwide. Innovation and efficiency remain its main focus, with special attention paid to massively parallel architectures and a great effort devoted to user friendliness. Quantum ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate by contributing their own codes or by implementing their own ideas into existing codes. (36 pages, 5 figures; resubmitted to J. Phys.: Condens. Matter.)
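
    A minimal sketch of the kind of calculation the suite performs, driven here through ASE's Espresso calculator (an external tool assumed to be installed alongside pw.x; the pseudopotential file, cutoff, and k-point mesh are illustrative values, and depending on the ASE version the calculator may additionally need to be told where the pw.x executable lives):

```python
# Sketch: SCF total-energy calculation for bulk silicon via ASE + Quantum ESPRESSO.
# Assumes pw.x and the named pseudopotential file are available locally.
from ase.build import bulk
from ase.calculators.espresso import Espresso

atoms = bulk("Si", "diamond", a=5.43)  # silicon lattice constant in Angstrom
atoms.calc = Espresso(
    pseudopotentials={"Si": "Si.pbe-n-rrkjus_psl.1.0.0.UPF"},  # assumed pseudopotential file
    input_data={
        "control": {"calculation": "scf"},
        "system": {"ecutwfc": 40},  # plane-wave cutoff in Ry (illustrative)
    },
    kpts=(4, 4, 4),  # Monkhorst-Pack k-point mesh
)
print("Total energy (eV):", atoms.get_potential_energy())
```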

    Human-document interaction systems: a new frontier for document image analysis

    All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital – how to continue taking advantage of the important affordances of paper without missing out on digital functionality. This paper presents the authors' experience with developing systems for Human-Document Interaction based on augmented document interfaces and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups that have gone through user testing in real-life environments are discussed, and use cases are presented that span the spectrum from business to educational applications.
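
    To make the camera-based side of such systems concrete, the sketch below shows one common building block: detecting a page in a camera frame and rectifying it to a fronto-parallel view with OpenCV. It is a generic illustration under simplifying assumptions (the page is the largest four-sided contour), not the specific pipeline of the system described above.

```python
import cv2
import numpy as np

def order_corners(pts):
    # Order as top-left, top-right, bottom-right, bottom-left:
    # TL has the smallest x+y, BR the largest; TR has the smallest y-x, BL the largest.
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()
    return np.float32([pts[np.argmin(s)], pts[np.argmin(d)],
                       pts[np.argmax(s)], pts[np.argmax(d)]])

def rectify_document(frame, out_size=(1240, 1754)):
    """Find the largest quadrilateral contour (assumed to be the page)
    and warp it to a fronto-parallel view of the given size."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quad = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            quad = approx.reshape(4, 2).astype(np.float32)
            break
    if quad is None:
        return None
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(order_corners(quad), dst)
    return cv2.warpPerspective(frame, H, (w, h))
```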

    ISAR: An Authoring System for Interactive Tabletops

    Developing augmented reality systems involves several challenges that prevent end users and experts from non-technical domains, such as education, from experimenting with this technology. In this research we introduce ISAR, an authoring system for augmented reality tabletops targeting users from non-technical domains. ISAR allows non-technical users to create their own interactive tabletop applications and experiment with the use of this technology in domains such as education, industrial training, and medical rehabilitation.

    The Universal Media Book

    We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
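
    A generic sketch of the per-frame projection step such a system needs: once the page corners have been tracked (by whatever means), the multimedia content is warped into projector coordinates so that it lands exactly on the moving page. The function below uses OpenCV and assumes the corner positions are already expressed in projector pixels; it illustrates homography-based warping, not the authors' exact matching and tracking method.

```python
import cv2
import numpy as np

def project_onto_page(content, page_corners, proj_shape):
    """Warp a content image onto a tracked page quadrilateral.

    content      -- HxWx3 image (e.g. an image or video frame) to show on the page
    page_corners -- 4x2 array of page corners (TL, TR, BR, BL) in projector pixels,
                    assumed to come from the camera-projector tracking stage
    proj_shape   -- (height, width) of the projector framebuffer
    """
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, np.float32(page_corners))
    ph, pw = proj_shape
    warped = cv2.warpPerspective(content, H, (pw, ph))
    # Light only the page region; the rest of the projector frame stays black.
    mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), H, (pw, ph))
    frame = np.zeros((ph, pw, 3), np.uint8)
    frame[mask > 0] = warped[mask > 0]
    return frame
```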

    A Portable Augmented Reality Science Laboratory

    Augmented Reality (AR) is a technology that overlays virtual objects on the real world: it generates three-dimensional (3D) virtual objects and provides an interactive interface through which people can work in the real world and interact with the 3D virtual objects at the same time. AR has the potential to engage and motivate learners to explore material from a variety of differing perspectives, and has been shown to be particularly useful for teaching subject matter that students could not possibly experience first hand in the real world. This report provides a conceptual framework for a simulated augmented reality lab that could be used in teaching science in classrooms. In recent years, the importance of lab-based courses and their significant role in science education has become irrefutable. The use of AR in formal education could prove a key component in future learning environments that are richly populated with a blend of hardware and software applications. The aim of this project is to enhance the teaching and learning of science by complementing the existing traditional lab with a simulated augmented reality lab. The system architecture and the technical aspects of the proposed project will be described, and implementation issues and benefits of the proposed AR Lab will be highlighted.

    Fusion of gaze with hierarchical image segmentation for robust object detection

    We present Flycatcher, a prototype system illustrating the idea of gaze-based image processing in the context of object segmentation for wearable photography. The prototype includes a wearable eye-tracking device that captures real-time eyetraces of a user, and a wearable video camera that captures first-person-perspective images of the user's visual environment. The system combines the deliberate eyetraces of the user with hierarchical image segmentation applied to scene images to achieve reliable object segmentation. In evaluations with certain classes of real-world images, fusion of gaze and image segmentation information led to higher object detection accuracy than either signal alone. Flycatcher may be integrated with assistive communication devices, enabling individuals with severe motor impairments to use eye control to communicate about objects in their environment. The system also represents a promising step toward an eye-driven interface for "copy and paste" visual memory augmentation in wearable computing applications. (M.Eng. thesis, Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2004.)
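
    The fusion idea can be illustrated with a small sketch: segment the scene image at several scales and keep the segment that best explains the gaze fixations. The scoring rule and the use of scikit-image's Felzenszwalb segmentation as a stand-in for a segmentation hierarchy are illustrative assumptions, not Flycatcher's actual algorithm.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def gaze_selected_region(image, gaze_points, scales=(100, 300, 900)):
    """Return a boolean mask for the segment, across several segmentation
    scales, that captures the most gaze fixations relative to its area.

    image       -- HxWx3 RGB scene image
    gaze_points -- Nx2 array of (x, y) fixation coordinates in image pixels
    """
    xs = np.clip(gaze_points[:, 0].astype(int), 0, image.shape[1] - 1)
    ys = np.clip(gaze_points[:, 1].astype(int), 0, image.shape[0] - 1)
    best_mask, best_score = None, -np.inf
    for scale in scales:
        labels = felzenszwalb(image, scale=scale, sigma=0.8, min_size=50)
        hit_labels, counts = np.unique(labels[ys, xs], return_counts=True)
        for lab, hits in zip(hit_labels, counts):
            mask = labels == lab
            # Reward segments covering many fixations, penalise large ones.
            score = hits / len(gaze_points) - 0.1 * mask.mean()
            if score > best_score:
                best_score, best_mask = score, mask
    return best_mask
```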

    Flights in my hands: coherence concerns in designing Strip'TIC, a tangible space for air traffic controllers

    We reflect upon the design of a paper-based tangible interactive space to support air traffic control. We have observed, studied, prototyped, and discussed with controllers a new mixed interaction system based on Anoto, video projection, and tracking. Starting from an understanding of the benefits of tangible paper strips, our goal is to study how mixed physical and virtual augmented data can support the controllers' mental work. The context of the activity led us to depart from models proposed in tangible-interface research, where coherence is based on how physical objects represent virtual objects. We propose a new account of coherence in a mixed interaction system that integrates externalization mechanisms. We found that physical objects play two roles: they act both as representations of mental objects and as tangible artifacts for interacting with augmented features. We observed that virtual objects represent physical ones, and not the reverse, and, being virtual representations of physical objects, should seamlessly converge with the cognitive role of the physical object. Finally, we show how coherence is achieved by providing a seamless interactive space. (Best Paper Honorable Mention award.)

    BRAHMS: Novel middleware for integrated systems computation

    Biological computational modellers are becoming increasingly interested in building large, eclectic models, including components on many different computational substrates, both biological and non-biological. At the same time, the rise of the philosophy of embodied modelling is generating a need to deploy biological models as controllers for robots in real-world environments. Finally, robotics engineers are beginning to find value in seconding biomimetic control strategies for use on practical robots. Together with the ubiquitous desire to make good on past software development effort, these trends are throwing up new challenges of intellectual and technological integration (for example across scales, across disciplines, and even across time), challenges that are unmet by existing software frameworks. Here, we outline these challenges in detail and go on to describe a newly developed software framework, BRAHMS, that meets them. BRAHMS is a tool for integrating computational process modules into a viable, computable system: its generality and flexibility facilitate integration across barriers, such as those described above, in a coherent and effective way. We describe several cases where BRAHMS has been successfully deployed in practical situations, and show excellent performance in comparison with a monolithic development approach. Additional benefits of developing in the framework include source-code self-documentation, automatic coarse-grained parallelisation, cross-language integration, data logging, and performance monitoring, with dynamic load-balancing and 'pause and continue' execution to follow. BRAHMS is built on the nascent, and similarly general-purpose, model markup language SystemML. This will, in future, also facilitate repeatability and accountability (same answers ten years from now), transparent automatic software distribution, and interfacing with other SystemML tools.
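
    The integration pattern the abstract describes, independent process modules wired into a single computable system, can be sketched in a few lines. Everything below (class names, ports, the scheduler) is a hypothetical illustration of the pattern, not BRAHMS's or SystemML's real API.

```python
# Hypothetical sketch of a modular process-integration framework:
# each process exposes a uniform step() interface and the system
# routes named outputs to named inputs before every step.
from typing import Callable, Dict, List, Tuple

class Process:
    def __init__(self, name: str, fn: Callable[[Dict[str, float]], Dict[str, float]]):
        self.name, self.fn = name, fn
        self.inputs: Dict[str, float] = {}
        self.outputs: Dict[str, float] = {}

    def step(self) -> None:
        self.outputs = self.fn(self.inputs)

class System:
    def __init__(self) -> None:
        self.processes: List[Process] = []
        self.links: List[Tuple[Process, str, Process, str]] = []

    def add(self, proc: Process) -> Process:
        self.processes.append(proc)
        return proc

    def connect(self, src: Process, src_port: str, dst: Process, dst_port: str) -> None:
        self.links.append((src, src_port, dst, dst_port))

    def run(self, steps: int) -> None:
        for _ in range(steps):
            for src, sp, dst, dp in self.links:   # route signals between modules
                dst.inputs[dp] = src.outputs.get(sp, 0.0)
            for proc in self.processes:           # advance every module one step
                proc.step()

# Toy example: a "neural" module driving a "motor" module.
system = System()
neuron = system.add(Process("neuron", lambda i: {"rate": 0.9 * i.get("drive", 1.0)}))
motor = system.add(Process("motor", lambda i: {"torque": 2.0 * i.get("rate", 0.0)}))
system.connect(neuron, "rate", motor, "rate")
system.run(steps=3)
print(motor.outputs)  # {'torque': 1.8}
```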

    Augmented reality over maps

    Maps and Geographic Information Systems (GIS) play a major role in modern society, particularly in tourism, navigation, and personal guidance. However, providing geographical information of interest in response to individual queries remains a strenuous task. The main constraints are (1) the multiple information scales available, (2) the large amount of information available at each scale, and (3) the difficulty of directly inferring a meaningful geographical context from the text, pictures, or diagrams used by most user-aiding systems. To overcome these difficulties, we develop a solution that allows visual information to be overlaid on the maps being queried, a method commonly referred to as Augmented Reality (AR). With that in mind, the object of this dissertation is the research and implementation of a method for delivering visual cartographic information over physical (analogue) and digital two-dimensional (2D) maps using AR. We review existing state-of-the-art solutions and outline their limitations across different use cases. Afterwards, we provide a generic modular solution for a multitude of real-life applications, to name a few: museums, fairs, expositions, and public street maps. During the development phase, we take into consideration the trade-off between speed and accuracy in order to develop an accurate, real-time solution. Finally, we demonstrate the feasibility of our methods with an application to a real use case based on a map of the city of Oporto, in Portugal. (Integrated Master's dissertation in Informatics Engineering.)
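
    One way to realise the physical-map case is classical feature-based registration: match the live camera frame against the reference map image, estimate a homography, and project points of interest into the frame. The sketch below uses OpenCV's ORB features; the parameter values and drawing style are illustrative assumptions, not necessarily the dissertation's implementation.

```python
import cv2
import numpy as np

def locate_map(reference_map, camera_frame, min_matches=15):
    """Estimate the homography mapping the reference map image into the
    current camera frame using ORB features and a ratio test."""
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(reference_map, None)
    k2, d2 = orb.detectAndCompute(camera_frame, None)
    if d1 is None or d2 is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_matches:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def overlay_poi(camera_frame, H, poi_xy, label):
    """Project a point of interest given in reference-map pixel coordinates
    into the camera frame and draw a simple marker and label on it."""
    p = cv2.perspectiveTransform(np.float32([[poi_xy]]), H)[0, 0]
    cv2.circle(camera_frame, (int(p[0]), int(p[1])), 8, (0, 0, 255), -1)
    cv2.putText(camera_frame, label, (int(p[0]) + 10, int(p[1])),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
    return camera_frame
```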

    Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art

    Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration. In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places. We present a working prototype, discuss design decisions, and present the results of two evaluation studies: the first with 13 BVI test users and the second, a follow-up study, with 14 test users across a wide range of people with differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant, and innovative developments.
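
    A minimal sketch of the depth-based touch detection such a guide relies on, assuming an overhead depth camera and a stored depth image of the empty relief: a touch is reported wherever the current depth sits only a few millimetres above the stored surface. The thresholds and the simple blob rule are illustrative, not the prototype's actual pipeline; the returned pixel position would then be looked up in a map of relief regions to trigger the matching audio description.

```python
import numpy as np

def detect_touch(depth_frame, relief_background, touch_mm=(4, 25), min_pixels=40):
    """Return the (x, y) pixel centroid of a touch on the relief, or None.

    depth_frame       -- current depth image in millimetres (H x W)
    relief_background -- depth image of the empty relief, captured once
    touch_mm          -- a pixel counts as touching when the hand is between
                         these many millimetres above the stored surface
    """
    height_above = relief_background.astype(np.int32) - depth_frame.astype(np.int32)
    touching = (height_above > touch_mm[0]) & (height_above < touch_mm[1])
    if int(touching.sum()) < min_pixels:
        return None
    ys, xs = np.nonzero(touching)
    return float(xs.mean()), float(ys.mean())
```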