
    Waterless Light Fountain

    Get PDF
    The Waterless Light Fountain was born from the need for art projects for Franklin Street at Santa Clara University, which is going to be remodeled into an arts and history paseo; one of the ideas discussed was to determine whether a waterless light fountain could be developed and displayed on either a temporary or permanent basis. In this paper, we explain the design process and build of the fountain, which uses NeoPixels (RGB LEDs) to generate the light that serves as the main art element. An Arduino is used as the control unit, driving the NeoPixels and receiving input from the user interface, which is composed of arcade buttons and an LCD screen with a mounted touchscreen. Users can interact with the fountain by choosing between four modes of operation: light shows in which the LEDs light up in preprogrammed sequences, games such as Simon Says, choosing the color of each LED with the help of the touchscreen, and light shows combined with music, in which a spectrum analyzer controlled by a Raspberry Pi makes the LEDs dance with the music. In the end, the fountain will be placed in the engineering building, as the design is not meant to endure outdoor conditions.
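
    A minimal illustrative sketch of the kind of control logic the abstract describes, written in plain Python rather than as the actual Arduino firmware; the mode names, LED count, and chase pattern below are assumptions made here for illustration only.

```python
# Illustrative only: a pure-Python model of the fountain's mode dispatch and a
# preprogrammed light-show sequence. The real build runs on an Arduino with a
# NeoPixel library; all names here (Mode, light_show_frames, NUM_LEDS) are hypothetical.
from enum import Enum
from itertools import cycle

NUM_LEDS = 24  # assumed LED count, not stated in the abstract

class Mode(Enum):
    LIGHT_SHOW = 1   # preprogrammed sequences
    SIMON_SAYS = 2   # memory game on the arcade buttons
    COLOR_PICK = 3   # per-LED color chosen on the touchscreen
    MUSIC = 4        # colors driven by the Raspberry Pi spectrum analyzer

def light_show_frames():
    """Yield one (led_index, rgb) update per step of a simple chase sequence."""
    palette = cycle([(255, 0, 0), (0, 255, 0), (0, 0, 255)])
    while True:
        color = next(palette)
        for i in range(NUM_LEDS):
            yield i, color

def handle_button(mode_button: int) -> Mode:
    """Map an arcade-button press (1-4) to an operating mode."""
    return Mode(mode_button)

if __name__ == "__main__":
    frames = light_show_frames()
    for _ in range(5):
        print(next(frames))   # e.g. (0, (255, 0, 0)); on hardware this feeds the LED driver
```

    On the real hardware, the yielded (index, color) updates would be pushed to the NeoPixel strip by the Arduino's LED library rather than printed.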

    Master Hand Technology For The HMI Using Hand Gesture And Colour Detection

    Get PDF
    Master Hand Technology uses different hand gestures and colors to give various commands for human-machine (here, computer) interfacing. Gesture recognition deals with the goal of interpreting human gestures via mathematical algorithms. Gestures made by users with the help of a color band and/or body pose, in two or three dimensions, are translated by software and image processing into predefined commands; the computer then acts according to the command. A lot of work has already been done in this field, either by extracting the hand gesture alone or by extracting the hand with the help of color segmentation. In this project, both hand gesture extraction and color detection are used for better, faster, more robust, more accurate, and real-time applications. Red, green, and blue are most efficiently detected if the RGB color space is used; using the HSV color space, the approach can be extended to any number of colors. For hand gesture detection, the default background is captured and stored for further processing. By comparing each newly captured image with the background image and performing the necessary extraction and filtering, the hand portion can be extracted; different mathematical algorithms are then applied to detect different hand gestures. All of this work is done using MATLAB. By mapping a portion of the master hand and/or a color to the mouse of a computer, the computer can be controlled just as with a mouse, and many virtual (augmented reality) or PC-based applications can be developed (e.g., a calculator or Paint). It does not matter whether the system is within your reach, but a camera linked to the system must be nearby. By showing different gestures with your master hand, the computer can be controlled remotely; if the camera can be set up online, the computer can even be controlled from a very distant place.
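
    The paper's pipeline is implemented in MATLAB; the following is a rough Python/OpenCV sketch of the two steps it describes (background differencing to isolate the hand, plus an HSV range mask for a color band). The threshold values, color ranges, and file names are illustrative assumptions, not values from the paper.

```python
# Rough Python/OpenCV analogue of the pipeline described above (the paper itself
# uses MATLAB): background differencing to isolate the hand, plus an HSV mask to
# pick out a colored band. Thresholds and color ranges are illustrative guesses.
import cv2
import numpy as np

def extract_hand(background_bgr, frame_bgr, diff_thresh=30):
    """Return a binary mask of pixels that differ from the stored background."""
    bg_gray = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    fr_gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(fr_gray, bg_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Light filtering to suppress noise, as the abstract mentions filtering.
    mask = cv2.medianBlur(mask, 5)
    return mask

def detect_color_band(frame_bgr, lower_hsv=(0, 120, 70), upper_hsv=(10, 255, 255)):
    """Return a binary mask of pixels falling in an HSV range (default: reddish)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))

# Usage sketch (file names are hypothetical):
# background = cv2.imread("background.png")   # captured once at start-up
# frame = cv2.imread("frame.png")             # current camera frame
# hand_mask = extract_hand(background, frame)
# band_mask = detect_color_band(frame)
```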

    Probe-based visual analysis of geospatial simulations

    Get PDF
    This work documents the design, development, refinement, and evaluation of probes as an interaction technique for expanding both the usefulness and usability of geospatial visualizations, specifically those of simulations. Existing applications that allow the visualization of, and interaction with, geospatial simulations and their results generally present views of the data that restrict the user to a single perspective. When zoomed out, local trends and anomalies become suppressed and lost; when zoomed in, spatial awareness and comparison between regions become limited. The probe-based interaction model integrates coordinated visualizations within individual probe interfaces, which depict the local data in user-defined regions of interest. It is especially useful when dealing with complex simulations or analyses where behavior in various localities differs from other localities and from the system as a whole. The technique has been incorporated into a number of geospatial simulations and visualization tools. In each of these applications, and in general, probe-based interaction enhances spatial awareness, improves inspection and comparison capabilities, expands the range of scopes, and facilitates collaboration among multiple users. The great freedom afforded to users in defining regions of interest can allow modifiable areal unit problems to affect the reliability of analyses without the user’s knowledge, leading to misleading results. However, by automatically alerting the user to these potential issues, and providing them with tools to help adjust their selections, these unforeseen problems can be revealed and even corrected.
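
    As a purely hypothetical illustration of the probe concept described above, the sketch below defines a probe as a user-defined region of interest over a gridded simulation field and reports the local statistics a coordinated view might display; none of the class or function names come from the tools in the dissertation.

```python
# Minimal, hypothetical illustration of the probe idea: each probe is a
# user-defined region of interest over a gridded geospatial field and reports
# local summary statistics that a coordinated view could display.
import numpy as np

class Probe:
    def __init__(self, row, col, half_size):
        self.row, self.col, self.half_size = row, col, half_size

    def local_view(self, field):
        """Clip the region of interest out of a 2D simulation field."""
        r0 = max(self.row - self.half_size, 0)
        c0 = max(self.col - self.half_size, 0)
        return field[r0:self.row + self.half_size + 1,
                     c0:self.col + self.half_size + 1]

    def summary(self, field):
        """Local statistics for the probe's coordinated visualizations."""
        roi = self.local_view(field)
        return {"mean": float(roi.mean()),
                "max": float(roi.max()),
                "anomaly": float(roi.mean() - field.mean())}

field = np.random.default_rng(0).random((100, 100))   # stand-in simulation output
print(Probe(row=40, col=55, half_size=5).summary(field))
```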

    Persona adaptable visualization scheduling in supply chain management for an ERP system

    Get PDF
    Internship carried out at Microsoft. Integrated master's thesis. Engenharia Informática e Computação, Faculdade de Engenharia, Universidade do Porto. 200

    Interactional Slingshots: Providing Support Structure to User Interactions in Hybrid Intelligence Systems

    Full text link
    The proliferation of artificial intelligence (AI) systems has enabled us to engage more deeply and powerfully with our digital and physical environments, from chatbots to autonomous vehicles to robotic assistive technology. Unfortunately, these state-of-the-art systems often fail in contexts that require human understanding, are never-before-seen, or are complex. In such cases, though AI-only approaches cannot solve the full task, their ability to solve a piece of the task can be combined with human effort to become more robust in handling complexity and uncertainty. A hybrid intelligence system, one that combines human and machine skill sets, can make intelligent systems more operable in real-world settings. In this dissertation, we propose the idea of using interactional slingshots as a means of providing support structure to user interactions in hybrid intelligence systems. Much like how gravitational slingshots provide boosts to spacecraft en route to their final destinations, so do interactional slingshots provide boosts to user interactions en route to solving tasks. Several challenges arise: What does this support structure look like? How much freedom does the user have in their interactions? How is user expertise paired with that of the machine? To make this a tractable socio-technical problem, we explore this idea in the context of data annotation problems, especially in domains where AI methods fail to solve the overall task. Getting annotated (labeled) data is crucial for successful AI methods, and becomes especially difficult in domains where AI fails, since problems in such domains require human understanding to fully solve, but also present challenges related to annotator expertise, annotation freedom, and context curation from the data. To explore data annotation problems in this space, we develop techniques and workflows whose interactional slingshot support structure harnesses the user’s interaction with data. First, we explore providing support in the form of nudging non-expert users’ interactions as they annotate text data for the task of creating conversational memory. Second, we add support structure in the form of assisting non-expert users during the annotation process itself for the task of grounding natural language references to objects in 3D point clouds. Finally, we supply support in the form of guiding expert and non-expert users both before and during their annotations for the task of conversational disentanglement across multiple domains. We demonstrate that building hybrid intelligence systems with each of these interactional slingshot support mechanisms (nudging, assisting, and guiding a user’s interaction with data) improves annotation outcomes such as annotation speed, accuracy, and effort level, even when annotators’ expertise and skill levels vary.
    Thesis Statement: By providing support structure that nudges, assists, and guides user interactions, it is possible to create hybrid intelligence systems that enable more efficient (faster and/or more accurate) data annotation.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/163138/1/sairohit_1.pd

    ShapeCanvas: an exploration of shape-changing content generation by members of the public

    Get PDF
    Shape-changing displays (visual output surfaces with physically reconfigurable geometry) provide new challenges for content generation. Content design must incorporate visual elements and physical surface shape, react to user input, and adapt these parameters over time. The addition of the ‘shape channel’ significantly increases the complexity of content design, but provides a powerful platform for novel physical design, animations, and physicalizations. In this work we use ShapeCanvas, a 4×4 grid of large actuated pixels, combined with simple interactions, to explore novice user behavior and interactions for shape-changing content design. We deployed ShapeCanvas in a café for two and a half days and observed users generate 21 physical animations. These were categorized into seven categories, with eight directly derived from people’s personal interests. This paper describes these experiences and the generated animations, and provides initial insights into shape-changing content design.
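
    As a hypothetical illustration of how a physical animation for a small actuated-pixel grid could be represented, the sketch below stores each keyframe as a 4×4 array of normalized heights and steps through the frames; this is not ShapeCanvas's actual content format or API.

```python
# Hypothetical sketch of a ShapeCanvas-style physical animation: each keyframe
# stores a height (0.0-1.0) for every pixel in the 4x4 grid, and playback steps
# through keyframes at a fixed rate. Purely illustrative, not the real system.
import time

GRID_SIZE = 4

def wave_keyframes(steps=8):
    """Generate keyframes of a simple wave travelling across the grid."""
    frames = []
    for t in range(steps):
        frame = [[1.0 if (row + t) % GRID_SIZE == col else 0.2
                  for col in range(GRID_SIZE)]
                 for row in range(GRID_SIZE)]
        frames.append(frame)
    return frames

def play(frames, frame_delay=0.5):
    """Send each keyframe to the (here, simulated) actuated pixels."""
    for frame in frames:
        for row in frame:
            print(" ".join(f"{h:.1f}" for h in row))   # stand-in for actuation
        print("-" * 15)
        time.sleep(frame_delay)

if __name__ == "__main__":
    play(wave_keyframes(), frame_delay=0.1)
```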

    Physical sketching tools and techniques for customized sensate surfaces

    Get PDF
    Sensate surfaces are a promising avenue for enhancing human interaction with digital systems due to their inherent intuitiveness and natural user interface. Recent technological advancements have enabled sensate surfaces to surpass the constraints of conventional touchscreens by integrating them into everyday objects, creating interactive interfaces that can detect various inputs such as touch, pressure, and gestures. This allows for more natural and intuitive control of digital systems. However, prototyping interactive surfaces that are customized to users' requirements using conventional techniques remains technically challenging due to limitations in accommodating complex geometric shapes and varying sizes. Furthermore, it is crucial to consider the context in which customized surfaces are utilized, as relocating them to fabrication labs may lead to the loss of their original design context. Additionally, prototyping high-resolution sensate surfaces presents challenges due to the complex signal processing requirements involved. This thesis investigates the design and fabrication of customized sensate surfaces that meet the diverse requirements of different users and contexts. The research aims to develop novel tools and techniques that overcome the technical limitations of current methods and enable the creation of sensate surfaces that enhance human interaction with digital systems.

    Proceedings, MSVSCC 2017

    Get PDF
    Proceedings of the 11th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 20, 2017 at VMASC in Suffolk, Virginia. 211 pp.

    Procedural modeling of cities with semantic information for crowd simulation

    Get PDF
    In this master's thesis, a framework for the procedural generation of populated cities is presented. Nowadays, populating large virtual environments tends to be a time-consuming task, usually requiring the work of expert artists or programmers. With this system we aim to provide a tool that allows users to generate populated environments in an easier and faster way by relying on procedural techniques. Our main contributions include: the generation of a semantically augmented virtual city using procedural modelling based on rule grammars, the generation of its virtual inhabitants using real-world statistical data, and the generation of agendas for each individual inhabitant using a procedural rule-based approach that combines the city semantics with the autonomous agents' characteristics and needs. The individual agendas are then used to drive a crowd simulation in the environment, and may include high-level rule tasks whose evaluation is delayed until they are triggered. This feature allows us to simulate context-dependent actions and interactions with other agents.
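
    The following is a hypothetical Python sketch of the rule-based agenda idea described above: rules match an agent's characteristics and needs against semantic tags on city places to produce a daily schedule. The place types, rules, and attribute names are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch of rule-based agenda generation: rules test an agent's
# attributes and bind the resulting activity to a place whose semantic tags
# support it. All names and rules here are invented for illustration.
import random

CITY_SEMANTICS = {            # place type -> activities it supports
    "office": ["work"],
    "school": ["study"],
    "restaurant": ["eat"],
    "park": ["leisure"],
}

AGENDA_RULES = [
    # (condition on the agent, activity to schedule, time slot)
    (lambda a: a["employed"], "work", "09:00-17:00"),
    (lambda a: a["age"] < 18, "study", "08:00-15:00"),
    (lambda a: True, "eat", "13:00-14:00"),
    (lambda a: a["age"] >= 18, "leisure", "18:00-20:00"),
]

def places_for(activity):
    return [p for p, acts in CITY_SEMANTICS.items() if activity in acts]

def build_agenda(agent):
    """Apply each rule whose condition holds and bind it to a matching place."""
    agenda = []
    for condition, activity, slot in AGENDA_RULES:
        if condition(agent) and places_for(activity):
            agenda.append((slot, activity, random.choice(places_for(activity))))
    return sorted(agenda)

print(build_agenda({"age": 34, "employed": True}))
```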