
    Look, cook, learn: A Recipe for improved functionality in cooking design

    Look, Cook, Learn: A Recipe for Improved Functionality in Cooking Design is a thesis that explores the relationship and differences between print media and digital media. Both are studied with respect to instructional cooking materials, along with their strengths and weaknesses. The final application comprises a digital application for the Apple iPad as well as a printed recipe book. The user experience is of great importance, and therefore information layering, visual layout, and consumer expectations are all considered. All information included in both the digital application and the printed book is broken into sections depending on the function or action that the user is trying to perform. The 'Look' section refers to the user experience of browsing or searching for recipes: key information that would be useful or critical in the searching process has been highlighted and emphasized strategically, allowing users to more easily find a recipe they would be interested in preparing. The 'Cook' section is designed so that, once a user has selected a recipe they wish to prepare, the process of executing the necessary steps is clear and understandable; care was taken to make sure language, supplementary imagery, and layout all aid in the execution of each step. The 'Learn' section is devoted to assisting and educating users in kitchen practices such as nutrition, knife skills, substituting ingredients, and converting measurements. This section is handled differently in the print and digital applications, since the properties of each medium allow for a different user interaction. Overall, this thesis is designed to help improve the functionality and usability of cooking materials by considering user experience, information design, and information layering.

    Mapping and Semantic Perception for Service Robotics

    In order to perform a task, robots need to be able to locate themselves in the environment. If a robot does not know where it is, it cannot move, reach its goal, and complete the task. Simultaneous Localization and Mapping, known as SLAM, is a problem extensively studied in the literature for enabling robots to locate themselves in unknown environments. The goal of this thesis is to develop and describe techniques that allow a service robot to understand the environment by incorporating semantic information. This information also improves the localization and navigation of robotic platforms. In addition, we demonstrate how a robot with limited capabilities can reliably and efficiently build the semantic maps needed to perform its quotidian tasks. The mapping system presented has the following features. On the map-building side, we propose the externalization of expensive computations to a cloud server. Additionally, we propose methods to register relevant semantic information with respect to the estimated geometric maps. Regarding the reuse of the maps built, we propose a method that combines map building with robot navigation to better explore an environment and obtain a semantic map with the objects relevant to a given mission.
    Firstly, we develop a semantic visual SLAM algorithm that merges the meaningless estimated map points with known objects. We use a monocular EKF (Extended Kalman Filter) SLAM system that has mainly been focused on producing geometric maps composed simply of points or edges, but without any associated meaning or semantic content. The non-annotated map is built using only the information extracted from a monocular image sequence. The semantic or annotated parts of the map, the objects, are estimated using the information in the image sequence and precomputed object models. As a second step, we improve the EKF SLAM presented previously by designing and implementing a visual SLAM system based on a distributed framework: the expensive map optimization and storage are allocated as a service in the cloud, while a light camera-tracking client runs on a local computer on board the robot. The robot's onboard computers are freed from most of the computation, the only extra requirement being an internet connection.
    The next step is to exploit the semantic information we are able to generate to see how to improve the navigation of a robot. The contribution of this thesis focuses on 3D sensing, which we use to design and implement a semantic mapping system. We then design and implement a visual SLAM system able to perform robustly in populated environments, because service robots work in spaces shared with people. The system is able to mask the image regions occupied by people out of the rigid SLAM pipeline, which boosts the robustness, relocation, accuracy and reusability of the geometric map. In addition, it estimates the full trajectory of each detected person with respect to the global map of the scene, irrespective of the location of the moving camera at the point when the person was imaged.
    Finally, we focus our research on rescue and security applications. The deployment of a multi-robot team in confined environments poses multiple challenges involving task planning, motion planning, localization and mapping, safe navigation, coordination, and communication among all the robots. The proposed architecture integrates, jointly with all the above-mentioned functionalities, several novel features to achieve real exploration: localization based on semantic-topological features, deployment planning in terms of the semantic features learned and recognized, and map building.
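    The people-masking step can be illustrated with a short sketch. This is not the thesis's implementation; it is a minimal illustration assuming an OpenCV-style pipeline in which some person detector has already produced bounding boxes (the detector itself, and the box format, are stand-ins):

        import numpy as np
        import cv2

        def person_mask(frame, person_boxes):
            """Binary mask that is 0 inside detected person regions and 255
            elsewhere. person_boxes are (x, y, w, h) boxes from any person
            detector; the detector is assumed, not part of this sketch."""
            mask = np.full(frame.shape[:2], 255, dtype=np.uint8)
            for (x, y, w, h) in person_boxes:
                mask[y:y + h, x:x + w] = 0
            return mask

        def static_features(frame, person_boxes):
            """Extract ORB keypoints only outside person regions, so that
            dynamic content never enters the rigid SLAM pipeline."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            mask = person_mask(frame, person_boxes)
            orb = cv2.ORB_create(nfeatures=1000)
            keypoints, descriptors = orb.detectAndCompute(gray, mask)
            return keypoints, descriptors, mask

    Only the surviving keypoints would be passed to tracking and mapping; the masked regions could instead feed a separate per-person trajectory estimate expressed in the global map, as the abstract describes.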

    Evaluation of interactive installations for public spaces. Recommendations and case study.

    This master’s thesis describes the evaluation test plan of an interactive installation designed for a public space, and discusses the issues that arose during its implementation. Evaluation is a critical aspect of the iterative development process of any application or system: it allows creating better products, with fewer errors, that better meet the users’ requirements. It is even more important when the system is intended for a public space, because of the problems associated with this kind of setting. The user and design requirements are more difficult to capture and define, and at the same time the complexity of the evaluation is considerably increased. The evaluation experience described and the suggested recommendations may be of interest for any future attempt to develop and evaluate technology for a public space, as is the case of the Milk Market project (the Recipe Station).

    Food - Media - Senses: Interdisciplinary Approaches

    Food is more than just nutrition. Its preparation, presentation and consumption are a multifold communicative practice which includes the meal's design and its whole field of experience. How is food represented in cookbooks, product packaging or in paintings? How is dining semantically charged? How is the sensuality of eating treated in different cultural contexts? In order to acknowledge the material and media-related aspects of eating as a cultural praxis, experts from media studies, art history, literary studies, philosophy, experimental psychology, anthropology, food studies, cultural studies and design studies share their specific approaches.

    Seeing & Saying: Visual imaginings for disease causing genetic mutations

    Using practice-based research methodologies, this thesis, Seeing & Saying: Visual imaginings for disease causing genetic mutations, explores the visual and linguistic narratives that emerge from the explanation of complex genetic diagnoses. The research, funded by the Arts & Humanities Research Council (AHRC), is being carried out in collaboration with the European Network of Excellence for rare inherited neuromuscular diseases (TREAT-NMD), coordinated by the Institute of Genetic Medicine at Newcastle University. TREAT-NMD is an international initiative funded by the European Commission linking leading clinicians, scientists, industrial partners and patient organisations in eleven countries. Located in this complex field of study, between the disciplines of art and science, this research project explores the contextual framework of the social and cultural histories that influence and give agency to the visual and text-based metaphors used to depict and diagnose the specific genetic disease of Duchenne muscular dystrophy (DMD). The use of linguistic metaphors and visual imagery is commonplace when interpreting the how, what, why and where of DNA, and it is these types of metaphorical communications that form the basis of this investigation. This thesis interrogates and extends research methods and processes that develop from studio practice, scientific laboratories and text-based analysis, thus creating a synergy between the scientific laboratory and the artist’s studio. The written thesis and the artworks produced are therefore both the narrative and the output of this collaborative relationship, representing a synthesis of the methodologies of art and science. By examining the communication between the network stakeholders of TREAT-NMD and studying how linguistic, visual and artefactual metaphors impact the construction of technical explanations within this network, this thesis proposes that we can come closer to answering how we see and how we say genetic disease.

    Learning Calibratable Policies using Programmatic Style-Consistency

    We study the important and challenging problem of controllable generation of long-term sequential behaviors. Solutions to this problem would impact many applications, such as calibrating behaviors of AI agents in games or predicting player trajectories in sports. In contrast to the well-studied areas of controllable generation of images, text, and speech, there are significant challenges that are unique to or exacerbated by generating long-term behaviors: how should we specify the factors of variation to control, and how can we ensure that the generated temporal behavior faithfully demonstrates diverse styles? In this paper, we leverage large amounts of raw behavioral data to learn policies that can be calibrated to generate a diverse range of behavior styles (e.g., aggressive versus passive play in sports). Inspired by recent work on leveraging programmatic labeling functions, we present a novel framework that combines imitation learning with data programming to learn style-calibratable policies. Our primary technical contribution is a formal notion of style-consistency as a learning objective, and its integration with conventional imitation learning approaches. We evaluate our framework using demonstrations from professional basketball players and agents in the MuJoCo physics environment, and show that our learned policies can be accurately calibrated to generate interesting behavior styles in both domains.
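    The combination of imitation learning and style-consistency described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's exact formulation: the labeling function, the network, and the trajectory roll-out are placeholders, and the real framework makes the consistency term differentiable (for example via a learned label approximator) rather than using the hard 0/1 check shown here.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        def speed_label(trajectory):
            """Programmatic labeling function (placeholder): call a trajectory
            'aggressive' (1) or 'passive' (0) by its mean step length."""
            steps = trajectory[1:] - trajectory[:-1]
            return int(steps.norm(dim=-1).mean() > 0.5)

        class StylePolicy(nn.Module):
            """Policy conditioned on a discrete style label (toy architecture)."""
            def __init__(self, state_dim, action_dim, n_styles=2):
                super().__init__()
                self.n_styles = n_styles
                self.net = nn.Sequential(
                    nn.Linear(state_dim + n_styles, 64), nn.ReLU(),
                    nn.Linear(64, action_dim))

            def forward(self, states, z):
                # states: (T, state_dim); z: one style label for the whole trajectory
                z_onehot = F.one_hot(torch.tensor(z), self.n_styles).float()
                z_onehot = z_onehot.expand(states.shape[0], -1)
                return self.net(torch.cat([states, z_onehot], dim=-1))

        def training_loss(policy, states, expert_actions, z):
            """Behavioral-cloning loss plus a style-consistency penalty:
            behavior generated under label z should itself be labeled z."""
            pred = policy(states, z)
            imitation = ((pred - expert_actions) ** 2).mean()
            rollout = torch.cumsum(pred, dim=0)   # toy trajectory built from actions
            # Hard 0/1 disagreement for illustration only; the actual framework
            # replaces this with a differentiable approximation.
            consistency = float(speed_label(rollout) != z)
            return imitation + consistency

    At evaluation time, requesting z = 1 versus z = 0 from the same trained policy should then produce visibly different (for example, aggressive versus passive) behavior.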

    Can Computers Create Art?

    This essay discusses whether computers, using Artificial Intelligence (AI), could create art. First, the history of technologies that automated aspects of art is surveyed, including photography and animation. In each case, there were initial fears and denial of the technology, followed by a blossoming of new creative and professional opportunities for artists. The current hype and reality of Artificial Intelligence (AI) tools for art making are then discussed, together with predictions about how AI tools will be used. The essay then speculates about whether AI systems could ever be credited with authorship of artwork. It is theorized that art is something created by social agents, and so computers cannot be credited with authorship of art in our current understanding. A few ways that this could change are also hypothesized.
    Comment: to appear in Arts, special issue on Machine as Artist (21st Century).

    e-Learning cookbook. TPACK in professional development in higher education

    Information and communication technology (ICT) makes it possible to bring information to everyone who wants to learn. Rapid advances in technology offer strong support for using ICT in teaching. Online education can intensify and improve students' learning process, and enables us to reach more students than by traditional means. The number of courses and modules being offered online is increasing rapidly worldwide. Although online education can now reach more people and can create new and challenging learning experiences, in the average university course the digital dimension too often remains limited to simply publishing the existing face-to-face course content online. It is therefore crucial that lecturers have, and can obtain, knowledge about how to design technology-enhanced teaching. Technical advances can be expected to continue in the future, and those who wish to implement educational technology in their own teaching practice must expect to become lifelong learners. This fits the culture of academic teachers perfectly: they are already lifelong learners and creators of new knowledge within their discipline. This book is based on the notion that a lecturer who uses ICT in teaching must learn how to apply his or her knowledge about content, pedagogy and technology in an integrated manner. The idea of integrating these three types of knowledge is based on the TPACK (Technological Pedagogical Content Knowledge) model. The material for this book was developed in a Dutch higher education innovation project known as MARCHET (Make Relevant Choices in Educational Technology, 2009-2011).