
    From a Competition for Self-Driving Miniature Cars to a Standardized Experimental Platform: Concept, Models, Architecture, and Evaluation

    Context: Competitions for self-driving cars have facilitated development and research in the domain of autonomous vehicles toward potential solutions for future mobility. Objective: Miniature vehicles can bridge the gap between simulation-based evaluations of algorithms relying on simplified models and time-consuming vehicle tests on real-scale proving grounds. Method: This article combines findings from a systematic literature review, an in-depth analysis of results and technical concepts from contestants in a competition for self-driving miniature cars, and experiences from participating in the 2013 competition for self-driving cars. Results: A simulation-based development platform for real-scale vehicles has been adapted to support the development of a self-driving miniature car. Furthermore, a standardized platform was designed and realized to enable research and experiments in the context of future mobility solutions. Conclusion: A clear separation between algorithm conceptualization and validation in a model-based simulation environment enabled efficient and risk-free experiments and validation. The design of a reusable, low-cost, and energy-efficient hardware architecture utilizing a standardized software/hardware interface enables experiments that would otherwise require resources such as a large real-scale test track. Comment: 17 pages, 19 figures, 2 tables.

    Mixed reality simulators

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science, Johannesburg, May 2017. Virtual Reality (VR) is widely used in training simulators of dangerous or expensive vehicles such as aircraft or heavy mining machinery. These vehicles often have very complicated controls that users need to master before attempting to operate a real-world version of the machine. VR allows users to safely train in a simulated environment without the risk of injury or of damaging expensive equipment in the field. VR, however, visually cuts off the user from the real environment, which may contain obstructions. Users are unable to safely move or gesture while wearing a VR headset. Additionally, users are unable to use standard input devices such as mice and keyboards. By mixing in a live view of the real world, the user can still see and interact with the physical environment. The contribution of this research is presenting ways of using Mixed Reality to enhance the user experience of traditional VR-based simulators. Mixed Reality improves on traditional VR simulators by allowing the user the safety and freedom of not being cut off from the real world, allowing interaction with and the tactile feedback of complex physical controls while still permitting simultaneous use of virtual controls, and by adding a real-world reference point to aid in diminishing simulator sickness caused by visual motion.

    Development of an intelligent personal assistant to empower operators in industry 4.0 environments

    Double-degree master's programme with UTFPR - Universidade Tecnológica Federal do Paraná. Industry 4.0 brings a high level of automation to industrial environments and changes the way companies operate, both in operational aspects and in human relations. It is important to define the role of the millions of operators affected by this new socioeconomic paradigm, integrating new technologies and empowering the workforce to take advantage of aspects such as the flexibility and versatility that human operators bring to production lines. To advance this objective, this work proposes the development of an intelligent personal assistant, using concepts of human-in-the-loop cyber-physical systems and context awareness, to assist operators during manufacturing tasks, providing the information necessary to carry out operations and verifying their accuracy in order to inform operators about possible errors. The implementation is divided into two parts. The first part focuses on an application that supports real-time operations found in industry, such as pick-and-place in warehouses and the assembly of complex equipment on an assembly line. An instruction is given through an interface and, using artificial vision techniques with images from an Intel RealSense camera, the system verifies whether the operation is being performed correctly (see the sketch below). This information is gathered by a context-awareness algorithm, fulfilling the requirement of an intelligent personal assistant and providing feedback to the operator so that tasks are performed efficiently and with a lower incidence of errors. The second part covers the training of these operators in an immersive environment using virtual reality equipment such as the Oculus Go. The immersive scenario, developed in Unity3D, uses the real workbench as its model, making it possible to carry out this training in any environment and removing the need to use real equipment, which could be damaged by the user's inexperience. The results present the validation tests performed for these two parts, commenting on the strengths, challenges, and failures found in the system in general. These results are also compared qualitatively with traditional approaches to the proposed case studies in order to demonstrate that the objectives of this work have been fulfilled. Finally, a usability test is presented, which provides data on weak points in the user experience for possible improvements in future work.
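    As a rough illustration of the first part, the sketch below shows how color frames might be pulled from an Intel RealSense camera with the pyrealsense2 library and handed to a placeholder verification routine. The verify_step function, the template image, and the matching threshold are hypothetical stand-ins for the vision and context-awareness logic described in the work, not the actual implementation.

```python
# Minimal sketch: capture RealSense color frames and run a placeholder
# verification step. verify_step() is a hypothetical stand-in for the
# assistant's actual vision / context-awareness pipeline.
import numpy as np
import cv2                      # OpenCV, used here only for an illustrative template match
import pyrealsense2 as rs

def verify_step(frame: np.ndarray, template: np.ndarray, threshold: float = 0.8) -> bool:
    """Hypothetical check: does the expected part appear in the current view?"""
    score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED).max()
    return score >= threshold

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

template = cv2.imread("expected_part.png")   # assumed reference image for the current instruction
try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())
    if verify_step(color, template):
        print("Operation step looks correct")
    else:
        print("Possible error: notify the operator")
finally:
    pipeline.stop()
```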

    Communicating Wave Energy: An Active Learning Experience For Students

    We have conducted an education project to communicate the wave energy concept to high school students. A virtual reality system that combines hardware and software was developed in this project to simulate the buoy-wave interaction. This first-of-its-kind wave energy unit is portable and physics-based, allowing students to conduct a number of hands-on activities. The system is the core component of an educational experience that integrates demonstration and hands-on learning, with the aim of introducing the wave energy conversion process to students in an interactive environment. Presentations have been made at two different high schools with diverse student populations, and the students involved rated their learning experience very positively. As revealed by their feedback, the virtual environment and its combination with the hardware are the most important factors that help students appreciate the wave energy conversion process.
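    The abstract does not give the underlying model, but a common starting point for a physics-based buoy-wave simulation of this kind is a single-degree-of-freedom heave equation (a mass-spring-damper driven by a sinusoidal wave force). The sketch below is a minimal illustration under that assumption; all coefficients and the wave forcing are invented for the example and are not taken from the project.

```python
# Minimal sketch of buoy heave dynamics: m*z'' + c*z' + k*z = F0*cos(w*t),
# integrated with a simple semi-implicit Euler step. All parameter values
# are illustrative assumptions, not the project's actual model.
import math

m, c, k = 200.0, 50.0, 1.0e4      # buoy mass [kg], damping [N*s/m], hydrostatic stiffness [N/m]
F0, w = 500.0, 1.2                # wave force amplitude [N] and frequency [rad/s]
dt, T = 0.01, 20.0                # time step and simulated duration [s]

z, v, t = 0.0, 0.0, 0.0           # heave displacement [m], velocity [m/s], time [s]
while t < T:
    a = (F0 * math.cos(w * t) - c * v - k * z) / m   # Newton's second law in heave
    v += a * dt
    z += v * dt
    t += dt

print(f"Heave displacement after {T:.0f} s: {z:.3f} m")
```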

    A novel haptic model and environment for maxillofacial surgical operation planning and manipulation

    This paper presents a practical method and a new haptic model to support manipulations of bones and their segments during the planning of a surgical operation in a virtual environment using a haptic interface. To perform effective dental surgery it is important to have all the operation-related information about the patient available beforehand in order to plan the operation and avoid complications. A haptic interface with an accurate virtual patient model to support the planning of bone cuts is therefore critical, useful and necessary for surgeons. The proposed system uses DICOM images taken from a digital tomography scanner and creates a mesh model of the filtered skull, from which the jaw bone can be isolated for further use. A novel solution for cutting the bones has been developed: the haptic tool is used to determine and define the cutting plane in the bone, and this approach creates three new meshes from the original model. In this way the computational cost is kept low and real-time feedback can be achieved during all bone manipulations. During the mesh-cutting movement, a friction profile predefined in the haptic system simulates the force-feedback feel of different bone densities.
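    The abstract does not spell out how the cut produces the new meshes. One simple way to illustrate the underlying geometry is to classify mesh vertices by which side of the haptically defined cutting plane they fall on, using the signed distance to the plane. The sketch below is only that geometric illustration, with an invented vertex array and plane; it does not reproduce the paper's cutting algorithm.

```python
# Minimal sketch: split mesh vertices by a cutting plane defined by a point
# and a normal (as a haptic tool might define it). Vertices and plane values
# are illustrative; the paper's actual cutting method is not reproduced here.
import numpy as np

def split_by_plane(vertices: np.ndarray, plane_point: np.ndarray, plane_normal: np.ndarray):
    """Return index arrays of vertices on each side of the plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = (vertices - plane_point) @ n          # signed distance to the plane
    return np.where(signed_dist >= 0)[0], np.where(signed_dist < 0)[0]

# Toy "jaw bone" mesh: a handful of vertices around the origin
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.2, 0.1],
                     [-0.5, 0.8, 0.3],
                     [0.2, -0.7, -0.4]])
plane_point = np.array([0.0, 0.0, 0.0])
plane_normal = np.array([1.0, 0.0, 0.0])                # cut along the x = 0 plane

above, below = split_by_plane(vertices, plane_point, plane_normal)
print("vertices in segment A:", above, "segment B:", below)
```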

    Detecting Attacks in CyberManufacturing Systems: Additive Manufacturing Example

    A CyberManufacturing System is a vision for future manufacturing in which physical components are fully integrated with computational processes in a connected environment. Realizing this vision, however, requires that its security be adequately ensured. This paper presents a vision-based system that uses machine learning techniques to detect intentional attacks on additive manufacturing processes. Additive manufacturing systems have unique vulnerabilities to malicious attacks, which can result in defective infills without affecting the exterior of a part. In order to detect such infill defects, the research uses simulated 3D printing process images as well as actual 3D printing process images to compare the accuracy of machine learning algorithms in classifying, clustering and detecting anomalies for different types of infills. Three algorithms - (i) random forest, (ii) k-nearest neighbor, and (iii) anomaly detection - have been adopted in the research and shown to be effective in detecting such defects.
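    A rough sketch of how the three algorithm families named above could be applied to image-derived feature vectors is given below, using scikit-learn's RandomForestClassifier, KNeighborsClassifier, and IsolationForest. The feature extraction and the synthetic data are placeholders for illustration; the paper's actual features, datasets and configurations are not reproduced here.

```python
# Minimal sketch: compare a random forest, k-nearest neighbors, and an
# anomaly detector on image-derived feature vectors. The random features
# and labels below are placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # e.g. 64 features extracted per layer image
y = rng.integers(0, 2, size=200)        # 0 = normal infill, 1 = defective infill

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("k-nearest neighbor", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))

# Anomaly detection: fit only on normal samples, flag outliers as possible attacks
iso = IsolationForest(random_state=0).fit(X_tr[y_tr == 0])
flags = iso.predict(X_te)               # +1 = normal, -1 = anomalous
print("anomaly detector flagged", int((flags == -1).sum()), "of", len(X_te), "samples")
```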

    Human robot collaboration in the MTA SZTAKI learning factory facility at Gyor

    In recent years, interest has grown in environments where humans and robots collaborate, complementing the strengths and advantages of humans and machines. The design, construction and adjustment of such environments, as well as the training of operating personnel, requires a thorough understanding of the nature of human-robot collaboration, which previous automation expertise does not necessarily provide. The learning factory currently being constructed by MTA SZTAKI in Gyor aims to provide hands-on experience in the design and operation of facilities supporting human-robot collaboration, mainly in assembly tasks. This work-in-progress paper presents the design principles, functionalities and structure of the facility, and outlines deployment plans for education, training, research and development in the academic and industrial sectors.

    A knowledge based approach to integration of products, processes and reconfigurable automation resources

    The success of next-generation automotive companies will depend upon their ability to adapt to ever-changing market trends and thus become highly responsive. In the automotive sector, assembly line design and reconfiguration is an especially critical and extremely complex task. The current research addresses some aspects of this activity under the umbrella of a larger ongoing research project called the Business Driven Automation (BDA) project. The BDA project aims to carry out complete virtual 3D-modelling-based verification of the assembly line for new or revised products, in contrast to the prevalent practice of manually evaluating the effects of a product change on physical resources. [Continues.]