
    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill-set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover individual Tactile Internet-related topics, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

    EMG-based eye gestures recognition for hands free interfacing

    This study investigates the use of an Electromyography (EMG) based device to recognize five eye gestures and classify them for hands-free interaction with different applications. The eye gestures proposed in this work include Long Blinks, Rapid Blinks, Wink Right, Wink Left, and finally Squints (frowns). The device used in our study is the MUSE headband, originally a Brain Computer Interface (BCI) that measures Electroencephalography (EEG) signals; here it records EMG signals from behind the earlobes via two smart rubber sensors and at the forehead via two other electrodes. The signals are treated as EMG because they originate from physical muscle activity, which other EEG studies regard as artifacts. The experiment was conducted on 15 participants (12 males and 3 females) selected randomly, as no specific groups were targeted, and each session was videotaped for re-evaluation. The experiment starts with a calibration phase that records each gesture three times per participant through a purpose-built voice narration program, unifying the test conditions and time intervals across all subjects. A dynamic sliding window with segmented packets is designed to process and analyze the data faster, as well as to provide more flexibility to classify the gestures regardless of how their duration varies from one user to another. Additionally, a thresholding algorithm is used to extract the features of all the gestures. The Rapid Blinks and the Squints achieved high F1 scores of 80.77% and 85.71% with the trained thresholds, and 87.18% and 82.12% with the default (manually adjusted) thresholds. The accuracies of the Long Blinks, Rapid Blinks, and Wink Left were relatively higher with the manually adjusted thresholds, while the Squints and the Wink Right performed better with the trained thresholds. Further improvements were proposed, and some were tested, especially after reviewing the participants' actions in the video recordings, to enhance the classifier. Most of the common irregularities encountered are discussed in this study to pave the way for similar future studies to tackle them before conducting their experiments. Several applications require minimal physical or hand interaction; this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between RGB, thermal, and depth cameras integrated in an Augmented Reality device designed for firefighters to increase their visual capabilities in the field.
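
    The sliding-window thresholding approach described above can be sketched in a few lines. The window length, threshold value, and burst-counting rules below are illustrative assumptions rather than the study's actual parameters, and the single-channel demo signal is synthetic.

    import numpy as np

    def classify_window(window, threshold=150.0):
        """Label one window of a rectified EMG channel with simple threshold rules."""
        above = np.abs(window) > threshold                              # supra-threshold samples
        bursts = np.flatnonzero(np.diff(above.astype(int)) == 1).size   # rising edges
        duty = above.mean()                                             # fraction above threshold
        if duty > 0.6:
            return "long_blink"        # sustained activation
        if bursts >= 3:
            return "rapid_blinks"      # several short activations
        if bursts >= 1:
            return "single_event"      # e.g. a wink; left/right needs channel comparison
        return "rest"

    def sliding_windows(signal, fs=256, win_s=1.0, step_s=0.25):
        """Yield overlapping windows so gestures of varying duration are still caught."""
        win, step = int(fs * win_s), int(fs * step_s)
        for start in range(0, len(signal) - win + 1, step):
            yield start / fs, signal[start:start + win]

    if __name__ == "__main__":
        fs = 256
        demo = 20 * np.random.randn(5 * fs)      # synthetic stand-in for one EMG channel
        demo[fs:2 * fs] += 400                   # one second of sustained activation
        for t0, w in sliding_windows(demo, fs=fs):
            label = classify_window(w)
            if label != "rest":
                print(f"{t0:.2f}s: {label}")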

    UAV Cloud Platform for Precision Farming

    A new application for Unmanned Aerial Vehicles comes to light daily to solve some of modern society's problems. One such opportunity is the optimization of agricultural processes. As a result, a new and constantly progressing field called Precision Farming arose in the last years of the twentieth century. Nowadays, a growing part of this field concerns Unmanned Aerial Vehicle applications. Most traditional methods employed by farmers are ineffective and do not help to solve these issues. However, two fields have the potential to enhance many agricultural methods: Cyber-Physical Systems and Cloud Computing. Given their capabilities, such as aerial surveillance and mapping, Cyber-Physical Systems like Unmanned Aerial Vehicles are being used to monitor vast crops and to gather insightful data that would take much longer to collect by hand. However, these systems typically lack computing power and storage capacity, meaning that much of the gathered data cannot be stored and further analyzed locally. That is the obstacle Cloud Computing can solve: by sending the collected data to a cloud, it is possible to leverage the enormous computing power and storage capabilities of remote data centers to store and analyze these datasets. This dissertation proposes an architecture for this use case, leveraging the advantages of Cloud Computing to overcome the limitations of Unmanned Aerial Vehicles. Moreover, this dissertation is a collaboration with an ongoing Horizon 2020 European project that deals with precision farming and agriculture enhanced by Cyber-Physical Systems.
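
    As a rough illustration of the offloading idea, the sketch below batches onboard readings and posts them to a cloud endpoint for remote storage and analysis. The endpoint URL, payload fields, and lack of authentication are hypothetical placeholders, not part of the dissertation's actual architecture.

    import json
    import time
    import urllib.request

    CLOUD_ENDPOINT = "https://example.org/api/v1/field-observations"   # hypothetical URL

    def collect_observation(sensor_read):
        """Package one batch of onboard readings; the UAV only buffers, the cloud analyzes."""
        return {
            "timestamp": time.time(),
            "position": sensor_read.get("gps"),           # e.g. (lat, lon, alt)
            "multispectral": sensor_read.get("bands"),    # raw band values, kept small
        }

    def upload(observations):
        """POST a batch of observations to the remote data center."""
        body = json.dumps({"observations": observations}).encode("utf-8")
        req = urllib.request.Request(
            CLOUD_ENDPOINT, data=body,
            headers={"Content-Type": "application/json"}, method="POST")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200

    if __name__ == "__main__":
        batch = [collect_observation({"gps": (38.7, -9.1, 120.0), "bands": [0.42, 0.51]})]
        print("uploaded" if upload(batch) else "upload failed")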

    Enhancing the test and evaluation process: implementing agile development, test automation, and model-based systems engineering concepts

    With the growing complexity of modern systems, traditional testing methods are falling short. Test documentation suites used to verify the software for these types of large, complex systems can become bloated and unclear, leading to extremely long execution times and confusing, unmanageable test procedures. Additionally, the complexity of these systems can prevent the rapid understanding of complicated system concepts and behaviors, which is a necessary part of keeping up with the demands of modern testing efforts. Opportunities for optimization and innovation exist within the Test and Evaluation (T&E) domain, evidenced by the emergence of automated testing frameworks and iterative testing methodologies. Further opportunities lie in the directed expansion and application of related concepts such as Model-Based Systems Engineering (MBSE). This dissertation documents the development and implementation of three methods of enhancing the T&E field when applied to a real-world project. First, the development methodology of the system was transitioned from Waterfall to Agile, providing a more responsive approach when creating new features. Second, the Test Automation Framework (TAF) was developed, enabling the automatic execution of test procedures. Third, a method of test documentation using the Systems Modeling Language (SysML) was created, adopting concepts from MBSE to standardize the planning and analysis of test procedures. This dissertation provides the results of applying the three concepts to the development process of an airborne Electronic Warfare Management System (EWMS), which interfaces with onboard and offboard aircraft systems to receive and process the threat environment, providing the pilot or crew with a response solution for the protection of the aircraft. This system is representative of a traditional, long-term aerospace project that has been continually upgraded over its lifetime. Over a two-year period, this new process produced a number of qualitative and quantitative results, including improving the quality and organization of the test documentation suite, reducing the minimum time needed to execute the test procedures, enabling the earlier identification of defects, and increasing the overall quality of the system under test. The application of these concepts generated many lessons learned, which are also provided. Transitioning a project's development methodology, modernizing the test approach, and introducing a new system of test documentation may provide significant benefits to the development of a system, but these types of process changes must be weighed against the needs of the project. This dissertation provides details of the effort to improve the effectiveness of the T&E process on an example project, as a framework for possible implementation on similar systems.
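
    To make the test-automation idea concrete, the sketch below shows what a small automated test procedure might look like when the system under test is driven through a scripted interface. The EWMS interface, message names, and expected values are hypothetical stand-ins, not the project's actual TAF or API.

    import unittest

    class FakeEwmsInterface:
        """Stand-in for a connection to the system under test."""
        def send_threat(self, emitter_id, bearing_deg):
            # A real harness would inject the threat over the system's data bus.
            self._last = (emitter_id, bearing_deg)
        def read_response(self):
            # A real harness would read the computed response solution back.
            emitter_id, bearing = self._last
            return {"emitter": emitter_id, "countermeasure": "chaff", "bearing": bearing}

    class ThreatResponseTest(unittest.TestCase):
        def setUp(self):
            self.ewms = FakeEwmsInterface()
        def test_known_emitter_gets_a_response(self):
            self.ewms.send_threat(emitter_id=42, bearing_deg=270)
            solution = self.ewms.read_response()
            self.assertEqual(solution["emitter"], 42)
            self.assertIn("countermeasure", solution)

    if __name__ == "__main__":
        unittest.main()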

    Pre-validation of SoC via hardware and software co-simulation

    System-on-chips (SoCs) are complex entities consisting of multiple hardware and software components. This complexity presents challenges in their design, verification, and validation. Traditional verification processes often test hardware models in isolation until late in the development cycle. As a result, cooperation between hardware and software development is also limited, slowing down bug detection and fixing. This thesis aims to develop, implement, and evaluate a co-simulation-based pre-validation methodology to address these challenges. The approach allows for the early integration of hardware and software, serving as a natural intermediate step between traditional hardware model verification and full system validation. The co-simulation employs a QEMU CPU emulator linked to a register-transfer level (RTL) hardware model. This setup enables the execution of software components, such as device drivers, on the target instruction set architecture (ISA) alongside cycle-accurate RTL hardware models. The thesis focuses on two primary applications of co-simulation. Firstly, it allows software unit tests to be run in conjunction with hardware models, facilitating early communication between device drivers, low-level software, and hardware components. Secondly, it offers an environment for using software in functional hardware verification. A significant advantage of this approach is the early detection of integration errors. Software unit tests can be executed at the IP-block level with actual hardware models, a task previously only possible with costly system-level prototypes. This enables earlier collaboration between software and hardware development teams and smooths the transition to traditional system-level validation techniques.
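
    The bridging idea behind this kind of co-simulation can be sketched as follows: register reads and writes issued by software running on the CPU emulator are serialized and forwarded to a separate RTL simulator process, which answers after driving the cycle-accurate model. The socket protocol and message layout below are hypothetical and do not reflect QEMU's or any particular simulator's real interface.

    import json
    import socket

    class RtlBridge:
        """Client side of a hypothetical emulator-to-RTL bridge."""

        def __init__(self, host="127.0.0.1", port=5555):
            self.sock = socket.create_connection((host, port))
            self.reader = self.sock.makefile("r")

        def _transact(self, msg):
            # One newline-delimited JSON request, one JSON reply per transaction.
            self.sock.sendall((json.dumps(msg) + "\n").encode())
            return json.loads(self.reader.readline())

        def write_reg(self, addr, value):
            # Forward the store; the RTL side applies it on the next clock edge.
            self._transact({"op": "write", "addr": addr, "data": value})

        def read_reg(self, addr):
            # Forward the load and block until the RTL model returns a value.
            return self._transact({"op": "read", "addr": addr})["data"]

    if __name__ == "__main__":
        bridge = RtlBridge()                 # assumes an RTL-side server is listening
        bridge.write_reg(0x0000, 0x1)        # e.g. a driver enabling the IP block
        print(hex(bridge.read_reg(0x0004)))  # e.g. polling a status register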

    A framework for flexible integration in robotics and its applications for calibration and error compensation

    Robotics has been considered a viable automation solution for the aerospace industry to address manufacturing cost. Many existing robot systems augmented with guidance from a large-volume metrology system have proved to meet the high dimensional-accuracy requirements of aero-structure assembly. However, they have mainly been deployed as costly and dedicated systems, which might not be ideal for aerospace manufacturing with its low production rates and long cycle times. The work described in this thesis provides technical solutions to improve the flexibility and cost-efficiency of such metrology-integrated robot systems. To address flexibility, a software framework that supports reconfigurable system integration is developed. The framework provides a design methodology to compose distributed software components which can be integrated dynamically at runtime. This gives the automation devices (robots, metrology, actuators, etc.) controlled by these software components the potential to be assembled on demand for various assembly applications. To reduce the cost of deployment, this thesis proposes a two-stage error compensation scheme for industrial robots that requires only intermittent metrology input, thus allowing one expensive metrology system to be shared by a number of robots. Robot calibration is employed in the first stage to remove the majority of robot inaccuracy, and the metrology system then corrects the residual errors. In this work, a new calibration model for serial robots with a parallelogram linkage is developed that takes into account both geometric errors and joint deflections induced by link masses and the weight of the end-effector. Experiments are conducted to evaluate the two pieces of work presented above. The proposed framework is used to create a distributed control system that implements calibration and error compensation for a large industrial robot with a parallelogram linkage. The control system is formed by hot-plugging the control applications of the robot and the metrology system together. Experimental results show that the developed error model improved the positional accuracy of the loaded robot from several millimetres to less than one millimetre and halved the time previously required to correct the errors using the metrology system alone. The experiments also demonstrate that one metrology system can be shared among several robots.
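
    A minimal sketch of the two-stage idea follows: a calibrated model first removes most of the predicted error, and an intermittent metrology measurement then trims the residual. The error-model terms, gains, and numbers are illustrative assumptions, not the calibration model developed in the thesis.

    import numpy as np

    def model_correction(target_xyz, payload_kg, params):
        """Stage 1: offset the command by the error predicted from calibration."""
        geometric = params["geometric_offset_mm"]                      # constant pose bias
        deflection = params["compliance_mm_per_kg"] * payload_kg * np.array([0.0, 0.0, -1.0])
        return np.asarray(target_xyz) - geometric - deflection

    def metrology_correction(command_xyz, measured_xyz, target_xyz, gain=1.0):
        """Stage 2: trim the residual seen by an intermittent tracker measurement."""
        residual = np.asarray(measured_xyz) - np.asarray(target_xyz)
        return np.asarray(command_xyz) - gain * residual

    if __name__ == "__main__":
        params = {"geometric_offset_mm": np.array([1.2, -0.8, 0.5]),   # from calibration
                  "compliance_mm_per_kg": 0.05}
        target = np.array([1500.0, 200.0, 800.0])                      # desired TCP position (mm)
        cmd = model_correction(target, payload_kg=30.0, params=params)
        measured = target + np.array([0.4, -0.2, 0.3])                 # stand-in tracker reading
        cmd = metrology_correction(cmd, measured, target)
        print(np.round(cmd, 3))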

    Framework for controlling automation devices based on gestures

    Nowadays, people's routines are increasingly full: whether at home or in the office, we constantly have activities, appointments, and meetings. With the growth of technology, there are several ways it can help and even replace people in certain tasks. Nevertheless, many mundane tasks cannot yet be done by a computer or a robot alone, and machines and people must work together to achieve the objective. In particular, devices such as sensors and actuators have become very common in our daily life, and they are a great help in making our homes and workplaces more comfortable, since they allow us to control the lighting, air conditioning, multimedia, and so on. With this in mind, it is important to develop adaptive interfaces that can instantly adjust to the needs and conditions of each user, making people's activities more efficient. This dissertation presents a framework that integrates human actions, namely gestures and/or poses, to activate and control devices that support the standard KNX protocol. A pose detection algorithm is used to detect different gestures/poses, and each gesture or group of gestures integrated with KNX allows easy and universal communication with various types of existing automation devices. The algorithm provides standard gestures/poses, which are detected by comparing the coordinates (key points) obtained from the pose estimation; for example, comparing the wrist and shoulder key points detects a gesture in which the user raises an arm vertically. In addition to the standard gestures/poses, training can be carried out so that the algorithm learns new gestures, thereby adapting to each type of user. In this way, once the user's gesture/pose is detected, an interaction is made with different types of home automation equipment via the KNX protocol; each gesture triggers an action on the equipment, such as activating it, deactivating it, or changing its intensity or operating mode. The results show that the framework is capable of effortlessly controlling different devices with different functionalities.
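
    The key-point comparison described above reduces to a few lines of code, as in the sketch below: an arm is taken as raised when the wrist key point sits clearly above the shoulder key point, and the detection then triggers a KNX write. The key-point format, margin, and the KNX send function are hypothetical stand-ins; a real deployment would use an actual pose estimator and KNX stack.

    def arm_raised(keypoints, side="right", margin=0.05):
        """keypoints: dict of (x, y) in normalized image coordinates, with y growing downward."""
        wrist = keypoints[f"{side}_wrist"]
        shoulder = keypoints[f"{side}_shoulder"]
        return wrist[1] < shoulder[1] - margin        # wrist clearly above the shoulder

    def send_knx_command(group_address, value):
        """Hypothetical stand-in for writing a value to a KNX group address."""
        print(f"KNX write {value} -> {group_address}")

    if __name__ == "__main__":
        frame_keypoints = {"right_wrist": (0.62, 0.30), "right_shoulder": (0.58, 0.45)}
        if arm_raised(frame_keypoints, side="right"):
            send_knx_command("1/0/1", 1)              # e.g. switch a light group on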

    Telescience Testbed Pilot Program

    The Telescience Testbed Pilot Program is developing initial recommendations for requirements and design approaches for the information systems of the Space Station era. During this quarter, drafting of the final reports of the various participants was initiated. Several drafts are included in this report as university technical reports.