263 research outputs found

    The Impact of Artificial Intelligence and Deep Learning in Eye Diseases: A Review

    Artificial intelligence (AI) is a subset of computer science dealing with the development and training of algorithms that try to replicate human intelligence. We present a clinical overview of the basic principles of AI that are fundamental to appreciating its application to ophthalmology practice. We then review the most common eye diseases, focusing on some of the potential challenges and limitations emerging with the development and application of this new technology in ophthalmology

    A gesture-based robot program building software

    With the advent of intelligent systems, industrial workstations and working areas have undergone a revolution. The increased need for automation is satisfied using high-performance industrial robots in fully automated workstations. In the manufacturing industry, sophisticated tasks still require human intervention in completely manual workstations, even if at a slower production rate. To improve the efficiency of manual workstations, Collaborative Robots (Co-Bots) have been designed as part of the Industry 4.0 paradigm. These robots collaborate with humans in safe environments to support the workers in their tasks, thus achieving higher production rates compared to completely manual workstations. The key factor is that their adoption relieves humans from stressful and heavy operations, decreasing job-related health issues. The drawback of Co-Bots lies in their design: to work side-by-side with humans they must guarantee safety; thus, they operate under very strict limitations on their forces and velocities, which limits their efficiency, especially when performing non-trivial tasks. To overcome these limitations, our idea is to design Meta-Collaborative workstations (MCWs), where the robot can operate behind a safety cage, either physical or virtual, and the operator can interact with the robot, whether industrial or collaborative, through the same communication channel. Our proposed system allows robot programs purposely designed for MCWs to be built easily, based on (i) the recognition of hand gestures (using a vision-based communication channel) and (ii) ROS to carry out communication with the robot
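The core of such a system is a mapping from recognized gestures to robot program primitives. The following is a minimal sketch of that idea only; the gesture names and command vocabulary are hypothetical (the abstract does not list them), and the real system dispatches the resulting commands over ROS topics rather than returning them as a list.

```python
# Hypothetical gesture vocabulary -> robot command primitives.
# In the real system these commands would be published on ROS topics.
GESTURE_TO_COMMAND = {
    "open_palm": "STOP",
    "fist": "GRASP",
    "point_left": "MOVE_LEFT",
    "point_right": "MOVE_RIGHT",
}


def build_program(gesture_sequence):
    """Translate a sequence of recognized hand gestures into a robot
    program, silently skipping gestures outside the vocabulary."""
    return [GESTURE_TO_COMMAND[g] for g in gesture_sequence
            if g in GESTURE_TO_COMMAND]
```

The filtering step matters in practice: a vision-based recognizer will occasionally emit spurious or unknown labels, and the program builder must not translate those into robot motion.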

    A vision-based teleoperation system for robotic systems

    Although advances in robotic perception are increasing autonomous capabilities, human intelligence is still considered a necessity in unstructured or unpredictable environments. Hence, in line with the Industry 4.0 paradigm, humans and robots are encouraged to achieve mutual Human-Robot Interaction (HRI). HRI can be physical (pHRI) or not, depending on the assigned task. For example, when the robot is confined in a dangerous environment or must handle hazardous materials, pHRI is not recommended. In these cases, robot teleoperation may be necessary. A teleoperation system is concerned with the exploration and exploitation of spaces where the user's presence is not allowed; the operator therefore needs to move the robot remotely. Although plenty of human-machine interfaces for teleoperation have been developed around mechanical devices, vision-based interfaces do not require physical contact with external devices. This grants a more natural and intuitive interaction, which is reflected in task performance. Our proposed system is a novel robot teleoperation system that exploits RGB cameras, which are easy to use and commonly available on the market at a reduced price. A ROS-based framework has been developed to supply hand tracking and hand-gesture recognition features, exploiting the OpenPose software based on the Deep Learning framework Caffe. This, in combination with the ready availability of RGB cameras, makes the framework strongly open-source-oriented and highly replicable on all ROS-based platforms. It is worth noting that the system does not include Z-axis control in this first version. This is due to the high precision and sensitivity required to robustly control the third axis, a precision that 3D vision systems cannot provide unless very expensive devices are adopted. Our aim is to further develop the system to include third-axis control in a future release
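The restriction to two axes makes the retargeting step simple: the tracked hand's normalized image coordinates can be mapped linearly onto a planar region of the robot workspace. The sketch below shows only that mapping, under assumed workspace bounds (the abstract does not specify them); the actual framework obtains the hand coordinates from OpenPose and sends the target through ROS.

```python
def hand_to_robot_xy(u, v, workspace=((-0.3, 0.3), (0.1, 0.5))):
    """Map a normalized image coordinate (u, v), each in [0, 1], of the
    tracked hand onto a robot Cartesian (x, y) target in meters.

    `workspace` gives ((x_min, x_max), (y_min, y_max)) of the reachable
    plane; the values here are illustrative. Z is deliberately omitted,
    matching the first release of the system.
    """
    (xmin, xmax), (ymin, ymax) = workspace
    x = xmin + u * (xmax - xmin)
    # Image v grows downward, so invert it for the robot's y axis.
    y = ymin + (1.0 - v) * (ymax - ymin)
    return x, y
```

A hand centered in the image (u = v = 0.5) then lands at the center of the assumed workspace rectangle.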

    Validation of a smart mirror for gesture recognition in gym training performed by a vision-based deep learning system

    This paper illustrates the development and validation of a smart mirror for sports training. The application is based on the skeletonization algorithm MediaPipe and runs on an embedded Nvidia Jetson Nano device equipped with two fisheye cameras. The software has been evaluated on the biceps curl exercise. The elbow angle has been measured both by MediaPipe and by the motion capture system BTS (ground truth), and the resulting values have been compared to determine angle uncertainty, residual errors, and intra-subject and inter-subject repeatability. The uncertainty of the joints' estimation and the quality of the image captured by the cameras are reflected in the final uncertainty of the indicator over time, highlighting the areas of improvement for further development
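The elbow angle indicator can be computed from three skeleton landmarks (shoulder, elbow, wrist) as the angle between the two segments meeting at the elbow. A minimal sketch of that geometry, assuming 2D landmark coordinates as returned by a pose estimator such as MediaPipe:

```python
import math


def elbow_angle(shoulder, elbow, wrist):
    """Elbow angle in degrees from three 2D landmarks, computed as the
    angle between the elbow->shoulder and elbow->wrist vectors."""
    ax, ay = shoulder[0] - elbow[0], shoulder[1] - elbow[1]
    bx, by = wrist[0] - elbow[0], wrist[1] - elbow[1]
    dot = ax * bx + ay * by
    # Clamp to guard against acos domain errors from rounding.
    cos_theta = max(-1.0, min(1.0, dot / (math.hypot(ax, ay) * math.hypot(bx, by))))
    return math.degrees(math.acos(cos_theta))
```

Because the landmark positions carry estimation uncertainty, this uncertainty propagates through the two vectors into the angle, which is exactly the effect the validation against the BTS ground truth quantifies.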

    Validation of extrinsic calibration algorithms based on human-body skeletonization

    This paper describes the procedures used for the metrological evaluation of extrinsic calibration procedures for vision systems composed of multiple cameras. Extrinsic calibration is the procedure that computes the position and orientation of each camera in a multi-camera system with respect to all the others. Extrinsic calibration methods fall mainly into three groups: traditional methods, methods based on the recognition of three-dimensional shapes, and skeletonization-based methods. Traditional calibration methods rely on known calibration targets (checkerboards, dot grids, fringes, etc.) that the system recognizes automatically. The system measures the positions of the target's characteristic points, thereby obtaining the desired rotation and translation parameters. Methods based on the recognition of three-dimensional shapes (3D shape matching) instead rely on the geometric consistency of a 3D object placed in the field of view of the various cameras: each device records a part of the target object, and subsequently, by aligning each view with the remaining ones and analyzing the trajectory of the object as seen by each camera, the calibration matrices can be recovered. Traditional calibration methods, as well as those based on 3D shape matching, are disadvantageous in terms of execution time. Moreover, both require a calibration target. Finally, methods based on the recognition of the human skeleton (skeleton-based) directly use the joints of an operator positioned within the cameras' field of view as the calibration target. Skeleton-based methods thus represent an evolution of 3D shape matching methods, as if multiple 3D shapes, represented by the operator's own body segments, were being considered.
It is therefore possible to obtain an extrinsic calibration without any characteristic object, simply by using the human operator's body as the target itself. Although the literature includes works assessing the accuracy of joint measurement, no works show how this accuracy propagates to the rototranslation matrices resulting from the calibration procedure. The present work describes the procedures used to evaluate the reliability of the extrinsic calibration obtained from the joint positions measured with the skeletonization method described in [3]
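At the heart of skeleton-based extrinsic calibration is rigid registration: given the same set of joint positions expressed in two camera frames, recover the rotation and translation relating the frames. The sketch below shows the closed-form solution for the planar (2D) case only, as an illustration; the full problem is 3D and is typically solved with an SVD-based method (Kabsch/Procrustes), which the paper under discussion does not detail here.

```python
import math


def estimate_rigid_2d(src, dst):
    """Closed-form 2D rigid registration between corresponding point
    sets: returns (theta, (tx, ty)) such that rotating `src` by theta
    and translating by (tx, ty) best aligns it with `dst`.

    A planar stand-in for the 3D joint-based calibration problem."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    c = s = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cxs, y - cys, u - cxd, v - cyd
        c += x * u + y * v  # cosine accumulator
        s += x * v - y * u  # sine accumulator
    theta = math.atan2(s, c)
    ct, st = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the destination one.
    tx = cxd - (ct * cxs - st * cys)
    ty = cyd - (st * cxs + ct * cys)
    return theta, (tx, ty)
```

Because the "points" here are measured joint positions, any joint-estimation error perturbs the accumulators and hence the recovered rotation and translation; quantifying that propagation is precisely the gap the paper addresses.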

    Benchmark between FPGA and embedded GPU for Deep Learning models

    Embedded solutions for artificial intelligence applications are increasingly common on the market, available at steadily decreasing prices. While FPGAs are an established and typically high-performing platform, GPUs, by contrast, have entered the embedded market only recently, driven by the strong research push in Deep Learning and artificial intelligence applications. In this work we evaluated the recognition performance of a convolutional neural network model on a Xilinx ZCU104 FPGA and on an Nvidia Jetson TX2 embedded GPU, respectively
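A comparison of this kind rests on a consistent measurement scheme across both targets: warm-up runs to exclude one-off initialization costs, then timed repetitions to average out jitter. The harness below is a generic sketch of that scheme, not the paper's actual tooling; on the real devices `inference_fn` would wrap the vendor-specific deployed model (DPU runner on the ZCU104, TensorRT engine on the TX2), which are assumptions here.

```python
import time


def benchmark(inference_fn, inputs, warmup=3, runs=20):
    """Measure mean latency (ms) and throughput (inferences/s) of an
    inference callable over a fixed set of inputs.

    Warm-up iterations are excluded from the timing so that lazy
    initialization (memory allocation, kernel compilation) does not
    distort the averages."""
    for x in inputs[:warmup]:
        inference_fn(x)
    t0 = time.perf_counter()
    for _ in range(runs):
        for x in inputs:
            inference_fn(x)
    elapsed = time.perf_counter() - t0
    total = runs * len(inputs)
    return (elapsed / total) * 1e3, total / elapsed
```

Reporting both latency and throughput matters because FPGA and GPU targets can rank differently on the two metrics: batch-friendly GPUs often win on throughput while a pipelined FPGA design can win on per-inference latency.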

    Use of Soy-Based Formulas and Cow's Milk Allergy: Lights and Shadows.

    Soybean (Glycine max) is a species of legume native to East Asia and used in children's diets in the East for over 2,000 years. Soy protein formulas have been available for almost a century. Nowadays, the increase in cow's milk allergy and vegetarian dietary preferences are driving consumers toward cow's milk alternatives. In this paper, we reviewed the nutritional composition of soy-based infant formulas and discussed their possible use in the pediatric age group, mainly focusing on prevention and treatment of cow's milk allergy. Protein quality is determined by digestibility and amino acid content. Purified or concentrated vegetable proteins (e.g., soy protein and gluten) have high digestibility (>95%), similar to that of animal proteins. For some intact vegetable products (e.g., whole cereals and pulses), protein digestibility is lower (80-90%). Food processing and heat treatment also influence protein digestibility. Considering these data, we evaluated the possible use of soybean and its derivatives in the pediatric age group, including the nutritional composition of soy formulas and the clinical indications for their use. Moreover, since plant-based beverages are perceived as healthy by consumers and their market share is growing, we recommend that soy drinks should not be used as a substitute for infant formulas or cow's milk in children younger than 24 months