6,722 research outputs found

    How 5G wireless (and concomitant technologies) will revolutionize healthcare?

    The need for equitable access to quality healthcare is enshrined in the United Nations (UN) Sustainable Development Goals (SDGs), which define the developmental agenda of the UN for the next 15 years. In particular, the third SDG focuses on the need to “ensure healthy lives and promote well-being for all at all ages”. In this paper, we build the case that 5G wireless technology, along with concomitant emerging technologies (such as IoT, big data, artificial intelligence and machine learning), will transform global healthcare systems in the near future. Our optimism around 5G-enabled healthcare stems from a confluence of significant technical pushes that are already at play: apart from the availability of high-throughput, low-latency wireless connectivity, other significant factors include the democratization of computing through cloud computing; the democratization of Artificial Intelligence (AI) and cognitive computing (e.g., IBM Watson); and the commoditization of data through crowdsourcing and digital exhaust. Together, these technologies can finally crack a dysfunctional healthcare system that has largely been impervious to technological innovation. We highlight the persistent deficiencies of the current healthcare system and then demonstrate how the 5G-enabled healthcare revolution can fix them. We also highlight open technical research challenges, and potential pitfalls, that may hinder the development of such a 5G-enabled health revolution.

    Cognitive Task Planning for Smart Industrial Robots

    This research work presents a novel Cognitive Task Planning framework for Smart Industrial Robots. The framework makes an industrial mobile manipulator robot cognitive by applying Semantic Web Technologies. It also introduces a novel Navigation Among Movable Obstacles (NAMO) algorithm for robots navigating and manipulating inside a factory. The objective of Industrie 4.0 is the creation of Smart Factories: modular plants equipped with cyber-physical systems able to heavily customize products under the conditions of highly flexible mass production. Such systems should communicate and cooperate in real time with each other and with humans via the Internet of Things. They should intelligently adapt to changing surroundings and autonomously navigate inside a factory while moving obstacles that occlude free paths, even when encountering them for the first time. Finally, in order to accomplish all these tasks efficiently, they should learn from their own actions and from those of other agents. Most existing industrial mobile robots navigate along pre-generated trajectories: they follow electrified wires embedded in the ground or lines painted on the floor. When no environmental changes are expected and cycle times are critical, this kind of planning works well. When workspaces and tasks change frequently, it is better to plan dynamically: robots should navigate autonomously without relying on modifications of their environments. Consider human behavior: humans reason about the environment and consider moving obstacles if a certain goal cannot otherwise be reached, or if moving objects may significantly shorten the path to it. This problem is called Navigation Among Movable Obstacles and is known mostly from rescue robotics. This work transposes the problem to an industrial scenario and addresses its two main challenges: the high dimensionality of the state space and the treatment of uncertainty.
    The proposed NAMO algorithm aims to focus exploration on less explored areas. To this end, it extends the Kinodynamic Motion Planning by Interior-Exterior Cell Exploration algorithm. The extension does not impose obstacle avoidance: it assigns an importance to each cell by combining the effort necessary to reach it with the effort needed to free it from obstacles. The resulting algorithm is scalable because it is independent of the size of the map and of the number, shape, and pose of obstacles. It does not restrict the actions to be performed: the robot can both push and grasp every object. Currently, the algorithm assumes full world knowledge, but the environment is reconfigurable and the algorithm can easily be extended to solve NAMO problems in unknown environments. The algorithm handles sensor feedback and corrects uncertainties. Robotics usually treats Motion Planning and Manipulation as separate problems. NAMO forces their combined treatment by introducing the need to manipulate multiple objects, often unknown ones, while navigating. Adopting standard precomputed grasps is not sufficient to deal with the large number of different objects that exist. A Semantic Knowledge Framework is proposed in support of the algorithm, giving robots the ability to learn to manipulate objects and to disseminate the information gained during the fulfillment of tasks. The Framework is composed of an Ontology and an Engine. The Ontology extends the IEEE Standard Ontologies for Robotics and Automation and contains descriptions of learned manipulation tasks and detected objects. It is accessible from any robot connected to the Cloud. It can be considered both a data store for the efficient and reliable execution of repetitive tasks and a Web-based repository for the exchange of information between robots and for speeding up the learning phase.
    No other manipulation ontology respects the IEEE Standard and, regardless of the standard, the proposed ontology differs from existing ones in the type of features saved and in the efficient way they can be accessed: through a very fast Cascade Hashing algorithm. The Engine computes and stores manipulation actions when they are not present in the Ontology. It is based on Reinforcement Learning techniques that avoid massive training on large-scale databases and favor human-robot interaction. The overall system is flexible and easily adaptable to different robots operating in different industrial environments. It has a modular structure in which each software block is completely reusable. Every block is based on the open-source Robot Operating System (ROS). Not all industrial robot controllers are designed to be ROS-compliant. This thesis presents the method adopted during this research to open industrial robot controllers and create a ROS-Industrial interface for them.
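    The cell-importance heuristic described above, combining the effort to reach a cell with the effort to clear it of obstacles while favouring rarely visited cells, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; the field names, weights, and scoring formula are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    """A grid cell in the exploration map (hypothetical representation)."""
    visits: int = 0            # how often the planner has expanded into this cell
    reach_effort: float = 0.0  # estimated effort to reach the cell
    clear_effort: float = 0.0  # estimated effort to move obstacles out of the cell

def importance(cell: Cell, w_reach: float = 1.0, w_clear: float = 1.0) -> float:
    """Score a cell: low combined effort and few prior visits make it attractive.

    The exact combination is an assumption; the thesis only states that
    reach effort and clearing effort are combined per cell.
    """
    effort = w_reach * cell.reach_effort + w_clear * cell.clear_effort
    return 1.0 / ((1.0 + cell.visits) * (1.0 + effort))

def pick_next(cells: list[Cell]) -> Cell:
    """Select the most important cell for the next expansion step."""
    return max(cells, key=importance)
```

    Under this scoring, an unvisited cell with little blocking clutter outranks a heavily explored one, which is exactly the bias toward less explored areas the abstract describes.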

    ILAT (Software as a Service): Interactive Learning Application Tool for Autism Screening and Assessment in children with Autism Spectrum Disorder

    Autism is a type of neurological disorder usually noticeable during early childhood, especially between one and three years of age, and it occurs in all social groups. Common problems experienced by individuals with autism include a lack of social interaction, poor communication skills, overexcitement, and an inability to express emotions. While these disorders are not fully curable, early detection combined with proper therapy can reduce their severity. Even though there are no specific medications and treatments, the lifestyle of a person with autism can still be improved through various supportive therapies. If the disorder is not detected at an early stage, its severity will probably increase later on. Developing countries like India have an autism prevalence of 0.2 percent of the overall population, according to information provided by the Rehabilitation Council of India. Rapid growth in Information and Communication Technologies allows the development of various assistive tools to enhance the lifestyle of people with autism. Fourth-generation technologies such as the Internet of Things, wearable devices, cloud computing, big data analytics, artificial intelligence, mobile devices, location-aware technology, sensors, and augmented and virtual reality together provide a smart solution for those affected. The Interactive Learning Application Tool is aimed at autism screening and assessment in children with Autism Spectrum Disorder, and is extended to explore the assistive technologies available to serve the community. It enhances social interaction, learning, and communication skills in children, and serves as a tool for analysing aggression levels, a tool for caregivers, and a supportive ranking tool for psychiatrists dealing with patients with autism.

    Developing an Autonomous Mobile Robotic Device for Monitoring and Assisting Older People

    A progressive increase of the elderly population in the world has called for technological solutions capable of improving the life prospects of people suffering from senile dementias such as Alzheimer's. Socially Assistive Robotics (SAR) in the research field of elderly care is a solution that can ensure their safety, through observation and monitoring of behaviors, and improve their physical and cognitive health. A social robot can autonomously and tirelessly monitor a person daily, providing assistive tasks such as reminders to take medication and suggestions of activities to keep the assisted person active both physically and cognitively. However, many projects in this area have not considered the preferences, needs, personality, and cognitive profiles of older people. Moreover, other projects have developed robot-specific applications, making it difficult to reuse and adapt them to other hardware devices and to other functional contexts. This thesis presents the development of a scalable, modular, multi-tenant robotic application and its testing in real-world environments. This work is part of the UPA4SAR project ``User-centered Profiling and Adaptation for Socially Assistive Robotics''. The UPA4SAR project aimed to develop a low-cost robotic application for faster deployment among the elderly population. The architecture of the proposed robotic system is modular, robust, and scalable thanks to the implementation of its functionality as microservices with event-based communication. To improve robot acceptance, the functionalities delivered through the microservices adapt the robot's behaviors to the preferences and personality of the assisted person. A key part of the assistance is the monitoring of activities, which are recognized through the deep neural network models proposed in this work. The final experimentation of the project, carried out in the homes of elderly volunteers, was performed with complete autonomy of the robotic system.
Daily care plans customized to the person's needs and preferences were executed. These included notification tasks to remind when to take medication, tasks to check whether basic nutrition activities were accomplished, and entertainment and companionship tasks with games, videos, and music for cognitive and physical stimulation of the patient.
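    The event-based microservice communication the abstract describes can be sketched with a minimal in-process publish/subscribe bus. This is purely illustrative, not the UPA4SAR architecture: the topic name, event shape, and handler are hypothetical, and a real deployment would use a message broker (e.g. MQTT or AMQP) rather than an in-process dictionary.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process publish/subscribe bus (illustration only)."""

    def __init__(self) -> None:
        # topic name -> list of subscribed handler callables
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every handler registered on the topic
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical services: an activity recognizer publishes, a care-plan
# service reacts with a notification task.
bus = EventBus()
notifications: list[str] = []
bus.subscribe("activity.recognized",
              lambda e: notifications.append(f"medication reminder after {e['activity']}"))
bus.publish("activity.recognized", {"activity": "eating"})
```

    Because services only share topic names and event payloads, each microservice can be replaced or redeployed independently, which is the reuse and scalability property the abstract claims.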

    In-home and remote use of robotic body surrogates by people with profound motor deficits

    By controlling robots comparable to the human body, people with profound motor deficits could potentially perform a variety of physical tasks for themselves, improving their quality of life. The extent to which this is achievable has been unclear due to the lack of suitable interfaces by which to control robotic body surrogates and a dearth of studies involving substantial numbers of people with profound motor deficits. We developed a novel, web-based augmented reality interface that enables people with profound motor deficits to remotely control a PR2 mobile manipulator from Willow Garage, which is a human-scale, wheeled robot with two arms. We then conducted two studies to investigate the use of robotic body surrogates. In the first study, 15 novice users with profound motor deficits from across the United States controlled a PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a simulated self-care task. Participants achieved clinically meaningful improvements on the ARAT, and 12 of 15 participants (80%) successfully completed the simulated self-care task. Participants agreed that the robotic system was easy to use, was useful, and would provide a meaningful improvement in their lives. In the second study, one expert user with profound motor deficits had free use of a PR2 in his home for seven days. He performed a variety of self-care and household tasks, and also used the robot in novel ways. Taking both studies together, our results suggest that people with profound motor deficits can improve their quality of life using robotic body surrogates, and that they can gain benefit with only low-level robot autonomy and without invasive interfaces. However, methods to reduce the rate of errors and increase operational speed merit further investigation. Comment: 43 pages, 13 figures.

    Autonomous Navigation for Mobile Robots in Crowded Environments

    The abstract is provided in the attachment.

    Blind guide: anytime, anywhere

    Sight dominates our mental life more than any other sense. Even when we are just thinking about something in the world, we end up imagining what it looks like. This rich visual experience is part of our lives. People need vision for two complementary reasons. One is that vision gives us the knowledge to recognize objects in real time. The other is that vision provides the control we need to move around and interact with objects. Eyesight helps people avoid dangers and navigate the world. Blind people usually develop enhanced accuracy and sensitivity in their other natural senses to perceive their surroundings. But sometimes this is not enough, because the human senses can be affected by external sources of noise or by disease. Without any external aid or device, a blind person cannot navigate the world. Many assistive tools have been developed to help blind people. White canes and guide dogs help the blind in their navigation, but each device has its limitations. White canes cannot detect head-level obstacles, drop-offs, or obstructions more than a meter away. The training of a guide dog takes a long time, almost five years in some cases; the blind person also needs training, and this is not a solution for everybody. Taking care of a guide dog can be expensive and time-consuming. Humans have developed technology to help us in every aspect of our lives. The primary goal of technology is to help people improve their quality of life; technology can assist us with our limitations. Wireless sensor networks are a technology that has been used to help people with disabilities. In this dissertation, the author proposes a system based on this technology called Blind Guide. Blind Guide is an artifact that helps blind people navigate in indoor or outdoor scenarios. The prototype is portable, ensuring that it can be used anytime and anywhere. The system is composed of wireless sensors that can be worn on different parts of the body.
The sensors detect an obstacle and inform the user with an audible warning, providing a safe walk for the user. A great feature of Blind Guide is its modularity. The system can adapt to the needs of the user and can be used in combination with other solutions. For example, Blind Guide can be used in conjunction with the white cane: the white cane detects obstacles below waist level, while a Blind Guide wireless sensor on the forehead can detect obstacles at head level. This is important because some blind people feel uncomfortable without the white cane. The system is scalable, giving the opportunity to create a network of interconnected Blind Guide users. This network can store the exact location and description of the obstacles found by the users, and this information is available to all users of the system. This feature reduces the time required for obstacle detection, with consequent energy savings, thus increasing the autonomy of the solution. One of the main requirements for the development of this prototype was to design a low-cost solution accessible to anyone around the world; all the components of the solution are easily obtainable and inexpensive. Technology makes our life easier, and it must be available to everyone. Modularity, portability, scalability, the possibility of working in conjunction with other solutions, detecting objects that other solutions cannot, obstacle labeling, a network of identified obstacles, and audible warnings are the main aspects of the Blind Guide system. All these aspects make Blind Guide an anytime, anywhere solution for blind people. Blind Guide was tested with a group of blind volunteers of different ages. The trials performed on the system show positive results: the system successfully detected incoming obstacles and informed its users in real time.
The volunteers gave positive feedback, saying that they felt comfortable using the prototype and that they believe the system can help them with their daily routine.