Continuous maintenance and the future – Foundations and technological challenges
High-value, long-life products require continuous maintenance throughout their life cycle to achieve the required performance at optimum through-life cost. This paper presents the foundations and technologies required to offer such a maintenance service. Component- and system-level degradation science, assessment and modelling, along with life cycle "big data" analytics, are the two most important knowledge and skill bases required for continuous maintenance. Advanced computing and visualisation technologies will improve the efficiency of maintenance and reduce the through-life cost of the product. The future of continuous maintenance within the Industry 4.0 context also identifies the role of IoT, standards and cyber security.
Education in the Wild: Contextual and Location-Based Mobile Learning in Action. A Report from the STELLAR Alpine Rendez-Vous Workshop Series
Introduction to location-based mobile learning
The report follows on from a two-day workshop funded by the STELLAR Network of Excellence as part of their 2009 Alpine Rendez-Vous workshop series and is edited by Elizabeth Brown, with a foreword from Mike Sharples. Contributors have provided examples of innovative and exciting research projects and practical applications for mobile learning in a location-sensitive setting, including the sharing of good practice and the key findings that have resulted from this work. There is also a debate about whether location-based and contextual learning results in shallower learning strategies, and a section detailing the future challenges for location-based learning.
Augmenting the field experience: a student-led comparison of techniques and technologies
In this study we report on our experiences of creating and running a student field trip exercise which allowed students to compare a range of approaches to the design of technologies for augmenting landscape scenes. The main study site is around Keswick in the English Lake District, Cumbria, UK, an attractive upland environment popular with tourists and walkers. The aim of the exercise was for the students to assess the effectiveness of various forms of geographic information in augmenting real landscape scenes, as mediated through a range of techniques and technologies. These techniques were: computer-generated acetate overlays showing annotated wireframe views from certain key points; a custom-designed application running on a PDA; a mediascape running on the mScape software on a GPS-enabled mobile phone; Google Earth on a tablet PC; and a head-mounted in-field Virtual Reality system. Each group of students had all five techniques available to them and was tasked with comparing them in the context of creating a visitor guide to the area centred on the field centre. Here we summarise their findings and reflect upon some of the broader research questions emerging from the project.
Safe, Remote-Access Swarm Robotics Research on the Robotarium
This paper describes the development of the Robotarium -- a remotely accessible, multi-robot research facility. The impetus behind the Robotarium is that multi-robot testbeds constitute an integral and essential part of the multi-agent research cycle, yet they are expensive, complex, and time-consuming to develop, operate, and maintain. These resource constraints limit access for large groups of researchers and students, which the Robotarium remedies by providing users with remote access to a state-of-the-art multi-robot test facility. This paper details the design and operation of the Robotarium and connects these to the particular considerations one must take when making complex hardware remotely accessible. In particular, safety must be built in already at the design phase without overly constraining which coordinated control programs users can upload and execute, which calls for minimally invasive safety routines with provable performance guarantees.
Comment: 13 pages, 7 figures, 3 code samples, 72 references
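"Minimally invasive safety routines with provable performance guarantees" of the kind the abstract describes are, in the Robotarium literature, typically realised as control-barrier-function (CBF) safety filters that alter a user's nominal input only when necessary. The following is a deliberately simplified single-obstacle, single-integrator sketch, not the paper's implementation; the function name, gains, and closed-form projection are illustrative.

```python
import numpy as np

def safety_filter(u_nom, x, x_obs, r=0.2, gamma=1.0):
    """Minimally invasive safety filter for a single-integrator robot
    (dx/dt = u) avoiding one circular obstacle, via the control barrier
    function h(x) = ||x - x_obs||^2 - r^2.

    Solves  min ||u - u_nom||^2  s.t.  grad_h(x) . u >= -gamma * h(x);
    with a single linear constraint the QP has the closed-form
    projection below. An illustrative sketch, not the Robotarium API.
    """
    h = np.dot(x - x_obs, x - x_obs) - r ** 2   # barrier value (>0 means safe)
    a = 2.0 * (x - x_obs)                       # gradient of h
    b = -gamma * h                              # CBF condition: a . u >= b
    if np.dot(a, u_nom) >= b:                   # nominal input already safe:
        return u_nom                            # leave it untouched
    # otherwise project onto the constraint boundary (minimal correction)
    return u_nom + (b - np.dot(a, u_nom)) / np.dot(a, a) * a
```

The filter returns the user's command unchanged whenever the CBF condition already holds, so safe behaviour is never penalised; only inputs that would violate the barrier are minimally corrected.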
Explainable shared control in assistive robotics
Shared control plays a pivotal role in designing assistive robots to complement human capabilities during everyday tasks. However, traditional shared control relies on users forming an accurate mental model of expected robot behaviour. Without this accurate mental image, users may encounter confusion or frustration whenever their actions do not elicit the intended system response, forming a misalignment between the respective internal models of the robot and human. The Explainable Shared Control paradigm introduced in this thesis attempts to resolve such model misalignment by jointly considering assistance and transparency.
There are two perspectives on transparency in Explainable Shared Control: the human's and the robot's. Augmented reality is presented as an integral component that addresses the human viewpoint by visually unveiling the robot's internal mechanisms. The robot's perspective, in turn, requires an awareness of human "intent", so a clustering framework built on a deep generative model is developed for human intention inference.
Both transparency constructs are implemented atop a real assistive robotic wheelchair and tested with human users. An augmented reality headset is incorporated into the robotic wheelchair and different interface options are evaluated across two user studies to explore their influence on mental model accuracy. Experimental results indicate that this setup facilitates transparent assistance by improving recovery times from adverse events associated with model misalignment. As for human intention inference, the clustering framework is applied to a dataset collected from users operating the robotic wheelchair. Findings from this experiment demonstrate that the learnt clusters are interpretable and meaningful representations of human intent.
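The intention-inference idea above rests on clustering in a learned latent space. In the thesis a deep generative model supplies the latent codes; the sketch below substitutes stand-in feature vectors (e.g. summaries of joystick trajectories) and plain k-means for the clustering role, so every name and choice here is illustrative rather than the thesis's method.

```python
import numpy as np

def cluster_intents(latents, k=3, iters=50):
    """Toy latent-space clustering for intent inference. `latents` is an
    (n, d) array of stand-in latent codes; returns a label per sample
    and the k cluster centres."""
    # greedy farthest-point initialisation: deterministic, and robust
    # when the intent clusters are well separated
    centres = [latents[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(latents - c, axis=1) for c in centres], axis=0)
        centres.append(latents[d.argmax()])
    centres = np.array(centres, dtype=float)
    for _ in range(iters):
        # assign each latent to its nearest centre
        d = np.linalg.norm(latents[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        # recompute centres as cluster means
        for j in range(k):
            if np.any(labels == j):
                centres[j] = latents[labels == j].mean(axis=0)
    return labels, centres
```

Each resulting cluster can then be inspected and given a human-readable intent label, which is the sense in which the learnt clusters are "interpretable" representations.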
This thesis serves as a first step in the interdisciplinary area of Explainable Shared Control. The contributions to shared control, augmented reality and representation learning contained within this thesis are likely to help future research advance the proposed paradigm, and thus bolster the prevalence of assistive robots.
Mass-Market Receiver for Static Positioning: Tests and Statistical Analyses
Nowadays there are several low-cost GPS receivers able to provide both pseudorange and carrier phase measurements in the L1 band, which allow good real-time performance in outdoor conditions. This paper describes a set of dedicated tests to evaluate positioning accuracy in static conditions. The quality of the pseudorange and carrier phase measurements suggests promising results. The use of this kind of receiver could be extended to a large number of professional applications in engineering fields: surveying, georeferencing, monitoring, cadastral mapping and cadastral roads. In this work, the receivers' performance is verified considering a single-frequency solution, trying to fix the phase ambiguity when possible. Different solutions are defined: code, float and fix solutions. Different methods are considered to solve the phase ambiguities. Each test is statistically analysed, highlighting the effects of different factors on precision and accuracy.
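The float/fix distinction above comes down to whether the integer carrier-phase ambiguity N is left as a real-valued estimate or resolved to an integer. A deliberately simplified single-satellite sketch follows; a real receiver resolves ambiguities jointly across satellites and epochs (e.g. with the LAMBDA method), so the function and its rounding step are illustrative only.

```python
import numpy as np

WAVELENGTH_L1 = 0.1902936728  # metres, GPS L1 carrier (c / 1575.42 MHz)

def float_and_fixed_ambiguity(phase_cycles, range_m):
    """Illustrative float vs. fixed ambiguity resolution for one
    satellite. The carrier-phase observable phi (in cycles) relates to
    the geometric range rho as  rho = lambda * (phi + N), with N an
    unknown integer. The 'float' solution estimates N as a real number
    from a code-based range; the 'fix' solution resolves it to the
    nearest integer, recovering a centimetre-level range."""
    n_float = range_m / WAVELENGTH_L1 - phase_cycles  # real-valued estimate
    n_fixed = np.round(n_float)                       # naive integer fix
    fixed_range = WAVELENGTH_L1 * (phase_cycles + n_fixed)
    return n_float, n_fixed, fixed_range
```

Note that naive rounding only succeeds when the code-based range error is well below half a wavelength (about 10 cm on L1), which is exactly why single-frequency ambiguity fixing is hard and dedicated resolution methods are needed.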
Bridging the physical and virtual with mobile media
This thesis examines how mobile technologies can contribute towards bridging physical and virtual space through interactive, location-specific media experiences. Building on an analysis of contextual discussions and precedents, it is noticeable that there is a discord between physical and virtual space usage, as the two are often utilised in different situational settings. This thesis therefore develops a mobile application as a wider investigation into how the physical setting and live data can be used to better link contextualised content between the physical and the virtual in urban areas. It explores this by creating a location-specific media experience in which the limits of the physical space are incorporated as boundaries in the virtual environment. Further, live data is used to influence the dynamics of the environment so that conditions reflect the physical world. These investigations are realised with Augmented Reality, providing an end application that allows the viewer to physically explore urban space within an interactive mobile media experience. This approach offers a new perspective on urban space exploration and mobile media design, highlighting that contextual significance in media experiences is an important aspect to consider and design for. Ultimately, such approaches may lead to larger narratives and experiences encompassing entire cities, or other diverse geographies.
Contagion Design: Labour, Economy, Habits, Data
How is contagion designed? How do labour, migration, habits and data configure contagion? Analysing the current conjuncture through these vectors, this book critically addresses issues of rising unemployment, restricted movement, increasing governance of populations through data systems and the compulsory redesign of habits. Design logics underscore both biological contagion and political technologies. Contagion is redesigning how labour and migration are differentially governed, experienced and indeed produced. Habits generate modes of exposure and protection from contagion and become a resource for managing biological and social life. Data turns contagion into models that make a virus actionable and calculable. New modes of sociality and collaboration provoke forms of contagious mutuality. But can the logic of pre-emption and prediction ever accommodate and control the contingencies of a virus? Taken as a whole, the essays in this small book explore these issues and their implications for cultural, social and political research of biotechnical conditions. If contagion never abandons the scene of the present, if it persists as a constitutive force in the production of social life, how might we redesign the viral as the friend we love to hate?
Smart Assistive Technology for People with Visual Field Loss
Visual field loss results in the lack of ability to clearly see objects in the surrounding environment, which affects the ability to determine potential hazards. In visual field loss, parts of the visual field are impaired to varying degrees, while other parts may remain healthy. This defect can be debilitating, making daily life activities very stressful. Unlike blind people, people with visual field loss retain some functional vision. It would be beneficial to intelligently augment this vision by adding computer-generated information to increase the users' awareness of possible hazards by providing early notifications. This thesis introduces a smart hazard attention system to help visually field-impaired people with their navigation using smart glasses and a real-time hazard classification system. This takes the form of a novel, customised, machine learning-based hazard classification system that can be integrated into wearable assistive technology such as smart glasses. The proposed solution provides early notifications based on (1) the visual status of the user and (2) the motion status of the detected object. The presented technology can detect multiple objects at the same time and classify them into different hazard types. The system design in this work consists of four modules: (1) a deep learning-based object detector to recognise static and moving objects in real-time, (2) a Kalman Filter-based multi-object tracker to track the detected objects over time to determine their motion model, (3) a Neural Network-based classifier to determine the level of danger for each hazard using its motion features extracted while the object is in the user's field of vision, and (4) a feedback generation module to translate the hazard level into a smart notification to increase the user's cognitive perception using the healthy vision within the visual field. For qualitative system testing, normal and personalised defective-vision models were implemented.
The personalised defective-vision model was created to synthesise the visual function of people with visual field defects. Actual central and full-field test results were used to create a personalised model that is used in the feedback generation stage of the system, where the visual notifications are displayed in the user's healthy visual area. The proposed solution will enhance the quality of life for people suffering from visual field loss. This non-intrusive, wearable hazard detection technology can provide an obstacle avoidance solution and prevent falls and collisions early with minimal information.
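The Kalman Filter-based tracking module described in the four-module design commonly builds on a constant-velocity state model whose velocity estimate supplies the motion features (e.g. speed) that the hazard classifier consumes. The following minimal sketch shows that building block; the matrix tuning values and class name are illustrative defaults, not the thesis's implementation.

```python
import numpy as np

class ConstantVelocityKF:
    """2-D constant-velocity Kalman filter of the kind used inside
    multi-object trackers; state = [x, y, vx, vy], measurement = [x, y].
    Noise covariances q and r are illustrative, not the thesis's tuning."""

    def __init__(self, dt=1.0, q=1e-2, r=1e-1):
        self.x = np.zeros(4)                          # state estimate
        self.P = np.eye(4)                            # state covariance
        self.F = np.eye(4)                            # transition model:
        self.F[0, 2] = self.F[1, 3] = dt              # position += velocity*dt
        self.H = np.zeros((2, 4))                     # measurement model:
        self.H[0, 0] = self.H[1, 1] = 1.0             # observe position only
        self.Q = q * np.eye(4)                        # process noise
        self.R = r * np.eye(2)                        # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with detected-object position z = [x, y]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x

    def speed(self):
        # motion feature a downstream hazard classifier could consume
        return float(np.hypot(self.x[2], self.x[3]))
```

Per detected object, one such filter is stepped with each new detection; the smoothed velocity (rather than noisy frame-to-frame differences) is what makes the extracted motion features usable for danger-level classification.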