26 research outputs found

    Development of new intelligent autonomous robotic assistant for hospitals

    Get PDF
    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity for developing new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment with the aim of assisting the hospital staff with daily tasks, such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions have been examined in detail. Next, a new method has been proposed for assessing the intrinsic properties of different flooring types using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment has been addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, thus optimising the robot's ability to update its path planning in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances has been addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing. The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots in hospital environments.
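    The floor-classification step lends itself to a compact illustration. The sketch below is not the thesis's implementation: the accelerometer windows, floor labels, feature choices, and random-forest classifier are all illustrative assumptions standing in for the described vibration-to-floor-type idea.

```python
# Hedged sketch: classifying floor types from vibration windows with simple
# spectral features. Features, labels, and data are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vibration_features(window, fs=1000.0):
    """Reduce one vibration window to a small feature vector."""
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    rms = np.sqrt(np.mean(window ** 2))                      # overall energy
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # dominant band
    peak = freqs[np.argmax(spectrum)]                        # strongest tone
    return np.array([rms, centroid, peak])

# Synthetic example: two "floor types" with different vibration character.
rng = np.random.default_rng(0)
X = np.array([vibration_features(rng.normal(0, amp, 1000))
              for amp in ([0.5] * 50 + [2.0] * 50)])
y = np.array([0] * 50 + [1] * 50)   # 0 = carpet, 1 = tile (labels hypothetical)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([vibration_features(rng.normal(0, 2.0, 1000))]))  # expect [1]
```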

    An Impulse Detection Methodology and System with Emphasis on Weapon Fire Detection

    Get PDF
    This dissertation proposes a methodology for detecting impulse signatures. An algorithm with specific emphasis on weapon fire detection is proposed. Multiple systems in which the detection algorithm can operate are proposed. In order for detection systems to be used in practical application, they must have high detection performance, minimize false alarms, be cost effective, and utilize available hardware. Most applications require real-time processing and increased range performance, and some applications require detection from mobile platforms. This dissertation intends to provide a methodology for impulse detection, demonstrated for the specific application case of weapon fire detection, that is intended for real-world application, taking into account acceptable algorithm performance, feasible system design, and practical implementation. The proposed detection algorithm is implemented with multiple sensors, allowing spectral waveband versatility in system design. The proposed algorithm is also shown to operate at a variety of video frame rates, allowing for practical design using common, commercial off-the-shelf hardware. Detection, false alarm, and classification performance are provided, given the use of different sensors and associated wavebands. False alarms are further mitigated through use of an adaptive, multi-layer classification scheme, leading to potential on-the-move application. The algorithm is shown to work in real time. The proposed system, including algorithm and hardware, is provided. Additional systems are proposed which attempt to complement the strengths and alleviate the weaknesses of the hardware and algorithm. Systems are proposed to mitigate saturation clutter signals and increase detection of saturated targets through the use of position, navigation, and timing sensors, acoustic sensors, and imaging sensors. Furthermore, systems are provided which increase target detection and provide increased functionality, improving the cost effectiveness of the system. The resulting algorithm is shown to enable detection of weapon fire targets, while minimizing false alarms, for real-world, fieldable applications. The work presented demonstrates the complexity of detection algorithm and system design for practical applications in complex environments, and emphasizes the complex interactions and considerations when designing a practical system, where system design is the intersection of algorithm performance and design, hardware performance and design, and size, weight, power, cost, and processing constraints.
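    As a rough illustration of impulse detection in imagery, the following sketch flags short-lived intensity spikes against a slowly adapting background model. The exponential-moving-average background, the threshold factor k, and the synthetic "flash" are illustrative assumptions, not the dissertation's algorithm.

```python
# Minimal sketch of temporal impulse detection on a frame stream, assuming
# the core idea of flagging bright, fast transients against a slowly
# adapting background; all parameters here are illustrative.
import numpy as np

def detect_impulses(frames, k=5.0, alpha=0.05):
    """Yield (frame_index, num_pixels) for frames with impulse-like spikes."""
    background = frames[0].astype(float)
    variance = np.full(frames[0].shape, 16.0)   # conservative initial noise
    for i, frame in enumerate(frames[1:], start=1):
        residual = frame - background
        spikes = residual > k * np.sqrt(variance)   # bright, fast transients
        if spikes.any():
            yield i, int(spikes.sum())
        # slowly adapt background/variance so clutter is absorbed over time
        background += alpha * residual
        variance += alpha * (residual ** 2 - variance)

# Synthetic demo: a dim scene with one bright single-frame flash at t = 30.
rng = np.random.default_rng(1)
frames = rng.normal(100, 2, size=(60, 32, 32))
frames[30, 10:12, 10:12] += 80     # the "muzzle flash"
print(list(detect_impulses(frames)))  # expect a detection only near frame 30
```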

    Dynamic Discriminant Analysis with Applications in Computational Surgery

    Get PDF
    University of Minnesota Ph.D. dissertation. May 2017. Major: Mechanical Engineering. Advisor: Timothy Kowalewski. 1 computer file (PDF); x, 185 pages. Background: The field of computational surgery involves the use of new technologies to improve surgical safety and patient outcomes. Two open problems in this field include smart surgical tools for identifying tissues via backend sensing, and classifying surgical skill level using laparoscopic tool motion. Prior work in these fields has been impeded by the lack of a dynamic discriminant analysis technique capable of classifying data from systems with overwhelming similarity. Methods: Four new machine learning algorithms were developed (DLS, DPP, RELIEF-RBF, and Intent Vectors). These algorithms were then applied to the open problems within computational surgery. These algorithms are designed with the specific goal of finding regions of data with maximum discriminating information while ignoring regions of similarity or data scarcity. The results of these techniques are contrasted with current machine learning algorithms found in the literature. Results: For the tissue identification problem, results indicate that the proposed DLS algorithm provides better classification than existing methods. For the surgical skill evaluation problem, results indicate that the Intent Vectors approach provides equivalent or better classification accuracy when compared to prior art. Interpretation: The algorithms presented in this work provide a novel approach to the classification of time-series data for systems with overwhelming similarity by focusing on separability maximization while maintaining a tractable training routine and real-time classification for unseen data.
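    A minimal sketch of the underlying idea, separability maximization on time series that are overwhelmingly similar, might look as follows. The window scoring and logistic classifier are generic stand-ins, not the DLS, DPP, RELIEF-RBF, or Intent Vectors algorithms themselves.

```python
# Hedged sketch: score each time window by class separability and classify
# using only the most discriminative window, ignoring regions of similarity.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fisher_scores(X, y, win=10):
    """Fisher-style separability score per non-overlapping window."""
    scores = []
    for start in range(0, X.shape[1] - win + 1, win):
        seg = X[:, start:start + win].mean(axis=1)
        m0, m1 = seg[y == 0].mean(), seg[y == 1].mean()
        v0, v1 = seg[y == 0].var(), seg[y == 1].var()
        scores.append((m0 - m1) ** 2 / (v0 + v1 + 1e-9))
    return np.array(scores)

# Synthetic time series: classes differ only in one window (t = 40..49),
# mimicking "overwhelming similarity" outside a small discriminative region.
rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(200, 100))
y = np.repeat([0, 1], 100)
X[y == 1, 40:50] += 1.5

best = np.argmax(fisher_scores(X, y)) * 10          # expect 40
Xw = X[:, best:best + 10].mean(axis=1, keepdims=True)
print(best, LogisticRegression().fit(Xw, y).score(Xw, y))
```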

    Taking the Temperature of Sports Arenas: Automatic Analysis of People

    Get PDF

    Optimization of Safety Control System for Civil Infrastructure Construction Projects

    Get PDF
    Labor-intensive repetitive activities are common in civil construction projects. Construction workers are prone to developing musculoskeletal disorder-related injuries while performing such tasks. The government regulatory agency provides minimum safety requirement guidelines to the construction industry that might not be sufficient to prevent accidents and injuries on a construction site. Also, the regulations do not provide insight into what can be done beyond the mandatory requirements to maximize safety, nor do they underscore the level of safety that can be attained and sustained on a site. The research addresses the aforementioned problem in three stages: (i) identification of the theoretical maximum attainable level of safety, the safety frontier; (ii) identification of underlying system inefficiencies and operational inefficiencies; and (iii) identification of the achievable level of safety, sustainable safety. The research proposes a novel approach to identify the safety frontier by kinetic analysis of the human body while performing labor-intensive repetitive tasks. A task is a combination of different unique actions, which further involve several movements. For identifying a safe working procedure, each movement frame needs to be analyzed to compute the joint stress. Multiple instances of repetitive tasks can then be analyzed to identify unique actions exerting minimum stress on joints. The safety frontier is a combination of such unique actions. For this, the research proposes to track the skeletal positional data of workers performing different repetitive tasks. Unique actions involved in all tasks were identified for each movement frame. For this, several machine learning techniques were implemented. Moreover, the inverse dynamics principle was used to compute the stress induced on essential joints. In addition to the inverse dynamics principle, several machine learning algorithms were implemented to predict lower back moments. Then, the safety frontier was computed, combining the unique actions exerting minimum stress on the joints. Furthermore, the research conducted a questionnaire survey with construction experts to identify the factors affecting system inefficiencies, which are not under the control of the project management team, and operational inefficiencies, which are under control. Then, the sustainable safety was computed by adding system inefficiencies to the safety frontier and removing operational inefficiencies from observed safety. The research validated the applicability of the proposed methodology on a real construction site. The application of the random forest classifier, one-vs-rest classifier, and support vector machine approach was validated with high accuracy (>95%). Similarly, random forest regressor, lasso regression, gradient boosting, stacking regression, and deep neural network models were explored to predict the lower back moment. The random forest regressor and deep neural network predicted the lower back moment with an explained variance of 0.582 and 0.700, respectively. The computed safety frontier and sustainable safety can potentially help the construction sector improve safety strategies by providing a higher safety benchmark for monitoring, including the ability to monitor postural safety in real time. Moreover, different industrial sectors such as manufacturing and agriculture can implement a similar approach to identify safe working postures for any labor-intensive repetitive task.
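    The lower-back-moment regression step can be sketched generically. The example below assumes synthetic skeletal features and a synthetic inverse-dynamics target; it shows the shape of the task (random forest regressor evaluated by explained variance, as in the study), not the study's data or exact pipeline.

```python
# Hedged sketch: predicting lower-back moment from skeletal joint positions.
# Features, units, and the "ground truth" function are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import explained_variance_score

rng = np.random.default_rng(3)
n_frames, n_joints = 2000, 15
poses = rng.normal(0, 1, size=(n_frames, n_joints * 3))   # x, y, z per joint

# Stand-in target from "inverse dynamics": here a synthetic function of a
# few trunk-related coordinates plus measurement noise.
moment = 50 * poses[:, 0] - 30 * poses[:, 4] + rng.normal(0, 5, n_frames)

X_tr, X_te, y_tr, y_te = train_test_split(poses, moment, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(explained_variance_score(y_te, model.predict(X_te)))
```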

    Grasp-sensitive surfaces

    Get PDF
    Grasping objects with our hands allows us to skillfully move and manipulate them. Hand-held tools further extend our capabilities by adapting precision, power, and shape of our hands to the task at hand. Some of these tools, such as mobile phones or computer mice, already incorporate information processing capabilities. Many other tools may be augmented with small, energy-efficient digital sensors and processors. This allows graspable objects to learn about the user grasping them - and to support the user's goals. For example, the way we grasp a mobile phone might indicate whether we want to take a photo or call a friend with it - and thus serve as a shortcut to that action. A power drill might sense whether the user is grasping it firmly enough and refuse to turn on if this is not the case. And a computer mouse could distinguish between intentional and unintentional movement and ignore the latter. This dissertation gives an overview of grasp sensing for human-computer interaction, focusing on technologies for building grasp-sensitive surfaces and challenges in designing grasp-sensitive user interfaces. It comprises three major contributions: a comprehensive review of existing research on human grasping and grasp sensing, a detailed description of three novel prototyping tools for grasp-sensitive surfaces, and a framework for analyzing and designing grasp interaction. For nearly a century, scientists have analyzed human grasping. My literature review gives an overview of definitions, classifications, and models of human grasping. A small number of studies have investigated grasping in everyday situations. They found a much greater diversity of grasps than described by existing taxonomies. This diversity makes it difficult to directly associate certain grasps with users' goals. In order to structure related work and my own research, I formalize a generic workflow for grasp sensing. It comprises *capturing* of sensor values, *identifying* the associated grasp, and *interpreting* the meaning of the grasp. A comprehensive overview of related work shows that implementation of grasp-sensitive surfaces is still hard, that researchers often are not aware of related work from other disciplines, and that intuitive grasp interaction has not yet received much attention. In order to address the first issue, I developed three novel sensor technologies designed for grasp-sensitive surfaces. These mitigate one or more limitations of traditional sensing techniques: **HandSense** uses four strategically positioned capacitive sensors for detecting and classifying grasp patterns on mobile phones. The use of custom-built high-resolution sensors allows detecting proximity and avoids the need to cover the whole device surface with sensors. User tests showed a recognition rate of 81%, comparable to that of a system with 72 binary sensors. **FlyEye** uses optical fiber bundles connected to a camera for detecting touch and proximity on arbitrarily shaped surfaces. It allows rapid prototyping of touch- and grasp-sensitive objects and requires only very limited electronics knowledge. For FlyEye I developed a *relative calibration* algorithm that allows determining the locations of groups of sensors whose arrangement is not known. **TDRtouch** extends Time Domain Reflectometry (TDR), a technique traditionally used for inspecting cable faults, for touch and grasp sensing.
TDRtouch is able to locate touches along a wire, allowing designers to rapidly prototype and implement modular, extremely thin, and flexible grasp-sensitive surfaces. I summarize how these technologies cater to different requirements and significantly expand the design space for grasp-sensitive objects. Furthermore, I discuss challenges for making sense of raw grasp information and categorize interactions. Traditional application scenarios for grasp sensing use only the grasp sensor's data, and only for mode-switching. I argue that data from grasp sensors is part of the general usage context and should only be used in combination with other context information. For analyzing and discussing the possible meanings of grasp types, I created the GRASP model. It describes five categories of influencing factors that determine how we grasp an object: *Goal* -- what we want to do with the object, *Relationship* -- what we know and feel about the object we want to grasp, *Anatomy* -- hand shape and learned movement patterns, *Setting* -- surrounding and environmental conditions, and *Properties* -- texture, shape, weight, and other intrinsics of the object. I conclude the dissertation with a discussion of upcoming challenges in grasp sensing and grasp interaction, and provide suggestions for implementing robust and usable grasp interaction.
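    The capture-identify-interpret workflow formalized above can be made concrete with a toy example. The four-sensor layout loosely echoes HandSense, but the readings, grasp labels, and actions below are hypothetical, and the nearest-neighbour classifier is a generic stand-in.

```python
# Hedged sketch of the capture -> identify -> interpret workflow; sensor
# values, grasp labels, and actions are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# capture: four capacitive readings (e.g., left, right, top, bottom)
training_readings = [
    [0.9, 0.9, 0.1, 0.1],   # two-handed landscape grip
    [0.8, 0.1, 0.7, 0.2],   # one-handed portrait grip
    [0.1, 0.1, 0.1, 0.1],   # lying on a table
]
training_grasps = ["landscape", "portrait", "none"]

# identify: map raw sensor values to a grasp label
identify = KNeighborsClassifier(n_neighbors=1).fit(training_readings,
                                                   training_grasps)

# interpret: map the grasp, *together with other context*, to a meaning,
# echoing the argument that grasp data alone should not drive the action
def interpret(grasp, screen_on):
    actions = {"landscape": "open camera", "portrait": "open dialer"}
    return actions.get(grasp, "do nothing") if screen_on else "do nothing"

reading = [[0.85, 0.88, 0.15, 0.12]]
grasp = identify.predict(reading)[0]
print(grasp, "->", interpret(grasp, screen_on=True))  # landscape -> open camera
```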

    Development of Mining Sector Applications for Emerging Remote Sensing and Deep Learning Technologies

    Get PDF
    This thesis uses neural networks and deep learning to address practical, real-world problems in the mining sector. The main focus is on developing novel applications in the area of object detection from remotely sensed data. This area has many potential mining applications and is an important part of moving towards data-driven strategic decision making across the mining sector. The scientific contributions of this research are twofold: firstly, each of the three case studies demonstrates a new application which couples remote sensing and neural network-based technologies for improved data-driven decision making. Secondly, the thesis presents a framework to guide implementation of these technologies in the mining sector, providing a guide for researchers and professionals undertaking further studies of this type. The first case study builds a fully connected neural network method to locate supporting rock bolts from 3D laser scan data. This method combines input features from the remote sensing and mobile robotics research communities, generating accuracy scores up to 22% higher than those found using either feature set in isolation. The neural network approach is also compared to the widely used random forest classifier and is shown to outperform this classifier on the test datasets. Additionally, the algorithm's performance is enhanced by adding a confusion class to the training data and by grouping the output predictions using density-based spatial clustering. The method is tested on two datasets, gathered using different laser scanners, in different types of underground mines which have different rock bolting patterns. In both cases the method is found to be highly capable of detecting the rock bolts, with recall scores of 0.87-0.96. The second case study investigates modern deep learning for LiDAR data. Here, multiple transfer learning strategies and LiDAR data representations are examined for the task of identifying historic mining remains. A transfer learning approach based on a lunar crater detection model is used, due to the task similarities between both the underlying data structures and the geometries of the objects to be detected. The relationship between dataset resolution and detection accuracy is also examined, with the results showing that the approach is capable of detecting pits and shafts to a high degree of accuracy, with precision and recall scores between 0.80-0.92, provided the input data is of sufficient quality and resolution. Alongside resolution, different LiDAR data representations are explored, showing that the precision-recall balance varies depending on the input LiDAR data representation. The third case study creates a deep convolutional neural network model to detect artisanal-scale mining from multispectral satellite data. This model is trained from initialisation without transfer learning and demonstrates that accurate multispectral models can be built from a smaller training dataset when appropriate design and data augmentation strategies are adopted. Alongside the deep learning model, novel mosaicing algorithms are developed both to improve cloud cover penetration and to decrease noise in the final prediction maps. When applied to the study area, the results from this model provide valuable information about the expansion, migration and forest encroachment of artisanal-scale mining in southwestern Ghana over the last four years.
Finally, this thesis presents an implementation framework for these neural network-based object detection models, to generalise the findings from this research to new mining-sector deep learning tasks. This framework can be used to identify applications which would benefit from neural network approaches; to build the models; and to apply these algorithms in a real-world environment. The case study chapters confirm that the neural network models are capable of interpreting remotely sensed data to a high degree of accuracy on real-world mining problems, while the framework guides the development of new models to solve a wide range of related challenges.
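    The shape of the first case study's pipeline, per-point classification with an added confusion class followed by density-based clustering of the detections, can be sketched as follows. The features, class labels, and DBSCAN parameters are synthetic stand-ins, not the thesis's laser-scan pipeline.

```python
# Hedged sketch: a fully connected network labels 3D points (including an
# extra "confusion" class), then DBSCAN groups bolt-labelled points so each
# physical bolt is reported once. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)
n = 600
labels = rng.choice([0, 1, 2], size=n)       # 0 bolt, 1 rock, 2 confusion
features = rng.normal(0, 1, size=(n, 8))     # per-point descriptors
features[labels == 0] += 2.0                 # make bolt points separable

net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(features, labels)

# Synthetic 3D positions: bolt points sit near 5 "bolt" locations, the rest
# are scattered through the tunnel volume.
centers = rng.uniform(0, 10, size=(5, 3))
xyz = rng.uniform(0, 10, size=(n, 3))
bolt_idx = np.flatnonzero(labels == 0)
xyz[bolt_idx] = centers[bolt_idx % 5] + rng.normal(0, 0.05, (bolt_idx.size, 3))

# Cluster the points the network calls "bolt" into discrete detections.
pred_bolt_xyz = xyz[net.predict(features) == 0]
det = DBSCAN(eps=0.3, min_samples=3).fit(pred_bolt_xyz)
print("bolt detections:", len(set(det.labels_) - {-1}))   # expect ~5
```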

    A non-holonomic, highly human-in-the-loop compatible, assistive mobile robotic platform guidance navigation and control strategy

    Get PDF
    The provision of assistive mobile robotics for empowering and providing independence to the infirm, disabled and elderly in society has been the subject of much research. The issue of providing navigation and control assistance to users, enabling them to drive their powered wheelchairs effectively, can be complex and wide-ranging: some users fatigue quickly and can find that they are unable to operate the controls safely, others may have brain injury resulting in periodic hand tremors, and quadriplegics may use a straw-like switch in their mouth to provide a digital control signal. Advances in autonomous robotics have led to the development of smart wheelchair systems which have attempted to address these issues; however, the autonomous approach has, according to research, not been successful: users report that they want to be active drivers, not passengers. Recent methodologies have been to use collaborative or shared control, which aims to predict or anticipate the need for the system to take over control when some pre-decided threshold has been met, yet these approaches still take away control from the user. This removal of human supervision and control by an autonomous system makes the responsibility for accidents seriously problematic. This thesis introduces a new human-in-the-loop control structure with real-time assistive levels. One of these levels offers improved dynamic modelling, and three of these levels offer unique and novel real-time solutions for: collision avoidance, localisation and waypoint identification, and assistive trajectory generation. This architecture and these assistive functions always allow the user to remain fully in control of any motion of the powered wheelchair, as shown in a series of experiments.
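    One way to picture "assistance that never takes control away" is a velocity-shaping rule that scales, but never overrides, the user's command. The sketch below is a deliberately simplified illustration of that design goal under assumed sensor inputs, not the thesis's control architecture.

```python
# Hedged sketch: shape the commanded velocity near obstacles while leaving
# the user in charge. The scaling law and sensor model are assumptions.
import numpy as np

def assisted_command(user_v, user_w, obstacle_dist, safe_dist=1.5):
    """Scale forward speed by obstacle proximity; steering stays the user's."""
    # 1.0 when clear, falling linearly to 0.0 at contact; the assistance
    # never reverses or re-steers on the user's behalf.
    scale = np.clip(obstacle_dist / safe_dist, 0.0, 1.0)
    return user_v * scale, user_w

# User pushes forward at 1.0 m/s while turning; obstacle 0.5 m ahead.
v, w = assisted_command(user_v=1.0, user_w=0.3, obstacle_dist=0.5)
print(v, w)   # speed limited to ~0.33 m/s, turn rate untouched
```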

    Connectivity Analysis of Brain States and Applications in Brain-Computer Interfaces

    Get PDF
    The human brain is organized as a large number of functionally correlated but spatially distributed cortical neurons. Cognitive processes are usually associated with dynamic interactions among multiple brain regions. Therefore, the understanding of brain functions requires the investigation of brain interaction patterns. This thesis contains two main aspects. The first aspect focuses on the neural basis for cognitive processes through the use of brain connectivity analysis. The second part focuses on assessing brain connectivity patterns in realistic scenarios, e.g., in-car BCI and stroke patients. In the first part, we explored the neural correlates of error-related brain activity. We recorded scalp electroencephalogram (EEG) from 15 healthy subjects while they monitored the movement of a cursor on a computer screen, yielding particular brain connectivity patterns after the monitoring of external errors. This supports the presence of a common role of the medial frontal cortex in coordinating cross-regional activity during brain error processes, independent of their causes, whether self-generated or external events. This part also included the investigation of connectivity during left/right hand motor imagery, including 9 healthy subjects, which demonstrated particular intrahemispheric and interhemispheric information flows in the two motor imagery tasks, i.e., the ÎŒ rhythm is highly modulated in intrahemispheric interactions, whereas ÎČ and Îł are modulated in interhemispheric interactions. This part also explored the neural correlates of reaction time during driving. An experiment with 15 healthy subjects in a car simulator was designed, in which they needed to perform lane changes to avoid collisions with obstacles. Significant neural modulations were found in the ERP (event-related potential), PSD (power spectral density), and frontoparietal network, which seem to reflect the underlying information transfer from sensory representation in the parietal cortex to behavioral adjustment in the frontal cortex. In the second part, we first explored the feasibility of using a BCI as a driving assistant system, in which visual stimuli were presented to evoke error/correct-related potentials, which were classified to infer the driver's preferred turning direction. The system was validated in a car simulator with 22 subjects, of whom 7 joined online tests. The system was also tested in a real car, yielding similar brain patterns and comparable classification accuracy. The second part also carried out brain connectivity analysis in stroke patients. We performed an exploratory study to correlate the recovery effects of BCI therapy with the quantification of connectivity between the healthy and lesioned hemispheres. The results indicate the benefits of BCI therapy for stroke patients, i.e., brain connectivity becomes more similar to healthy patterns, with increased flow from the damaged to the undamaged cortex and decreased flow from the undamaged to the damaged cortex. Briefly, this thesis presents exploratory studies of brain connectivity analysis, investigating the neural basis of cognitive processes and its contributions in the decoding phase. In particular, such analysis is not limited to laboratory research, but is also extended to clinical trials and driving scenarios, further supporting the findings observed in the ideal condition.
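    A basic connectivity computation of the kind underlying such analyses can be illustrated with two synthetic EEG channels. Magnitude-squared coherence is just one simple undirected measure; the thesis's analyses also involve directed information-flow estimates, and the data below is synthetic.

```python
# Hedged sketch: magnitude-squared coherence between two simulated EEG
# channels that share a 10 Hz (mu-band) component plus independent noise.
import numpy as np
from scipy.signal import coherence

fs = 256                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)

shared = np.sin(2 * np.pi * 10 * t)        # common oscillatory source
ch_frontal = shared + rng.normal(0, 1, t.size)
ch_parietal = 0.8 * shared + rng.normal(0, 1, t.size)

f, coh = coherence(ch_frontal, ch_parietal, fs=fs, nperseg=512)
print(f"peak coherence at {f[np.argmax(coh)]:.1f} Hz = {coh.max():.2f}")
```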