
    Molecular Docking With Haptic Guidance and Path Planning

    Get PDF
    Molecular docking drives many important biological processes, including immune system recognition and cellular signalling. Molecular docking occurs when molecules interact and form complexes. Predicting how specific molecules dock with each other using computational methods has several applications, including understanding diseases and virtual drug design. The goal of molecular docking prediction is to find the lowest-energy ligand states: the lower the energy of a state, the more likely that state is docked and biologically feasible. Existing automated computational methods can be time-intensive, especially when using direct molecular dynamics simulation. One way to reduce this computational cost is to use coarser-grained models that approximate molecular docking. Coarse-grained molecular docking prediction is generally performed by first sampling ligand states using a rigid-body model or a partial-flexibility model to reduce computation, then screening the states. The ligand states are screened using a scoring function, usually a potential energy function for interactions between the atoms in each molecule. Ligand state search algorithms still have a significant computational cost if a large portion of the state space is to be explored. Instead of an automated ligand state search method, a human operator can explore the state space, aided by haptic force-feedback devices providing guidance based on the energy function. Haptic guidance has been used for immersive semi-automatic and manual molecular docking at the single-operator scale, and a large amount of ligand state space can be explored with many human operators in a crowdsourced effort. Players of an interactive crowdsourced protein folding puzzle game have helped find protein folding prediction solutions, but without haptic feedback. Interactive crowdsourced methods for molecular docking prediction are not well explored, although non-interactive crowdsourced systems such as Folding@home can be adapted for molecular docking. This thesis presents a molecular docking game that produces low-potential-energy ligand states and motion paths with crowdsourced-scale potential. In an exploratory user study, participants were assigned four different types of devices with varying levels of haptic guidance to search for a potentially docked ligand state. The results demonstrate some effect of device type and haptic guidance. However, the differences are minimal, potentially enabling the use of commonly available input devices in a crowdsourced setting.
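    The screening step above can be made concrete with a toy scoring function. The sketch below sums a generic 12-6 Lennard-Jones potential over all ligand/receptor atom pairs; the function name and the single epsilon/sigma parameter pair are illustrative assumptions, not the thesis's actual scoring code.

```python
import numpy as np

def lennard_jones_score(ligand_xyz, receptor_xyz, epsilon=0.2, sigma=3.4):
    """Sum a 12-6 Lennard-Jones potential over all ligand/receptor atom pairs.

    ligand_xyz, receptor_xyz: (N, 3) and (M, 3) arrays of atom coordinates in
    angstroms. epsilon (kcal/mol) and sigma (angstrom) are single generic
    values standing in for per-atom-type force-field parameters.
    """
    # Pairwise distances between every ligand atom and every receptor atom.
    diff = ligand_xyz[:, None, :] - receptor_xyz[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    sr6 = (sigma / r) ** 6
    # A lower total energy marks a more plausible docked pose.
    return np.sum(4.0 * epsilon * (sr6 ** 2 - sr6))
```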

    Probabilistic Roadmaps for Virtual Camera Pathing with Cinematographic Principles

    Get PDF
    As technology increasingly pervades everyday life, the visual side of computing, computer graphics, becomes increasingly important. This thesis presents a system for the automatic generation of virtual camera paths for fly-throughs of a digital scene. The sample scene used in this work is an underwater setting featuring a shipwreck model with other virtual underwater elements such as rocks, bubbles, and caustics. The digital shipwreck model was reconstructed from an actual World War II shipwreck resting off the coast of Malta: video and sonar scans from an autonomous underwater vehicle were used in a photogrammetry pipeline to create the model. This thesis presents an algorithm that automatically generates virtual camera paths using a robotics motion planning algorithm, specifically the probabilistic roadmap, which uses a rapidly-exploring random tree to quickly cover a space and generate small maps with good coverage. For this work, the camera pitch and height along a specified path were automatically generated using cinematographic and geometric principles. These principles were used to evaluate potential viewpoints and influence whether a view is used in the final path. A computational evaluation of ‘the rule of thirds’ and an evaluation of the model normals relative to the camera viewpoint represent the cinematographic and geometric principles, respectively. In addition to the system that automatically generates virtual camera paths, a user study is presented which evaluates ten different videos produced via camera paths from this system. The videos were created using different viewpoint evaluation methods and different path generation characteristics. The user study indicates that users prefer paths generated by our system over flat and randomly generated paths. Specifically, users prefer paths generated using the computational evaluation of the rule of thirds, and paths that show the wreck from a large variety of angles but without too much camera undulation.
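    One way to make a computational ‘rule of thirds’ concrete is to score how close the subject's projected screen position falls to the nearest of the four third-line intersections. The sketch below is a minimal illustrative version under that assumption, not the thesis's actual metric.

```python
import math

def rule_of_thirds_score(subject_x, subject_y, width, height):
    """Score a viewpoint by how close the subject's projected position lies
    to the nearest third-line intersection ('power point'). Returns 1.0 at
    an intersection, falling toward 0.0 near the opposite frame corner."""
    power_points = [(width * i / 3, height * j / 3)
                    for i in (1, 2) for j in (1, 2)]
    nearest = min(math.hypot(subject_x - px, subject_y - py)
                  for px, py in power_points)
    # Normalize by the frame diagonal so the score stays in [0, 1].
    return 1.0 - nearest / math.hypot(width, height)

# Hypothetical usage: the wreck projects to pixel (430, 240) in a 1280x720 frame.
print(rule_of_thirds_score(430, 240, 1280, 720))
```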

    Advanced Robot Path Planning (RRT)

    Get PDF
    This master's thesis deals with path planning for an omnidirectional mobile robot using the RRT algorithm (Rapidly-exploring Random Tree). The theoretical part describes the basic path planning algorithms and presents a closer look at RRT and its potential. The practical part covers the design and creation of an essentially multiplatform C++ application in the Windows 7 environment using the Qt 4.8.0 application framework, which implements advanced RRT algorithms with a parameterizable solver and a special batch mode. This mode is used to test the effectiveness of solver settings on given tasks and is based on post-processing and visualization of the measured tasks' output in Python. Computed paths can be improved with shortening algorithms, and the resulting trajectory is sent to the Maxon Compact Drives of the omnidirectional mobile platform via CANopen. The application puts emphasis on a modern graphical user interface with a reliable and high-performance 2D graphics engine.
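    For illustration, a minimal 2D RRT in the spirit of the algorithms the application implements might look like the following sketch; the sampling domain, step size, and goal bias are assumed values, and a real planner would add the shortening post-processing described above.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=5000, goal_tol=0.5):
    """Grow a rapidly-exploring random tree from start toward goal.
    is_free(p) must return True if point p = (x, y) is collision-free.
    Samples are drawn from an assumed 10x10 workspace."""
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Bias a small fraction of samples toward the goal to speed convergence.
        sample = goal if random.random() < 0.05 else (
            random.uniform(0, 10), random.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        # Steer from the nearest node toward the sample by at most one step.
        new = sample if d <= step else (
            nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:           # walk parents back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None                            # no path found within iters
```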

    Servo Control of an Assistive Robotic Arm Using an Artificial Stereo Vision System and an Eye Tracker

    Get PDF
    The recent increased interest in the use of serial robots to assist individuals with severe upper limb disability has brought up an important issue: the design of the right human-computer interaction (HCI). So far, the control of assistive robotic arms (ARAs) has often been done using a joystick. For users with a severe upper limb disability, this type of control is not a suitable option. This master's thesis presents a novel solution to overcome this issue. The developed solution is composed of two main components. The first is a stereo vision system used to inform the ARA of the content of its workspace. It is important for the ARA to be aware of what is present in its workspace, since it needs to avoid unwanted objects while on its way to grasp the object of interest. The second component is the actual HCI, where a low-cost eye tracker is used. The eye tracker was chosen because the eyes often remain functional even in patients with severe upper limb disability. However, low-cost, commercially available eye trackers are mainly designed for 2D applications with a screen, which is not intuitive for the user, who must constantly watch a 2D reproduction of the scene on a screen instead of the 3D scene itself. In other words, the eye tracker needs to be made viable in a 3D environment without the use of a screen, which is what this master's thesis achieves. A stereo vision system, an eye tracker, and an ARA are the main components of the developed system, named PoGARA, short for Point of Gaze Assistive Robotic Arm. Using PoGARA, the user was able to reach and grasp an object in 80% of the trials, with an average time of 13.7 seconds without obstacles, 15.3 seconds with one obstacle, and 16.3 seconds with two obstacles.
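    A core step a system like PoGARA needs is turning a 2D gaze point into a 3D grasp target using stereo depth. The sketch below back-projects a gazed pixel through a pinhole camera model; the intrinsics and depth map are hypothetical stand-ins, since the abstract does not give the system's calibration.

```python
import numpy as np

def gaze_to_3d(gaze_px, depth_map, fx, fy, cx, cy):
    """Back-project a gaze pixel (u, v) into camera coordinates (X, Y, Z)
    using the depth (in meters) the stereo system measured at that pixel."""
    u, v = gaze_px
    z = depth_map[v, u]                  # stereo depth at the gazed pixel
    x = (u - cx) * z / fx                # pinhole-model back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with hypothetical intrinsics for a 640x480 stereo camera.
depth = np.full((480, 640), 0.9)         # fake flat depth map, 0.9 m away
target = gaze_to_3d((320, 240), depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(target)                             # 3D point the arm would reach toward
```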

    A survey on automated detection and classification of acute leukemia and WBCs in microscopic blood cells

    Full text link
    Leukemia (blood cancer) is an abnormal proliferation of White Blood Cells, or Leukocytes (WBCs), in the bone marrow and blood. Pathologists can diagnose leukemia by looking at a person's blood sample under a microscope, identifying and categorizing it by counting various blood cells and assessing morphological features. This technique is time-consuming, and the pathologist's professional skills and experience affect the procedure as well. In computer vision, traditional machine learning and deep learning techniques are practical roadmaps for increasing the accuracy and speed of diagnosing and classifying medical images such as microscopic blood cells. This paper provides a comprehensive analysis of the detection and classification of acute leukemia and WBCs in microscopic blood cell images. First, we divide previous works into six categories based on the output of the models. Then, we describe the various steps of detecting and classifying acute leukemia and WBCs, including data augmentation, preprocessing, segmentation, feature extraction, feature selection (reduction), and classification, and focus on the classification step. Finally, we divide automated detection and classification methods into three categories, traditional, Deep Neural Network (DNN), and mixed (traditional and DNN), based on the type of classifier used in the classification step, and analyze them. The results of this study show that in the diagnosis and classification of acute leukemia and WBCs, the Support Vector Machine (SVM) classifier among traditional machine learning models and the Convolutional Neural Network (CNN) classifier among deep learning models have been widely employed, and the performance metrics of models using these classifiers are higher than those of the other models.
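    A minimal sketch of the 'traditional' branch of this taxonomy, hand-extracted morphological features fed to an SVM, is shown below; the features, labels, and data are synthetic placeholders, not any surveyed paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in features (e.g. cell area, perimeter, circularity,
# nucleus ratio) and labels: 1 = leukemic blast, 0 = normal WBC.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Standardize features, then fit an RBF-kernel SVM, the classifier the
# survey finds most widely employed among traditional methods.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```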

    Intelligent Navigation Service Robot Working in a Flexible and Dynamic Environment

    Get PDF
    Numerous sensor fusion techniques have been reported in the literature for a number of robotics applications. These techniques involve the use of different sensors in different configurations. However, in the case of food delivery, the possibility of implementation has been overlooked. In restaurants and food delivery spots, transferring food to the correct table, without running into other robots or diners or toppling over, is strictly required. In this project, a particular algorithm module has been proposed and implemented to enhance the robot driving methodology and maximize robot functionality, accuracy, and the food transfer experience. The emphasis has been on enhancing movement accuracy to reach the targeted table from start to end. Four major elements were designed to complete this project: mechanical, electrical, electronic, and programming. Since the floor condition greatly affects the wheels and the choice of turning angle, movement accuracy was improved during the project. The robot was successfully able to receive a command from the restaurant and deliver the food to the customers' tables, avoiding any obstacles on the way, as sketched below. The robot is equipped with two trays to mount the food, and with well-configured voices to welcome and greet the customer. The performance has been evaluated using routine robot movement tests. As part of this study, the designed wheeled service robot required a high-performance real-time processor; with an adequate processor, the experimental results showed a highly effective search robot methodology. The study concluded that a minimum number of sensors is needed if they are placed appropriately and used effectively on the robot's body, as navigation can be performed with a small set of sensors. The Arduino Due was used to provide a real-time operating system, and it delivered very reliable data processing and transfer throughout regular operation. Furthermore, an easy-to-use application has been developed to improve the user experience, so that the operator can interact directly with the robot via a special settings screen. Using this feature, it is possible to modify advanced settings such as voice commands or the IP address without having to go back to the code.
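    At its simplest, the delivery behaviour described above reduces to a move/avoid loop. The sketch below is a hypothetical illustration of that loop; the I/O callbacks (read_front_distance_cm, drive, at_pose) and the 30 cm stop distance are assumptions, since the text does not specify the robot's interfaces.

```python
OBSTACLE_CM = 30          # assumed minimum clearance before stopping

def deliver(table_pose, read_front_distance_cm, drive, at_pose):
    """Step a simple move/avoid loop until the robot reaches table_pose.

    read_front_distance_cm(): range to the nearest obstacle ahead, in cm.
    drive(v, w): command linear speed v (m/s) and turn rate w (rad/s).
    at_pose(p): True once the robot has reached pose p.
    """
    while not at_pose(table_pose):
        if read_front_distance_cm() < OBSTACLE_CM:
            drive(0.0, 0.0)              # obstacle ahead: stop...
            drive(0.0, 0.5)              # ...then turn in place to avoid it
        else:
            drive(0.3, 0.0)              # path clear: move forward
    drive(0.0, 0.0)
    print("Arrived; greeting the customer.")
```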

    Intelligent Tutoring Simulator for Robotic Operations, Applied to the Canadarm on the International Space Station

    Get PDF
    This thesis aims to develop an intelligent tutoring simulator for learning robotic manipulations, applicable to the Canadian robotic arm on the International Space Station. The simulator, called Roman Tutor, is a proof of concept of an autonomous, continuous training simulator for complex robotic manipulations. Such a concept is particularly relevant for future space missions to Mars or the Moon, even though the Canadian arm itself is unsuitable for such missions because of its excessive complexity. Demonstrating that a simulator can be designed that is capable, to some extent, of giving feedback similar to that of a human teacher could inspire new ideas for similar concepts, applicable to simpler robots, that would be used in upcoming space missions. To build this prototype, three original components are developed and integrated: first, a trajectory planner for dynamic environments with hard and flexible constraints; second, an automatic task demonstration generator, which uses the trajectory planner to find a solution trajectory for a robot arm displacement task and animation planning techniques to film the obtained solution; and third, a pedagogical model implementing intervention strategies to assist an operator manipulating the SSRMS. The assistance provided to an operator in Roman Tutor relies on task demonstrations produced by the automatic demonstration generator on the one hand, and on the trajectory planner on the other, to follow the operator's progress on a task, provide help, and correct the operator as needed.

    Appearance and Geometry Assisted Visual Navigation in Urban Areas

    Get PDF
    Navigation is a fundamental task for mobile robots in applications such as exploration, surveillance, and search and rescue. The task involves solving the simultaneous localization and mapping (SLAM) problem, where a map of the environment is constructed. In order for this map to be useful for a given application, a suitable scene representation needs to be defined that allows spatial information sharing between robots, and also between humans and robots. High-level scene representations have the benefit of being more robust and more readily exchangeable for interpretation. With the aim of higher-level scene representation, in this work we explore high-level landmarks and their usage, using geometric and appearance information to assist mobile robot navigation in urban areas. In visual SLAM, image registration is a key problem. While feature-based methods such as scale-invariant feature transform (SIFT) matching are popular, they do not utilize appearance information as a whole and suffer from low-resolution images. We study appearance-based methods and propose a scale-space integrated Lucas-Kanade method that can estimate geometric transformations while taking into account image appearance at different resolutions. We compare our method against state-of-the-art methods and show that it can register images efficiently with high accuracy. In urban areas, planar building facades (PBFs) are basic components of the quasi-rectilinear environment. Hence, segmentation and mapping of PBFs can increase a robot’s abilities of scene understanding and localization. We propose a vision-based PBF segmentation and mapping technique that combines both appearance and geometric constraints to segment out planar regions. Then, geometric constraints such as reprojection errors, orientation constraints, and coplanarity constraints are used in an optimization process to improve the mapping of PBFs. A major issue in monocular visual SLAM is scale drift. While depth sensors, such as lidar, are free from scale drift, they are usually more expensive than cameras. To enable low-cost mobile robots equipped with monocular cameras to obtain accurate position information, we use a 2D lidar map to rectify imprecise visual SLAM results using planar structures. We propose a two-step optimization approach assisted by a penalty function to improve on low-quality local minima results. Robot paths for navigation can be either automatically generated by a motion planning algorithm or provided by a human. In both cases, a scene representation of the environment, i.e., a map, is useful to specify meaningful tasks for the robot. However, SLAM usually produces a sparse scene representation consisting of low-level landmarks, such as point clouds, which are neither convenient nor intuitive for task specification. We present a system that allows users to program mobile robots using high-level landmarks from appearance data.
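    As a rough illustration of direct, appearance-based registration of the kind discussed above, the sketch below runs OpenCV's ECC alignment coarse-to-fine over an image pyramid. It stands in for, and is not, the thesis's scale-space Lucas-Kanade formulation.

```python
import cv2
import numpy as np

def register_coarse_to_fine(ref, mov, levels=3):
    """Estimate an affine warp aligning mov to ref, coarse to fine.
    ref, mov: single-channel float32 images of the same size."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    for lvl in reversed(range(levels)):
        scale = 1.0 / (2 ** lvl)
        r = cv2.resize(ref, None, fx=scale, fy=scale)
        m = cv2.resize(mov, None, fx=scale, fy=scale)
        # Refine the warp at this resolution using the whole-image
        # (appearance-based) ECC criterion.
        _, warp = cv2.findTransformECC(r, m, warp, cv2.MOTION_AFFINE,
                                       criteria, None, 5)
        if lvl > 0:
            warp[:, 2] *= 2.0   # carry the translation up to the finer level
    return warp
```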

    Widening the view angle of auto-multiscopic display, denoising low brightness light field data and 3D reconstruction with delicate details

    Get PDF
    This doctoral thesis presents the results of my work on widening the viewing angle of the auto-multiscopic display, denoising light field data captured in low-brightness circumstances, and reconstructing subject surfaces with delicate details from microscopy image sets. Automultiscopic displays carefully control the distribution of emitted light over space, direction (angle), and time, so that even a static displayed image can encode parallax across viewing directions (a light field). This allows simultaneous observation by multiple viewers, each perceiving 3D from their own (correct) perspective. Currently, the illusion can only be effectively maintained over a narrow range of viewing angles. We propose and analyze a simple solution to widen the range of viewing angles for automultiscopic displays that use parallax barriers: we insert a refractive medium, with a high refractive index, between the display and the parallax barriers. The inserted medium warps the exitant light field in a way that increases the potential viewing angle. We analyze the consequences of this warp and build a prototype with a 93% increase in the effective viewing angle. Additionally, we developed an integral image synthesis method that can efficiently address the refraction introduced by the inserted medium without the use of ray tracing. Capturing a light field image with a short exposure time is preferable for eliminating motion blur, but it also leads to low brightness in a low-light environment, which results in a low signal-to-noise ratio. Most light field denoising methods apply a regular 2D image denoising method directly to the sub-aperture images of a 4D light field, but this is not suitable for focused light field data, whose sub-aperture image resolution is too low for regular denoising methods. Therefore, we propose a deep learning denoising method based on the micro-lens images of a focused light field that denoises the depth map and the original micro-lens image set simultaneously, achieving high-quality total-focus images from low-brightness focused light field data. In areas like digital museums and remote research, 3D reconstruction of subjects with delicate details is desired, and technology such as 3D reconstruction based on macro photography has been used successfully for various purposes. We intend to push it further by using a microscope rather than a macro lens, which should capture microscopy-level details of the subject. We design and implement a scanning method that can capture a microscopy image set from a curved surface using a robotic arm, along with a 3D reconstruction method suitable for the microscopy image set.
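    The widening effect of the inserted medium follows from Snell's law: a ray traveling from a pixel toward its barrier slit at internal angle theta exits into air at asin(n*sin(theta)), so the same pixel/slit geometry spans a wider angle. The worked sketch below uses illustrative dimensions, not the prototype's actual geometry.

```python
import math

def exit_half_angle_deg(pixel_offset_mm, gap_mm, n):
    """Half viewing angle for a pixel offset from its barrier slit by
    pixel_offset_mm, behind a gap of gap_mm filled with an index-n medium."""
    theta_in = math.atan2(pixel_offset_mm, gap_mm)   # angle inside the medium
    # Snell's law at the medium/air interface: n*sin(theta_in) = sin(theta_out).
    return math.degrees(math.asin(min(1.0, n * math.sin(theta_in))))

air = exit_half_angle_deg(0.5, 3.0, 1.0)     # gap filled with air
glass = exit_half_angle_deg(0.5, 3.0, 1.5)   # gap filled with an n=1.5 medium
print(f"air: {air:.1f} deg, high-index: {glass:.1f} deg")  # ~9.5 vs ~14.3 deg
```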

    Augmented Image-Guidance for Transcatheter Aortic Valve Implantation

    Get PDF
    The introduction of transcatheter aortic valve implantation (TAVI), an innovative stent-based technique for delivery of a bioprosthetic valve, has resulted in a paradigm shift in treatment options for elderly patients with aortic stenosis. While there have been major advancements in valve design and access routes, TAVI still relies largely on single-plane fluoroscopy for intraoperative navigation and guidance, which provides only gross imaging of anatomical structures. Inadequate imaging leading to suboptimal valve positioning contributes to many of the early complications experienced by TAVI patients, including valve embolism, coronary ostia obstruction, paravalvular leak, heart block, and secondary nephrotoxicity from contrast use. A potential method of providing improved image guidance for TAVI is to combine the information derived from intra-operative fluoroscopy and transesophageal echocardiography (TEE) with pre-operative CT data. This would allow the 3D anatomy of the aortic root to be visualized along with real-time information about valve and prosthesis motion. The combined information can be visualized as a 'merged' image, where the different imaging modalities are overlaid upon each other, or as an 'augmented' image, where the locations of key target features identified in one image are displayed on a different imaging modality. This research develops image registration techniques to bring fluoroscopy, TEE, and CT models into a common coordinate frame, with an image processing workflow that is compatible with the TAVI procedure. The techniques are designed to be fast enough to allow real-time image fusion and visualization during the procedure, with an intra-procedural set-up requiring only a few minutes. TEE-to-fluoroscopy registration was achieved using a single-perspective TEE probe pose estimation technique. The alignment of CT and TEE images was achieved using custom-designed algorithms to extract aortic root contours from XPlane TEE images and matching the shape of these contours to a CT-derived surface model. Registration accuracy was assessed on porcine and human images by identifying targets (such as guidewires or coronary ostia) on the different imaging modalities and measuring the correspondence of these targets after registration. The merged images demonstrated good visual alignment of aortic root structures, and quantitative assessment measured an error of less than 1.5 mm for TEE-fluoroscopy registration and less than 6 mm for CT-TEE registration. These results suggest that the image processing techniques presented have potential for development into a clinical tool to guide TAVI. Such a tool could potentially reduce TAVI complications, reducing morbidity and mortality and allowing for a safer procedure.
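    At the heart of bringing two modalities into a common coordinate frame is a rigid registration solve. The sketch below shows the standard Kabsch/Procrustes SVD solution on corresponding landmark points; it is a generic stand-in, not the thesis's contour-to-surface matching algorithm.

```python
import numpy as np

def rigid_register(src, dst):
    """Find R (3x3 rotation) and t (3-vector) so that R @ src_i + t ~ dst_i,
    given corresponding (N, 3) landmark arrays src and dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical usage: three landmarks (e.g. coronary ostia, annulus point)
# located in TEE coordinates (src) and CT coordinates (dst), in mm.
src = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 5.0]])
dst = src @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + np.array([5.0, 2.0, -3.0])
R, t = rigid_register(src, dst)
print(np.allclose(src @ R.T + t, dst))           # True: frames are aligned
```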