
    The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    A reliable extraction of filament data from microscopic images is of high interest for analyzing acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells, and for understanding the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records the location, orientation, length, and width of each individual filament in an image, thus enabling the analysis described above. The extraction of these features has not previously been possible with existing methods. We evaluate the accuracy and speed of the proposed FS against three existing methods, with respect to their limited output. Furthermore, we provide a benchmark dataset of real cell images with filaments manually marked by a human expert, as well as simulated benchmark images. The FS clearly outperforms existing methods in both computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. Comment: 32 pages, 21 figures
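    The per-filament features the FS records (location, orientation, length, width) can be illustrated with a minimal sketch: given the pixel coordinates of one segmented filament, these quantities can be estimated from a principal-axis decomposition. This is a simplified stand-in for demonstration, not the FS algorithm itself; the function name and the PCA-based approach are assumptions.

```python
import numpy as np

def filament_features(coords):
    """Estimate center, orientation (degrees, mod 180), length, and width
    of one filament from its (N, 2) pixel coordinates via principal axes.
    A simplified illustration, not the actual FS pipeline."""
    coords = np.asarray(coords, dtype=float)
    center = coords.mean(axis=0)
    centered = coords - center
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    major = evecs[:, np.argmax(evals)]            # dominant direction
    minor = evecs[:, np.argmin(evals)]
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    length = np.ptp(centered @ major)             # extent along the filament
    width = np.ptp(centered @ minor)              # extent across it
    return center, angle, length, width
```

For a perfectly horizontal run of pixels, the estimated angle is 0 degrees, the length equals the pixel span, and the width is 0.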

    Design and Implementation of a Virtual Reality Application for Mechanical Assembly Training

    Although virtual assembly has been studied for over 20 years, it has not yet reached widespread usage outside academia, despite the possible cost savings and improvements in training effectiveness. Even though there have been multiple separate studies on virtual assembly, hand-based interaction, and assembly assistance, we have not found applications that combine all of these to provide a complete assembly training experience. The goal of this thesis was to design and implement a virtual reality application for mechanical assembly training. In our application, we provide natural user interaction by using a Leap Motion controller, a hand-tracking device mounted onto a virtual reality headset. The application was implemented using the Unity game engine and supports both Oculus and SteamVR compatible VR headsets. Unlike most previous systems, we combine hand-based interaction, assembly simulation, and context-aware assembly guidance to create an all-in-one VR assembly solution. As part of our implementation, we propose a new method for assembly guidance and validation that works by matching the assemblies built by the user to the assembly the user is supposed to build. Based on the user testing results, there is interest in this kind of application. Although inaccuracies in hand and finger tracking hindered the usability of the application, users described it as surprisingly easy to use once they learned how to overcome these issues.
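    The matching-based guidance and validation idea can be sketched as a comparison between the set of connections the user has made and the set of connections in the target assembly: extra connections trigger an undo instruction, missing ones a connect instruction. The representation and function below are illustrative assumptions, not the thesis implementation.

```python
def next_step(current, target):
    """Compare the user's assembly to the target and return the next
    guidance instruction. Both arguments are sets of frozensets, each
    frozenset holding the two part names of one connection."""
    wrong = current - target                     # connections that should not exist
    if wrong:
        return ("undo", tuple(sorted(min(wrong, key=sorted))))
    missing = target - current                   # connections still to be made
    if missing:
        return ("connect", tuple(sorted(min(missing, key=sorted))))
    return ("done", ())                          # assemblies match
```

Validation falls out of the same comparison: the build is correct exactly when both difference sets are empty.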

    Egocentric Perception of Hands and Its Applications


    The development of fully automated RULA assessment system based on Computer Vision

    The purpose of this study was to develop an automated, RULA-based posture assessment system using a deep learning algorithm to estimate RULA scores, including scores for wrist posture, from images of workplace postures. The proposed posture estimation system reported a mean absolute error (MAE) of 2.86 on a validation dataset obtained by randomly splitting off 20% of the original training dataset before data augmentation. The results of the proposed system were compared with those of two experts’ manual evaluations by computing the intraclass correlation coefficient (ICC), which yielded index values greater than 0.75, confirming good agreement between the manual raters and the proposed system. This system will reduce the time required for postural evaluation while producing highly reliable RULA scores that are consistent with those generated by a manual approach. Thus, we expect that this study will aid ergonomic experts in conducting RULA-based surveys of occupational postures in workplace conditions.
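    The rater-agreement check reported above can be reproduced in principle with a two-way ICC. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) from standard ANOVA mean squares; it is a generic illustration, and the exact ICC variant used in the study is an assumption here.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n_subjects, k_raters) matrix of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    # mean squares for rows (subjects) and columns (raters)
    ms_r = k * np.sum((ratings.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_c = n * np.sum((ratings.mean(axis=0) - grand) ** 2) / (k - 1)
    # residual mean square from the total sum of squares
    ss_err = (np.sum((ratings - grand) ** 2)
              - (n - 1) * ms_r - (k - 1) * ms_c)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Two raters in perfect agreement yield an ICC of 1.0; a constant offset between raters lowers the value, since ICC(2,1) penalizes absolute disagreement.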

    A PHYSIOCRATIC SYSTEMS FRAMEWORK FOR OPEN SOURCE AGRICULTURAL RESEARCH AND DEVELOPMENT

    This dissertation presents a new participatory approach to agricultural research and development. It surveys the biological, sociological, economic, and technical landscape and proposes a framework for adaptive management based on the 18th-century Physiocratic school of land-based economics. Industrial specialization and a heavy emphasis on deductive approaches to science have contributed to the disconnection of large portions of the population from natural systems. Conventional agriculture and agricultural research methods following this pattern have created expensive social, environmental, and economic external costs, while adaptive management and resilient agricultural systems have been hindered by the cost and complexity of quantifying environmental services. However, the convergence of low-cost computing, sensors, memory, and the resulting data analytic methods, combined with new collaborative tools and social media, has created an exciting open source environment with the potential to engage more people in analyzing and managing our natural environment.

    Human behavior understanding for worker-centered intelligent manufacturing

    “In a worker-centered intelligent manufacturing system, sensing and understanding of the worker’s behavior are the primary tasks, which are essential for automatic performance evaluation & optimization, intelligent training & assistance, and human-robot collaboration. In this study, a worker-centered training & assistant system is proposed for intelligent manufacturing, featuring self-awareness and active guidance. To understand hand behavior, a method is proposed for complex hand gesture recognition using Convolutional Neural Networks (CNN) with multi-view augmentation and inference fusion, from depth images captured by a Microsoft Kinect. To sense and understand the worker in a more comprehensive way, a multi-modal approach is proposed for worker activity recognition using Inertial Measurement Unit (IMU) signals obtained from a Myo armband and videos from a visual camera. To automatically learn the importance of different sensors, a novel attention-based approach is proposed for human activity recognition using multiple IMU sensors worn at different body locations. To deploy the developed algorithms to the factory floor, a real-time assembly operation recognition system is proposed with fog computing and transfer learning. The proposed worker-centered training & assistant system has been validated and has demonstrated its feasibility and great potential for application in the manufacturing industry for frontline workers.
    Our developed approaches have been evaluated: 1) the multi-view approach outperforms the state of the art on two public benchmark datasets; 2) the multi-modal approach achieves an accuracy of 97% on a worker activity dataset including 6 activities and achieves the best performance on a public dataset; 3) the attention-based method outperforms state-of-the-art methods on five publicly available datasets; and 4) the developed transfer learning model achieves a real-time recognition accuracy of 95% on a dataset including 10 worker operations.”--Abstract, page iv
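    The attention mechanism that learns the importance of different body-worn IMU sensors can be sketched as a learned softmax weighting over per-sensor feature vectors. The shapes and parameter names below are illustrative assumptions, not the authors’ architecture.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(sensor_feats, w_proj, v_score):
    """Fuse features from several IMU sensors into one vector.
    sensor_feats: (num_sensors, feat_dim); w_proj: (feat_dim, hidden);
    v_score: (hidden,). Sensors with higher learned scores contribute more."""
    scores = np.tanh(sensor_feats @ w_proj) @ v_score   # (num_sensors,)
    alpha = softmax(scores)                             # weights sum to 1
    fused = alpha @ sensor_feats                        # weighted sum of features
    return alpha, fused
```

In training, `w_proj` and `v_score` would be learned end-to-end with the activity classifier, so the weights `alpha` reflect each sensor location’s usefulness for the current activity.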

    Event structures in knowledge, pictures and text

    This thesis proposes new techniques for mining scripts. Scripts are essential pieces of common sense knowledge that contain information about everyday scenarios (like going to a restaurant), namely the events that usually happen in a scenario (entering, sitting down, reading the menu...), their typical order (ordering happens before eating), and the participants of these events (customer, waiter, food...). Because many conventionalized scenarios are shared common sense knowledge and thus are usually not described in standard texts, we propose to elicit sequential descriptions of typical scenario instances via crowdsourcing over the internet. This approach overcomes the implicitness problem and, at the same time, scales to large data collections. To generalize over the input data, we need to mine event and participant paraphrases from the textual sequences. For this task we make use of the structural commonalities in the collected sequential descriptions, which yields much more accurate paraphrases than approaches that do not take structural constraints into account. We further apply the algorithm we developed for event paraphrasing to parallel standard texts for extracting sentential paraphrases and paraphrase fragments. In this case we consider the discourse structure in a text as a sequential event structure. As for event paraphrasing, the structure-aware paraphrasing approach clearly outperforms systems that do not consider discourse structure. As a multimodal application, we develop a new resource in which textual event descriptions are grounded in videos, which enables new investigations into the semantics of action descriptions and a more accurate modeling of event description similarities. This grounding approach also opens up new possibilities for applying the computed script knowledge to automated event recognition in videos.
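    The structure-aware paraphrase mining over sequential scenario descriptions can be sketched with global sequence alignment: events that align across two descriptions of the same scenario become paraphrase candidates. The thesis works with a more elaborate multiple-alignment setting; the pairwise Needleman-Wunsch sketch below is a simplified assumption.

```python
def align(seq_a, seq_b, match=2, gap=-1, sim=None):
    """Needleman-Wunsch global alignment of two event sequences;
    aligned (non-gap) pairs are paraphrase candidates."""
    sim = sim or (lambda a, b: match if a == b else -1)
    n, m = len(seq_a), len(seq_b)
    # DP table of alignment scores
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            score[i][j] = max(
                score[i - 1][j - 1] + sim(seq_a[i - 1], seq_b[j - 1]),
                score[i - 1][j] + gap,
                score[i][j - 1] + gap,
            )
    # traceback, collecting aligned event pairs
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if score[i][j] == score[i - 1][j - 1] + sim(seq_a[i - 1], seq_b[j - 1]):
            pairs.append((seq_a[i - 1], seq_b[j - 1]))
            i, j = i - 1, j - 1
        elif score[i][j] == score[i - 1][j] + gap:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))
```

Two crowdsourced restaurant sequences such as (enter, sit, order, eat) and (enter, order, eat, pay) align on their shared events, leaving the unmatched ones as gaps; with a semantic similarity function in place of exact matching, the aligned non-identical pairs are the mined paraphrases.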

    Real-Time Collaborative Robot Control Using Hand Gestures Recognised By Deep Learning

    In an ever-changing and demanding world, new technologies that allow more efficient and easier industrial processes are needed. Until now, traditional vision technologies and algorithms have been used in industry. Even though these techniques achieve good results in simple vision tasks, they are quite limited, since any change in the processed image affects their performance. For example, in code-reading tasks, if the code has a mark or is not completely visible, the piece carrying the code is discarded, which leads to losses for the company. These kinds of problems can be solved by applying machine learning techniques to vision tasks. Such techniques learn from example images, and even though perfect performance is difficult to achieve, they are much more flexible than traditional techniques. Although the term machine learning was coined as early as 1959, these techniques have barely been implemented in industry until now; they have mostly been used for research purposes. Apart from the new vision techniques, new types of robots, such as collaborative and social robots, are being introduced in industrial environments. On the one hand, collaborative robots allow workers to work next to or with the robot without any physical barrier between them. On the other hand, social robots allow easier communication between the robot and the user, which can be applied in different parts of industry, such as introducing the company to new visitors. The present project gathers information regarding the analysis, training, and implementation of a vision software based on artificial neural networks, the ViDi Cognex software. Using this software, three different vision tasks were trained.
    The most important is the hand gesture recognition task, since the recognized hand gesture controls the action performed by the YuMi robot, which is programmed in the RAPID language. It is believed that the development of these artificial neural networks for industrial purposes can show the applicability of machine learning techniques in an industrial environment. Apart from that, hand gesture recognition offers an easy way to control the movements of a robot that could be used by a person with no knowledge of robots or programming. Finally, the use of a two-arm collaborative robot shows the potential and versatility of collaborative robots for industrial purposes.
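    Mapping a recognized gesture to a robot action can be sketched as a confidence-gated dispatch table on the recognition side, with the chosen action then forwarded to the RAPID program on the controller. The gesture labels, action names, and threshold below are illustrative assumptions, not the project’s actual command set.

```python
# Hypothetical gesture labels and the robot actions they trigger.
GESTURE_ACTIONS = {
    "open_palm": "stop",
    "fist": "close_gripper",
    "point": "move_to_target",
}

def dispatch(gesture, confidence, threshold=0.8):
    """Return the robot action for a recognized gesture, or 'idle' when the
    prediction is unknown or below the confidence threshold, so the robot
    never acts on noisy classifier output."""
    if confidence < threshold:
        return "idle"
    return GESTURE_ACTIONS.get(gesture, "idle")
```

Gating on confidence matters in practice: a collaborative robot sharing a workspace with a person should default to doing nothing rather than act on an uncertain classification.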

    Proceedings of the 2021 DigitalFUTURES

    This open access book is a compilation of selected papers from 2021 DigitalFUTURES—The 3rd International Conference on Computational Design and Robotic Fabrication (CDRF 2021). The work focuses on novel techniques for computational design and robotic fabrication. The contents are valuable to academic researchers, designers, and engineers in industry. Readers will also encounter new ideas about understanding material intelligence in architecture.