57 research outputs found

    Wavelet theory and applications: a literature study


    Design and modeling of a stair climber smart mobile robot (MSRox)


    GRASP News, Volume 6, Number 1

    A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory, edited by Gregory Long and Alok Gupta

    Perception systems for autonomous forest machines (Autonomisten metsäkoneiden koneaistijärjestelmät)

    A prerequisite for increasing the autonomy of forest machinery is providing robots with digital situational awareness, including a representation of the surrounding environment and of the robot's own state within it. This article-based dissertation therefore proposes perception systems for autonomous or semi-autonomous forest machinery, summarizing seven publications. The work comprises several perception methods using machine vision, lidar, inertial sensors, and positioning sensors, whose measurements are combined through probabilistic sensor fusion. Semi-autonomy is treated as a useful intermediate step between current mechanized solutions and full autonomy, assisting the operator. Perception of the robot's own state is achieved by estimating its orientation and position in the world, the posture of its crane, and the pose of the attached tool. The view around the forest machine is produced with a rotating lidar, which provides approximately equal-density 3D measurements in all directions. A machine vision camera is used for detecting young trees among other vegetation, and sensor fusion of an actuated lidar and a machine vision camera is utilized for detection and classification of tree species. In addition, in an operator-controlled semi-autonomous system, the operator requires a functional view of the data around the robot. To achieve this, the thesis proposes an augmented reality interface, which requires measuring the pose of the operator's head-mounted display in the forest machine cabin; here, the work adopts a sensor fusion solution combining a head-mounted camera and inertial sensors. To increase the level of automation and productivity of forest machines, the work focuses on scientifically novel solutions that are also adaptable for industrial use, so all the proposed perception methods address real, existing problems in current forest machinery.
    All the proposed solutions are implemented on a prototype forest machine and field tested in a forest. The proposed methods include posture measurement of a forestry crane, positioning of a freely hanging forestry crane attachment, attitude estimation of an all-terrain vehicle, positioning of a head-mounted camera in a forest machine cabin, detection of young trees for point cleaning, classification of tree species, and measurement of surrounding tree stems and the ground surface underneath.
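The attitude estimation and probabilistic sensor fusion mentioned above can be sketched with a generic one-dimensional Kalman filter that fuses an integrated gyro rate with an accelerometer-derived angle. This is a minimal illustration of the general technique, not the dissertation's actual implementation; the `fuse` helper and all noise parameters are assumed for the example.

```python
import random

def fuse(gyro_rates, accel_angles, dt=0.01, q=1e-5, r=2.5e-3):
    """1-D Kalman filter: predict the angle by integrating the gyro
    rate, then correct it with the accelerometer-derived angle."""
    x, p = accel_angles[0], 1.0   # initialize state from first accel reading
    for w, z in zip(gyro_rates, accel_angles):
        x += w * dt               # predict: integrate angular rate
        p += q                    # process noise grows the uncertainty
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # correct with the measured angle
        p *= (1.0 - k)
    return x

random.seed(0)
true_angle = 0.5                  # rad, vehicle held at a constant pitch
n = 500
gyro = [random.gauss(0.0, 0.01) for _ in range(n)]          # rate ~ 0 + noise
accel = [random.gauss(true_angle, 0.05) for _ in range(n)]  # noisy angle
est = fuse(gyro, accel)
print(round(est, 3))              # estimate lands close to 0.5
```

The same predict/correct structure generalizes to the multi-sensor, multi-dimensional fusion problems (crane posture, tool pose, head-mounted display pose) described in the abstract.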

    Ultrasound Guidance in Perioperative Care


    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning of an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interaction with the human operator.
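A core idea of conceptual-spaces representations is classifying an observation by its distance to stored prototypes. As a loose sketch of that rule only (the prototype names, joint-angle values, and `classify` helper are all invented for illustration, not taken from the paper):

```python
import math

# Hypothetical joint-angle prototypes (radians) spanning a toy
# "conceptual space" of hand postures; values are illustrative only.
PROTOTYPES = {
    "open":  [0.1, 0.1, 0.1],
    "fist":  [1.4, 1.5, 1.4],
    "point": [0.1, 1.5, 1.5],
}

def classify(angles):
    """Label an observed posture by its nearest prototype
    (Euclidean distance), the basic conceptual-spaces rule."""
    return min(PROTOTYPES,
               key=lambda k: math.dist(PROTOTYPES[k], angles))

print(classify([1.3, 1.4, 1.5]))  # nearest prototype is "fist"
```

In an imitation setting, vision-derived joint angles would be classified this way before being reproduced on the robotic hand.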

    Toward human-robot interaction with flying robots: a user-accompanying model and sensing interface (飛行ロボットにおける人間・ロボットインタラクションの実現に向けて : ユーザー同伴モデルとセンシングインターフェース)

    Degree type: course doctorate. Dissertation committee: (chair) Associate Professor Takehisa Yairi (The University of Tokyo); Professor Koichi Hori (The University of Tokyo); Professor Akira Iwasaki (The University of Tokyo); Professor Takeshi Tsuchiya (The University of Tokyo); Professor Hiroshi Mizoguchi (Tokyo University of Science). Awarded by the University of Tokyo (東京大学).

    Segmentation of surgical tools from laparoscopy images

    Master's project report in Biomedical Engineering. Robotic-assisted surgeries have been replacing open surgeries, with a significant impact on patient recovery time and, consequently, on healthcare resource savings and the early resumption of the patient's work activities. This type of surgery, assisted by a robotic system, is guided by a laparoscopic camera that provides the surgeon with a view of the patient's anatomical structures. To operate this equipment, surgeons must undergo many hours of training, making the process exhausting and costly. In addition, manipulating surgical instruments in coordination with the laparoscopic camera is not an intuitive process, so errors of a subjective nature are not eliminated. The objective of this thesis is the development of an automated system capable of segmenting surgical instruments, thereby enabling constant monitoring of their positions. Various machine learning models were explored to address this problem. In a second phase, methods that could be incorporated into the base model were considered; once a solution was found, the previously selected models were compared with both the base model and the optimized one. In a third approach, aiming to improve the comparison metrics, alternative solutions were sought, notably the generation of synthetic data. Two possibilities were considered here: one based on adversarial, competition-based learning systems, and the other on systems that learn to synthesize images from noise whose spectral density is progressively increased. Both approaches expanded the available database, and their effectiveness was evaluated by comparing the effect of the data augmentation on the segmentation systems. The proposed system could be implemented in robotic-assisted surgeries with only minimal modifications.
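Comparing segmentation models before and after data augmentation requires an overlap metric between predicted and ground-truth masks. The abstract does not name the metric used, so as an assumed but standard choice, the sketch below computes the Dice coefficient on flattened binary masks:

```python
def dice(pred, target):
    """Dice coefficient between two binary masks (flattened lists of
    0/1): 2 * |intersection| / (|pred| + |target|). A standard overlap
    score for surgical-tool segmentation; 1.0 means a perfect match."""
    inter = sum(p & t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 if total == 0 else 2.0 * inter / total

pred   = [0, 1, 1, 1, 0, 0, 1, 0]  # toy predicted tool mask
target = [0, 1, 1, 0, 0, 1, 1, 0]  # toy ground-truth mask
print(dice(pred, target))  # 2*3 / (4+4) = 0.75
```

Scoring each model on the same held-out masks, with and without the synthetic training data, gives the kind of augmentation comparison the abstract describes.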