172 research outputs found

    Real-Time Detection and Tracking using Wireless Sensor Networks (Information Sheet)

    No full text
    The aim of this project is to develop and deploy a detection and tracking system based on wireless sensor networks. Real-time detection and tracking is achieved using wireless sensor network hardware. The system is envisioned to effectively handle multiple, arbitrarily moving targets.

    Accurate Range-Only Tracking in Wireless Sensor Networks

    No full text
    This work presents initial results from a novel range-only tracking system tailored for implementation in wireless sensor networks. The system uses range estimates from a number of anchor nodes, positioned at known locations, to infer the trajectory and velocity of a moving target. To support manoeuvring targets, the target's movement is modelled using a multiple-model state-space representation. A particle-filter-inspired tracking algorithm operates on the acquired ranging data to estimate the target's position and two-axis velocity online [1]. Preliminary results from simulating the system under realistic conditions reveal that good accuracy (<10 m) can be achieved, even under cluttered conditions.
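The particle-filter idea behind this abstract can be illustrated with a minimal, dependency-free sketch. Everything below is an illustrative assumption rather than the paper's actual configuration: the anchor layout, noise levels, and particle count are invented, and a plain constant-velocity motion model stands in for the multiple-model representation described above.

```python
import math
import random

# Illustrative setup: four anchor nodes at the corners of a 100 m field.
ANCHORS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
RANGE_NOISE = 2.0  # std dev of range measurements in metres (assumed)

def measure(pos):
    """Simulate a noisy range measurement from each anchor to the target."""
    return [math.dist(pos, a) + random.gauss(0.0, RANGE_NOISE) for a in ANCHORS]

def likelihood(particle, ranges):
    """Gaussian likelihood of the measured ranges given a particle's position."""
    w = 1.0
    for a, r in zip(ANCHORS, ranges):
        d = math.dist((particle[0], particle[1]), a)
        w *= math.exp(-0.5 * ((r - d) / RANGE_NOISE) ** 2)
    return w

def particle_filter(ranges_seq, n=1000, dt=1.0):
    """Estimate the target's final position from a sequence of range sets."""
    # State per particle: [x, y, vx, vy], initialised uniformly over the field.
    parts = [[random.uniform(0, 100), random.uniform(0, 100),
              random.gauss(0, 2), random.gauss(0, 2)] for _ in range(n)]
    for ranges in ranges_seq:
        # Predict: constant-velocity motion plus process noise.
        for p in parts:
            p[0] += p[2] * dt + random.gauss(0, 1.0)
            p[1] += p[3] * dt + random.gauss(0, 1.0)
            p[2] += random.gauss(0, 0.3)
            p[3] += random.gauss(0, 0.3)
        # Update: weight particles by measurement likelihood, then resample.
        weights = [likelihood(p, ranges) for p in parts]
        total = sum(weights)
        if total == 0.0:  # all weights underflowed: keep the cloud as-is
            continue
        weights = [w / total for w in weights]
        parts = [list(p) for p in random.choices(parts, weights=weights, k=n)]
    # Point estimate: mean of the particle cloud.
    return (sum(p[0] for p in parts) / n, sum(p[1] for p in parts) / n)
```

With four anchors and metre-level range noise, the resampled cloud typically concentrates within a few metres of the target after a handful of updates, consistent with the sub-10 m accuracy quoted above.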

    Widening siamese architectures for stereo matching

    Get PDF
    Computational stereo is one of the classical problems in computer vision. Numerous algorithms and solutions have been reported in recent years, focusing on developing methods for computing similarity, aggregating it to obtain spatial support, and finally optimizing an energy function to find the final disparity. In this paper, we focus on the feature extraction component of the stereo matching architecture and show that standard CNN operations can be used to improve the quality of the features used to find point correspondences. Furthermore, we use a simple spatial aggregation that greatly simplifies the correlation learning problem, allowing us to better evaluate the quality of the extracted features. Our results on benchmark data are compelling and show promising potential even without refining the solution.
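The correlation step that such a feature extractor feeds into can be sketched as a toy 1-D winner-takes-all matcher. Raw feature vectors and dot-product similarity stand in for the learned CNN features; `best_disparity` and `max_disp` are names invented for illustration, not the paper's API.

```python
# Toy 1-D stereo matching: pick, for every left-image position, the disparity
# whose right-image feature correlates best with the left feature.

def correlate(left_feat, right_feats):
    """Dot-product similarity between one left feature and candidate right features."""
    return [sum(l * r for l, r in zip(left_feat, rf)) for rf in right_feats]

def best_disparity(left_feats, right_feats, max_disp):
    """Winner-takes-all disparity selection over a small search range."""
    disparities = []
    for x, lf in enumerate(left_feats):
        # A disparity d matches left position x with right position x - d.
        candidates = [right_feats[x - d] for d in range(min(max_disp, x) + 1)]
        scores = correlate(lf, candidates)
        disparities.append(scores.index(max(scores)))
    return disparities
```

Better features sharpen the score peak at the true disparity, which is why the paper can use such a simple aggregation and still evaluate feature quality directly.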

    On the detection of myocardial scar based on ECG/VCG analysis

    No full text
    In this paper, we address the problem of detecting the presence of myocardial scar from standard ECG/VCG recordings, with the aim of developing a screening system for the early detection of scar at the point of care. Based on the pathophysiological implications of scarred myocardium, which results in disordered electrical conduction, we have implemented four distinct ECG signal processing methodologies to obtain a set of features that can capture the presence of myocardial scar. Two of these methodologies are novel approaches for detecting scar presence: (a) the use of a template ECG heartbeat, from records without scar, coupled with wavelet coherence analysis, and (b) the utilization of the VCG. The pool of extracted features is then used to formulate an SVM classification model through supervised learning. Feature selection is also employed to remove redundant features and maximize the classifier's performance. Classification experiments using 260 records from three different databases reveal that the proposed system achieves 89.22% accuracy under 10-fold cross-validation, and an 82.07% success rate when tested on databases with different inherent characteristics, with similar levels of sensitivity (76%) and specificity (87.5%).
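The evaluation protocol (feature pool, supervised classifier, 10-fold cross-validation) can be sketched as below. To keep the sketch dependency-free, a plain logistic-regression classifier stands in for the paper's SVM, the features are synthetic, and all function names are invented for illustration; this shows the protocol, not the paper's actual model.

```python
import math
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def train_logreg(X, y, lr=0.1, epochs=200):
    """Stochastic-gradient logistic regression (stand-in for the paper's SVM)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wj - lr * (p - yi) * xj for wj, xj in zip(w, xi)]
            b -= lr * (p - yi)
    return w, b

def predict(model, xi):
    w, b = model
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

def cross_val_accuracy(X, y, k=10):
    """k-fold cross-validation accuracy, mirroring the 10-fold protocol above."""
    accs = []
    for fold in k_fold_indices(len(X), k):
        held_out = set(fold)
        Xtr = [X[i] for i in range(len(X)) if i not in held_out]
        ytr = [y[i] for i in range(len(X)) if i not in held_out]
        model = train_logreg(Xtr, ytr)
        accs.append(sum(predict(model, X[i]) == y[i] for i in fold) / len(fold))
    return sum(accs) / len(accs)
```

Each record is held out exactly once, so the reported accuracy reflects performance on data the classifier never saw during training, which is the property the paper's 89.22% figure relies on.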

    Keep Your Eye on the Best: Contrastive Regression Transformer for Skill Assessment in Robotic Surgery

    Get PDF
    This letter proposes a novel video-based contrastive regression architecture, Contra-Sformer, for automated surgical skill assessment in robot-assisted surgery. The proposed framework is structured to capture the differences in surgical performance between a test video and a reference video that represents optimal surgical execution. A feature extractor combining a spatial component (ResNet-18), supervised at frame level with gesture labels, and a temporal component (TCN) generates spatio-temporal feature matrices of the test and reference videos. These are then fed into an action-aware Transformer with multi-head attention that produces inter-video contrastive features at frame level, representative of the skill similarity/deviation between the two videos. Moments of sub-optimal performance can be identified and temporally localized in the obtained feature vectors, which are ultimately used to regress the manually assigned skill scores. Validated on the JIGSAWS dataset, Contra-Sformer achieves competitive performance (Spearman's correlation 0.65-0.89), with a normalized mean absolute error of 5.8%-13.4% across all tasks and validation setups. Source code and models are available at https://github.com/anastadimi/Contra-Sformer.git.
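The contrastive core of the approach, comparing a test video's features against a reference execution frame by frame, can be reduced to a toy sketch. Plain absolute differences and mean pooling stand in for the action-aware Transformer, and all names below are illustrative assumptions.

```python
def contrast_features(test_seq, ref_seq):
    """Frame-wise absolute deviation between test and reference feature sequences."""
    return [[abs(t - r) for t, r in zip(tf, rf)]
            for tf, rf in zip(test_seq, ref_seq)]

def pooled_deviation(contrast):
    """Average the deviation over feature dimensions, then over frames."""
    per_frame = [sum(f) / len(f) for f in contrast]
    return sum(per_frame) / len(per_frame)
```

A skill score can then be regressed on such deviation features: a pooled deviation of zero means the test execution matches the optimal reference exactly, and frames with large per-frame deviation are the candidates for temporally localizing sub-optimal performance.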

    Flight phenology of Palpita unionalis (Hübn.) (Lepidoptera, Pyralidae) in the north-east of Portugal.

    Get PDF

    Computer Vision in the Surgical Operating Room

    Get PDF
    Background: Multiple types of surgical cameras are used in modern surgical practice and provide a rich visual signal that is used by surgeons to visualize the clinical site and make clinical decisions. This signal can also be used by artificial intelligence (AI) methods to provide support in identifying instruments, structures, or activities both in real-time during procedures and postoperatively for analytics and understanding of surgical processes. Summary: In this paper, we provide a succinct perspective on the use of AI, and especially computer vision, to power solutions for the surgical operating room (OR). The synergy between data availability and technical advances in computational power and AI methodology has led to rapid developments in the field and promising advances. Key Messages: With the increasing availability of surgical video sources and the convergence of technologies around video storage, processing, and understanding, we believe clinical solutions and products leveraging vision are going to become an important component of modern surgical capabilities. However, both technical and clinical challenges remain to be overcome to efficiently translate vision-based approaches into the clinic.

    RCM-SLAM: Visual localisation and mapping under remote centre of motion constraints

    Get PDF
    In robotic surgery the motion of instruments and the laparoscopic camera is constrained by their insertion ports, i.e. a remote centre of motion (RCM). We propose a Simultaneous Localisation and Mapping (SLAM) approach that estimates laparoscopic camera motion under RCM constraints. To achieve this we derive a minimal solver for the absolute camera pose given two 2D-3D point correspondences (RCM-PnP), as well as a bundle adjustment optimiser that refines camera poses within an RCM-constrained parameterisation. These two methods are used together with previous work on relative pose estimation under RCM [1] to assemble a SLAM pipeline suitable for robotic surgery. Our simulations show that RCM-PnP outperforms conventional PnP for a wide noise range in the RCM position. Results with video footage from a robotic prostatectomy show that RCM constraints significantly improve camera pose estimation.
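The RCM constraint itself is straightforward to illustrate: the camera centre must lie on a line through the insertion point, so its position can be parameterised by two angles and an insertion depth (roll about the optical axis is omitted here for brevity). The sketch below is an illustrative parameterisation under that assumption, not the paper's solver.

```python
import math

def rcm_camera_pose(rcm, pan, tilt, depth):
    """Camera centre constrained to the line through the RCM point:
    build a unit viewing direction from two angles, then place the
    centre at rcm + depth * direction."""
    d = (math.cos(tilt) * math.cos(pan),
         math.cos(tilt) * math.sin(pan),
         math.sin(tilt))
    centre = tuple(c + depth * di for c, di in zip(rcm, d))
    return centre, d
```

Optimising over (pan, tilt, depth) instead of a free 6-DoF pose is what lets an RCM-constrained bundle adjustment reject pose hypotheses that a conventional PnP solver would accept.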