4 research outputs found

    SNR-Walls in Eigenvalue-based Spectrum Sensing

    Various spectrum sensing approaches have been shown to suffer from a so-called SNR-wall, an SNR value below which a detector cannot perform robustly no matter how many observations are used. Up to now, the eigenvalue-based maximum-minimum-eigenvalue (MME) detector has been a notable exception. For instance, the model uncertainty of imperfect knowledge of the receiver noise power, which is known to be responsible for the energy detector's fundamental limits, does not adversely affect the MME detector's performance. While additive white Gaussian noise (AWGN) is a standard assumption in wireless communications, it is not a reasonable one for the MME detector. In fact, in this work we prove that uncertainty in the amount of noise coloring does lead to an SNR-wall for the MME detector. We derive a lower bound on this SNR-wall and evaluate it for example scenarios. The findings are supported by numerical simulations. Comment: 17 pages, 3 figures, submitted to EURASIP Journal on Wireless Communications and Networking.
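    For context, the MME detector's decision statistic is the ratio of the largest to the smallest eigenvalue of the sample covariance matrix of the received samples. Below is a minimal sketch of that statistic; the smoothing factor L, the real-valued signal model, and the noise-only usage example are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def mme_statistic(samples: np.ndarray, L: int = 8) -> float:
    """Ratio of largest to smallest eigenvalue of the sample covariance matrix.

    samples : 1-D array of received baseband samples (assumed real here)
    L       : smoothing factor, i.e. number of stacked consecutive samples (assumption)
    """
    N = len(samples) - L + 1
    # Build an L x N matrix whose columns are overlapping sample windows.
    X = np.stack([samples[i:i + N] for i in range(L)])
    R = (X @ X.conj().T) / N          # sample covariance matrix (L x L)
    eigvals = np.linalg.eigvalsh(R)   # real eigenvalues in ascending order
    return eigvals[-1] / eigvals[0]   # lambda_max / lambda_min

# Usage: under perfectly white noise the ratio stays close to 1 for large N,
# while a correlated signal (or colored noise) pushes it above a chosen threshold.
rng = np.random.default_rng(0)
noise_only = rng.standard_normal(10_000)
print(mme_statistic(noise_only))
```

    The sketch also hints at the paper's point: noise coloring alone inflates the eigenvalue spread, so uncertainty about the coloring blurs the line between "signal present" and "noise only".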

    Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark

    Purpose: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open-data, single-center video dataset. In this work we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. Methods: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision challenge, sub-challenge for surgical workflow and skill analysis. Here, 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. Results: F1-scores for phase recognition ranged between 23.9% and 67.7% (n = 9 teams) and for instrument presence detection between 38.5% and 63.8% (n = 8 teams), but for action recognition only between 21.8% and 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). Conclusion: Surgical workflow and skill analysis are promising technologies to support the surgical team, but there is still room for improvement, as shown by our comparison of machine learning algorithms. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets in order to allow the development of artificial intelligence and cognitive robotics in surgery.
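    The phase-recognition results above are reported as F1-scores over framewise labels. The sketch below shows one plausible way to compute a macro-averaged F1 across the seven phases for a single video; the label encoding, the per-phase averaging, and the dummy annotations are assumptions for illustration and do not necessarily match the official HeiChole evaluation protocol.

```python
import numpy as np

def phase_f1(y_true: np.ndarray, y_pred: np.ndarray, n_phases: int = 7) -> float:
    """Macro-averaged F1 over phase classes for one video's frame labels (illustrative)."""
    f1_per_phase = []
    for phase in range(n_phases):
        tp = np.sum((y_pred == phase) & (y_true == phase))
        fp = np.sum((y_pred == phase) & (y_true != phase))
        fn = np.sum((y_pred != phase) & (y_true == phase))
        if tp + fp + fn == 0:          # phase absent in truth and never predicted: skip
            continue
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_per_phase.append(f1)
    return float(np.mean(f1_per_phase))

# Example with dummy frame-level ground truth and predictions (not real HeiChole data).
truth = np.array([0, 0, 1, 1, 2, 2, 2, 3])
pred  = np.array([0, 1, 1, 1, 2, 2, 3, 3])
print(f"Macro F1: {phase_f1(truth, pred):.3f}")
```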