
    Artificial intelligence in ophthalmology. Do we need calculators for the risk of glaucoma development and progression?

    Artificial intelligence (AI) is rapidly entering modern medical practice. Many routine clinical tasks, from imaging and automated diagnostics to robotic surgery, can no longer be imagined without the use of AI. Neural networks show impressive results when analyzing the large volumes of data obtained from standard automated perimetry, optical coherence tomography (OCT) and fundus photography. Mathematical algorithms that detect glaucoma from particular sets of signs are currently being developed both in Russia and abroad. This article analyzes the advantages and disadvantages of employing artificial intelligence in ophthalmological practice, discusses the need for careful selection of criteria and their influence on the accuracy of risk calculators, and considers the specifics of applying mathematical analysis both in suspected glaucoma and in an already established diagnosis. The article also provides clinical examples of the use of a glaucoma risk calculator in the routine practice of an ophthalmologist.
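    The risk calculators discussed above are typically logistic models over standard clinical predictors. The sketch below shows the general shape of such a calculator; the predictor set mirrors commonly cited risk factors (age, intraocular pressure, central corneal thickness, cup-to-disc ratio, pattern standard deviation), but the coefficients are purely illustrative and are not taken from any published model.

    ```python
    import math

    # Illustrative coefficients only -- NOT from any validated clinical model.
    COEFFS = {
        "intercept": -6.0,
        "age_per_decade": 0.3,           # risk grows with age
        "iop_mmhg": 0.12,                # intraocular pressure, mmHg
        "thin_cct_per_40um": 0.4,        # corneas thinner than 600 um raise risk
        "vcd_ratio": 2.0,                # vertical cup-to-disc ratio
        "psd_db": 0.25,                  # pattern standard deviation, dB
    }

    def glaucoma_risk(age, iop, cct, vcd, psd):
        """Return an illustrative risk probability in (0, 1) via a logistic model."""
        z = (COEFFS["intercept"]
             + COEFFS["age_per_decade"] * (age / 10)
             + COEFFS["iop_mmhg"] * iop
             + COEFFS["thin_cct_per_40um"] * max(0.0, (600 - cct) / 40)
             + COEFFS["vcd_ratio"] * vcd
             + COEFFS["psd_db"] * psd)
        return 1 / (1 + math.exp(-z))
    ```

    As the article's discussion of criteria selection suggests, the output is only as good as the chosen predictors and their weights: the same logistic form gives very different risk estimates depending on which signs are included.
    
    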

    Artificial intelligence and neural networks in the diagnosis of glaucoma

    Early diagnosis of glaucoma and objective analysis of data obtained from instrumental studies are among the most important problems in ophthalmology. The current state of technology makes it possible to apply artificial intelligence and neural networks to the diagnosis and treatment of glaucoma. Special software helps perform perimetry using portable devices, which reduces the workload of medical facilities and lowers the cost of the procedure. Mathematical models allow the risk of glaucoma progression to be evaluated from instrumental findings. Artificial intelligence can assess the results of Goldmann and Maklakov tonometry and determine whether the disease is progressing by analyzing series of both 2D and 3D data (scan images of the optic nerve head, static perimetry, etc.), either separately or in a complex analysis of data from various devices.
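    A minimal sketch of the kind of mathematical progression model described above is trend analysis of visual-field mean deviation (MD) over time: a least-squares slope is fit to a series of perimetry results, and a sufficiently negative slope suggests progression. The -1.0 dB/year cut-off below is an illustrative threshold, not a clinical standard.

    ```python
    # Trend analysis of perimetry mean deviation (MD) over time.
    def md_slope(years, md_values):
        """Least-squares slope of MD (dB) versus time (years)."""
        n = len(years)
        mean_t = sum(years) / n
        mean_md = sum(md_values) / n
        num = sum((t - mean_t) * (m - mean_md) for t, m in zip(years, md_values))
        den = sum((t - mean_t) ** 2 for t in years)
        return num / den

    def is_progressing(years, md_values, threshold=-1.0):
        """Flag progression when MD worsens faster than the (illustrative) threshold."""
        return md_slope(years, md_values) < threshold
    ```

    In practice such trend analyses also account for measurement noise and test-retest variability, which a bare slope does not capture.
    
    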

    Development of an augmented reality guided computer assisted orthopaedic surgery system

    Previously held under moratorium from 1st December 2016 until 1st December 2021. This body of work documents the development of a proof-of-concept augmented reality guided computer assisted orthopaedic surgery system – ARgCAOS. After initial investigation, a visible-spectrum single-camera tool-mounted tracking system based upon fiducial planar markers was implemented. The use of visible-spectrum cameras, as opposed to the infra-red cameras typically used by surgical tracking systems, allowed the captured image to be streamed to a display in an intelligible fashion. The tracking information defined the location of physical objects relative to the camera, which allowed virtual models to be overlaid onto the camera image. This produced a convincing augmented experience, whereby the virtual objects appeared to be within the physical world, moving with both the camera and the markers as expected of physical objects. Analysis of the first-generation system identified both accuracy and graphical inadequacies, prompting the development of a second-generation system. This too was based upon a tool-mounted fiducial marker system, and improved performance to near-millimetre probing accuracy. A resection system was incorporated, and controlled resection utilising the tracking information was performed, producing sub-millimetre accuracies. Several complications resulted from the tool-mounted approach, so a third-generation system was developed. This final generation deployed a stereoscopic visible-spectrum camera system affixed to a head-mounted display worn by the user. The system augmented the natural view of the user, providing convincing and immersive three-dimensional guidance, with probing and resection accuracies of 0.55±0.04 and 0.34±0.04 mm, respectively.
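    The overlay step described above can be sketched as follows: once tracking has estimated a marker's pose relative to the camera (rotation R, translation t), a virtual point defined in marker coordinates is transformed into camera coordinates and projected through a pinhole camera model onto the image. Real systems recover the pose from detected fiducial corners with a PnP solver; the pose and camera intrinsics below are illustrative values, not those of the ARgCAOS system.

    ```python
    import math

    def rot_z(theta):
        """3x3 rotation matrix about the camera z-axis (radians)."""
        c, s = math.cos(theta), math.sin(theta)
        return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

    def project(point_marker, R, t, fx, fy, cx, cy):
        """Transform a marker-frame point to the camera frame, then project
        it to pixel coordinates with a pinhole model."""
        # Camera-frame coordinates: p_cam = R * p_marker + t
        x, y, z = (sum(R[i][j] * point_marker[j] for j in range(3)) + t[i]
                   for i in range(3))
        # Pinhole projection (fx, fy in pixels; cx, cy the principal point)
        return (fx * x / z + cx, fy * y / z + cy)
    ```

    As the marker (and hence R, t) moves between frames, re-projecting the virtual model each frame is what makes the overlay appear fixed in the physical world.
    
    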

    OPTICAL NAVIGATION TECHNIQUES FOR MINIMALLY INVASIVE ROBOTIC SURGERIES

    Minimally invasive surgery (MIS) involves small incisions in a patient's body, leading to reduced medical risk and shorter hospital stays compared to open surgeries. For these reasons, MIS has experienced increased demand across different types of surgery. MIS sometimes utilizes robotic instruments to complement human surgical manipulation and achieve higher precision than traditional surgery. Modern surgical robots operate within a master-slave paradigm, in which a robotic slave replicates the control gestures emanating from a master tool manipulated by a human surgeon. Presently, certain human errors due to hand tremors or unintended acts are moderately compensated at the tool manipulation console. However, errors due to robotic vision and the display presented to the surgeon are not equivalently addressed. Current vision capabilities within the master-slave robotic paradigm rest on perceptual vision through a limited binocular view, which considerably impacts the hand-eye coordination of the surgeon and provides no quantitative geometric localization for robot targeting. These limitations lead to unexpected surgical outcomes and longer operating times compared to open surgery. To improve vision capabilities within an endoscopic setting, we designed and built several image-guided robotic systems, which obtained sub-millimeter accuracy. With this improved accuracy, we developed a corresponding surgical planning method for robotic automation. As a demonstration, we prototyped an autonomous electro-surgical robot that employed quantitative 3D structural reconstruction with near-infrared registration and tissue classification methods to localize optimal targeting and suturing points for minimally invasive surgery. Results from validating the cooperative control and registration of the vision system in a series of in vivo and in vitro experiments are presented, and the potential enhancement of autonomous robotic minimally invasive surgery by utilizing our technique is discussed.
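    The quantitative 3D localization mentioned above rests, in the simplest rectified-stereo case, on the standard relationship Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of a matched feature between the left and right images. The sketch below illustrates that relationship; the camera parameters are illustrative, not those of any particular endoscope or of the system in this work.

    ```python
    # Rectified-stereo triangulation: recover a 3D point from a feature matched
    # between left and right images. Assumes square pixels and rectified views.
    def triangulate(u_left, u_right, v, fx, baseline_mm, cx, cy):
        """Return (x, y, z) in mm in the left-camera frame."""
        d = u_left - u_right               # disparity in pixels
        if d <= 0:
            raise ValueError("non-positive disparity: point at or beyond infinity")
        z = fx * baseline_mm / d           # depth along the optical axis
        x = (u_left - cx) * z / fx         # back-project the left-image pixel
        y = (v - cy) * z / fx
        return (x, y, z)
    ```

    Sub-millimeter accuracy in practice then hinges on calibration quality and sub-pixel disparity estimation, since depth error grows with the square of depth for a fixed disparity error.
    
    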

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human factors and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in the medical, biological and human-body domains. The third and final section contains a number of new and useful applications in daily living and learning.

    NASA Tech Briefs, October 1991

    Topics: New Product Ideas; NASA TU Services; Electronic Components and Circuits; Electronic Systems; Physical Sciences; Materials; Computer Programs; Mechanics; Machinery; Fabrication Technology; Mathematics and Information Sciences; Life Sciences

    Technology 2003: The Fourth National Technology Transfer Conference and Exposition, volume 2

    Proceedings from symposia of the Technology 2003 Conference and Exposition, Dec. 7-9, 1993, Anaheim, CA, are presented. Volume 2 features papers on artificial intelligence, CAD&E, computer hardware, computer software, information management, photonics, robotics, test and measurement, video and imaging, and virtual reality/simulation