32 research outputs found
Artificial intelligence in colorectal cancer: a review
The study objective: to examine the use of artificial intelligence (AI) in the diagnosis, treatment, and prognosis of colorectal cancer (CRC) and to discuss the future potential of AI in CRC. Material and Methods. The Web of Science, Scopus, PubMed, Medline, and eLIBRARY databases were searched for publications on the application of AI to the diagnosis, treatment, and prognosis of CRC. More than 100 sources were screened, and data from 83 articles were included in the review. Results. The review explores the use of AI in medicine, focusing on its applications in colorectal cancer. It discusses the stages of AI development for CRC, including molecular understanding, image-based diagnosis, drug design, and individualized treatment. The benefits of AI in the analysis of medical images such as CT, MRI, and PET are highlighted: it improves diagnostic accuracy and inspection quality. Challenges in AI development are addressed, such as data standardization and the interpretability of machine learning algorithms. The potential of AI in treatment decision support, precision medicine, and prognosis prediction is discussed, emphasizing its role in selecting optimal treatments and improving surgical precision. Ethical and regulatory considerations in integrating AI are mentioned, including patient trust, data security, and liability in AI-assisted surgeries. The review emphasizes the importance of an AI standards system, dataset standardization, and the integration of clinical knowledge into AI algorithms.
Overall, the article provides an overview of current research on AI in CRC diagnosis, treatment, and prognosis, discussing its benefits, challenges, and future prospects for improving medical outcomes.
Diseases of the Abdomen and Pelvis 2018-2021: Diagnostic Imaging - IDKD Book
Gastrointestinal disease; PET/CT; Radiology; X-ray; IDKD; Davo
New Techniques in Gastrointestinal Endoscopy
As a result of progress, endoscopy has become more complex, using more sophisticated devices, and has taken on a specialized form. Today, the gastroenterologist performing endoscopy has to be an expert in the macroscopic appearance of lesions in the gut, skilled in the use of standard endoscopes, experienced in ultrasound (for performing endoscopic ultrasound), and versed in pathology for confocal examination. Experience, patience, and attention are required to follow the thousands of images transmitted during capsule endoscopy, as is the knowledge of physics necessary for autofluorescence imaging endoscopy. The idea of the endoscopist has therefore changed. The examinations mentioned require special training and a higher level of instruction, accessible to those who have already gained sufficient experience in basic diagnostic endoscopy. This is why these new areas of endoscopy are presented in this book, New Techniques in Gastrointestinal Endoscopy.
Colonoscopy and Colorectal Cancer Screening
Colorectal cancer (CRC) represents a major public health problem worldwide. Fortunately, most CRCs originate from a precursor lesion, the adenoma, which is accessible and removable. This is the rationale for CRC screening programs, which aim to diagnose CRC at an early stage or, better still, to detect and resect the advanced adenoma before CRC has developed. Against this background, colonoscopy emerges as the main tool to achieve these goals, with recent evidence supporting its role in CRC prevention. This book deals with several topics to be faced when implementing a CRC screening program. The interested reader will learn about the rationale and challenges of implementing such a program, the management of detected lesions, the prevention of complications of colonoscopy, and finally the use of other screening modalities that are emerging as valuable alternatives. The relevance of the topics covered and the updated evidence included by the authors make this book a very useful introduction to this fascinating and evolving field.
Mass detection in digital breast tomosynthesis: Deep convolutional neural network with transfer learning from mammography
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/135545/1/mp7345_am.pdf
http://deepblue.lib.umich.edu/bitstream/2027.42/135545/2/mp7345.pd
Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning
Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously while still allowing the surgeon to retain overall control of the procedure. This approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation, producing robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments on simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
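The core idea of learning a navigation subtask with reinforcement learning can be illustrated with a minimal sketch: tabular Q-learning on a toy 1-D "lumen" where the agent learns to advance toward a target cell. This is a deliberately simplified stand-in for the thesis's DRL methods — the environment, reward values, and names below are illustrative assumptions, not the actual setup.

```python
import random

# Toy 1-D "lumen" of N cells; the agent starts at cell 0 and must reach
# the target cell N-1. Actions: 0 = retract, 1 = advance. Environment,
# rewards, and hyperparameters are illustrative, not from the thesis.
N_CELLS = 8
ACTIONS = (0, 1)

def step(state, action):
    """Advance or retract by one cell; reward +1 at the target, -0.01 per step."""
    next_state = max(0, min(N_CELLS - 1, state + (1 if action == 1 else -1)))
    done = next_state == N_CELLS - 1
    reward = 1.0 if done else -0.01
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_CELLS)]  # Q-table: state x action
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy exploration
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = 0 if q[state][0] > q[state][1] else 1
            next_state, reward, done = step(state, action)
            # one-step Q-learning update
            q[state][action] += alpha * (
                reward + gamma * max(q[next_state]) - q[state][action]
            )
            state = next_state
    return q

q = train()
# Greedy policy over non-terminal cells: it should learn to advance everywhere.
policy = [0 if q[s][0] > q[s][1] else 1 for s in range(N_CELLS - 1)]
print(policy)
```

Deep RL replaces the Q-table with a neural network and the toy grid with a physics simulation of the instrument and lumen, but the interaction loop — observe state, act, receive reward, update the value estimate — is the same.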
Learning-based depth and pose prediction for 3D scene reconstruction in endoscopy
Colorectal cancer is the third most common cancer worldwide. Early detection and treatment of pre-cancerous tissue during colonoscopy are critical to improving prognosis. However, navigating within the colon and inspecting the endoluminal tissue comprehensively are challenging, and success in both varies with the endoscopist's skill and experience. Computer-assisted interventions in colonoscopy show much promise in improving navigation and inspection. For instance, 3D reconstruction of the colon during colonoscopy could promote more thorough examinations and increase adenoma detection rates, which are associated with improved survival rates. Given the stakes, this thesis seeks to advance the state of research from feature-based traditional methods toward a data-driven 3D reconstruction pipeline for colonoscopy.
More specifically, this thesis explores different methods that improve subtasks of learning-based 3D reconstruction. The main tasks are depth prediction and camera pose estimation. As real training data is unavailable, the author, together with her co-authors, proposes and publishes several synthetic datasets and promotes domain adaptation models to improve applicability to real data. We show, through extensive experiments, that our depth prediction methods produce more robust results than previous work. Our pose estimation network, trained on our new synthetic data, outperforms self-supervised methods on real sequences. Our box embeddings allow us to interpret the geometric relationship and scale difference between two images of the same surface without the need for feature matches, which are often unobtainable in surgical scenes. Together, the methods introduced in this thesis advance the field toward a complete, data-driven 3D reconstruction pipeline for endoscopy.
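The pose-estimation stage described above produces frame-to-frame relative poses that are chained into a global camera trajectory. A minimal sketch of that composition with NumPy, where hand-written 4x4 homogeneous transforms stand in for network predictions (all motions and names here are illustrative assumptions, not the thesis's data):

```python
import numpy as np

def se3(rotation_y_deg, translation):
    """Build a 4x4 homogeneous transform: rotation about the y axis, plus a
    translation expressed in the parent frame. A simplified stand-in for the
    relative poses a network would predict."""
    theta = np.deg2rad(rotation_y_deg)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    T[:3, 3] = translation
    return T

# Hypothetical per-frame relative motions: the camera looks along +z,
# steps forward one unit, turns 90 degrees, then steps forward twice more.
relative_poses = [
    se3(0, [0, 0, 1.0]),
    se3(90, [0, 0, 1.0]),  # turn, so subsequent forward motion heads along +x
    se3(0, [0, 0, 1.0]),
]

# Chain relative transforms into camera-to-world poses for each frame.
world_poses = [np.eye(4)]
for rel in relative_poses:
    world_poses.append(world_poses[-1] @ rel)

# Camera centres along the integrated trajectory.
centres = [p[:3, 3] for p in world_poses]
print(centres[-1])  # final centre: [1, 0, 2]
```

Because each pose is composed on top of the previous one, per-frame prediction errors accumulate as drift along the trajectory — one reason robust relative-pose estimation matters so much for reconstruction pipelines.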