1,084 research outputs found
A Review of the Family of Artificial Fish Swarm Algorithms: Recent Advances and Applications
The Artificial Fish Swarm Algorithm (AFSA) is inspired by the ecological
behaviors of fish schooling in nature, viz., the preying, swarming, following
and random behaviors. Owing to a number of salient properties, which include
flexibility, fast convergence, and insensitivity to the initial parameter
settings, the family of AFSA has emerged as an effective Swarm Intelligence
(SI) methodology that has been widely applied to solve real-world optimization
problems. Since its introduction in 2002, many improved and hybrid AFSA models
have been developed to tackle continuous, binary, and combinatorial
optimization problems. This paper aims to present a concise review of the
family of AFSA, encompassing the original AFSA and its improvements,
continuous, binary, discrete, and hybrid models, as well as the associated
applications. A comprehensive survey on the AFSA from its introduction to 2012
can be found in [1]. As such, we focus on a total of 123 articles
published in high-quality journals since 2013. We also discuss possible AFSA
enhancements and highlight future research directions for the family of
AFSA-based models.
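The four behaviors named in the abstract (preying, swarming, following, and random movement) can be made concrete with a minimal continuous-optimization sketch. This is an illustrative implementation only, not any specific published AFSA variant: the objective (a sphere function), the parameter names (`visual`, `step`, `crowding`), and the exact update rules are assumptions, and real variants differ in their details.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective to minimize (assumed for illustration)."""
    return float(np.sum(x * x))

def afsa_minimize(f, dim=2, n_fish=20, visual=1.0, step=0.3,
                  crowding=0.6, tries=5, iters=200):
    """Minimal AFSA sketch: prey, swarm, follow, and random behaviors."""
    school = rng.uniform(-5, 5, size=(n_fish, dim))
    best = min(school, key=f).copy()
    for _ in range(iters):
        for i in range(n_fish):
            x = school[i]
            # Neighbors within the visual range.
            dists = np.linalg.norm(school - x, axis=1)
            nbrs = school[(dists > 0) & (dists < visual)]
            moved = False
            if len(nbrs):
                center = nbrs.mean(axis=0)
                # Swarm: move toward the neighbor center if it is better
                # and the neighborhood is not too crowded.
                if f(center) < f(x) and len(nbrs) / n_fish < crowding:
                    x = x + step * (center - x) / (np.linalg.norm(center - x) + 1e-12)
                    moved = True
                else:
                    # Follow: move toward the best neighbor if it is better.
                    bn = min(nbrs, key=f)
                    if f(bn) < f(x):
                        x = x + step * (bn - x) / (np.linalg.norm(bn - x) + 1e-12)
                        moved = True
            if not moved:
                # Prey: sample random points in the visual range, keep an improvement.
                for _ in range(tries):
                    cand = x + visual * rng.uniform(-1, 1, dim)
                    if f(cand) < f(x):
                        x = x + step * (cand - x) / (np.linalg.norm(cand - x) + 1e-12)
                        break
                else:
                    # Random behavior: take a random step when preying fails.
                    x = x + step * rng.uniform(-1, 1, dim)
            school[i] = x
            if f(x) < f(best):
                best = x.copy()
    return best
```

The salient AFSA properties noted above show up even in this sketch: no gradient information is used, and the initial positions are drawn at random, so the result is insensitive to initialization.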
Improving medical image perception by hierarchical clustering based segmentation
It has been well documented that radiologists' performance is not perfect: they make both false positive and false negative decisions. For example, approximately thirty percent of early lung cancers are missed on chest radiographs even when the evidence is clearly visible in retrospect [1]. Computer-Aided Detection (CAD) software is currently designed to reduce such errors by drawing radiologists' attention to possible abnormalities through prompts placed on images. Alberdi et al. examined the effects of CAD prompts on performance, comparing the negative effect of a missing prompt on a cancer case with that of prompts on a normal case. They showed that the absence of a prompt on a cancer case can have a detrimental effect on reader sensitivity, leaving the reader performing worse than if they were not using CAD at all. This became particularly apparent when difficult cases were being read. They suggested that readers were using CAD as a decision-making tool instead of a prompting aid, and concluded that "incorrect CAD can have a detrimental effect on human decisions" [2]. The goal of this paper is to explore the possibility of using Hierarchical Clustering based Segmentation (HCS) [3], as a perceptual aid, to improve the performance of the reader.
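The HCS method itself is only cited above, not specified, but the core idea of hierarchical (agglomerative) clustering applied to image data can be sketched: start from fine clusters and repeatedly merge the two most similar ones. The toy below clusters pixel intensities only; the function name, the merge criterion, and the stopping rule are illustrative assumptions, not the method of [3].

```python
import numpy as np

def hierarchical_intensity_segmentation(image, n_segments=3):
    """Toy agglomerative segmentation: repeatedly merge the two intensity
    clusters whose mean intensities are closest, until n_segments remain.
    Illustrative only; a real HCS also considers spatial structure."""
    # Start with one cluster per unique intensity value (sorted by np.unique).
    values = np.unique(image)
    clusters = [[v] for v in values]
    while len(clusters) > n_segments:
        means = [np.mean(c) for c in clusters]
        # Means stay sorted, so the closest pair is always adjacent.
        gaps = np.diff(means)
        i = int(np.argmin(gaps))
        clusters[i] = clusters[i] + clusters[i + 1]
        del clusters[i + 1]
    # Label map: each pixel gets the index of the cluster holding its value.
    label_of = {v: k for k, c in enumerate(clusters) for v in c}
    return np.vectorize(label_of.get)(image)
```

Presenting such cluster hierarchies at progressively coarser levels is one way a segmentation could act as a perceptual aid rather than a yes/no prompt, which is the distinction the abstract draws from the CAD literature.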
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optical flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
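The event stream described above (time, location, and sign of each brightness change) is often converted into a frame-like representation before conventional algorithms are applied. The sketch below accumulates events over a time window into a signed histogram; the tuple layout, field order, and synthetic data are assumptions for illustration, not a specific camera's output format.

```python
import numpy as np

def events_to_frame(events, width, height, t_start, t_end):
    """Accumulate events (t, x, y, polarity) with t in [t_start, t_end)
    into a signed 2D histogram: +1 per ON event, -1 per OFF event."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t_start <= t < t_end:
            frame[y, x] += 1 if p > 0 else -1
    return frame

# Synthetic stream: (timestamp in microseconds, x, y, polarity in {+1, -1}).
events = [(10, 0, 0, +1), (20, 1, 0, +1), (30, 0, 0, -1), (5000, 2, 1, +1)]
frame = events_to_frame(events, width=3, height=2, t_start=0, t_end=1000)
```

Note how the asynchronous nature of the sensor surfaces here: the window boundaries are free parameters, and an ON event cancelled by a later OFF event at the same pixel leaves no trace in the accumulated frame, which is one reason purely frame-based processing discards information the raw stream contains.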
Physics-Informed Computer Vision: A Review and Perspectives
Incorporation of physical information in machine learning frameworks is
opening and transforming many application domains. Here the learning process is
augmented through the induction of fundamental knowledge and governing physical
laws. In this work we explore their utility for computer vision tasks in
interpreting and understanding visual data. We present a systematic literature
review of formulation and approaches to computer vision tasks guided by
physical laws. We begin by decomposing the popular computer vision pipeline
into a taxonomy of stages and investigate approaches to incorporate governing
physical equations in each stage. Existing approaches in each task are analyzed
with regard to what governing physical processes are modeled, formulated and
how they are incorporated, i.e., by modifying data (observation bias), modifying
networks (inductive bias), or modifying losses (learning bias). The taxonomy offers a
unified view of the application of the physics-informed capability,
highlighting where physics-informed learning has been conducted and where the
gaps and opportunities are. Finally, we highlight open problems and challenges
to inform future research. While still in its early days, the study of
physics-informed computer vision has the promise to develop better computer
vision models that can improve physical plausibility, accuracy, data efficiency
and generalization in increasingly realistic applications.
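Of the three biases named above, the learning bias (modifying losses) is the simplest to illustrate: add a residual of a governing equation to the data-fitting loss. The sketch below uses an assumed example, projectile motion governed by y'' = -g, fit with a generic quadratic; the function names and the weighting parameter `lam` are illustrative, not drawn from any surveyed work.

```python
import numpy as np

G = 9.81  # assumed governing physical constant (gravitational acceleration)

def predict(params, t):
    """Generic quadratic model y(t) = a + b*t + c*t**2 (free parameters)."""
    a, b, c = params
    return a + b * t + c * t**2

def physics_informed_loss(params, t, y_obs, lam=1.0):
    """Learning bias: data mismatch plus a penalty on violating the
    governing ODE y'' = -G. For the quadratic model, y'' = 2c analytically."""
    y_pred = predict(params, t)
    data_loss = np.mean((y_pred - y_obs) ** 2)   # observation fit
    ypp = 2.0 * params[2]                        # analytic second derivative
    physics_loss = (ypp + G) ** 2                # residual of y'' + G = 0
    return data_loss + lam * physics_loss
```

A model that fits the data but ignores the physics (c = 0) is penalized even where observations are sparse or noisy, which is the data-efficiency and physical-plausibility benefit the review highlights.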
Quantification and segmentation of breast cancer diagnosis: efficient hardware accelerator approach
In mammography images, measurement of the eccentric area yields the breast
density percentage. The technical challenge of quantification in radiology can
lead to misinterpretation in screening. Feedback from societal, institutional,
and industry sources shows that quantification and segmentation frameworks have
rapidly become the primary methodologies for structuring and interpreting
digital mammogram images. Clustering-based segmentation algorithms suffer
setbacks with overlapping clusters, cluster proportions, and the
multidimensional scaling needed to map and leverage the data. In combination,
these issues make mammogram quantification a long-standing focus area. The
proposed algorithm must reduce complexity, target iteratively distributed data
points, and merge cluster-centroid updates into a single updating process to
avoid large storage requirements. The initial test segment of the mammogram
database is critical for evaluating performance and for determining the Area
Under the Curve (AUC) so as to align with medical policy. In addition, a new
image clustering algorithm must anticipate the need for large-scale serial and
parallel processing. Since no off-the-shelf solution exists, communication
protocols between devices must be implemented. Exploiting and targeting
hardware task utilization will further extend the prospect of improvement in
the clustering, and benchmarking of the resources and performance involved is
required. Finally, the resulting medical image clusters were objectively
validated using qualitative and quantitative inspection. The proposed method
should overcome the technical challenges that radiologists face.
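The abstract's requirement to merge cluster-centroid updates into a single updating process, avoiding large storage, resembles online (single-pass) clustering. The hardware-accelerated algorithm itself is not specified above, so the sketch below is a generic running-mean centroid update on 1D pixel intensities; the function name, initialization scheme, and data are assumptions.

```python
import numpy as np

def online_kmeans(pixels, k=2):
    """Single-pass clustering sketch: each sample is assigned to its
    nearest centroid and that centroid is updated incrementally, so no
    per-cluster membership lists are stored (memory is O(k), not O(n))."""
    # Deterministic initialization: spread centroids across the value range.
    centroids = np.linspace(pixels.min(), pixels.max(), k).astype(float)
    counts = np.zeros(k)
    for x in pixels:
        j = int(np.argmin(np.abs(centroids - x)))      # nearest centroid (1D)
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]  # running-mean update
    return centroids
```

Because assignment and update happen in one pass, this shape of loop maps naturally onto streaming hardware pipelines, which is consistent with the abstract's emphasis on serial/parallel processing and constrained storage.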
Advanced Computational Methods for Oncological Image Analysis
Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.
Neural Network Methods for Radiation Detectors and Imaging
Recent advances in image data processing through machine learning and
especially deep neural networks (DNNs) allow for new optimization and
performance-enhancement schemes for radiation detectors and imaging hardware
through data-endowed artificial intelligence. We give an overview of data
generation at photon sources, deep learning-based methods for image processing
tasks, and hardware solutions for deep learning acceleration. Most existing
deep learning approaches are trained offline, typically using large amounts of
computational resources. However, once trained, DNNs can achieve fast inference
speeds and can be deployed to edge devices. A new trend is edge computing with
less energy consumption (hundreds of watts or less) and real-time analysis
potential. While popularly used for edge computing, electronic hardware
accelerators ranging from general-purpose processors such as central processing
units (CPUs) to application-specific integrated circuits (ASICs) are constantly
reaching performance limits in latency, energy consumption, and other physical
constraints. These limits give rise to next-generation analog neuromorphic
hardware platforms, such as optical neural networks (ONNs), for highly parallel,
low-latency, and low-energy computing to boost deep learning acceleration.