Advanced Techniques for Computational and Information Sciences
New techniques in computational and information sciences have played an important role in advancing the so-called knowledge economy. Advanced techniques have been introduced to, or have emerged in, almost every field of science for hundreds of years, a process that has accelerated since the late 1970s, when advances in computers and digital technologies brought the world into the Information Era. In addition to the rapid development of computational intelligence and new data fusion techniques over the past thirty years [1–4], mobile and cloud computing, grid-computing-driven numeric computation models, big data intelligence, and other emerging technologies have not only expanded the scope of traditional simulation and modelling in many scientific and engineering disciplines [5–8] but also enabled the fusion of traditional and contemporary methods in almost every field [9–11].
Editorial: Advanced Techniques for Computational and Information Sciences
This special issue facilitates the dissemination of recent research outcomes from applying innovative and advanced techniques in computational and information sciences to various scientific and engineering disciplines. The papers in this special issue were selected from submissions made directly to Mathematical Problems in Engineering (MPE) and from the 2014 International Conference on Information Technology, Computation and Applications (ICITCA2014), held in Anyang, China, in December 2014, which followed the success of ICITCA2013. Categorically, there are 15 papers in the broad area of digital audio, video, and image processing and pattern recognition. S. Zhao et al. presented a variational Bayesian super-resolution approach using an adaptive image prior model. All these papers have made new contributions to the broad areas of computational and information sciences.
Application of artificial intelligence techniques for automated detection of myocardial infarction: A review
Myocardial infarction (MI) results in heart muscle injury due to insufficient blood flow. MI is the most common cause of mortality in middle-aged and elderly individuals around the world. To diagnose MI, clinicians must interpret electrocardiography (ECG) signals, which requires expertise and is subject to observer bias. Artificial intelligence-based methods can be used to screen for or diagnose MI automatically from ECG signals. In this work, we conducted a comprehensive assessment of artificial intelligence-based approaches for MI detection based on ECG as well as other biophysical signals, including machine learning (ML) and deep learning (DL) models. The performance of traditional ML methods relies on handcrafted features and manual selection of ECG signals, whereas DL models can automate these tasks. The review found that deep convolutional neural networks (DCNNs) yielded excellent classification performance for MI diagnosis, which explains why they have become prevalent in recent years. To our knowledge, this is the first comprehensive survey of artificial intelligence techniques employed for MI diagnosis using ECG and other biophysical signals.
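As a brief illustration of why DCNNs suit this task, the sketch below implements the core 1-D convolution that such networks slide along an ECG trace. The signal, kernel, and spike location are toy stand-ins chosen for this example, not a trained model or a real recording.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution -- the basic operation a DCNN layer
    applies when it scans an ECG trace for learned waveform patterns."""
    n = len(signal) - len(kernel) + 1
    flipped = kernel[::-1]
    return np.array([np.dot(signal[i:i + len(kernel)], flipped) for i in range(n)])

# Toy "ECG": flat baseline with a single sharp spike standing in for an R-peak.
sig = np.zeros(50)
sig[25] = 1.0

# An edge-detecting kernel responds strongly around the spike, illustrating
# how convolutional filters pick up QRS-like transients automatically,
# with no handcrafted feature engineering.
kernel = np.array([-1.0, 0.0, 1.0])
response = conv1d(sig, kernel)
peak_location = int(np.argmax(np.abs(response)))
```

In a real DCNN the kernels are learned from labelled ECG data rather than hand-chosen, which is exactly the automation advantage over handcrafted-feature ML pipelines noted above.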
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Positron emission tomography (PET) is a unique imaging modality that provides physiological and functional details of tissue at the molecular level. However, the acquired PET images have limitations, such as attenuation. PET attenuation correction is an essential step in realizing the full potential of PET quantification. With the wide use of hybrid PET/MR scanners, magnetic resonance (MR) images are used to address the problem of PET attenuation correction. MR image segmentation is a simple and robust approach to creating pseudo computed tomography (CT) images, which are used to generate attenuation coefficient maps that correct the PET attenuation. Recently, deep learning has been proposed as a promising technique for efficiently segmenting MR and other medical images.
In this research work, deep learning guided segmentation approaches are proposed to enhance bone-class segmentation of MR brain images in order to generate accurate pseudo-CT images. The first approach introduces a combination of handcrafted features with deep learning features to enrich the feature set. Multiresolution analysis techniques that generate multiscale and multidirectional coefficients of an image, such as the contourlet and shearlet transforms, are applied and combined with deep convolutional neural network (CNN) features. Different experiments were conducted to investigate the number of selected coefficients and the insertion location of the handcrafted features.
The second approach aims to reduce the segmentation algorithm's complexity while maintaining segmentation performance. An attention-based convolutional encoder-decoder network is proposed to adaptively recalibrate the deep network features. This attention-based network consists of two different squeeze-and-excitation blocks that excite the features spatially and channel-wise. The two blocks are combined sequentially to decrease the number of network parameters and reduce the model complexity.
The third approach focuses on transfer learning across MR sequences such as T1-weighted (T1-w) and T2-weighted (T2-w) images. A model pretrained on T1-w MR sequences is fine-tuned to perform segmentation of T2-w images. Multiple fine-tuning approaches and experiments were conducted to identify the fine-tuning mechanism best able to build an efficient segmentation model for both T1-w and T2-w segmentation.
Clinical datasets of fifty patients with different conditions and diagnoses were used to carry out an objective evaluation of the segmentation performance of the three proposed methods. The first and second approaches were validated against other studies in the literature that applied deep-network-based segmentation to perform MR-based attenuation correction for PET images. The proposed methods enhanced bone segmentation, increasing the dice similarity coefficient (DSC) from 0.6179 to 0.6567 using an ensemble of CNNs, an improvement of 6.3%.
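The dice similarity coefficient used in this evaluation, and the reported 6.3% relative improvement, can be reproduced in a short numpy sketch; the small binary masks below are purely illustrative, not thesis data.

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    total = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / total if total else 1.0

# Toy 4x4 "bone" masks for illustration only.
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
score = dice(pred, gt)

# Relative improvement as reported in the abstract: DSC 0.6179 -> 0.6567.
improvement_pct = (0.6567 - 0.6179) / 0.6179 * 100
```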
The proposed excitation-based CNN reduced the model complexity by decreasing the number of trainable parameters by more than 46%, so fewer computing resources are required to train the model. The proposed hybrid transfer learning method showed its superiority in building a multi-sequence (T1-w and T2-w) segmentation approach compared with other transfer learning methods, especially for the bone class, where the DSC increased from 0.3841 to 0.5393. Moreover, the hybrid transfer learning approach requires less computing time than transfer learning with open or conservative fine-tuning.
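The channel-wise squeeze-and-excitation mechanism used in the second approach can be sketched in a few lines of numpy. This is a generic illustration of the idea (global average pooling, a small bottleneck, and a per-channel sigmoid gate), not the thesis's actual architecture; the shapes and random weights below are arbitrary assumptions.

```python
import numpy as np

def channel_se(fmap, w1, w2):
    """Channel squeeze-and-excitation: globally average-pool each channel
    ("squeeze"), pass the result through a small two-layer bottleneck,
    and rescale every channel by a sigmoid gate ("excite")."""
    squeezed = fmap.mean(axis=(1, 2))              # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeezed, 0.0)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # per-channel weights in (0, 1)
    return fmap * gates[:, None, None]

rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 8, 8))  # (channels, height, width)
w1 = rng.standard_normal((2, 4))       # reduction: 4 -> 2 channels
w2 = rng.standard_normal((4, 2))       # expansion back: 2 -> 4 channels
out = channel_se(fmap, w1, w2)
```

Because the bottleneck operates on pooled channel descriptors rather than the full feature map, it adds very few parameters, which is consistent with using such blocks to cut trainable-parameter counts.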
Panchromatic and multispectral image fusion for remote sensing and earth observation: Concepts, taxonomy, literature review, evaluation methodologies and challenges ahead
Panchromatic and multispectral image fusion, termed pan-sharpening, merges the spatial and spectral information of the source images into a single fused image that has higher spatial and spectral resolution and is more reliable for downstream tasks than any of the source images. It has been widely applied to image interpretation and to pre-processing for various applications. A large number of methods have been proposed to achieve better fusion results by considering the spatial and spectral relationships between panchromatic and multispectral images. In recent years, the fast development of artificial intelligence (AI) and deep learning (DL) has significantly advanced pan-sharpening techniques. However, the field lacks a comprehensive overview of the recent advances driven by the rise of AI and DL. This paper provides a comprehensive review of pan-sharpening methods that adopt four different paradigms, i.e., component substitution, multiresolution analysis, degradation models, and deep neural networks. As an important aspect of pan-sharpening, evaluation of the fused image is also outlined, covering assessment methods for both reduced-resolution and full-resolution quality measurement. We then discuss the existing limitations, difficulties, and challenges of pan-sharpening techniques, datasets, and quality assessment, and summarize the development trends in these areas, which provide useful methodological practices for researchers and professionals. The aim of the survey is to serve as a referential starting point for newcomers and a common point of agreement on the research directions to be followed in this exciting area.
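One classic instance of the component-substitution paradigm named above is the Brovey transform, which can be sketched minimally in numpy. The toy scene below uses constant bands purely so the effect is easy to check by hand; real pan-sharpening operates on resampled satellite imagery.

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-12):
    """Brovey-style component substitution: scale each multispectral band
    (already resampled to the pan grid) by the ratio of the panchromatic
    band to the mean multispectral intensity."""
    intensity = ms.mean(axis=0)                    # (H, W) intensity component
    return ms * (pan / (intensity + eps))[None, :, :]

# Toy 3-band scene: constant bands with mean intensity 2, pan intensity 4.
ms = np.stack([np.full((4, 4), 1.0),
               np.full((4, 4), 2.0),
               np.full((4, 4), 3.0)])
pan = np.full((4, 4), 4.0)
fused = brovey_pansharpen(ms, pan)
```

After fusion, the mean intensity of the fused bands matches the panchromatic band, which is the substitution at the heart of this paradigm; preserving the original band ratios is what keeps (most of) the spectral information.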
Enhanced phase congruency feature-based image registration for multimodal remote sensing imagery
Multimodal image registration is an essential image processing task in remote sensing. It searches for the optimal alignment between images of the same scene captured by different sensors, to provide better visualization and more informative images. Manual image registration is tedious and labour-intensive, so developing automated image registration is crucial to providing a faster and more reliable solution. However, image registration faces many challenges arising from the nature of remote sensing imagery, the environment, and the technical shortcomings of current methods, which cause three issues: intensive processing requirements, local intensity variation, and rotational distortion. Since not all image details are significant, relying on salient features is more efficient in terms of processing power; a feature-based registration method was therefore adopted to avoid intensive processing. The proposed method resolves the rotational distortion issue using Oriented FAST and Rotated BRIEF (ORB) to produce rotation-invariant features. However, since ORB is not intensity invariant, it cannot support multimodal data on its own. To overcome the intensity variation issue, Phase Congruency (PC) was integrated with ORB to create ORB-PC feature extraction, generating features invariant to rotational distortion and local intensity variation. The solution was still incomplete, since the ORB-PC matching rate was below expectation; an enhanced ORB-PC was therefore proposed to solve the matching issue by modifying the feature descriptor. While better feature matches were achieved, the high number of outliers in multimodal data makes common outlier removal methods unsuccessful, so Normalized Barycentric Coordinate System (NBCS) outlier removal was utilized to find precise matches even with many outliers. Experiments were conducted to verify the registration qualitatively and quantitatively: the qualitative experiment shows that the proposed method yields a broader and better feature distribution, while the quantitative evaluation indicates improved registration accuracy of 18% compared with related works.
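A core step in any feature-based registration pipeline is descriptor matching with ambiguity rejection. The sketch below shows Lowe's ratio test on toy Euclidean descriptors; it is a generic illustration only, since ORB descriptors are binary and matched by Hamming distance, and it is not the NBCS outlier-removal method used in the work above.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only when the best candidate is clearly closer
    than the second best, rejecting ambiguous (outlier-prone) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy 2-D descriptors: each row of desc_a has one obvious partner in desc_b.
desc_a = np.array([[0.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [5.0, 5.1], [10.0, 10.0]])
matches = ratio_test_matches(desc_a, desc_b)
```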
Application of Artificial Intelligence for Surface Roughness Prediction of Additively Manufactured Components
Additive manufacturing has gained significant popularity due to its potential for improving production efficiency. However, ensuring consistent product quality within predetermined equipment, cost, and time constraints remains a persistent challenge. Surface roughness, a crucial quality parameter, is difficult to bring within required standards, posing significant challenges in industries such as automotive, aerospace, medical devices, energy, optics, and electronics manufacturing, where surface quality directly impacts performance and functionality. As a result, researchers have given great attention to improving the quality of manufactured parts, particularly by predicting surface roughness from parameters of the manufacturing process. Artificial intelligence (AI) is one of the methods used to predict the surface quality of additively fabricated parts. Numerous studies have developed models using AI methods, including recent deep learning and machine learning approaches, which are effective in reducing cost and saving time and are emerging as a promising technique. This paper presents recent advances in the machine learning and deep learning techniques employed by researchers. Additionally, the paper discusses the limitations, challenges, and future directions for applying AI to surface roughness prediction for additively manufactured components. Through this review, it becomes evident that integrating AI methodologies holds great potential to improve the productivity and competitiveness of the additive manufacturing process. This integration minimizes the need for re-processing machined components and ensures compliance with technical specifications. By leveraging AI, the industry can enhance efficiency and overcome the challenges associated with achieving consistent product quality in additive manufacturing.
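The roughness-prediction task surveyed above can be sketched, in its simplest form, as regression from process parameters to a roughness value. Everything in the example below is synthetic and assumed: the choice of layer thickness and print speed as predictors, the coefficients, and the linear model are toy stand-ins for the ML and DL regressors the review covers.

```python
import numpy as np

# Entirely synthetic data: layer thickness and print speed as predictors
# of surface roughness Ra; relationship and coefficients are made up.
rng = np.random.default_rng(42)
layer_thickness = rng.uniform(0.1, 0.3, 50)          # mm
print_speed = rng.uniform(30.0, 90.0, 50)            # mm/s
ra = 20.0 * layer_thickness + 0.05 * print_speed + rng.normal(0.0, 0.1, 50)

# Ordinary least squares: a minimal stand-in for the AI regressors surveyed.
X = np.column_stack([layer_thickness, print_speed, np.ones(50)])
coef, *_ = np.linalg.lstsq(X, ra, rcond=None)
predicted_ra = X @ coef
```

In practice the appeal of the ML and DL methods discussed in the review is precisely that real roughness depends nonlinearly on many interacting process parameters, which a plain linear fit like this cannot capture.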