225 research outputs found
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: to consider what is computed by many of our brain systems; and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book: Rolls (2021) Brain Computations: What and How, Oxford University Press.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience, or from medical sciences including neurology and psychiatry, or from the area of computational science including machine learning and artificial intelligence, or from areas such as theoretical physics.
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multi-paths in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework for 12 activities in three different spatial environments using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments have demonstrated that the proposed models outperform state-of-the-art models. Also, the experiments show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches accuracies of 98.54%, 94.25%, and 95.09% across those same environments.
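The core of an attention-based BiLSTM is a learned weighting over timesteps of the recurrent hidden states. The numpy sketch below illustrates only that attention-pooling step; the dimensions, the scoring vector, and the BiLSTM outputs are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def attention_pool(h, w_attn):
    """Soft attention over timesteps: h is a (T, D) matrix of hidden
    states (e.g. from a BiLSTM run over a CSI window), w_attn is a (D,)
    scoring vector (a learned parameter in a real model; random here).
    Returns a single (D,) context vector fed to the final classifier."""
    scores = h @ w_attn                       # (T,): one score per timestep
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                      # softmax attention weights
    return alpha @ h                          # weighted sum over timesteps

rng = np.random.default_rng(0)
h = rng.normal(size=(200, 256))               # 200 timesteps, 2*128 BiLSTM dims
ctx = attention_pool(h, rng.normal(size=256))
print(ctx.shape)                              # (256,)
```

In the full model, `ctx` would pass through a dense softmax layer over the 12 activity classes; the attention weights let the classifier focus on the timesteps where the activity actually perturbs the CSI.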
Facial Micro- and Macro-Expression Spotting and Generation Methods
Facial micro-expression (ME) recognition requires a face-movement interval as input, but computational methods for spotting MEs still underperform. This is due to the lack of large-scale long-video datasets, and ME generation methods are still in their infancy. This thesis presents methods to address these data-deficiency issues and introduces a new method for spotting macro- and micro-expressions simultaneously.
This thesis introduces SAMM Long Videos (SAMM-LV), which contains 147 annotated long videos, and develops a baseline method to facilitate the ME Grand Challenge 2020. Further, reference-guided style transfer with StarGANv2 is applied to SAMM-LV to generate a synthetic dataset, namely SAMM-SYNTH. The quality of SAMM-SYNTH is evaluated using facial action units detected by OpenFace. Quantitative measurement shows high correlations on two Action Units (AU12 and AU6) between the original and synthetic data.
In facial expression spotting, a two-stream 3D-Convolutional Neural Network with temporally oriented frame skips that can spot micro- and macro-expressions simultaneously is proposed. This method achieves state-of-the-art performance on SAMM-LV and is competitive on CAS(ME)2; it was used as the baseline for the ME Grand Challenge 2021. The F1-score improves to 0.1036 when trained with composite data consisting of SAMM-LV and SAMM-SYNTH. On the unseen ME Grand Challenge 2022 evaluation dataset, it achieves an F1-score of 0.1531.
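Spotting F1-scores like those above are computed by matching predicted onset-offset intervals to ground-truth intervals. A minimal sketch of interval-based F1, assuming the common IoU >= 0.5 matching convention used in ME spotting challenges (the exact matching rules of the challenge are not restated in this abstract):

```python
def spotting_f1(pred, gt, iou_thr=0.5):
    """F1 for expression spotting: a predicted (onset, offset) interval is
    a true positive if its IoU with a not-yet-matched ground-truth
    interval reaches iou_thr; remaining predictions are false positives
    and unmatched ground truths are false negatives."""
    matched, tp = set(), 0
    for ps, pe in pred:
        for i, (gs, ge) in enumerate(gt):
            if i in matched:
                continue
            inter = max(0, min(pe, ge) - max(ps, gs))   # overlap length
            union = max(pe, ge) - min(ps, gs)           # span of both
            if union > 0 and inter / union >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(pred) - tp, len(gt) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0

# one correct detection, one false alarm, one missed expression
print(spotting_f1([(10, 40), (100, 120)], [(12, 38), (200, 230)]))  # 0.5
```

The low absolute F1 values reported for ME spotting reflect how strict this interval-overlap criterion is on subtle, short-lived movements.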
Finally, a new sequence generation method is proposed to explore the capability of deep learning networks. It generates spontaneous facial expressions using only two input sequences, without any labels. SSIM and NIQE were used for image quality analysis, and the generated data achieved scores of 0.87 and 23.14, respectively. By visualising the movements using optical-flow values and absolute frame differences, this method demonstrates its potential for generating subtle MEs. For realism evaluation, the generated videos were rated using two facial expression recognition networks.
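The SSIM score cited above compares a generated frame against a reference frame via local means, variances, and covariance. As a rough illustration, here is a simplified single-window ("global") SSIM; real SSIM implementations average this statistic over local Gaussian windows, so this sketch is not the exact metric used in the thesis.

```python
import numpy as np

def ssim_global(x, y, L=1.0):
    """Single-window SSIM between two images with pixel range [0, L].
    Combines luminance (means), contrast (variances) and structure
    (covariance) terms with the standard stabilising constants."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).random((64, 64))
print(round(ssim_global(img, img), 4))   # 1.0 for identical images
```

SSIM is bounded above by 1.0 (identical images), so a score of 0.87 indicates the generated frames stay structurally close to their references.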
2021-2022, University of Memphis bulletin
University of Memphis bulletin containing the graduate catalog for 2021-2022.
The extent to which Kuwaiti Islamic banks adhere to the use of Islamic financing tools in their financial operations: a field study
This research aims to identify the extent to which Kuwaiti Islamic banks adhere to the use of Islamic financing tools in their financial operations. The study population consists of all (5) banks listed on the Kuwait Stock Exchange. As for the study sample, (100) respondents were selected from among financial managers, accountants, and finance- and investment-department staff working in these banks. The questionnaire was used as a tool for collecting primary data. The results showed that Kuwaiti Islamic banks adhere to the use of Islamic financing tools, represented in Murabaha, Musharaka and Mudaraba, in their financial operations to a high degree. The study recommended that Kuwaiti Islamic banks should be encouraged to play a greater role in Murabaha operations and to find appropriate solutions to the technical obstacles and culture-related procedures that prevent the provision of Islamic financing through Murabaha.
Visual and Camera Sensors
This book includes 13 papers published in the Special Issue ("Visual and Camera Sensors") of the journal Sensors. The goal of this Special Issue was to invite high-quality, state-of-the-art research papers dealing with challenging issues in visual and camera sensors.
Deep generative models for medical image synthesis and strategies to utilise them
Medical imaging has revolutionised the diagnosis and treatments of diseases since the first
medical image was taken using X-rays in 1895. As medical imaging became an essential tool
in a modern healthcare system, more medical imaging techniques have been invented, such
as Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Computed
Tomography (CT), Ultrasound, etc. With the advance of medical imaging techniques, the
demand for processing and analysing these complex medical images is increasing rapidly.
Efforts have been put into developing approaches that can automatically analyse medical images. With the recent success of deep learning (DL) in computer vision, researchers have
applied and proposed many DL-based methods in the field of medical image analysis. However, one problem with data-driven DL-based methods is the lack of data. Unlike natural
images, medical images are more expensive to acquire and label. One way to alleviate the
lack of medical data is medical image synthesis.
In this thesis, I first start with pseudo healthy synthesis, which is to create a ‘healthy’ looking
medical image from a pathological one. The synthesised pseudo healthy images can be used
for the detection of pathology, segmentation, etc. Several challenges exist with this task. The
first challenge is the lack of ground-truth data, as a subject cannot be healthy and diseased at
the same time. The second challenge is how to evaluate the generated images. In this thesis,
I propose a deep learning method to learn to generate pseudo healthy images with adversarial
and cycle consistency losses to overcome the lack of ground-truth data. I also propose several
metrics to evaluate the quality of synthetic ‘healthy’ images. Pseudo healthy synthesis can be
viewed as transforming images between discrete domains, e.g. from the pathological domain to the
healthy domain. However, there are some changes in medical data that are continuous, e.g.
brain ageing progression.
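The adversarial and cycle-consistency objectives mentioned above are typically combined as follows. This is a generic CycleGAN-style formulation under assumed notation (G: pathological-to-healthy generator, F: the inverse mapping, D_h: discriminator on the healthy domain), not necessarily the thesis's exact losses:

```latex
% adversarial loss: make G(x_p) indistinguishable from real healthy images
\mathcal{L}_{\mathrm{adv}}
  = \mathbb{E}_{x_h}\!\left[\log D_h(x_h)\right]
  + \mathbb{E}_{x_p}\!\left[\log\bigl(1 - D_h(G(x_p))\bigr)\right]

% cycle-consistency loss: compensates for the missing ground truth,
% since the same subject cannot be observed both healthy and diseased
\mathcal{L}_{\mathrm{cyc}}
  = \mathbb{E}_{x_p}\,\bigl\| F(G(x_p)) - x_p \bigr\|_{1}

% combined objective, with \lambda weighting the cycle term
\mathcal{L} = \mathcal{L}_{\mathrm{adv}} + \lambda\,\mathcal{L}_{\mathrm{cyc}}
```

The cycle term is what substitutes for paired ground truth: reconstructing the original pathological image from its pseudo-healthy version constrains G to change only the pathology.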
The brain changes as age increases. With the ageing of the global population, research on brain ageing
has attracted increasing attention. In this thesis, I propose a deep learning method that can
simulate such brain ageing progression. However, longitudinal brain data are not easy to
acquire; where such data exist, they typically cover only a few years. Thus, the proposed method focuses on
learning subject-specific brain ageing progression without training on longitudinal data. As
there are other factors, such as neurodegenerative diseases, that can affect brain ageing, the
proposed model also considers health status, i.e. the existence of Alzheimer’s Disease (AD).
Furthermore, to evaluate the quality of synthetic aged images, I define several metrics and
conduct a series of experiments.
Suppose we have a pre-trained deep generative model and a downstream task model, say
a classifier. One question is how to make the best of the generative model to improve the
performance of the classifier. In this thesis, I propose a simple procedure that can discover
the ‘weakness’ of the classifier and guide the generator to synthesise counterfactuals (synthetic
data) that are hard for the classifier. The proposed procedure constructs an adversarial
game between generative factors of the generator and the classifier. We demonstrate the effectiveness
of this proposed procedure through a series of experiments. Furthermore, we
consider the application of generative models in a continual-learning context and investigate
their usefulness in alleviating spurious correlations.
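The adversarial game described above can be caricatured as an optimisation over the generator's factors that increases the classifier's loss. The toy numpy sketch below uses finite-difference gradient ascent with stand-in generator and loss functions; every name and choice here is illustrative, not the procedure from the thesis.

```python
import numpy as np

def hard_counterfactual(generator, classifier_loss, z, steps=100, lr=0.1):
    """Nudge a generative factor vector z so that generator(z) becomes
    harder for the classifier (higher loss), via finite-difference
    gradient ascent on the loss with respect to z."""
    eps = 1e-4
    for _ in range(steps):
        base = classifier_loss(generator(z))
        grad = np.zeros_like(z)
        for i in range(z.size):              # numerical gradient, per factor
            zp = z.copy()
            zp[i] += eps
            grad[i] = (classifier_loss(generator(zp)) - base) / eps
        z = z + lr * grad                    # ascend: make the sample harder
    return z

gen = lambda z: np.tanh(z)                   # stand-in generator
loss = lambda x: float((x ** 2).sum())       # stand-in classifier loss
z0 = np.array([0.1, -0.2])
z1 = hard_counterfactual(gen, loss, z0)
print(loss(gen(z1)) > loss(gen(z0)))         # True: the sample got harder
```

In practice the ascent would use backpropagated gradients and the "hard" counterfactuals would be folded back into classifier training, exposing the weakness the search discovered.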
This thesis creates new avenues for further research in the area of medical image synthesis
and how to utilise the medical generative models, which we believe could be important for
future studies in medical image analysis with deep learning.
- …