952 research outputs found
A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection
Modern societies' broadening reliance on essential services
provided by Critical Infrastructures increases the relevance of their trustworthiness. However, Critical
Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just
at the economic level but also in terms of physical damage and even loss of human life. Complementing
traditional security mechanisms, forensics and compliance audit processes play an important role in ensuring
Critical Infrastructure trustworthiness. Compliance auditing contributes to checking if security measures are
in place and compliant with standards and internal policies. Forensics assists in the investigation of past security
incidents. Since these two areas significantly overlap, in terms of data sources, tools and techniques, they can
be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the
latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing
in the scope of Critical Infrastructure Protection. This survey focuses on relevant contributions, capable of
tackling the requirements imposed by massively distributed and complex Industrial Automation and Control
Systems, in terms of handling large volumes of heterogeneous data (that can be noisy, ambiguous, and
redundant) for analytic purposes, with adequate performance and reliability. The results include
a taxonomy of the FCA field whose key categories denote the relevant topics in the literature. Also, the
collected knowledge resulted in the establishment of a reference FCA architecture, proposed as a generic
template for a converged platform. These results are intended to guide future research on forensics and
compliance auditing for Critical Infrastructure Protection.
Multidisciplinary perspectives on Artificial Intelligence and the law
This open access book presents an interdisciplinary, multi-authored, edited collection of chapters on Artificial Intelligence (‘AI’) and the Law. AI technology has come to play a central role in the modern data economy. Through a combination of increased computing power, the growing availability of data and the advancement of algorithms, AI has now become an umbrella term for some of the most transformational technological breakthroughs of this age. The importance of AI stems from both the opportunities that it offers and the challenges that it entails. While AI applications hold the promise of economic growth and efficiency gains, they also create significant risks and uncertainty. The potential and perils of AI have thus come to dominate modern discussions of technology and ethics – and although AI was initially allowed to largely develop without guidelines or rules, few would deny that the law is set to play a fundamental role in shaping the future of AI. As the debate over AI is far from over, the need for rigorous analysis has never been greater. This book thus brings together contributors from different fields and backgrounds to explore how the law might provide answers to some of the most pressing questions raised by AI. An outcome of the Católica Research Centre for the Future of Law and its interdisciplinary working group on Law and Artificial Intelligence, it includes contributions by leading scholars in the fields of technology, ethics and the law.
Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation
Trustworthy Artificial Intelligence (AI) is based on seven technical
requirements sustained over three main pillars that should be met throughout
the system's entire life cycle: it should be (1) lawful, (2) ethical, and (3)
robust, both from a technical and a social perspective. However, attaining
truly trustworthy AI concerns a wider vision that comprises the trustworthiness
of all processes and actors that are part of the system's life cycle, and
considers previous aspects from different lenses. A more holistic vision
contemplates four essential axes: the global principles for ethical use and
development of AI-based systems, a philosophical take on AI ethics, a
risk-based approach to AI regulation, and the mentioned pillars and
requirements. The seven requirements (human agency and oversight; robustness
and safety; privacy and data governance; transparency; diversity,
non-discrimination and fairness; societal and environmental wellbeing; and
accountability) are analyzed from a triple perspective: What each requirement
for trustworthy AI is, Why it is needed, and How each requirement can be
implemented in practice. In addition, a practical approach to implementing
trustworthy AI systems makes it possible to define the responsibility of
AI-based systems before the law through a given auditing process. The
responsible AI system is therefore the notion we introduce in this work: a
concept of utmost necessity that can be realized through auditing processes,
subject to the challenges posed by the use of regulatory sandboxes. Our
multidisciplinary vision of trustworthy AI culminates in a debate on the
diverging views published lately about the future of AI. Our reflections in
this matter conclude that regulation is a key for reaching a consensus among
these views, and that trustworthy and responsible AI systems will be crucial
for the present and future of our society.
Efficient Deep Learning for Real-time Classification of Astronomical Transients
A new golden age in astronomy is upon us, dominated by data. Large astronomical surveys are broadcasting unprecedented rates of information, demanding machine learning as a critical component in modern scientific pipelines to handle the deluge of data. The upcoming Legacy Survey of Space and Time (LSST) of the Vera C. Rubin Observatory will raise the big-data bar for time-domain astronomy, with an expected 10 million alerts per night and many petabytes of data generated over the lifetime of the survey. Fast and efficient classification algorithms that can operate in real time, yet robustly and accurately, are needed for time-critical events where additional resources can be sought for follow-up analyses. To handle such data, state-of-the-art deep learning architectures coupled with tools that leverage modern hardware accelerators are essential.
The work contained in this thesis seeks to address the big-data challenges of LSST by proposing novel efficient deep learning architectures for multivariate time-series classification that can provide state-of-the-art classification of astronomical transients at a fraction of the computational cost of other deep learning approaches. This thesis introduces the depthwise-separable convolution and the notion of convolutional embeddings to the task of time-series classification, achieving gains in classification performance with far fewer model parameters than similar methods. It also introduces the attention mechanism to time-series classification, improving performance further, with significant gains in computational efficiency and a further reduction in model size. Finally, this thesis pioneers the use of modern model compression techniques in the field of photometric classification for efficient deep learning deployment. These insights informed the final architecture, which was deployed in a live production machine learning system, demonstrating the capability to operate efficiently and robustly in real time, at LSST scale and beyond, ready for the new era of data-intensive astronomy.
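The parameter savings that the abstract attributes to depthwise-separable convolutions can be illustrated with a quick count. The sketch below is not from the thesis; the channel sizes and kernel width are illustrative assumptions chosen only to show the arithmetic:

```python
def conv1d_params(c_in, c_out, k):
    """Parameters in a standard 1D convolution (bias terms omitted):
    every output channel has one k-tap filter per input channel."""
    return c_in * c_out * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise-separable variant: a depthwise conv (one k-tap filter
    per input channel) followed by a pointwise (1x1) conv that mixes
    channels."""
    return c_in * k + c_in * c_out

# Hypothetical sizes: 64 input channels from a multivariate time-series
# encoding, 128 output channels, kernel width 9.
std = conv1d_params(64, 128, 9)               # 73,728 parameters
sep = depthwise_separable_params(64, 128, 9)  # 8,768 parameters
print(f"standard: {std}, separable: {sep}, savings: {std / sep:.1f}x")
```

For these sizes the separable form uses roughly 8x fewer parameters, which is the kind of reduction that makes real-time inference at alert-stream scale more tractable.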
An empirical investigation of the relationship between integration, dynamic capabilities and performance in supply chains
This research aimed to develop an empirical understanding of the relationships between integration,
dynamic capabilities and performance in the supply chain domain, based on which, two conceptual
frameworks were constructed to advance the field. The core motivation for the research was that, at
the stage of writing the thesis, the combined relationship between the three concepts had not yet
been examined, although their pairwise interrelationships had been studied individually.
To achieve this aim, deductive and inductive reasoning logics were utilised to guide the qualitative
study, which was undertaken via multiple case studies to investigate lines of enquiry that would
address the research questions formulated. This is consistent with the author’s philosophical
adoption of the ontology of relativism and the epistemology of constructionism, which was considered
appropriate to address the research questions. Empirical data and evidence were collected, and
various triangulation techniques were employed to ensure their credibility. Some key features of
grounded theory coding techniques were drawn upon for data coding and analysis, generating two
levels of findings. These revealed that whilst integration and dynamic capabilities were crucial in
improving performance, performance in turn informed them both. This reflects a cyclical and
iterative relationship rather than a purely linear one. Adopting a holistic approach towards
the relationship was key in producing complementary strategies that can deliver sustainable supply
chain performance.
The research makes theoretical, methodological and practical contributions to the field of supply
chain management. The theoretical contribution includes the development of two emerging
conceptual frameworks at the micro and macro levels. The former provides greater specificity, as it
allows meta-analytic evaluation of the three concepts and their dimensions, providing a detailed
insight into their correlations. The latter gives a holistic view of their relationships and how they are
connected, reflecting a middle-range theory that bridges theory and practice. The methodological
contribution lies in presenting models that address gaps associated with the inconsistent use of
terminologies in philosophical assumptions, and lack of rigor in deploying case study research
methods. In terms of its practical contribution, this research offers insights that practitioners can
adopt to enhance their performance, without necessarily having to forgo certain desired outcomes, by
using targeted integrative strategies and drawing on their dynamic capabilities.
Distributed Sensing, Computing, Communication, and Control Fabric: A Unified Service-Level Architecture for 6G
With the advent of the multimodal immersive communication system, people can
interact with each other using multiple devices for sensing, communication
and/or control either onsite or remotely. As a breakthrough concept, a
distributed sensing, computing, communications, and control (DS3C) fabric is
introduced in this paper for provisioning 6G services in multi-tenant
environments in a unified manner. The DS3C fabric can be further enhanced by
natively incorporating intelligent algorithms for network automation and
managing networking, computing, and sensing resources efficiently to serve
vertical use cases with extreme and/or conflicting requirements. As such, the
paper proposes a novel end-to-end 6G system architecture with enhanced
intelligence spanning across different network, computing, and business
domains, identifies vertical use cases, and presents an overview of the relevant
standardization and pre-standardization landscape.
Artificial Intelligence and International Conflict in Cyberspace
This edited volume explores how artificial intelligence (AI) is transforming international conflict in cyberspace. Over the past three decades, cyberspace developed into a crucial frontier and issue of international conflict. However, scholarly work on the relationship between AI and conflict in cyberspace has been produced along somewhat rigid disciplinary boundaries and an even more rigid sociotechnical divide – wherein technical and social scholarship are seldom brought into conversation. This is the first volume to address these themes through a comprehensive and cross-disciplinary approach. With the intent of exploring the question ‘what is at stake with the use of automation in international conflict in cyberspace through AI?’, the chapters in the volume focus on three broad themes, namely: (1) technical and operational, (2) strategic and geopolitical and (3) normative and legal. These also constitute the three parts in which the chapters of this volume are organised, although these thematic sections should not be considered as an analytical or a disciplinary demarcation.
Industry 4.0 for SME
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Business Analytics.
Industry 4.0 has been growing within companies and impacting the economy and society, but it poses a more complex challenge for some types of companies. Due to the costs and complexity associated with Industry 4.0 technologies, small and medium enterprises face difficulties in adopting them.
This thesis proposes a model that guides and simplifies the implementation of Industry 4.0 in SMEs from a low-cost perspective. The model is intended to serve as a blueprint for designing and implementing an Industry 4.0 project within a manufacturing SME.
To create the model, a literature review of the different fields regarding Industry 4.0 was conducted to identify the technologies best suited to the manufacturing industry and the use cases where they would be applicable. After the model was built, expert interviews were conducted, and based on the feedback received, the model was tweaked, improved, and validated.