Intelligent Systemic/Systematic Innovation and Its Role in Delivering Improvement and Change in the Design of Mission Critical Systems
Mission critical systems (MCS) are complex nested hierarchies of systems, subsystems and components with defined purposes, characteristics, boundaries and interfaces, working in harmony to deliver vital organisational functionality. Upgrading MCS performance becomes unavoidable when capability enhancement is required or new technologies emerge. Improving MCS is nevertheless approached with a degree of reluctance, owing to their sensitive role in organisations and the potentially disruptive impact of unexpected consequences of change. Because of their highly interdependent structures, innovation in MCS often appears in small steps that nonetheless affect the entire system. Effective management of innovation in complex systems requires systemic/systematic processes that combine process management with collective analysis, scoping, decision-making and R&D, all of which rely on effective information sharing. This approach should run throughout the system and must include all aspects and stakeholders, utilising the skills and knowledge of everyone involved. This chapter describes the basic concepts and potential approaches that could be used to build intelligent systemic/systematic and collaborative environments for MCS innovation. Advances in ICT provide an opportunity to access a wider sphere of knowledge and to support systemic innovation processes. Adopting systemic approaches increases process efficacy, leading to more reliable solutions, shorter development lead times and reduced costs.
Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques
Deep learning constitutes a pivotal component within the realm of machine learning, offering remarkable capabilities in tasks ranging from image recognition to natural language processing. However, this very strength also renders deep learning models susceptible to adversarial examples, a phenomenon pervasive across a diverse array of applications. These adversarial examples are characterized by subtle perturbations artfully injected into clean images or videos, causing deep learning algorithms to misclassify or produce erroneous outputs. This susceptibility extends beyond the confines of digital domains, as adversarial examples can also be strategically designed to target human cognition, leading to the creation of deceptive media such as deepfakes. Deepfakes, in particular, have emerged as a potent tool to manipulate public opinion and tarnish the reputations of public figures, underscoring the urgent need to address the security and ethical implications of adversarial examples. This article delves into the multifaceted world of adversarial examples, elucidating the underlying principles behind their capacity to deceive deep learning algorithms. We explore the various manifestations of this phenomenon, from their insidious role in compromising model reliability to their impact in shaping the contemporary landscape of disinformation and misinformation. To illustrate progress in combating adversarial examples, we showcase the development of a tailored Convolutional Neural Network (CNN) designed explicitly to detect deepfakes, a pivotal step towards enhancing model robustness in the face of adversarial threats. This custom CNN achieved a precision of 76.2% on the DFDC dataset.
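The abstract does not detail how such perturbations are crafted. As a minimal illustration of the general idea, the sketch below implements the Fast Gradient Sign Method (FGSM), one standard technique for injecting a subtle, loss-increasing perturbation into a clean image; the function name, the epsilon value and the assumption of pixel values in [0, 1] are illustrative choices, not details from the article.

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        # Craft an adversarial example with the Fast Gradient Sign Method.
        # `image` is a batched tensor with pixel values in [0, 1];
        # `epsilon` bounds the per-pixel change, keeping it visually subtle.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

A perturbation bounded this tightly is typically imperceptible to a human viewer, yet a single step of this kind is often enough to flip the prediction of an undefended classifier.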
UDEEP: Edge-based Computer Vision for In-Situ Underwater Crayfish and Plastic Detection
Invasive signal crayfish have a detrimental impact on ecosystems. They spread crayfish plague, a disease caused by the fungus-like pathogen Aphanomyces astaci that is lethal to the white-clawed crayfish, the only crayfish species native to Britain. Signal crayfish also burrow extensively, causing habitat destruction, erosion of river banks and adverse changes in water quality, while competing with native species for resources and driving declines in native populations. Pollution further exacerbates the vulnerability of white-clawed crayfish, whose populations have declined by over 90% in some English counties, leaving them highly susceptible to extinction. To safeguard aquatic ecosystems, it is imperative to address the challenges posed by invasive species and discarded plastics in the United Kingdom's river ecosystems. The UDEEP platform can play a crucial role in environmental monitoring by performing on-the-fly classification of signal crayfish and plastic debris, leveraging AI, IoT devices and the power of edge computing (i.e., NJN). By providing accurate data on the presence, spread and abundance of these species, the UDEEP platform can contribute to monitoring efforts and help mitigate the spread of invasive species.
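The abstract does not describe the on-device pipeline itself. The sketch below shows one plausible shape for an in-situ inference loop of this kind, running a TensorFlow Lite classifier over a camera feed on an edge device; the model file, label set and preprocessing are hypothetical placeholders, not the published UDEEP implementation.

    import cv2
    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    # Hypothetical model file and labels, stand-ins for the UDEEP classifier.
    LABELS = ["signal_crayfish", "plastic_debris", "background"]

    interpreter = Interpreter(model_path="udeep_classifier.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    cap = cv2.VideoCapture(0)  # on-board camera of the edge device
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and normalise each frame to the model's expected input shape.
        height, width = inp["shape"][1], inp["shape"][2]
        x = cv2.resize(frame, (width, height)).astype(np.float32) / 255.0
        interpreter.set_tensor(inp["index"], x[np.newaxis, ...])
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        print(LABELS[int(np.argmax(scores))], float(scores.max()))
    cap.release()

Running the interpreter directly on the device, rather than streaming video to a server, is what keeps this kind of monitoring feasible at remote river sites with limited connectivity.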
Guidelines For Agentic AI Safety Volume 1: Agentic AI Safety Experts Focus Group - Sept. 2024
Welcome to this draft first volume overview of our Safer Agentic AI Foundations guidelines, a work in progress. Our Working Group of 25 experts (see https://www.linkedin.com/groups/12966081/) is releasing these guidelines under a Creative Commons license, allowing free use and application by all and for the benefit of humanity. The Working Group has employed a Weighted Factors Methodology to map the factors that can drive or inhibit safety in agentic systems, based on fundamental principles. We have used this same process many times previously to generate a range of global standards, certifications, and guidelines for improving ethical qualities in AI systems. We hope that this overview of the driving and inhibitory factors in agentic AI systems, those capable of independent decision-making and action, will provide a strengthened awareness of the complexities involved; these issues should be accounted for when dealing with such advanced forms of machine intelligence. We very much welcome your comments, feedback, and informal peer review, and your input will be carefully considered as we develop the final guidelines. Should you desire further information on agentic AI and its safety, we will be pleased to provide it. We expect to release the full guidelines by November 2024. You can reach us at the addresses below and keep informed of our developments via our mailing list. Thank you for your interest and engagement.