18,935 research outputs found
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning and control. It further discusses
the potential benefits of integrating AI, soft robotics, and data-driven
methods to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field. Comment: Preprint: to appear in the Journal of Field Robotics
Concept Graph Neural Networks for Surgical Video Understanding
We constantly integrate our knowledge and understanding of the world to
enhance our interpretation of what we see.
This ability is crucial in application domains which entail reasoning about
multiple entities and concepts, such as AI-augmented surgery. In this paper, we
propose a novel way of integrating conceptual knowledge into temporal analysis
tasks via temporal concept graph networks. In the proposed networks, a global
knowledge graph is incorporated into the temporal analysis of surgical
instances, learning the meaning of concepts and relations as they apply to the
data. We demonstrate our results in surgical video data for tasks such as
verification of critical view of safety, as well as estimation of Parkland
grading scale. The results show that our method improves the recognition and
detection of complex benchmarks and enables other analytic applications
of interest.
A Design Science Research Approach to Smart and Collaborative Urban Supply Networks
Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. As a heterogeneous field, supply chain management has a literature base that is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness.
A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence applications in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense.
Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on thorough machine learning and systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.
Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model
We consider dynamic pricing strategies in a streamed longitudinal data set-up
where the objective is to maximize, over time, the cumulative profit across a
large number of customer segments. We consider a dynamic probit model with the
consumers' preferences as well as price sensitivity varying over time. Building
on the well-known finding that consumers sharing similar characteristics act in
similar ways, we consider a global shrinkage structure, which assumes that the
consumers' preferences across the different segments can be well approximated
by a spatial autoregressive (SAR) model. In such a streamed longitudinal
set-up, we measure the performance of a dynamic pricing policy via regret,
which is the expected revenue loss compared to a clairvoyant that knows the
sequence of model parameters in advance. We propose a pricing policy based on
penalized stochastic gradient descent (PSGD) and explicitly characterize its
regret as a function of time, the temporal variability in the model parameters,
and the strength of the auto-correlation network structure spanning the
varied customer segments. Our regret analysis not only demonstrates the
asymptotic optimality of the proposed policy but also shows that for policy
planning it is essential to incorporate available structural information as
policies based on unshrunken models are highly sub-optimal in the
aforementioned set-up. Comment: 34 pages, 5 figures
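The shrinkage idea above can be sketched in a deliberately simplified form: a penalized stochastic gradient update in which each segment's parameter is pulled toward its network neighbours. This is a hypothetical stand-in for the paper's SAR-based penalty; the function name, constants, and two-segment example are all illustrative.

```python
# Minimal sketch of a penalized stochastic gradient descent (PSGD) update for
# per-segment price-sensitivity parameters. The shrinkage term here pulls each
# segment toward the mean of its network neighbours -- a Laplacian-style
# simplification of the spatial autoregressive (SAR) structure in the paper.

def psgd_step(theta, grads, neighbours, lr=0.1, lam=0.5):
    """One PSGD update: gradient step plus shrinkage toward neighbours.

    theta      -- dict segment -> current parameter estimate
    grads      -- dict segment -> stochastic gradient of the profit loss
    neighbours -- dict segment -> list of adjacent segments
    """
    new_theta = {}
    for s, th in theta.items():
        nbrs = neighbours[s]
        nbr_mean = sum(theta[n] for n in nbrs) / len(nbrs) if nbrs else th
        # Gradient step, penalized by the distance to the neighbour mean.
        new_theta[s] = th - lr * (grads[s] + lam * (th - nbr_mean))
    return new_theta

theta = {"A": 1.0, "B": 3.0}
grads = {"A": 0.0, "B": 0.0}
neighbours = {"A": ["B"], "B": ["A"]}
updated = psgd_step(theta, grads, neighbours)
# With zero gradients, the shrinkage term alone moves the two connected
# segments' estimates closer together (1.0 -> 1.1 and 3.0 -> 2.9).
```

Policies without the shrinkage term (lam=0) would update each segment independently, which is exactly the unshrunken behaviour the abstract describes as highly sub-optimal.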
Perfect is the enemy of test oracle
Automation of test oracles is one of the most challenging facets of software
testing, but it remains less addressed than automated test
input generation. Test oracles rely on a ground-truth that can distinguish
between the correct and buggy behavior to determine whether a test fails
(detects a bug) or passes. What makes the oracle problem challenging and
undecidable is the assumption that the ground-truth should know the exact
expected, correct, or buggy behavior. However, we argue that one can still
build an accurate oracle without knowing the exact correct or buggy behavior,
but only how these two might differ. This paper presents SEER, a learning-based
approach that in the absence of test assertions or other types of oracle, can
determine whether a unit test passes or fails on a given method under test
(MUT). To build the ground-truth, SEER jointly embeds unit tests and the
implementation of MUTs into a unified vector space, in such a way that the
neural representations of tests are similar to those of the MUTs they pass on,
but dissimilar to those of the MUTs they fail on. The classifier built on top of
this vector representation serves as the oracle, generating a "fail" label when
the test inputs detect a bug in the MUT and a "pass" label otherwise. Our extensive
experiments on applying SEER to more than 5K unit tests from a diverse set of
open-source Java projects show that the produced oracle is (1) effective in
predicting the fail or pass labels, achieving an overall accuracy, precision,
recall, and F1 measure of 93%, 86%, 94%, and 90%, (2) generalizable, predicting
the labels for the unit test of projects that were not in training or
validation set with negligible performance drop, and (3) efficient, detecting
the existence of bugs in only 6.5 milliseconds on average. Comment: Published in ESEC/FSE 202
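The core idea — similar embeddings mean "pass", dissimilar embeddings mean "fail" — can be illustrated with a toy similarity-threshold oracle. The vectors and threshold below are hand-made placeholders; SEER learns the embeddings with a neural encoder and trains a classifier rather than thresholding cosine similarity directly.

```python
# Illustrative sketch of the embedding-similarity idea behind a learned test
# oracle: a unit test and a method under test (MUT) are mapped into one vector
# space, and similarity decides "pass" vs "fail". Toy vectors, not learned ones.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def oracle(test_vec, mut_vec, threshold=0.5):
    """Label 'pass' when the embeddings are similar, 'fail' otherwise."""
    return "pass" if cosine(test_vec, mut_vec) >= threshold else "fail"

test_vec = [1.0, 0.0, 0.3]
passing_mut = [0.9, 0.1, 0.2]   # embedding close to the test's
failing_mut = [-0.8, 0.5, 0.1]  # embedding far from the test's

print(oracle(test_vec, passing_mut))  # pass
print(oracle(test_vec, failing_mut))  # fail
```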
Deep Learning for Scene Flow Estimation on Point Clouds: A Survey and Prospective Trends
Aiming at obtaining structural information and the 3D motion of dynamic scenes, scene flow estimation has long been a research interest in computer vision and computer graphics. It is also a fundamental task for various applications such as autonomous driving. Compared to previous methods that utilize image representations, much recent research builds upon the power of deep analysis and focuses on point cloud representations to conduct 3D flow estimation. This paper comprehensively reviews the pioneering literature on scene flow estimation based on point clouds. Meanwhile, it examines learning paradigms in detail and presents insightful comparisons between state-of-the-art methods using deep learning for scene flow estimation. Furthermore, this paper investigates various higher-level scene understanding tasks, including object tracking, motion segmentation, etc., and concludes with an overview of foreseeable research trends for scene flow estimation.
Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review
Globally, contemporary industrial control systems are increasingly being
connected to the external Internet. As a result, there is an immediate need
to protect the network from several threats. The key infrastructure of
industrial activity may be protected from harm by using an intrusion detection
system (IDS), a preventive mechanism, to recognize new kinds of
dangerous threats and hostile activities. The most recent artificial
intelligence (AI) techniques used to create IDS in many kinds of industrial
control networks are examined in this study, with a particular emphasis on
IDS based on deep transfer learning (DTL). The latter can be seen as a type of
information fusion that merges and/or adapts knowledge from multiple domains to
enhance the performance of the target task, particularly when the labeled data
in the target domain is scarce. Publications issued after 2015 were taken into
account. These selected publications were divided into three categories:
DTL-only and IDS-only are involved in the introduction and background, and
DTL-based IDS papers are involved in the core papers of this review.
Researchers will be able to have a better grasp of the current state of DTL
approaches used in IDS in many different types of networks by reading this
review paper. Other useful information, such as the datasets used, the sort of
DTL employed, the pre-trained network, IDS techniques, the evaluation metrics
including accuracy/F-score and false alarm rate (FAR), and the improvement
gained, were also covered. The algorithms and methods used in several studies,
which clearly illustrate the principles of each DTL-based IDS subcategory, are
presented to the reader.
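The transfer-learning pattern the review surveys — reuse a representation learned on a data-rich source domain and fit only a small part of the model on scarce labelled target data — can be shown with a deliberately tiny, stdlib-only sketch. A real DTL-based IDS uses a deep pre-trained encoder; the feature function, packet fields, and data below are made up for illustration.

```python
# Toy illustration of the freeze-and-fine-tune structure of deep transfer
# learning: a "pre-trained" feature extractor is kept frozen, and only a small
# logistic-regression head is fitted on the labelled target-domain traffic.
import math

def pretrained_features(packet):
    """Stand-in for a frozen encoder whose weights came from a source domain."""
    return [packet["size"] / 1500.0, packet["rate"] / 100.0]

def train_head(data, labels, lr=0.5, epochs=500):
    """Fit a tiny logistic-regression head on target-domain features."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log-loss with respect to z
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Hypothetical labelled target data: 0 = benign, 1 = intrusion.
packets = [{"size": 100, "rate": 5}, {"size": 1400, "rate": 95}]
labels = [0, 1]
feats = [pretrained_features(p) for p in packets]
w, b = train_head(feats, labels)
```

Freezing the extractor is what makes this viable when target labels are scarce: only the few head parameters are estimated from the target data.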
A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms
Vertical farming (VF) is the practice of growing crops or animals using the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has experienced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data.
A review of economic estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting. This could benefit complex sectors which only have scarce data to predict business viability.
To begin the execution of the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks that form a risk taxonomy. Labour was the most commonly reported top challenge. Therefore, research was conducted to explore lean principles to improve productivity.
A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects. This enabled flexible computation without precise production or financial data, improving economic estimation accuracy. The model assessed two VPF cases (one in the UK and another in Japan), demonstrating the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve the economic viability of the UK and Japan cases.
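The kind of probabilistic viability model described above can be sketched as a Monte Carlo simulation: uncertain inputs are sampled rather than fixed, yielding a profit distribution and a loss probability instead of a single-point estimate. All numbers, ranges, and names here are illustrative placeholders, not figures from the thesis.

```python
# Hypothetical sketch of probabilistic financial risk assessment for a
# vertical farm: annual profit simulated under uncertain yield, price, and
# operating cost. Uniform ranges stand in for the imprecise-data techniques
# the thesis actually uses; every number below is made up.
import random

def simulate_annual_profit(n_runs=10_000, seed=42):
    rng = random.Random(seed)  # seeded for reproducible results
    profits = []
    for _ in range(n_runs):
        yield_kg = rng.uniform(8_000, 12_000)  # uncertain crop yield (kg/yr)
        price = rng.uniform(4.0, 7.0)          # uncertain sale price (GBP/kg)
        opex = rng.uniform(30_000, 45_000)     # uncertain operating cost (GBP/yr)
        profits.append(yield_kg * price - opex)
    return profits

profits = simulate_annual_profit()
# The probability of a loss-making year summarises financial risk for the DSS.
loss_prob = sum(p < 0 for p in profits) / len(profits)
```

Reporting a loss probability and a full profit distribution is what lets such a model support "risk-empowered" business planning where precise production data are unavailable.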
The environmental impact assessment model was developed, allowing VPF operators to evaluate their carbon footprint compared to traditional agriculture using life-cycle assessment. I explore strategies for net-zero carbon production through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF. Results show that renewably-powered VPF can reduce carbon emissions compared to field-based agriculture when considering the land-use change.
The drivers for DSS adoption have been researched, showing a pathway of compliance and design thinking to overcome the ‘problem of implementation’ and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector’s emergence.
Development of an Intelligent Automated Recognition System for Real-Time Characterisation of Road Surface Conditions Using a Multi-Sensor Approach
The role of a road-weather service is to issue forecasts and warnings to road users about pavement conditions, making it possible to anticipate dangerous driving conditions, particularly in winter. It is therefore important to determine the road surface state at all times. The objective of this project is to develop an automated multi-sensor detection system for real-time characterisation of road surface states (snow, ice, wet, dry). This thesis therefore focuses on developing a deep learning method for fusing image and sound data, based on Dempster-Shafer theory. The direct measurements used to acquire the data for training the fusion model were made with two low-cost, commercially available sensors. The first sensor is a camera that records videos of the road surface. The second is a microphone that records the tyre-road interaction noise characteristic of each surface state. The system is ultimately intended to run on a single-board computer for real-time acquisition, processing, and dissemination of information, in order to alert road maintenance services and road users. Specifically, the system works as follows: 1) a deep learning architecture classifies each surface state from the video frames as probabilities; 2) a deep learning architecture classifies each surface state from the sound as probabilities; 3) the probabilities from each architecture are then fed into the fusion model to obtain the final decision. To keep the system lightweight and inexpensive, it was built on architectures combining compactness and accuracy, namely SqueezeNet for images and M5 for sound.
During validation, the system demonstrated good performance in detecting surface states, notably 87.9% for black ice and 97% for melting snow.
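The fusion step described above can be sketched with Dempster's rule of combination. The version below is simplified to singleton hypotheses (general Dempster-Shafer theory also assigns mass to sets of states), and the mass values are illustrative; the real system derives them from the image and sound networks' outputs.

```python
# Minimal sketch of Dempster's rule of combination for fusing two classifiers'
# outputs over road-surface states. Simplified to singleton hypotheses; the
# mass values are illustrative, not from the thesis.

def dempster_combine(m1, m2):
    """Combine two mass functions defined over the same singleton hypotheses."""
    states = set(m1) | set(m2)
    # Conflict: total mass the two sources assign to incompatible states.
    conflict = sum(m1[a] * m2[b] for a in m1 for b in m2 if a != b)
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Agreeing mass on each state, renormalised by the non-conflicting mass.
    return {s: m1.get(s, 0.0) * m2.get(s, 0.0) / (1.0 - conflict)
            for s in states}

# Illustrative masses from the image-based and sound-based classifiers.
m_image = {"dry": 0.6, "wet": 0.3, "ice": 0.1}
m_sound = {"dry": 0.5, "wet": 0.4, "ice": 0.1}
fused = dempster_combine(m_image, m_sound)
best = max(fused, key=fused.get)  # final fused decision
```

Because the rule multiplies agreeing masses and renormalises away conflict, a state both sensors favour ("dry" here) ends up with more mass than either sensor gave it alone.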
In vitro investigation of the effect of disulfiram on hypoxia induced NFκB, epithelial to mesenchymal transition and cancer stem cells in glioblastoma cell lines
A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy. Glioblastoma multiforme (GBM) is one of the most aggressive and lethal cancers with a poor prognosis. Advances in the treatment of GBM are limited due to several resistance mechanisms and limited drug delivery into the central nervous system (CNS) compartment by the blood-brain barrier (BBB) and by actions of the normal brain to counteract tumour-targeting medications. Hypoxia is common in malignant brain tumours such as GBM and plays a significant role in tumour pathobiology. It is widely accepted that hypoxia is a major driver of GBM malignancy. Although it has been confirmed that hypoxia induces GBM stem-like-cells (GSCs), which are highly invasive and resistant to all chemotherapeutic agents, the detailed molecular pathways linking hypoxia, GSC traits and chemoresistance remain obscure. Evidence shows that hypoxia induces cancer stem cell phenotypes via epithelial-to-mesenchymal transition (EMT), promoting therapeutic resistance in most cancers, including GBM.
This study demonstrated that spheroid-cultured GBM cells consist of a large population of hypoxic cells with CSC and EMT characteristics. GSCs are chemoresistant and displayed increased levels of HIFs and NFκB activity. Similarly, hypoxia-cultured GBM cells manifested GSC traits, chemoresistance and invasiveness. These results suggest that hypoxia is responsible for GBM stemness, chemoresistance and invasiveness. GBM cells transfected with the nuclear factor kappa B-p65 (NFκB-p65) subunit exhibited CSC and EMT markers, indicating the essential role of NFκB in maintaining GSC phenotypes. The study also highlighted the significance of NFκB in driving chemoresistance and invasiveness, and the potential role of NFκB as the central regulator of hypoxia-induced stemness in GBM cells. The GSC population has the capacity for self-renewal, cancer initiation and the development of secondary heterogeneous cancers. The very poor prognosis of GBM could largely be attributed to the existence of GSCs, which promote tumour propagation, maintenance, radio- and chemoresistance and local infiltration.
In this study, we used Disulfiram (DS), a drug used for more than 65 years in alcoholism clinics, in combination with copper (Cu) to target the NFκB pathway, reverse chemoresistance and block invasion in GSCs. The obtained results showed that DS/Cu is highly cytotoxic to GBM cells and completely eradicated the resistant CSC population at low dose levels in vitro. DS/Cu inhibited the migration and invasion of hypoxia-induced CSC and EMT like GBM cells at low nanomolar concentrations.
DS is an FDA-approved drug with low toxicity to normal tissues and can pass through the BBB. Further research may lead to the rapid translation of DS into cancer clinics and provide new therapeutic options to improve treatment outcomes in GBM patients.