
    Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control

    This paper provides an overview of the current state of the art in selective harvesting robots (SHRs) and their potential for addressing the challenges of global food production. SHRs can increase productivity, reduce labour costs, and minimise food waste by selectively harvesting only ripe fruits and vegetables. The paper discusses the main components of SHRs, including perception, grasping, cutting, motion planning, and control, and highlights the challenges in developing SHR technologies, particularly in robot design, motion planning, and control. It also discusses the potential benefits of integrating AI, soft robotics, and data-driven methods to enhance the performance and robustness of SHR systems. Finally, the paper identifies several open research questions in the field and highlights the need for further research and development efforts to advance SHR technologies to meet the challenges of global food production. Overall, this paper provides a starting point for researchers and practitioners interested in developing SHRs. Comment: Preprint, to appear in the Journal of Field Robotics
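    As a rough illustration of how the components named above (perception, motion planning, grasping/cutting control) fit together, the following Python sketch composes them into a single selective-harvesting loop. All class names, the ripeness threshold, and the placeholder behaviours are illustrative assumptions, not details from the reviewed systems.

```python
from dataclasses import dataclass

# Hypothetical data type for one detected fruit.
@dataclass
class Detection:
    position: tuple   # (x, y, z) of the fruit in the robot frame
    ripeness: float   # 0.0 (unripe) to 1.0 (ripe)

class SelectiveHarvester:
    """Illustrative composition of the SHR components the review discusses."""

    RIPENESS_THRESHOLD = 0.8  # assumed cut-off for "ripe"

    def perceive(self) -> list[Detection]:
        # Placeholder: a real system would run a fruit detector on RGB-D input.
        return [Detection(position=(0.4, 0.1, 0.9), ripeness=0.85)]

    def plan_motion(self, target: Detection) -> list[tuple]:
        # Placeholder: a real planner would return a collision-free trajectory.
        return [(0.0, 0.0, 0.0), target.position]

    def harvest(self) -> int:
        picked = 0
        for det in self.perceive():
            if det.ripeness < self.RIPENESS_THRESHOLD:
                continue  # selective: leave unripe fruit on the plant
            for waypoint in self.plan_motion(det):
                pass  # placeholder for grasp/cut control commands at each waypoint
            picked += 1
        return picked

print(SelectiveHarvester().harvest())  # -> 1 in this toy example
```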

    A Design Science Research Approach to Smart and Collaborative Urban Supply Networks

    Urban supply networks are facing increasing demands and challenges and thus constitute a relevant field for research and practical development. Supply chain management holds enormous potential and relevance for society and everyday life, as the flows of goods and information are important economic functions. Being a heterogeneous field, the literature base of supply chain management research is difficult to manage and navigate. Disruptive digital technologies and the implementation of cross-network information analysis and sharing drive the need for new organisational and technological approaches. Practical issues are manifold and include megatrends such as digital transformation, urbanisation, and environmental awareness. A promising approach to solving these problems is the realisation of smart and collaborative supply networks. The growth of artificial intelligence applications in recent years has led to a wide range of applications in a variety of domains. However, the potential of artificial intelligence utilisation in supply chain management has not yet been fully exploited. Similarly, value creation increasingly takes place in networked value creation cycles that have become continuously more collaborative, complex, and dynamic as interactions in business processes involving information technologies have become more intense. Following a design science research approach, this cumulative thesis comprises the development and discussion of four artefacts for the analysis and advancement of smart and collaborative urban supply networks. This thesis aims to highlight the potential of artificial intelligence-based supply networks, to advance data-driven inter-organisational collaboration, and to improve last-mile supply network sustainability. Based on machine learning, systematic literature reviews, reference and system dynamics modelling, simulation, and qualitative empirical research, the artefacts provide a valuable contribution to research and practice.

    Application of fuzzy controllers in automatic ship motion control systems

    Automatic ship heading control is a part of the automatic navigation system. Its task is to maintain the ship's actual course angle, without human intervention, in accordance with the set course, and to hold this state under disturbing influences. Deviations from the course are corrected through the rudder position, a control action that produces rotary motion of the vessel around its vertical axis; this control problem can be solved with fuzzy logic. In this paper, we propose to assess the efficiency of fuzzy controllers in automatic ship motion control systems, based on an analysis of a formalised method of logical inference and of the structure of the fuzzy controller. This allows effective stabilisation of the vessel's course angle while taking existing constraints into account.
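    A minimal Python sketch of the kind of fuzzy inference such a controller performs is shown below: the heading error is fuzzified into three linguistic terms, three rules fire, and a rudder command is recovered by weighted-average defuzzification. The membership functions, rule consequents, and angle ranges are assumed for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_rudder(heading_error_deg):
    """Map a heading error (deg) to a rudder angle (deg) via three toy rules."""
    # Fuzzify: degrees of membership in negative / zero / positive error.
    neg = tri(heading_error_deg, -30, -15, 0)
    zero = tri(heading_error_deg, -10, 0, 10)
    pos = tri(heading_error_deg, 0, 15, 30)
    # Rule consequents: assumed rudder set-points for each linguistic term.
    consequents = {-20.0: neg, 0.0: zero, 20.0: pos}
    total = sum(consequents.values())
    if total == 0:
        return 0.0  # error outside the defined universe: rudder amidships
    # Defuzzify by weighted average (centre of gravity of singleton outputs).
    return sum(angle * w for angle, w in consequents.items()) / total

print(fuzzy_rudder(8.0))  # small positive error -> moderate corrective rudder
```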

    Computed tomography-based assessment of aortic valve calcification and its association with complications after transcatheter aortic valve implantation (TAVI)

    Background: Severe aortic valve calcification (AVC) has generally been recognized as a key factor in the occurrence of adverse events after transcatheter aortic valve implantation (TAVI). To date, however, a consensus on a standardized calcium detection threshold for aortic valve calcium quantification in contrast-enhanced computed tomography angiography (CTA) is still lacking. The present thesis aimed to compare two different approaches for quantifying AVC in CTA scans based on their predictive power for adverse events and survival after a TAVI procedure.   Methods: The extensive dataset of this study included 198 characteristics for each of the 965 prospectively included patients who had undergone TAVI between November 2012 and December 2019 at the German Heart Center Berlin (DHZB). AVC quantification in CTA scans was performed at a fixed Hounsfield unit (HU) threshold of 850 HU (HU 850 approach) and at a patient-specific threshold, set by multiplying the mean luminal attenuation of the ascending aorta by 2 (+100 % HUAorta approach). The primary endpoint of this study was a composite of post-TAVI outcomes (paravalvular leak ≄ mild, implant-related conduction disturbances, 30-day mortality, post-procedural stroke, annulus rupture, and device migration). The Akaike information criterion was used to select variables for the multivariable regression model, and multivariable analysis was carried out to determine the predictive power of the two approaches.   Results: Multivariable analyses showed that the fixed threshold of 850 HU (calcium volume cut-off 146 mm³) was unable to predict the composite clinical endpoint post-TAVI (OR=1.13, 95 % CI 0.87 to 1.48, p=0.35). In contrast, the +100 % HUAorta approach (calcium volume cut-off 1421 mm³) enabled independent prediction of the composite clinical endpoint post-TAVI (OR=2, 95 % CI 1.52 to 2.64, p=9.2×10⁻⁷). No significant difference in the Kaplan-Meier survival analysis was observed for either approach.   Conclusions: The patient-specific +100 % HUAorta calcium detection threshold is more predictive of the post-TAVI adverse events included in the combined clinical endpoint than the fixed HU 850 approach. For the +100 % HUAorta approach, a calcium volume cut-off of 1421 mm³ of the aortic valve had the highest predictive value.
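    The two detection thresholds compared above can be stated compactly in code. The sketch below, with made-up attenuation values and an assumed voxel volume, shows how a calcium volume would be accumulated at either threshold; only the 850 HU constant, the factor of 2, and the 1421 mm³ cut-off come from the abstract.

```python
def fixed_threshold():
    return 850.0  # HU 850 approach: constant detection threshold

def patient_specific_threshold(mean_aortic_attenuation_hu):
    # +100 % HUAorta approach: double the mean luminal attenuation
    # of the ascending aorta.
    return 2.0 * mean_aortic_attenuation_hu

def calcium_volume(voxel_values_hu, threshold_hu, voxel_volume_mm3):
    """Sum the volume of all voxels at or above the detection threshold."""
    n_calcified = sum(1 for v in voxel_values_hu if v >= threshold_hu)
    return n_calcified * voxel_volume_mm3

# Illustrative numbers only (not from the thesis dataset).
voxels = [300, 400, 700, 900, 1200]   # HU values along the valve
vol = calcium_volume(voxels, patient_specific_threshold(350), 0.5)
print(vol)              # threshold = 700 HU -> 3 voxels -> 1.5 mm^3
print(vol >= 1421)      # compare against the reported cut-off of 1421 mm^3
```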

    Machine Learning Research Trends in Africa: A 30 Years Overview with Bibliometric Analysis Review

    In this paper, a critical bibliometric analysis is conducted, coupled with an extensive literature survey of recent developments and associated applications in machine learning research from an African perspective. The bibliometric analysis covers 2761 machine learning-related documents, of which 98% were articles with at least 482 citations, published in 903 journals during the past 30 years. The documents were retrieved from the Science Citation Index EXPANDED and comprise research publications from 54 African countries between 1993 and 2021. The study visualizes the current landscape and future trends in machine learning research and its applications, to facilitate future collaborative research and knowledge exchange among authors from research institutions scattered across the African continent.
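    A bibliometric tally of this kind reduces to simple aggregation over document records. The following sketch, with invented records and field names, illustrates counting articles per country and per year as such a study might; the real analysis was performed on Science Citation Index EXPANDED exports.

```python
from collections import Counter

# Illustrative records only; field names are assumptions.
records = [
    {"year": 1995, "country": "Nigeria", "type": "article"},
    {"year": 2020, "country": "South Africa", "type": "article"},
    {"year": 2021, "country": "Egypt", "type": "review"},
]

# Count articles per country and documents per year.
per_country = Counter(r["country"] for r in records if r["type"] == "article")
per_year = Counter(r["year"] for r in records)
print(per_country.most_common(2), sorted(per_year.items()))
```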

    Fully-Autonomous, Vision-based Traffic Signal Control: from Simulation to Reality

    Ineffective traffic signal control is one of the major causes of congestion in urban road networks. Dynamically changing traffic conditions and live traffic state estimation are fundamental challenges that limit the ability of the existing signal infrastructure to render individualised signal control in real time. We use deep reinforcement learning (DRL) to address these challenges. Due to the economic and safety constraints associated with training such agents in the real world, a practical approach is to do so in simulation before deployment. Domain randomisation is an effective technique for bridging the reality gap and ensuring effective transfer of simulation-trained agents to the real world. In this paper, we develop a fully-autonomous, vision-based DRL agent that achieves adaptive signal control in the face of complex, imprecise, and dynamic traffic environments. Our agent uses live visual data (i.e. a stream of real-time RGB footage) from an intersection to perceive and act upon the traffic environment. Employing domain randomisation, we examine our agent’s generalisation capabilities under varying traffic conditions in both simulation and real-world environments. On a diverse validation set independent of the training data, our traffic control agent reliably adapted to novel traffic situations and demonstrated positive transfer to previously unseen real intersections despite being trained entirely in simulation.
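    Domain randomisation, as used here, amounts to resampling the simulator's parameters at each training episode so the agent never overfits to one rendering of the world. A minimal sketch is below; the parameter names, ranges, and the commented-out environment factory are assumptions, not the paper's actual configuration.

```python
import random

def randomised_episode_config():
    """Sample one simulator configuration; ranges are illustrative assumptions."""
    return {
        "vehicle_arrival_rate": random.uniform(0.05, 0.5),  # vehicles per second
        "camera_brightness": random.uniform(0.6, 1.4),      # lighting variation
        "camera_jitter_px": random.randint(0, 4),           # sensor noise
        "pedestrian_density": random.uniform(0.0, 0.2),
    }

def train(num_episodes):
    for episode in range(num_episodes):
        config = randomised_episode_config()
        # env = make_intersection_env(**config)  # hypothetical simulator factory
        # ...run one DRL training episode against the randomised environment...
        print(f"episode {episode}: {config}")

train(2)
```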

    Learning disentangled speech representations

    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and speaker prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstruction, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deepfakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
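    The re-assembly idea described above can be made concrete with a toy sketch: separate encoders expose content and speaker factors, and a decoder recombines them, so voice conversion pairs source content with a target speaker embedding. All functions below are placeholders standing in for learned networks; the shapes and random outputs are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def content_encoder(waveform):
    # Placeholder: a real encoder would extract speaker-invariant content features.
    return rng.standard_normal(16)

def speaker_encoder(waveform):
    # Placeholder: a real encoder would extract a content-invariant speaker embedding.
    return rng.standard_normal(8)

def decoder(content, speaker):
    # Placeholder: a real decoder would synthesise a waveform from both factors.
    return np.concatenate([content, speaker])

source = np.zeros(16000)  # stand-in for one second of source speech
target = np.ones(16000)   # stand-in for speech by the target speaker

# Voice conversion by re-assembly: source content + target speaker identity.
converted = decoder(content_encoder(source), speaker_encoder(target))
print(converted.shape)  # (24,) in this toy sketch
```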

    On the Mechanism of Building Core Competencies: a Study of Chinese Multinational Port Enterprises

    This study aims to explore how Chinese multinational port enterprises (MNPEs) build their core competencies. Core competencies are a firm’s special capabilities and sources of sustainable competitive advantage (SCA) in the marketplace, and the concept has led to extensive research and debate. However, few studies have examined the mechanisms of building core competencies in the context of Chinese MNPEs. Accordingly, answers were sought to three research questions: 1. What are the core competencies of Chinese MNPEs? 2. What mechanisms do Chinese MNPEs use to build their core competencies? 3. What paths do Chinese MNPEs pursue to build their resource bases? The study adopted a multiple-case study design, focusing on the mechanism of building core competencies through the lens of the resource-based view (RBV). It purposively selected five leading Chinese MNPEs and three industry associations as case companies. The study revealed three main findings. First, it identified three generic core competencies possessed by the case companies: innovation in business models and operations, utilisation of technologies, and acquisition of strategic resources. Second, it developed the conceptual framework of the Mechanism of Building Core Competencies (MBCC), a process of change in collective learning about the effective and efficient utilisation of a firm’s resources in response to critical events. Third, it proposed three paths to build core competencies: enhancing collective learning, selecting sustainable processes, and building the resource base. The study contributes to the knowledge of core competencies and the RBV in three ways: (1) presenting three generic core competencies of Chinese MNPEs, (2) proposing a new conceptual framework to explain how Chinese MNPEs build their core competencies, and (3) suggesting a solid anchor point (the MBCC) to explain the links among resources, core competencies, and SCA. The findings set benchmarks for the Chinese logistics industry and provide guidelines for building core competencies.

    Image classification over unknown and anomalous domains

    A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting. Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each. While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta-information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so. In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent ideas have focused on developing self-supervised solutions for the one-class setting, this thesis formulates new methods based on transfer learning. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
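    The per-mode statistics idea can be sketched as follows: each sample is normalised with the mean and variance of its own mode rather than those of the whole heterogeneous batch. For simplicity the sketch assumes mode labels are known, unlike the latent-domain setting the thesis addresses, where they would have to be inferred; all names and numbers are illustrative.

```python
import numpy as np

def per_mode_normalise(features, mode_ids, eps=1e-5):
    """Normalise each sample with the statistics of its own mode.

    features: (N, D) array; mode_ids: length-N integer mode labels.
    This contrasts with batch normalisation, which would use a single
    mean/variance computed over the whole heterogeneous batch.
    """
    out = np.empty_like(features, dtype=float)
    for mode in np.unique(mode_ids):
        idx = mode_ids == mode
        mu = features[idx].mean(axis=0)
        var = features[idx].var(axis=0)
        out[idx] = (features[idx] - mu) / np.sqrt(var + eps)
    return out

# Toy batch mixing two "visual modes" with very different statistics.
photos = np.random.normal(0.0, 1.0, size=(4, 3))
cartoons = np.random.normal(5.0, 0.1, size=(4, 3))
batch = np.vstack([photos, cartoons])
modes = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(per_mode_normalise(batch, modes).mean(axis=0))  # near zero per dimension
```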