Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state of the art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control, and
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning, and control. It also discusses the
potential benefits of integrating AI, soft robotics, and data-driven methods
to enhance the performance and robustness of SHR systems. Finally, the paper
identifies several open research questions in the field and highlights the
need for further research and development to advance SHR technologies to meet
the challenges of global food production. Overall, this paper provides a
starting point for researchers and practitioners interested in developing SHRs.
Comment: Preprint; to appear in the Journal of Field Robotics
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent and possibilities are endless, through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
open-ended task that needs proper guidance and direction. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date and allows
users, scholars, and entrepreneurs to gain an in-depth understanding of the
Metaverse ecosystem and identify their opportunities for contribution.
Sensitivity analysis for ReaxFF reparameterization using the Hilbert-Schmidt independence criterion
We apply a global sensitivity method, the Hilbert-Schmidt independence
criterion (HSIC), to the reparameterization of a Zn/S/H ReaxFF force field to
identify the most appropriate parameters for reparameterization. Parameter
selection remains a challenge in this context as high dimensional optimizations
are prone to overfitting and take a long time, but selecting too few parameters
leads to poor quality force fields. We show that the HSIC correctly and quickly
identifies the most sensitive parameters, and that optimizations done using a
small number of sensitive parameters outperform those done using a
higher-dimensional selection that a reasonable user might make. Optimizations
using only sensitive parameters: 1) converge faster, 2) have loss values
comparable to those found with the naive selection, 3) have similar accuracy
in validation tests, and 4) do not suffer from overfitting. We demonstrate
that an HSIC global sensitivity analysis is a cheap pre-processing step with
both qualitative and quantitative benefits that can substantially simplify
and speed up ReaxFF reparameterizations.
Comment: author accepted manuscript
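The screening idea above can be illustrated with a toy example. The following is a minimal sketch (not the paper's implementation) using a biased empirical HSIC estimator with RBF kernels; the parameter and loss samples are synthetic stand-ins, and the kernel bandwidth is an assumed constant:

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate between two 1-D samples."""
    n = len(x)
    K = rbf_kernel(x, sigma)
    L = rbf_kernel(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
n = 200
p_sensitive = rng.uniform(-1, 1, n)   # parameter the loss depends on
p_inert = rng.uniform(-1, 1, n)       # parameter the loss ignores
loss = np.sin(3 * p_sensitive) + 0.05 * rng.normal(size=n)

scores = {"sensitive": hsic(p_sensitive, loss),
          "inert": hsic(p_inert, loss)}
print(scores)  # the sensitive parameter scores markedly higher
```

Parameters would then be ranked by their HSIC score and only the top few retained for optimization, mirroring the selection step described in the abstract.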
BotMoE: Twitter Bot Detection with Community-Aware Mixtures of Modal-Specific Experts
Twitter bot detection has become a crucial task in efforts to combat online
misinformation, mitigate election interference, and curb malicious propaganda.
However, advanced Twitter bots often attempt to mimic the characteristics of
genuine users through feature manipulation and disguise themselves to fit in
diverse user communities, posing challenges for existing Twitter bot detection
models. To this end, we propose BotMoE, a Twitter bot detection framework that
jointly utilizes multiple user information modalities (metadata, textual
content, network structure) to improve the detection of deceptive bots.
Furthermore, BotMoE incorporates a community-aware Mixture-of-Experts (MoE)
layer to improve domain generalization and adapt to different Twitter
communities. Specifically, BotMoE constructs modal-specific encoders for
metadata features, textual content, and graphical structure, which jointly
model Twitter users from three modal-specific perspectives. We then employ a
community-aware MoE layer to automatically assign users to different
communities and leverage the corresponding expert networks. Finally, user
representations from metadata, text, and graph perspectives are fused with an
expert fusion layer, combining all three modalities while measuring the
consistency of user information. Extensive experiments demonstrate that BotMoE
significantly advances the state-of-the-art on three Twitter bot detection
benchmarks. Studies also confirm that BotMoE captures advanced and evasive
bots, alleviates the reliance on training data, and better generalizes to new
and previously unseen user communities.
Comment: Accepted at SIGIR 2023
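The community-aware gating idea can be sketched in miniature. The code below is a hypothetical, untrained stand-in (not BotMoE's actual architecture): a concatenated user representation from three modal embeddings is routed through a softmax gate over simple linear experts, and the expert outputs are mixed by the gate weights:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

d, n_experts = 8, 4   # embedding size and number of community experts
# Stand-ins for the metadata, text, and graph encoder outputs.
meta, text, graph = (rng.normal(size=d) for _ in range(3))
user = np.concatenate([meta, text, graph])          # shape (3d,)

# Gating network: score each expert, softmax -> soft community assignment.
W_gate = rng.normal(size=(n_experts, 3 * d))
gate = softmax(W_gate @ user)                       # shape (n_experts,)

# Each expert is a simple linear map; outputs are mixed by the gate.
experts = rng.normal(size=(n_experts, d, 3 * d))
out = np.einsum("e,edk,k->d", gate, experts, user)  # weighted expert mix
print(out.shape)
```

In the real framework the encoders, gate, and experts are trained jointly, and a separate fusion layer additionally measures cross-modal consistency; this sketch only shows the routing mechanics.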
Assessing performance of artificial neural networks and re-sampling techniques for healthcare datasets.
Re-sampling methods that address class imbalance have been shown to improve classification accuracy by mitigating the bias introduced by differences in class size. However, a model that applies a specific re-sampling technique before artificial neural network (ANN) training may not generalize across varied datasets from the healthcare industry. Five healthcare-related datasets were used across three re-sampling conditions: under-sampling, over-sampling, and combi-sampling. Within each condition, different algorithmic approaches were applied to each dataset and the results were statistically analysed for significant differences in ANN performance. In the combi-sampling condition, four of the five datasets did not yield a consistent optimal re-sampling technique between the F1-score and the area under the receiver operating characteristic curve evaluation methods. In contrast, in the over-sampling and under-sampling conditions, all five datasets put forward the same optimal algorithmic approach across both evaluation methods. Furthermore, the optimal combi-sampling technique (under-sampling, over-sampling, and convergence point) was consistent across evaluation measures in only two of the five datasets. This study exemplifies how discrete ANN performances on datasets from the same industry can arise in two ways: the same re-sampling technique can generate varying ANN performance on different datasets, and different re-sampling techniques can generate varying ANN performance on the same dataset.
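As an illustration of one of the re-sampling families compared in the study, here is a generic random over-sampling sketch (not the authors' code): minority-class rows are duplicated at random until every class matches the size of the largest one:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until classes are balanced."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        members = np.flatnonzero(y == c)
        # Keep all originals, then resample with replacement up to n_max.
        extra = rng.choice(members, n_max - len(members), replace=True)
        idx.append(np.concatenate([members, extra]))
    idx = np.concatenate(idx)
    return X[idx], y[idx]

X = np.arange(20).reshape(10, 2)
y = np.array([0] * 8 + [1] * 2)          # 8:2 imbalance
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [8 8]
```

Under-sampling is the mirror image (discarding majority rows down to the minority count), and combi-sampling mixes the two; libraries such as imbalanced-learn provide production-quality versions of all three.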
Augmented classification for electrical coil winding defects
A green revolution has accelerated over recent decades, seeking to replace existing transportation power solutions with greener electrical alternatives. In parallel, the digitisation of manufacturing has enabled progress in the tracking and traceability of processes and improvements in fault detection and classification. This paper explores electrical machine manufacture and the challenge of identifying failure modes during this life cycle by demonstrating state-of-the-art machine vision methods for the classification of electrical coil winding defects. We demonstrate how recent generative adversarial networks can be used to augment the training of these models to further improve their accuracy on this challenging task. Our approach uses pre-processing and dimensionality reduction to boost the performance of a standard convolutional neural network (CNN), leading to a significant increase in accuracy.
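The pre-processing and dimensionality-reduction step mentioned above can be illustrated with a PCA projection computed via SVD. This is a generic sketch with random stand-in images, not the paper's pipeline, and the component count is an arbitrary choice:

```python
import numpy as np

def pca_reduce(images, k=16):
    """Project flattened images onto their top-k principal components."""
    X = images.reshape(len(images), -1).astype(float)
    X = X - X.mean(axis=0)               # center each feature
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                  # shape (n_images, k)

rng = np.random.default_rng(2)
imgs = rng.normal(size=(32, 28, 28))     # stand-in coil-winding images
feats = pca_reduce(imgs, k=16)
print(feats.shape)  # (32, 16)
```

The reduced features would then feed the downstream classifier; the idea is that discarding low-variance directions removes noise before the CNN (or any classifier) sees the data.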
Classification of Alzheimer's Disease Using Convolutional Neural Networks
Brain-related diseases are among the most difficult to treat because of the brain's sensitivity, the difficulty of performing operations, and their high costs; moreover, an operation does not necessarily succeed. One of the most common diseases affecting the brain is Alzheimer's disease, which affects adults and leads to memory loss and forgetting of information to varying degrees depending on the condition of each patient. For these reasons, it is important to classify memory loss and to assess each patient's stage of Alzheimer's disease from CT scans of the brain. In this thesis, we review techniques for applying deep learning classification to Alzheimer's disease. The proposed method is intended to improve patient care, reduce costs, and allow fast and reliable analysis in large studies. The model, implemented in Python, is designed to help doctors classify Alzheimer's disease. The model used 70% of the images for training and 30% for validation; our trained model achieved an accuracy of 100% on a held-out test set.
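The 70/30 train/validation split described above can be sketched as follows; this is generic code with stand-in data, not the thesis's implementation:

```python
import numpy as np

def train_val_split(X, y, train_frac=0.7, seed=0):
    """Shuffle the dataset and split it into train/validation sets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))      # shuffle indices reproducibly
    cut = int(train_frac * len(X))
    tr, va = order[:cut], order[cut:]
    return X[tr], y[tr], X[va], y[va]

X = np.zeros((100, 64, 64))   # stand-in brain CT slices
y = np.arange(100) % 4        # illustrative severity labels
X_tr, y_tr, X_va, y_va = train_val_split(X, y)
print(len(X_tr), len(X_va))  # 70 30
```

Shuffling before splitting matters: if the images are stored grouped by class, a sequential split would put some classes entirely in one partition.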
Interval Type-2 Beta Fuzzy Near Sets Approach to Content-Based Image Retrieval
In computer-based search systems, similarity plays a key role in replicating the human search process. Indeed, the human search process underlies many natural abilities such as image recovery, language comprehension, decision making, and pattern recognition. Image search consists of establishing a correspondence between an available image and the one sought by the user by measuring the similarity between the images. Image search by content is generally based on the similarity of the visual characteristics of the images. The distance function used to evaluate the similarity between images depends not only on the criteria of the search but also on the representation of the image characteristics. This is the main idea of a content-based image retrieval (CBIR) system. In this article, we first construct interval type-2 beta fuzzy memberships of the descriptor vectors to help manage the inaccuracy and uncertainty of the characteristics extracted from the images. By analogy with type-2 fuzzy logic and motivated by near set theory, we advance a new fuzzy similarity measure (FSM), the interval type-2 fuzzy nearness measure (IT2FNM), and rank the retrieved images according to this measure. We then propose three new interval type-2 FSMs and provide mathematical justification demonstrating that the proposed FSMs satisfy the proximity properties (i.e. reflexivity, transitivity, symmetry, and overlapping). Experimental results generated using three image databases show consistent and significant results.
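To make the proximity properties concrete, here is a deliberately simple fuzzy similarity measure, the fuzzy Jaccard ratio, applied to membership vectors. It is a stand-in far simpler than the proposed IT2FNM, shown only to demonstrate checking reflexivity and symmetry:

```python
import numpy as np

def fuzzy_similarity(a, b):
    """Fuzzy Jaccard similarity between two membership vectors in [0, 1]."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

rng = np.random.default_rng(3)
a, b = rng.uniform(size=8), rng.uniform(size=8)

# Reflexivity and symmetry, two of the proximity properties named above.
assert np.isclose(fuzzy_similarity(a, a), 1.0)
assert np.isclose(fuzzy_similarity(a, b), fuzzy_similarity(b, a))
print(round(float(fuzzy_similarity(a, b)), 3))
```

An interval type-2 measure would replace each scalar membership with an interval (lower and upper membership) and aggregate over both bounds, which is where the additional robustness to uncertainty comes from.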
3D Computer Graphics and Virtual Reality
This chapter is dedicated to the description of 3D computer graphics used for the needs of virtual reality. Virtual reality (VR) is the use of computer technology to create a 3D virtual environment. The chapter presents some graphical features used in such an environment as well as an explanation of good design practice. It also describes lighting settings, 3D objects/models and the virtualization sequence, cameras, and scenes, with a wheelchair simulator used as an example implementation environment.