Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to being able to ensure that complex designs (e.g., topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers to take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. There are many interesting 2-D and 3-D mechanical design problems that can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. 
Several new design tools were developed and refined to support the design of MPDSMs under fracture conditions: a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially-available software. The refined design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning and control. It further discusses the
potential benefits of integrating AI, soft robotics, and data-driven methods
to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field.Comment: Preprint: to appear in the Journal of Field Robotics
One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era
OpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is
demonstrated to be one small step for generative AI (GAI), but one giant leap
for artificial general intelligence (AGI). Since its official release in
November 2022, ChatGPT has quickly attracted numerous users with extensive
media coverage. Such unprecedented attention has also motivated numerous
researchers to investigate ChatGPT from various aspects. According to Google
Scholar, there are more than 500 articles with ChatGPT in their titles or
mentioning it in their abstracts. Considering this, a review is urgently
needed, and our work fills this gap. Overall, this work is the first to survey
ChatGPT with a comprehensive review of its underlying technology, applications,
and challenges. Moreover, we present an outlook on how ChatGPT might evolve to
realize general-purpose AIGC (a.k.a. AI-generated content), which will be a
significant milestone for the development of AGI.Comment: A Survey on ChatGPT and GPT-4, 29 pages. Feedback is appreciated.
Exploiting Symmetry and Heuristic Demonstrations in Off-policy Reinforcement Learning for Robotic Manipulation
Reinforcement learning demonstrates significant potential in automatically
building control policies in numerous domains, but shows low efficiency when
applied to robot manipulation tasks due to the curse of dimensionality. To
facilitate the learning of such tasks, prior knowledge or heuristics that
incorporate inherent simplification can effectively improve the learning
performance. This paper aims to define and incorporate the natural symmetry
present in physical robotic environments. Sample-efficient policies are then
trained by exploiting expert demonstrations in symmetrical environments
through a combination of reinforcement learning and behavior cloning, which
gives the off-policy learning process a diverse yet compact initialization.
Furthermore, the paper presents a rigorous framework for this recent concept
and explores its scope for
robot manipulation tasks. The proposed method is validated via two
point-to-point reaching tasks of an industrial arm, with and without an
obstacle, in a simulation experiment study. A PID controller, which tracks the
linear joint-space trajectories with hard-coded temporal logic to produce
interim midpoints, is used to generate demonstrations in the study. The results
of the study present the effect of the number of demonstrations and quantify
the magnitude of behavior cloning to exemplify the possible improvement of
model-free reinforcement learning in common manipulation tasks. A comparison
study between the proposed method and a traditional off-policy reinforcement
learning algorithm indicates its advantage in learning performance and
potential value for applications.
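The two ideas the abstract combines can be sketched briefly. In a minimal, hypothetical planar reaching setup (state = end-effector position, action = displacement, workspace assumed symmetric about the x-axis), each expert demonstration can be mirrored across the symmetry plane to double the demonstration data, and a behavior-cloning term can be blended into the off-policy TD loss. All names, sizes, and the symmetry axis here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def mirror(states, actions):
    """Reflect a demonstration across the x-axis (the assumed symmetry plane)."""
    flip = np.array([1.0, -1.0])
    return states * flip, actions * flip

def combined_loss(q_pred, q_target, policy_actions, demo_actions, bc_weight=0.5):
    """Off-policy TD loss blended with a behavior-cloning term on demo data."""
    td_loss = np.mean((q_pred - q_target) ** 2)
    bc_loss = np.mean((policy_actions - demo_actions) ** 2)
    return td_loss + bc_weight * bc_loss

# One expert demonstration plus its mirrored twin: double the data for free.
states = np.array([[0.0, 0.2], [0.1, 0.1], [0.2, 0.05]])
actions = np.array([[0.1, -0.1], [0.1, -0.05], [0.1, -0.05]])
m_states, m_actions = mirror(states, actions)

demo_states = np.vstack([states, m_states])
demo_actions = np.vstack([actions, m_actions])
print(demo_states.shape)  # (6, 2): original + mirrored demonstrations
```

The mirrored transitions are valid experience only because the dynamics and reward are assumed invariant under the reflection; any real implementation must verify that symmetry holds for the task at hand.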
Passive Radio Frequency-based 3D Indoor Positioning System via Ensemble Learning
Passive radio frequency (PRF)-based indoor positioning systems (IPS) have
attracted researchers' attention due to their low price, easy and customizable
configuration, and non-invasive design. This paper proposes a PRF-based
three-dimensional (3D) indoor positioning system (PIPS), which is able to use
signals of opportunity (SoOP) for positioning and also capture a scenario
signature. PIPS passively monitors SoOPs containing scenario signatures through
a single receiver. Moreover, PIPS leverages the Dynamic Data Driven
Applications System (DDDAS) framework to devise and customize the sampling
frequency, enabling the system to use the most impacted frequency band as the
rated frequency band. Various regression methods within three ensemble learning
strategies are used to train and predict the receiver position. The PRF
spectrum of 60 positions is collected in the experimental scenario, and three
criteria are applied to evaluate the performance of PIPS. Experimental results
show that the proposed PIPS possesses the advantages of high accuracy,
configurability, and robustness.Comment: DDDAS 202
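The positioning step the abstract describes, regressing a 3D receiver position from a measured PRF spectrum using several ensemble-learning strategies, can be sketched with standard scikit-learn ensembles. The data below is a random stand-in for the 60-position spectrum dataset; feature counts, coordinate ranges, and model choices are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.tree import DecisionTreeRegressor

# Hypothetical stand-in for the PRF dataset: each row is a received spectrum
# (feature vector) measured at a known 3D position.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))         # 60 positions, 16 spectral features
y = rng.uniform(0, 5, size=(60, 3))   # (x, y, z) coordinates in metres

# Three ensemble strategies, wrapped for 3D (multi-output) regression.
models = {
    "bagging": MultiOutputRegressor(
        BaggingRegressor(n_estimators=20, random_state=0)),
    "boosting": MultiOutputRegressor(
        GradientBoostingRegressor(random_state=0)),
    "stacking": MultiOutputRegressor(StackingRegressor(
        estimators=[("ridge", Ridge()),
                    ("tree", DecisionTreeRegressor(random_state=0))],
        final_estimator=Ridge())),
}

for name, model in models.items():
    model.fit(X[:50], y[:50])                          # train on 50 positions
    pred = model.predict(X[50:])                       # evaluate on 10 held out
    err = np.linalg.norm(pred - y[50:], axis=1).mean() # mean 3D error
    print(f"{name}: {err:.2f} m")
```

With random features the errors are meaningless; on real spectra the comparison across the three strategies mirrors the evaluation the paper reports.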
TransFusionOdom: Interpretable Transformer-based LiDAR-Inertial Fusion Odometry Estimation
Multi-modal fusion of sensors is a commonly used approach to enhance the
performance of odometry estimation, which is also a fundamental module for
mobile robots. However, the question of \textit{how to perform fusion among
different modalities in a supervised sensor fusion odometry estimation task?}
remains a challenging open issue. Simple operations, such as element-wise
summation and concatenation, cannot assign adaptive attentional weights to
incorporate different modalities efficiently, which makes it difficult to
achieve competitive odometry results. Recently, the Transformer
architecture has shown potential for multi-modal fusion tasks, particularly in
the domains of vision with language. In this work, we propose an end-to-end
supervised Transformer-based LiDAR-Inertial fusion framework (namely
TransFusionOdom) for odometry estimation. The multi-attention fusion module
demonstrates different fusion approaches for homogeneous and heterogeneous
modalities to address the overfitting problem that can arise from blindly
increasing the complexity of the model. Additionally, to interpret the learning
process of the Transformer-based multi-modal interactions, a general
visualization approach is introduced to illustrate the interactions between
modalities. Moreover, exhaustive ablation studies evaluate different
multi-modal fusion strategies to verify the performance of the proposed fusion
strategy. A synthetic multi-modal dataset is made public to validate the
generalization ability of the proposed fusion strategy, which also works for
other combinations of different modalities. The quantitative and qualitative
odometry evaluations on the KITTI dataset verify that the proposed
TransFusionOdom achieves superior performance compared with other related
works.Comment: Submitted to IEEE Sensors Journal with some modifications. This work
has been submitted to the IEEE for possible publication. Copyright may be
transferred without notice, after which this version may no longer be
accessible.
Arts and humanities shaping the AI future
The organisation of this event was motivated by the view that there should be more Arts and Humanities (A&H) perspectives, methods and approaches involved in shaping our future relationship with AI technology. Our invitation was sent to the most diverse group we could imagine being interested in this view. Positive responses to the invitation, rich discussions during the meeting, and critical reflections afterwards broadly confirm this view. Beyond facilitating a discussion amongst this group of participants from different disciplines, the event was not outcome-driven. Some information, as well as questions, was gathered before the meeting. At the meeting, example projects using A&H methods to shape relationships with AI technology were presented as triggers for the small group discussions that followed. Note takers collected and summarised discussion highlights at the end of the day, and invitations for post-meeting follow-up reflections were sent. This report provides a relatively detailed account of these activities, the conditions, and what was shared. Writing it has been useful for considering what might come next, which we are currently reflecting on. Please feel free to contact us with any thoughts or questions.
Robotic Bronchoscopy: Review of Three Systems
Robotic bronchoscopy (RB) has been shown to improve access to smaller and more peripheral lung lesions, while simultaneously staging the mediastinum. Pre-clinical studies demonstrated extremely high diagnostic yields, but real-world RB yields have yet to fully match up in prospective studies. Despite this, RB technology has rapidly evolved and has great potential for lung-cancer diagnosis and even treatment. In this article, we review the historical and present challenges of RB in order to compare three RB systems.
Artificial Minds
This paper explores the artistic possibilities of artificial intelligence, as well as its ability to act as a creative being through knowledge learned from the collective consciousness of human beings: whether this learned knowledge can be used by the AI to represent reality, and whether this can be problematic with regard to biases inherited from our own. The paper looks at the history of how far artificial intelligence has come within the creative artistic realm, examines the technical aspects of how exactly an AI is able to generate original art, and examines four artists who all collaborate with artificially intelligent computer systems in very diverse and unique ways, whether through video art, physical pencil drawings, or GAN-generated imagery, to create original works of art. On this basis, the paper investigates whether the resulting artworks can be considered creative productions, whether AI can be taught artistic skills, whether these artistic skills can be implemented in representations of reality, and whether the AI can potentially inherit human biases in the process.
Data-driven Grip Force Variation in Robot-Human Handovers
Handovers frequently occur in our social environments, making it imperative
for a collaborative robotic system to master the skill of handover. In this
work, we aim to investigate the relationship between the grip force variation
for a human giver and the sensed interaction force-torque in human-human
handovers, utilizing a data-driven approach. A Long Short-Term Memory (LSTM)
network was trained to use the interaction force-torque in a handover to
predict the human grip force variation in advance. Further, we propose to
utilize the trained network to cause human-like grip force variation for a
robotic giver.Comment: Contributed to "Advances in Close Proximity Human-Robot
Collaboration" Workshop in 2022 IEEE-RAS International Conference on Humanoid
Robots (Humanoids 2022)
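The core mapping the abstract describes, from a sequence of 6-axis interaction force-torque readings to a predicted grip-force variation, can be illustrated with a minimal hand-rolled LSTM cell in NumPy. The weights below are random placeholders and all sizes (hidden width, sequence length) are assumptions; the paper's actual trained network and data are not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(seq, W, U, b, hidden=8):
    """Run a single-layer LSTM over a (T, 6) force-torque sequence."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in seq:                      # one force-torque sample per timestep
        z = W @ x + U @ h + b          # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)    # input, forget, output gates + candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)     # cell-state update
        h = o * np.tanh(c)
    return h                           # final hidden state summarises the sequence

rng = np.random.default_rng(1)
hidden, n_in = 8, 6                    # 6 inputs = 3 forces + 3 torques
W = rng.normal(scale=0.1, size=(4 * hidden, n_in))
U = rng.normal(scale=0.1, size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
w_out = rng.normal(scale=0.1, size=hidden)   # linear read-out head

seq = rng.normal(size=(50, n_in))      # 50 timesteps of force-torque data
grip_force_delta = float(w_out @ lstm_forward(seq, W, U, b))
print(round(grip_force_delta, 3))
```

A practical version would train the gate weights and read-out by backpropagation on recorded human-human handover data, as the paper does with its LSTM network.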