Meso-scale FDM material layout design strategies under manufacturability constraints and fracture conditions
In the manufacturability-driven design (MDD) perspective, manufacturability of the product or system is the most important of the design requirements. In addition to ensuring that complex designs (e.g., those produced by topology optimization) are manufacturable with a given process or process family, MDD also helps mechanical designers take advantage of unique process-material effects generated during manufacturing. One of the most recognizable examples of this comes from the scanning-type family of additive manufacturing (AM) processes; the most notable and familiar member of this family is the fused deposition modeling (FDM) or fused filament fabrication (FFF) process. This process works by selectively depositing uniform, approximately isotropic beads or elements of molten thermoplastic material (typically structural engineering plastics) in a series of pre-specified traces to build each layer of the part. Many interesting 2-D and 3-D mechanical design problems can be explored by designing the layout of these elements. The resulting structured, hierarchical material (which is both manufacturable and customized layer-by-layer within the limits of the process and material) can be defined as a manufacturing process-driven structured material (MPDSM). This dissertation explores several practical methods for designing these element layouts for 2-D and 3-D meso-scale mechanical problems, focusing ultimately on design-for-fracture. Three different fracture conditions are explored: (1) cases where a crack must be prevented or stopped, (2) cases where the crack must be encouraged or accelerated, and (3) cases where cracks must grow in a simple pre-determined pattern. Several new design tools and resources were developed to support the design of MPDSMs under fracture conditions, including a mapping method for the FDM manufacturability constraints, three major literature reviews, the collection, organization, and analysis of several large (qualitative and quantitative) multi-scale datasets on the fracture behavior of FDM-processed materials, some new experimental equipment, and the refinement of a fast and simple g-code generator based on commercially available software. The refined design method and rules were experimentally validated through a series of case studies (involving both design and physical testing of the designs) at the end of the dissertation. Finally, a simple design guide for practicing engineers who are experts in neither advanced solid mechanics nor process-tailored materials was developed from the results of this project.
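As a rough illustration of the element-layout-to-toolpath step mentioned above, the following is a minimal, hypothetical sketch that emits G-code for a raster-style bead layout over one rectangular FDM layer. The function name, default parameter values, and the simple extrusion model are illustrative assumptions, not the dissertation's g-code generator.

import math

def layer_gcode(width_mm, depth_mm, bead_width, layer_height,
                filament_diameter=1.75, z=0.2, feed=1800):
    """Emit G-code for a simple back-and-forth (raster) bead layout over a rectangle."""
    # The ratio of bead to filament cross-section converts deposited volume
    # into the length of filament the extruder must feed (the E axis).
    bead_area = bead_width * layer_height
    filament_area = math.pi * (filament_diameter / 2.0) ** 2
    lines = [f"G1 Z{z:.3f} F{feed}"]
    e = 0.0          # cumulative extrusion (mm of filament)
    y = 0.0
    direction = 1
    while y <= depth_mm + 1e-9:
        x_start, x_end = (0.0, width_mm) if direction > 0 else (width_mm, 0.0)
        lines.append(f"G0 X{x_start:.3f} Y{y:.3f}")         # travel to the start of the trace
        e += width_mm * bead_area / filament_area           # filament needed for one trace
        lines.append(f"G1 X{x_end:.3f} Y{y:.3f} E{e:.5f}")  # deposit one bead element
        y += bead_width                                      # step over by one bead width
        direction *= -1
    return "\n".join(lines)

if __name__ == "__main__":
    print(layer_gcode(width_mm=20, depth_mm=10, bead_width=0.48, layer_height=0.2))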
Towards Autonomous Selective Harvesting: A Review of Robot Perception, Robot Design, Motion Planning and Control
This paper provides an overview of the current state-of-the-art in selective
harvesting robots (SHRs) and their potential for addressing the challenges of
global food production. SHRs have the potential to increase productivity,
reduce labour costs, and minimise food waste by selectively harvesting only
ripe fruits and vegetables. The paper discusses the main components of SHRs,
including perception, grasping, cutting, motion planning, and control. It also
highlights the challenges in developing SHR technologies, particularly in the
areas of robot design, motion planning and control. The paper also discusses
the potential benefits of integrating AI and soft robots and data-driven
methods to enhance the performance and robustness of SHR systems. Finally, the
paper identifies several open research questions in the field and highlights
the need for further research and development efforts to advance SHR
technologies to meet the challenges of global food production. Overall, this
paper provides a starting point for researchers and practitioners interested in
developing SHRs and highlights the need for more research in this field.
Comment: Preprint; to appear in the Journal of Field Robotics.
The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive to date, and it
allows users, scholars, and entrepreneurs to gain an in-depth understanding of
the Metaverse ecosystem and to identify their opportunities and potential
contributions.
ShakingBot: Dynamic Manipulation for Bagging
Bag manipulation through robots is complex and challenging due to the
deformability of the bag. Based on a dynamic manipulation strategy, we propose a
new framework, ShakingBot, for the bagging tasks. ShakingBot utilizes a
perception module to identify the key region of the plastic bag from arbitrary
initial configurations. According to the segmentation, ShakingBot iteratively
executes a novel set of actions, including Bag Adjustment, Dual-arm Shaking,
and One-arm Holding, to open the bag. The dynamic action, Dual-arm Shaking, can
effectively open the bag without the need to account for the crumpled
configuration. Then, we insert the items and lift the bag for transport. We
perform our method on a dual-arm robot and achieve a success rate of 21/33 for
inserting at least one item across various initial bag configurations. In this
work, we demonstrate the performance of dynamic shaking actions compared to the
quasi-static manipulation in the bagging task. We also show that our method
generalizes to variations in the bag's size, pattern, and color.
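A minimal, hypothetical sketch of the action loop described above, cycling through Bag Adjustment, Dual-arm Shaking, and One-arm Holding until the bag is judged open. The perception and robot interfaces are stand-in stubs added only to make the sketch self-contained; this is not the authors' implementation.

import random

class _StubRobot:
    """Hypothetical robot interface used only to make the sketch runnable."""
    def bag_adjustment(self, grasps): pass
    def dual_arm_shaking(self, grasps): pass
    def one_arm_holding(self, grasps): pass

def segment_bag_key_region(image):
    # Stub perception: pretend we segmented the bag's graspable key region.
    return {"left_grasp": (0.10, 0.20), "right_grasp": (0.30, 0.20)}

def estimate_opening(image):
    # Stub metric for how open the bag mouth is (0 = closed, 1 = fully open).
    return random.random()

def open_bag(capture, robot, open_threshold=0.6, max_iters=5):
    for _ in range(max_iters):
        grasps = segment_bag_key_region(capture())
        robot.bag_adjustment(grasps)       # re-position the crumpled bag
        robot.dual_arm_shaking(grasps)     # dynamic shake to inflate the bag mouth
        if estimate_opening(capture()) >= open_threshold:
            robot.one_arm_holding(grasps)  # hold the mouth open for item insertion
            return True
    return False

if __name__ == "__main__":
    print(open_bag(capture=lambda: None, robot=_StubRobot()))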
Bounding Box Annotation with Visible Status
Training deep-learning-based vision systems requires the manual annotation of
a significant amount of data to optimize several parameters of the deep
convolutional neural networks. Such manual annotation is highly time-consuming
and labor-intensive. To reduce this burden, a previous study presented a fully
automated annotation approach that does not require any manual intervention.
The proposed method associates a visual marker with an object and captures it
in the same image. However, because the previous method relied on moving the
object within the capturing range using a fixed-point camera, the collected
image dataset was limited in terms of capturing viewpoints. To overcome this
limitation, this study presents a mobile application-based free-viewpoint
image-capturing method. With the proposed application, users can automatically
collect multi-view image datasets annotated with bounding boxes simply by
moving the camera. However, capturing images through sustained human
involvement is laborious and monotonous. Therefore, we propose gamified
application features that make the collection progress visible to the user. Our
experiments demonstrated that using the gamified mobile application for
bounding box annotation, with visible collection progress, can motivate users
to collect multi-view object image datasets with less mental workload and time
pressure and in a more enjoyable manner, leading to increased engagement.
Comment: 10 pages, 16 figures.
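A rough, hypothetical sketch of the marker-based automatic annotation idea: if the object's size and position relative to the fiducial marker are known, a 2D bounding box can be obtained by projecting the object's 3D extent into the image using the camera intrinsics and the estimated marker pose. The function name, geometry convention, and parameters are assumptions for illustration, not the paper's implementation.

import numpy as np

def bbox_from_marker_pose(K, R, t, object_dims, offset=(0.0, 0.0, 0.0)):
    """K: 3x3 camera intrinsics; R (3x3), t (3,): marker pose in the camera frame;
    object_dims: (w, h, d) of the object in metres, centred at `offset` in the marker frame."""
    w, h, d = object_dims
    # Eight corners of the object's 3-D box, expressed in the marker coordinate frame.
    corners = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]) + np.asarray(offset)
    cam_pts = R @ corners.T + np.asarray(t).reshape(3, 1)   # marker frame -> camera frame
    img_pts = K @ cam_pts                                    # pinhole projection
    img_pts = img_pts[:2] / img_pts[2]                       # normalise by depth
    x_min, y_min = img_pts.min(axis=1)
    x_max, y_max = img_pts.max(axis=1)
    return x_min, y_min, x_max, y_max                        # axis-aligned 2-D bounding box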
UniverSeg: Universal Medical Image Segmentation
While deep learning models have become the predominant method for medical
image segmentation, they are typically not capable of generalizing to unseen
segmentation tasks involving new anatomies, image modalities, or labels. Given
a new segmentation task, researchers generally have to train or fine-tune
models, which is time-consuming and poses a substantial barrier for clinical
researchers, who often lack the resources and expertise to train neural
networks. We present UniverSeg, a method for solving unseen medical
segmentation tasks without additional training. Given a query image and an example
set of image-label pairs that define a new segmentation task, UniverSeg employs
a new Cross-Block mechanism to produce accurate segmentation maps without the
need for additional training. To achieve generalization to new tasks, we have
gathered and standardized a collection of 53 open-access medical segmentation
datasets with over 22,000 scans, which we refer to as MegaMedical. We used this
collection to train UniverSeg on a diverse set of anatomies and imaging
modalities. We demonstrate that UniverSeg substantially outperforms several
related methods on unseen tasks, and thoroughly analyze and draw insights about
important aspects of the proposed system. The UniverSeg source code and model
weights are freely available at https://universeg.csail.mit.edu
Comment: Victor and Jose Javier contributed equally to this work. Project
Website: https://universeg.csail.mit.edu
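As a rough illustration of the query/support interaction described above (not the released UniverSeg code, which is available at the project website), a toy cross-interaction block might pair the query feature map with each support image-label feature map, mix them with a shared convolution, and pool over the support set so the result is invariant to support ordering. All module names and sizes here are assumptions.

import torch
import torch.nn as nn

class ToyCrossBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.query_update = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, query_feat, support_feats):
        # query_feat: (B, C, H, W); support_feats: (B, S, C, H, W) from image-label pairs.
        b, s, c, h, w = support_feats.shape
        q = query_feat.unsqueeze(1).expand(-1, s, -1, -1, -1)       # pair the query with each support entry
        pair = torch.cat([q, support_feats], dim=2).reshape(b * s, 2 * c, h, w)
        interaction = torch.relu(self.mix(pair)).reshape(b, s, c, h, w)
        pooled = interaction.mean(dim=1)                             # permutation-invariant pooling over the support set
        return torch.relu(self.query_update(pooled)), interaction   # updated query features, per-support features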
Concept Graph Neural Networks for Surgical Video Understanding
We constantly integrate our knowledge and understanding of the world to
enhance our interpretation of what we see.
This ability is crucial in application domains which entail reasoning about
multiple entities and concepts, such as AI-augmented surgery. In this paper, we
propose a novel way of integrating conceptual knowledge into temporal analysis
tasks via temporal concept graph networks. In the proposed networks, a global
knowledge graph is incorporated into the temporal analysis of surgical
instances, learning the meaning of concepts and relations as they apply to the
data. We demonstrate our results in surgical video data for tasks such as
verification of the critical view of safety, as well as estimation of the
Parkland grading scale. The results show that our method improves recognition
and detection on complex benchmarks and enables other analytic applications of
interest.
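A minimal, hypothetical sketch of the general idea of conditioning temporal video analysis on a global concept graph. The module names, the single message-passing step, and the attention-based fusion are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class ToyTemporalConceptGraphNet(nn.Module):
    def __init__(self, num_concepts, dim, num_classes):
        super().__init__()
        self.concept_emb = nn.Embedding(num_concepts, dim)
        self.gnn = nn.Linear(dim, dim)          # one round of graph message passing
        self.temporal = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, frame_feats, adj):
        # frame_feats: (B, T, dim) per-frame video features; adj: (N, N) concept relations.
        concepts = self.concept_emb.weight                       # (N, dim) concept embeddings
        concepts = torch.relu(self.gnn(adj @ concepts))          # propagate along graph relations
        attn = torch.softmax(frame_feats @ concepts.T, dim=-1)   # per-frame attention over concepts
        fused = frame_feats + attn @ concepts                    # inject concept knowledge into frames
        out, _ = self.temporal(fused)                            # temporal reasoning over the video
        return self.head(out)                                    # per-frame predictions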
Visual search patterns for multilingual word search puzzles, a pilot study
Word search puzzles are recognized as a valid word recognition task. Eye gaze patterns have been investigated during visual search and reading, but the word search puzzle requires both searching and word recognition. This paper discusses findings from an eye-tracking study of word search puzzles in three languages in which the participants had varying fluency. Results indicated that participants employ a search strategy that is somewhat dependent on language fluency and varies from a rigid, structured search pattern to a random search for the target word. The majority of gaze measurements are not significantly influenced by either word length or fluency in the presented language, although mean fixation durations are longer for shorter words.
ARA-net: an attention-aware retinal atrophy segmentation network coping with fundus images
Background: Accurately detecting and segmenting areas of retinal atrophy are paramount for early medical intervention in pathological myopia (PM). However, segmenting retinal atrophic areas based on a two-dimensional (2D) fundus image poses several challenges, such as blurred boundaries, irregular shapes, and size variation. To overcome these challenges, we propose an attention-aware retinal atrophy segmentation network (ARA-Net) to segment retinal atrophy areas from the 2D fundus image.
Methods: In particular, ARA-Net adopts a UNet-like strategy to perform the area segmentation. A skip self-attention connection (SSA) block, comprising a shortcut and a parallel polarized self-attention (PPSA) block, is proposed to deal with the challenges of blurred boundaries and irregular shapes of the retinal atrophic region. Further, we propose a multi-scale feature flow (MSFF) to address the size variation. We add the flow between the SSA connection blocks, allowing considerable semantic information to be captured so that retinal atrophy can be detected across a range of area sizes.
Results: The proposed method has been validated on the Pathological Myopia (PALM) dataset. Experimental results demonstrate that our method yields a high Dice coefficient (DICE) of 84.26%, Jaccard index (JAC) of 72.80%, and F1-score of 84.57%, which outperforms other methods significantly.
Conclusion: Our results demonstrate that ARA-Net is an effective and efficient approach for retinal atrophic area segmentation in PM.
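To illustrate the flavor of a skip connection augmented with parallel attention branches plus a shortcut, here is a heavily simplified, hypothetical stand-in for the SSA/PPSA idea (channel and spatial re-weighting in parallel). Names and structure are assumptions, not the authors' code.

import torch
import torch.nn as nn

class ToySkipSelfAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Channel-attention branch: squeeze spatial dims, re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        # Spatial-attention branch: squeeze channels, re-weight pixels.
        self.spatial_gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, skip_feat):
        channel_att = skip_feat * self.channel_gate(skip_feat)   # channel re-weighting
        spatial_att = skip_feat * self.spatial_gate(skip_feat)   # spatial re-weighting
        # Parallel attention branches combined with an identity shortcut.
        return skip_feat + channel_att + spatial_att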
Copy-paste data augmentation for domain transfer on traffic signs
City streets carry a lot of information that can be exploited to improve the quality of the services the citizens receive. For example, autonomous vehicles need to act according to all the elements near the vehicle itself, such as pedestrians, traffic signs, and other vehicles. It is also possible to use such information for smart city applications, for example to predict and analyze traffic or pedestrian flows.
Among all the objects that can be found in a street, traffic signs are very important because of the information they carry. This information can in fact be exploited both for autonomous driving and for smart city applications. Deep learning and, more generally, machine learning models, however, need huge quantities of data to learn. Even though modern models are very good at generalizing, the more samples the model has, the better it can generalize between different samples.
Creating these datasets organically, namely with real pictures, is a very tedious task because of the wide variety of signs available around the world and especially because of all the possible lighting and orientation conditions, and conditions in general, in which they can appear. In addition, it may not be easy to collect enough samples for all the possible traffic signs available, because some of them may be very rare to find.
Instead of collecting pictures manually, it is possible to exploit data augmentation techniques to create synthetic datasets containing the signs that are needed. Creating this data synthetically makes it possible to control the distribution and the conditions of the signs in the datasets, improving the quality and quantity of the training data that is going to be used. This thesis work is about using copy-paste data augmentation to create synthetic data for the traffic sign recognition task.
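A minimal sketch of the copy-paste augmentation idea under simple assumptions (an RGBA sign cut-out with a transparent background, and illustrative function and parameter names; this is not the thesis' pipeline):

import random
from PIL import Image

def paste_sign(background_path, sign_path, out_path, min_scale=0.05, max_scale=0.2):
    """Paste a cut-out traffic sign (RGBA, transparent background) onto a street image
    and return the axis-aligned bounding box for the synthetic annotation."""
    bg = Image.open(background_path).convert("RGB")
    sign = Image.open(sign_path).convert("RGBA")
    # Random scale relative to the background width, preserving the sign's aspect ratio.
    scale = random.uniform(min_scale, max_scale)
    w = int(bg.width * scale)
    h = int(sign.height * w / sign.width)
    sign = sign.resize((w, h))
    # Random placement fully inside the background image.
    x = random.randint(0, bg.width - w)
    y = random.randint(0, bg.height - h)
    bg.paste(sign, (x, y), mask=sign)        # the alpha channel masks the paste
    bg.save(out_path)
    return (x, y, x + w, y + h)              # bounding box of the pasted sign

Repeating this over many background images, sign templates, and random placements yields a controllable synthetic training set for the recognition task.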