
    Development and Characteristics of a Highly Biomimetic Robotic Shoulder Through Bionics-Inspired Optimization

    This paper critically analyzes conventional and biomimetic robotic arms, underscoring the trade-offs between size, motion range, and load capacity in current biomimetic models. By delving into the human shoulder's mechanical intelligence, particularly the glenohumeral joint's intricate features such as its unique ball-and-socket structure and self-locking mechanism, we pinpoint innovations that bolster both stability and mobility while maintaining compactness. To substantiate these insights, we present a groundbreaking biomimetic robotic glenohumeral joint that authentically mirrors human musculoskeletal elements, from ligaments to tendons, integrating the biological joint's mechanical intelligence. Our exhaustive simulations and tests reveal enhanced flexibility and load capacity for the robotic joint. The advanced robotic arm demonstrates notable capabilities, including a significant range of motion, a 4 kg payload capacity, and the ability to exert over 1.5 Nm of torque. This study not only confirms the human shoulder joint's mechanical innovations but also introduces a pioneering design for a next-generation biomimetic robotic arm, setting a new benchmark in robotic technology.

    Enhancing the Performance of a Biomimetic Robotic Elbow-and-Forearm System Through Bionics-Inspired Optimization

    This paper delineates the formulation and verification of an innovative robotic forearm and elbow design, mirroring the intricate biomechanics of human skeletal and ligament systems. Conventional robotic models often undervalue the substantial function of soft tissues, leading to a compromise between compactness, safety, stability, and range of motion. In contrast, this study proposes a holistic replication of biological joints, encompassing bones, cartilage, ligaments, and tendons, culminating in a biomimetic robot. The research underscores the compact and stable structure of the human forearm, attributable to a tri-bone framework and diverse soft tissues. The methodology involves exhaustive examinations of human anatomy, succeeded by a theoretical exploration of the contribution of soft tissues to the stability of the prototype. The evaluation results unveil remarkable parallels between the range of motion of the robotic joints and their human counterparts. The robotic elbow emulates 98.8% of the biological elbow's range of motion, with high torque capacities of 11.25 Nm (extension) and 24 Nm (flexion). Similarly, the robotic forearm achieves 58.6% of the human forearm's rotational range, generating substantial output torques of 14 Nm (pronation) and 7.8 Nm (supination). Moreover, the prototype exhibits significant load-bearing abilities, resisting a 5 kg dumbbell load without substantial displacement. It demonstrates a payload capacity exceeding 4 kg and rapid action capabilities, such as lifting a 2 kg dumbbell at a frequency of 0.74 Hz and striking a ping-pong ball at an end-effector speed of 3.2 m/s. This research underscores that a detailed anatomical study can address existing robotic design obstacles, optimize performance and anthropomorphic resemblance, and reaffirm traditional anatomical principles.
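    As a rough sanity check of the reported figures, the static torque needed to hold the 5 kg dumbbell can be compared against the 24 Nm flexion capacity; the sketch below assumes a hypothetical 0.30 m moment arm, which is not stated in the abstract.

```python
# Back-of-the-envelope static torque check for the elbow figures above.
# Assumption: the dumbbell acts at a hypothetical 0.30 m moment arm
# (not given in the abstract); g = 9.81 m/s^2.

G = 9.81              # gravitational acceleration, m/s^2
MOMENT_ARM_M = 0.30   # assumed distance from elbow axis to the load

def holding_torque(mass_kg: float, arm_m: float = MOMENT_ARM_M) -> float:
    """Static torque (Nm) needed to hold a mass at the given moment arm."""
    return mass_kg * G * arm_m

required = holding_torque(5.0)   # 5 kg dumbbell load from the abstract
reported_flexion = 24.0          # Nm, reported flexion torque capacity
print(f"required ~{required:.1f} Nm vs. reported {reported_flexion} Nm")
# ~14.7 Nm < 24 Nm, so the reported capacity leaves headroom under this geometry.
```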

    Leveraging Foundation models for Unsupervised Audio-Visual Segmentation

    Audio-Visual Segmentation (AVS) aims to precisely outline audible objects in a visual scene at the pixel level. Existing AVS methods require fine-grained annotations of audio-mask pairs in a supervised learning fashion. This limits their scalability, since acquiring such cross-modality pixel-level labels is time-consuming and tedious. To overcome this obstacle, in this work we introduce unsupervised audio-visual segmentation with no need for task-specific data annotations and model training. To tackle this newly proposed problem, we formulate a novel Cross-Modality Semantic Filtering (CMSF) approach to accurately associate the underlying audio-mask pairs by leveraging off-the-shelf multi-modal foundation models (e.g., detection [1], open-world segmentation [2] and multi-modal alignment [3]). Guiding the proposal generation by either audio or visual cues, we design two training-free variants: AT-GDINO-SAM and OWOD-BIND. Extensive experiments on the AVS-Bench dataset show that our unsupervised approach performs well in comparison to prior supervised counterparts across complex scenarios with multiple auditory objects. In particular, in situations where existing supervised AVS methods struggle with overlapping foreground objects, our models still excel at accurately segmenting overlapping auditory objects. Our code will be publicly released.
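    The training-free pipeline can be pictured with the minimal sketch below: propose candidate masks from off-the-shelf models, then keep only those that agree with the audio. The wrapper functions (tag_sounding_objects, detect_boxes, segment_box, audio_visual_score) are hypothetical placeholders standing in for the detection, segmentation, and multi-modal alignment foundation models, not the paper's actual interfaces.

```python
# Minimal sketch of a training-free audio-visual segmentation pipeline in the
# spirit of CMSF: generate region proposals from audio/visual cues, then filter
# them by cross-modal agreement with the audio. All wrapper functions are
# hypothetical placeholders for off-the-shelf foundation models.
from typing import List, Tuple
import numpy as np

def tag_sounding_objects(audio: np.ndarray) -> List[str]:
    """Hypothetical audio tagger: returns text labels of sounding objects."""
    raise NotImplementedError

def detect_boxes(frame: np.ndarray, prompts: List[str]) -> List[Tuple[int, int, int, int]]:
    """Hypothetical open-vocabulary detector: text prompts -> candidate boxes."""
    raise NotImplementedError

def segment_box(frame: np.ndarray, box: Tuple[int, int, int, int]) -> np.ndarray:
    """Hypothetical promptable segmenter: box prompt -> binary mask."""
    raise NotImplementedError

def audio_visual_score(audio: np.ndarray, frame: np.ndarray, mask: np.ndarray) -> float:
    """Hypothetical multi-modal alignment score between audio and the masked region."""
    raise NotImplementedError

def unsupervised_avs(frame: np.ndarray, audio: np.ndarray, keep_threshold: float = 0.5):
    """Return masks whose content is semantically consistent with the audio."""
    prompts = tag_sounding_objects(audio)              # audio -> candidate labels
    proposals = detect_boxes(frame, prompts)           # labels -> candidate boxes
    masks = [segment_box(frame, b) for b in proposals] # boxes -> candidate masks
    # Cross-modality semantic filtering: keep masks that align with the audio.
    return [m for m in masks if audio_visual_score(audio, frame, m) >= keep_threshold]
```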

    Compliant actuators that mimic biological muscle performance with applications in a highly biomimetic robotic arm

    This paper endeavours to bridge the existing gap in muscular actuator design for ligament-skeletal-inspired robots, thereby fostering the evolution of these robotic systems. We introduce two novel compliant actuators, namely the Internal Torsion Spring Compliant Actuator (ICA) and the External Spring Compliant Actuator (ECA), and present a comparative analysis against the previously conceived Magnet Integrated Soft Actuator (MISA) through computational and experimental results. These actuators, employing a motor-tendon system, emulate biological muscle-like forms, enhancing artificial muscle technology. A robotic arm application inspired by the skeletal ligament system is presented. Experiments demonstrate satisfactory power in tasks like lifting dumbbells (peak power: 36 W), playing table tennis (end-effector speed: 3.2 m/s), and door opening, without compromising biomimetic aesthetics. Compared to other linear-stiffness series elastic actuators (SEAs), ECA and ICA exhibit high power-to-volume (361 x 10^3 W/m^3) and power-to-mass (111.6 W/kg) ratios respectively, endorsing the biomimetic design's promise for robotic development.
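    To put the reported ratios in perspective, the sketch below backs out the actuator mass and volume they would imply at the quoted 36 W peak power; the actual mass, volume, and the exact power figure used for the ratios are not stated in the abstract, so these numbers are illustrative only.

```python
# Illustrative derivation only: implied mass/volume from the reported ratios,
# assuming the same 36 W peak power underlies them (not confirmed in the abstract).
PEAK_POWER_W = 36.0                 # reported peak power while lifting dumbbells
POWER_TO_MASS_W_PER_KG = 111.6      # reported for ICA
POWER_TO_VOLUME_W_PER_M3 = 361e3    # reported for ECA (361 x 10^3 W/m^3)

implied_mass_kg = PEAK_POWER_W / POWER_TO_MASS_W_PER_KG           # ~0.32 kg
implied_volume_l = PEAK_POWER_W / POWER_TO_VOLUME_W_PER_M3 * 1e3  # ~0.10 L

print(f"implied actuator mass:   ~{implied_mass_kg:.2f} kg")
print(f"implied actuator volume: ~{implied_volume_l:.2f} L")
```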

    Lithium-Excess Research of Cathode Material Li2MnTiO4 for Lithium-Ion Batteries

    Lithium-excess and nano-sized Li2+xMn1−x/2TiO4 (x = 0, 0.2, 0.4) cathode materials were synthesized via a sol-gel method. X-ray diffraction (XRD) experiments indicate that the main phases of the as-obtained Li2.0MnTiO4 and of the lithium-excess materials are monoclinic and cubic, respectively. Scanning electron microscope (SEM) images show that the as-prepared particles are well distributed and the primary particles have an average size of about 20–30 nm. Further electrochemical tests reveal that the charge-discharge performance of the material improves remarkably as the lithium content increases. In particular, the first discharge capacity at a current of 30 mA g^-1 increases from 112.2 mAh g^-1 for Li2.0MnTiO4 to 187.5 mAh g^-1 for Li2.4Mn0.8TiO4. In addition, ex situ XRD experiments indicate that the monoclinic Li2MnTiO4 tends to transform to an amorphous state with the extraction of lithium ions, while the cubic Li2MnTiO4 phase shows better structural reversibility and stability.
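    The size of the reported improvement follows directly from the quoted capacities; a minimal arithmetic sketch:

```python
# Relative gain in first discharge capacity, using the values quoted above.
cap_base = 112.2    # mAh/g, Li2.0MnTiO4
cap_excess = 187.5  # mAh/g, Li2.4Mn0.8TiO4
gain_pct = (cap_excess - cap_base) / cap_base * 100
print(f"first discharge capacity gain: {gain_pct:.1f}%")  # ~67.1%
```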

    Recognize Any Regions

    Understanding the semantics of individual regions or patches within unconstrained images, such as in open-world object detection, represents a critical yet challenging task in computer vision. Building on the success of powerful image-level vision-language (ViL) foundation models like CLIP, recent efforts have sought to harness their capabilities by either training a contrastive model from scratch with an extensive collection of region-label pairs or aligning the outputs of a detection model with image-level representations of region proposals. Despite notable progress, these approaches are plagued by computationally intensive training requirements, susceptibility to data noise, and a deficiency of contextual information. To address these limitations, we explore the synergistic potential of off-the-shelf foundation models, leveraging their respective strengths in localization and semantics. We introduce a novel, generic, and efficient region recognition architecture, named RegionSpot, designed to integrate position-aware localization knowledge from a localization foundation model (e.g., SAM) with semantic information extracted from a ViL model (e.g., CLIP). To fully exploit pretrained knowledge while minimizing training overhead, we keep both foundation models frozen, focusing optimization efforts solely on a lightweight attention-based knowledge integration module. Through extensive experiments in the context of open-world object recognition, our RegionSpot demonstrates significant performance improvements over prior alternatives, while also providing substantial computational savings; for instance, our model can be trained on 3 million data samples in a single day using 8 V100 GPUs. Our model outperforms GLIP by 6.5% in mean average precision (mAP), with an even larger margin of 14.8% for more challenging and rare categories.
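    A minimal PyTorch-style sketch of a lightweight, attention-based integration module of the kind described above, with both foundation models kept frozen; the tensor dimensions, projection layers, and classification head are illustrative assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of a lightweight attention-based integration module that fuses
# frozen position-aware region tokens (e.g., from a localization model) with
# frozen vision-language patch features (e.g., from a ViL model). Dimensions
# and design details are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class RegionKnowledgeIntegrator(nn.Module):
    def __init__(self, region_dim: int = 256, vil_dim: int = 768,
                 hidden_dim: int = 512, num_heads: int = 8,
                 num_classes: int = 1000):  # num_classes is illustrative
        super().__init__()
        self.region_proj = nn.Linear(region_dim, hidden_dim)  # project region tokens
        self.vil_proj = nn.Linear(vil_dim, hidden_dim)        # project ViL patch features
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)  # per-region recognition head

    def forward(self, region_tokens: torch.Tensor, vil_features: torch.Tensor) -> torch.Tensor:
        # region_tokens: (B, R, region_dim) frozen tokens, one per region proposal
        # vil_features:  (B, N, vil_dim) frozen ViL patch features (kept detached)
        q = self.region_proj(region_tokens)
        kv = self.vil_proj(vil_features.detach())
        fused, _ = self.cross_attn(q, kv, kv)  # regions attend to semantic features
        return self.classifier(fused)          # per-region recognition logits

# Only this module is optimized; both foundation models stay frozen upstream.
```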

    IRF4 suppresses osteogenic differentiation of BM-MSCs by transcriptionally activating miR-636/DOCK9 axis

    Objectives: Osteoblasts are derived from Bone Marrow-derived Mesenchymal Stem Cells (BM-MSCs), which play an indispensable role in bone formation. In this study, the authors aim to investigate the role of IRF4 in the osteogenic differentiation of BM-MSCs and its potential molecular mechanism. Methods: The authors used lentivirus infection to overexpress IRF4 in BM-MSCs. The expression of IRF4 and osteogenesis-related genes was detected by qRT-PCR and western blot analysis. The osteogenic differentiation of BM-MSCs was evaluated by Alkaline Phosphatase (ALP) activity, Alizarin red staining, and ALP staining. Chromatin Immunoprecipitation (ChIP), dual-luciferase reporter assay and RNA immunoprecipitation assay were applied to confirm the regulatory mechanism between IRF4, miR-636 and DOCK9. Results: The authors found that IRF4 was down-regulated during the osteogenic differentiation of BM-MSCs, and that IRF4 overexpression decreased the osteogenic differentiation of BM-MSCs, specifically by reducing ALP activity and down-regulating osteogenic indicators, including OCN, OPN, Runx2 and Col1A1. Mechanistically, IRF4 activated microRNA-636 (miR-636) expression by binding to its promoter region, and Dedicator of Cytokinesis 9 (DOCK9) was identified as the target of miR-636 in BM-MSCs. Moreover, the impairment of osteogenic differentiation of BM-MSCs induced by IRF4 overexpression could be rescued by miR-636 inhibition. Conclusions: In summary, this paper proposes that IRF4/miR-636/DOCK9 may be considered as targets for the treatment of osteoporosis (OP).

    Self-supervised Video Representation Learning with Motion-Aware Masked Autoencoders

    Masked autoencoders (MAEs) have recently emerged as state-of-the-art self-supervised spatiotemporal representation learners. Inheriting from their image counterparts, however, existing video MAEs still focus largely on static appearance learning while remaining limited in learning dynamic temporal information, and are hence less effective for downstream video tasks. To resolve this drawback, in this work we present a motion-aware variant, MotionMAE. Apart from learning to reconstruct individual masked patches of video frames, our model is designed to additionally predict the corresponding motion structure information over time. This motion information is readily available as the temporal difference of nearby frames. As a result, our model can effectively extract both static appearance and dynamic motion, leading to superior spatiotemporal representation learning capability. Extensive experiments show that our MotionMAE significantly outperforms both supervised learning baselines and state-of-the-art MAE alternatives, under both domain-specific and domain-generic pretraining-then-finetuning settings. In particular, when using ViT-B as the backbone, our MotionMAE surpasses the prior art by a margin of 1.2% on Something-Something V2 and 3.2% on UCF101 in the domain-specific pretraining setting. Encouragingly, it also surpasses competing MAEs by a large margin of over 3% on the challenging video object segmentation task. The code is available at https://github.com/happy-hsy/MotionMAE.
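    A minimal sketch of how a motion target can be formed from the temporal difference of nearby frames, alongside the usual appearance (pixel) target; the patchification details below are assumptions for illustration and do not reproduce the released implementation.

```python
# Minimal sketch of building a motion-aware reconstruction target from the
# temporal difference of nearby frames, alongside the usual appearance target.
# Patchification details are assumptions, not the released MotionMAE code.
import torch

def patchify(frames: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """(B, T, C, H, W) -> (B, T * num_patches, patch*patch*C) patch tokens."""
    b, t, c, h, w = frames.shape
    x = frames.reshape(b, t, c, h // patch, patch, w // patch, patch)
    x = x.permute(0, 1, 3, 5, 4, 6, 2)  # B, T, H/p, W/p, p, p, C
    return x.reshape(b, t * (h // patch) * (w // patch), patch * patch * c)

def mae_targets(frames: torch.Tensor, patch: int = 16):
    """Return (appearance_target, motion_target) for a video clip.

    appearance_target: patchified raw frames (static content).
    motion_target:     patchified temporal difference of nearby frames.
    """
    appearance = patchify(frames, patch)
    motion = patchify(frames[:, 1:] - frames[:, :-1], patch)  # frame differences
    return appearance, motion

# Usage: a masked autoencoder would reconstruct both targets at masked positions,
# encouraging it to capture static appearance and dynamic motion.
clip = torch.randn(2, 8, 3, 224, 224)   # (batch, time, channels, H, W)
app_target, motion_target = mae_targets(clip)
```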