182 research outputs found
Design and implementation of biomimetic robotic fish. Hongan Wang.
The study of biomimetic robotic fish has received growing research interest in the past several years. This thesis describes the development and testing of a novel mechanical design for a biomimetic robotic fish. The robotic fish has a structure that uses an oscillating caudal fin and a pair of pectoral fins to generate fish-like swimming motion. This design enables the robotic fish to swim in two modes, namely Body/Caudal Fin (BCF) and Median/Paired Fin (MPF). To combine the BCF mode with the MPF mode, the robotic fish employs a flexible posterior body, an oscillating foil actuated by three servomotors, and a pair of pectoral fins individually driven by four servomotors. Effective servo motions and swimming gaits are then proposed to control its swimming behaviour, achieving fish-like forward, backward, and turning motions. An experimental setup for the robotic fish was implemented using machine-vision-based position and velocity measurement. The experimental results show that the robotic fish performed well in terms of manoeuvrability and cruising speed. Based on the experimental data, a low-order dynamic model is proposed and identified. Together, these results provide an experimental framework for the development of new modelling and control techniques for biomimetic robotic fish.
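A common way to drive a multi-joint caudal fin like the one described above is to give each servomotor a sinusoidal command with a fixed phase lag behind the previous joint, approximating a travelling body wave. The sketch below is an illustrative assumption, not the thesis's actual gait controller; the joint count, amplitude, and phase lag are hypothetical parameters.

```python
import math

def caudal_joint_angles(t, num_joints=3, freq=1.0, amp_deg=20.0,
                        phase_lag=math.pi / 3):
    """Joint angles (degrees) at time t for a travelling-wave caudal gait.

    Each servo oscillates sinusoidally; joint i lags joint i-1 by a fixed
    phase, so the body forms a rearward-travelling wave (BCF-style gait).
    All parameters here are illustrative, not from the thesis.
    """
    return [amp_deg * math.sin(2 * math.pi * freq * t - i * phase_lag)
            for i in range(num_joints)]
```

Turning can then be sketched by adding an angle bias to each joint, and the pectoral fins (MPF mode) would receive their own oscillation commands in the same style.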
Modeling Sketching Primitives to Support Freehand Drawing Based on Context Awareness
Freehand drawing is an easy and intuitive method for externalizing thought. Sketch-based interfaces, however, lack support for natural sketching with drawing cues such as overlapping, overlooping, and hatching, which occur frequently with physical pen and paper. In this paper, we analyze characteristics of drawing cues in sketch-based interfaces and describe the different types of sketching primitives. We present an improved sketch information model whose aim is to represent and record design thinking during the freehand drawing process while accommodating individuality and diversity. We also develop a context-based interaction model that can guide and support new sketch-based interface development; new applications with different context contents can be easily derived from it and developed further. Our approach supports the tasks that are common across applications, requiring the designer to provide support only for application-specific tasks, and it is applicable to modeling various sketching interfaces and applications. Finally, we illustrate the general operations of the system with examples from different applications.
Distributed Utilization Control for Real-time Clusters with Load Balancing
Recent years have seen rapid growth of online services that rely on large-scale server clusters to handle high volumes of requests. Such clusters must adaptively control the CPU utilization of many processors in order to maintain desired soft real-time performance and prevent system overload in the face of unpredictable workloads. This paper presents DUC-LB, a novel distributed utilization control algorithm for cluster-based soft real-time applications. Compared with earlier work on utilization control, a distinguishing feature of DUC-LB is its ability to handle the system dynamics caused by load balancing, a common and essential component of most clusters today. Simulation results and control-theoretic analysis demonstrate that DUC-LB provides robust utilization control and effective load balancing in large-scale clusters.
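Utilization control of the kind the abstract describes is typically framed as a feedback loop: each processor measures its CPU utilization and adjusts task invocation rates to track a set point below overload. The sketch below shows that general idea with a simple integral controller; it is not the DUC-LB algorithm, and the set point and gain values are hypothetical.

```python
class UtilizationController:
    """Per-processor feedback controller (illustrative, not DUC-LB).

    Adjusts a normalized task invocation rate so that measured CPU
    utilization tracks a set point (e.g. 0.7), leaving headroom
    against overload under unpredictable workloads.
    """

    def __init__(self, set_point=0.7, gain=0.5):
        self.set_point = set_point  # target CPU utilization
        self.gain = gain            # integral gain (hypothetical value)
        self.rate = 1.0             # normalized task rate

    def step(self, measured_util):
        """One control period: integrate the error into the task rate."""
        error = self.set_point - measured_util
        self.rate = max(0.1, self.rate + self.gain * error)
        return self.rate
```

A distributed scheme such as the one in the paper additionally has to account for tasks migrating between processors under load balancing, which perturbs each local loop's measured utilization.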
Novel-view Synthesis and Pose Estimation for Hand-Object Interaction from Sparse Views
Hand-object interaction understanding and the largely unaddressed task of
novel-view synthesis are highly desirable for immersive communication, yet
challenging due to the severe deformation of the hand and heavy occlusions
between hand and object. In this paper, we propose a neural rendering and
pose estimation system for hand-object interaction from sparse views, which
can also enable 3D hand-object interaction editing. We draw inspiration from
recent scene understanding work showing that a scene-specific model built
beforehand can significantly improve and unblock vision tasks, especially
when inputs are sparse, and extend this idea to the dynamic hand-object
interaction scenario, solving the problem in two stages. At the offline
stage, we learn the shape and appearance priors of hands and objects
separately with a neural representation. During the online stage, we design
a rendering-based joint model-fitting framework to understand the dynamic
hand-object interaction using the pre-built hand and object models together
with interaction priors, which overcomes penetration and separation issues
between hand and object and also enables novel-view synthesis. To obtain
stable contact throughout a hand-object interaction sequence, we propose a
stable contact loss that keeps the contact region temporally consistent.
Experiments demonstrate that our method outperforms state-of-the-art
methods. Code and dataset are available at the project webpage:
https://iscas3dv.github.io/HO-NeRF
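A "stable contact" objective of the kind the abstract mentions can be sketched as two penalties on hand-to-object surface distances: one against penetration, and one against the contact region changing between consecutive frames. This is a hypothetical illustration of the concept, not the paper's actual loss; the distance inputs, threshold, and weighting are all assumptions.

```python
import numpy as np

def stable_contact_loss(dists_t, dists_t1, contact_thresh=0.005):
    """Illustrative stable-contact objective (not the HO-NeRF loss).

    dists_t, dists_t1: signed hand-to-object surface distances (meters)
    for corresponding hand points at consecutive frames; negative values
    indicate penetration.

    Penalizes (a) mean penetration depth and (b) the fraction of points
    whose contact state (distance below a threshold) flips between the
    two frames, encouraging a temporally consistent contact region.
    """
    contact_t = dists_t < contact_thresh
    contact_t1 = dists_t1 < contact_thresh
    penetration = np.clip(-dists_t, 0.0, None).mean()
    instability = np.mean(contact_t != contact_t1)
    return penetration + instability
```

In a real optimization such a term would be differentiable (e.g. using soft contact indicators rather than hard thresholds) and summed over the whole sequence.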
SpeechMirror: A Multimodal Visual Analytics System for Personalized Reflection of Online Public Speaking Effectiveness
As communication increasingly takes place virtually, the ability to
present well online is becoming an indispensable skill. Online speakers are
facing unique challenges in engaging with remote audiences. However, there has
been a lack of evidence-based analytical systems for people to comprehensively
evaluate online speeches and further discover possibilities for improvement.
This paper introduces SpeechMirror, a visual analytics system facilitating
reflection on a speech based on insights from a collection of online speeches.
The system estimates the impact of different speech techniques on
effectiveness and applies these estimates to a given speech, making users
aware of how well each technique performs. A similarity recommendation
approach based on speech factors
or script content supports guided exploration to expand knowledge of
presentation evidence and accelerate the discovery of speech delivery
possibilities. SpeechMirror provides intuitive visualizations and interactions
for users to understand speech factors. Among them, SpeechTwin, a novel
multimodal visual summary of speech, supports rapid understanding of critical
speech factors and comparison of different speech samples, and SpeechPlayer
augments the speech video by integrating visualization of the speaker's body
language with interaction, for focused analysis. The system utilizes
visualizations suited to the distinct nature of different speech factors for
user comprehension. The proposed system and visualization techniques were
evaluated with domain experts and amateurs, demonstrating usability for users
with low visualization literacy and its efficacy in assisting users to develop
insights for potential improvement.
Comment: Main paper (11 pages, 6 figures) and supplemental document (11
pages, 11 figures). Accepted by VIS 202
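The similarity recommendation the abstract describes can be sketched as nearest-neighbor retrieval over per-speech feature vectors. The snippet below is a generic cosine-similarity illustration under that assumption, not SpeechMirror's actual method; the corpus structure and feature vectors are hypothetical.

```python
import math

def recommend_similar(query_vec, corpus, k=3):
    """Rank speeches by cosine similarity of their speech-factor vectors.

    Illustrative sketch (not SpeechMirror's implementation). `corpus`
    maps a speech name to its feature vector; returns the k most
    similar speech names.
    """
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    ranked = sorted(corpus.items(),
                    key=lambda item: cos(query_vec, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

The same retrieval scheme could use script-content embeddings in place of speech-factor vectors, matching the two recommendation modes the abstract mentions.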