Quick-cast: A method for fast and precise scalable production of fluid-driven elastomeric soft actuators
Fluid-driven elastomeric actuators (FEAs) are among the most popular
actuators in the emerging field of soft robotics. Intrinsically compliant, with
a continuum of motion, large strokes, little friction, and a high
power-to-weight ratio, they are very similar to biological muscles and have enabled new
applications in automation, architecture, medicine, and human-robot
interaction. To foster future applications of FEAs, in this paper we present a
new manufacturing method for fast and precise scalable production of complex
FEAs of high quality (leak-free, single-body form, with <0.2 mm precision). The
method is based on 3D moulding and supports elastomers with a wide range of
viscosities, pot lives, and Young's moduli. We developed this process for two
different settings: one in laboratory conditions, for fast prototyping with
3D-printed moulds and multi-component liquid elastomers, and the other in an
industrial setting, with 3D moulds micromachined in metal and compression
moulding. We demonstrate these methods in the fabrication of
up to several tens of two-axis, three-chambered soft actuators, with two types
of chamber walls: cylindrical and corrugated. The actuators are then applied as
motion drivers in kinetic photovoltaic building envelopes.
End-to-End Learning of Representations for Asynchronous Event-Based Data
Event cameras are vision sensors that record asynchronous streams of
per-pixel brightness changes, referred to as "events". They have appealing
advantages over frame-based cameras for computer vision, including high
temporal resolution, high dynamic range, and no motion blur. Due to the sparse,
non-uniform spatiotemporal layout of the event signal, pattern recognition
algorithms typically aggregate events into a grid-based representation and
subsequently process it by a standard vision pipeline, e.g., Convolutional
Neural Network (CNN). In this work, we introduce a general framework to convert
event streams into grid-based representations through a sequence of
differentiable operations. Our framework comes with two main advantages: (i)
it allows learning the input event representation together with the
task-dedicated network in an end-to-end manner, and (ii) it lays out a taxonomy that unifies the
majority of extant event representations in the literature and identifies novel
ones. Empirically, we show that our approach to learning the event
representation end-to-end yields an improvement of approximately 12% on optical
flow estimation and object recognition over state-of-the-art methods.
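As an illustration of the grid-based conversion this abstract describes, the sketch below splats events into a spatio-temporal voxel grid using a fixed triangular temporal kernel; in the paper's framework that kernel would instead be a small learnable network, which is what makes the representation trainable end-to-end. The function and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Splat events (x, y, t, polarity) into a (num_bins, H, W) grid.

    A fixed triangular temporal kernel stands in for the learnable
    kernel described in the paper; swapping in a small MLP would make
    the whole conversion differentiable end-to-end.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]  # +1 / -1 polarity
    # Normalize timestamps onto the bin axis [0, num_bins - 1].
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    for b in range(num_bins):
        # Each event votes into its neighbouring temporal bins.
        w = np.maximum(0.0, 1.0 - np.abs(t - b)) * p
        np.add.at(grid[b], (y, x), w)
    return grid

# Usage: 1000 synthetic events on a 180x240 sensor.
ev = np.column_stack([
    np.random.randint(0, 240, 1000),      # x
    np.random.randint(0, 180, 1000),      # y
    np.sort(np.random.rand(1000)),        # timestamps
    np.random.choice([-1.0, 1.0], 1000),  # polarity
])
print(events_to_voxel_grid(ev, num_bins=5, height=180, width=240).shape)
```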
Fast YOLO: A Fast You Only Look Once System for Real-time Embedded Object Detection in Video
Object detection is considered one of the most challenging problems in the
field of computer vision, as it involves the combination of object
classification and object localization within a scene. Recently, deep neural
networks (DNNs) have been demonstrated to achieve superior object detection
performance compared to other approaches, with YOLOv2 (an improved You Only
Look Once model) being one of the state-of-the-art in DNN-based object
detection methods in terms of both speed and accuracy. Although YOLOv2 can
achieve real-time performance on a powerful GPU, it remains very challenging
to leverage this approach for real-time object detection in video on embedded
computing devices with limited computational power and memory. In this paper,
we propose a new framework called Fast YOLO, a
fast You Only Look Once framework which accelerates YOLOv2 to be able to
perform object detection in video on embedded devices in a real-time manner.
First, we leverage the evolutionary deep intelligence framework to evolve the
YOLOv2 network architecture and produce an optimized architecture (referred to
as O-YOLOv2 here) that has 2.8X fewer parameters with just a ~2% IOU drop. To
further reduce power consumption on embedded devices while maintaining
performance, a motion-adaptive inference method is introduced into the proposed
Fast YOLO framework to reduce the frequency of deep inference with O-YOLOv2
based on temporal motion characteristics. Experimental results show that the
proposed Fast YOLO framework can reduce the number of deep inferences by an
average of 38.13% and achieve an average speedup of ~3.3X for object detection
in video compared to the original YOLOv2, enabling Fast YOLO to run at an
average of ~18 FPS on an Nvidia Jetson TX1 embedded system.
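The motion-adaptive inference idea can be sketched roughly as follows. The detector callable, the mean-absolute-difference motion measure, and the threshold value are illustrative stand-ins, since the paper derives its gating decision from temporal motion characteristics rather than this exact test.

```python
import numpy as np

def motion_adaptive_detect(frames, detector, motion_thresh=0.02):
    """Run the expensive detector only when the scene has moved enough.

    `detector` is any frame -> detections callable (standing in for an
    O-YOLOv2 forward pass); the motion measure and threshold here are
    illustrative choices, not the paper's.
    """
    prev_gray, cached, results = None, [], []
    for frame in frames:
        gray = frame.mean(axis=2) / 255.0  # cheap luminance proxy
        if prev_gray is None or np.abs(gray - prev_gray).mean() > motion_thresh:
            cached = detector(frame)       # deep inference on "moving" frames
            prev_gray = gray
        results.append(cached)             # reuse detections on static frames
    return results

# Usage with a dummy detector on random video frames.
video = [np.random.randint(0, 255, (416, 416, 3), dtype=np.uint8)
         for _ in range(8)]
detections = motion_adaptive_detect(video, detector=lambda f: ["box"])
```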
From ‘hands up’ to ‘hands on’: harnessing the kinaesthetic potential of educational gaming
Traditional approaches to distance learning and the student learning journey have focused on closing the gap between the experience of off-campus students and their on-campus peers. While many initiatives have sought to embed a sense of community, create virtual learning environments and even build collaborative spaces for team-based assessment and presentations, they are limited by technological innovation in terms of the types of learning styles they support and develop. Mainstream gaming developments – such as the Xbox Kinect and Nintendo Wii – have a strong element of kinaesthetic learning, from early attempts to simulate impact, recoil, velocity and other environmental factors to more sophisticated movement-based games which create a sense of almost total immersion and allow untethered (in a technical sense) interaction with the games' objects, characters and other players. Likewise, gamification of learning has become a critical focus for the engagement of learners and for commercialisation, especially through products such as the Wii Fit.
As this technology matures, there are strong opportunities for universities to utilise gaming consoles to embed levels of kinaesthetic learning into the student experience – a learning style which has been largely neglected in the distance education sector. This paper will explore the potential impact of these technologies and broadly imagine the possibilities for future innovation in higher education.
Towards Social Autonomous Vehicles: Efficient Collision Avoidance Scheme Using Richardson's Arms Race Model
Background: Road collisions and casualties pose a serious threat to commuters
around the globe. Autonomous Vehicles (AVs) aim to make use of technology to
reduce road accidents. However, most research work in the context of collision
avoidance has addressed rear-end, front-end, and lateral collisions
separately, in less congested traffic with large inter-vehicular distances.
Purpose: The goal of this paper is to introduce the concept of a social agent
that interacts with other AVs in a social manner, as humans do, with the
capability of predicting intentions, i.e. mentalizing, and copying the actions
of others, i.e. mirroring. The proposed social agent is based on
human-brain-inspired mentalizing and mirroring capabilities and has been
modelled for collision detection and avoidance in congested urban road
traffic.
Method: We designed our social agent with mentalizing and mirroring
capabilities, utilizing the Exploratory Agent Based Modeling (EABM) level of
the Cognitive Agent Based Computing (CABC) framework proposed by Niazi and
Hussain.
Results: Our simulation and practical experiments reveal that by embedding
Richardson's arms race model within AVs, collisions can be avoided while
travelling on congested urban roads in flock-like topologies. The performance
of the proposed social agent has been compared at two different levels.
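For readers unfamiliar with Richardson's arms race model, it is a pair of coupled linear differential equations in which each party's effort grows in response to the other's and decays under its own cost. A minimal Euler-integration sketch follows; the coefficient values, and the reading of x and y as the mutual reaction efforts of an interacting vehicle pair, are illustrative assumptions, not the paper's calibration.

```python
# Richardson's arms race model:
#   dx/dt = a*y - m*x + g
#   dy/dt = b*x - n*y + h
# Here x and y are read as the reaction "efforts" of two interacting
# vehicles; all coefficient values are illustrative, not the paper's.

def richardson_step(x, y, dt=0.05, a=0.9, b=0.9, m=1.2, n=1.2, g=0.1, h=0.1):
    dx = a * y - m * x + g  # grows with the other's effort, decays with own cost
    dy = b * x - n * y + h
    return x + dt * dx, y + dt * dy

x, y = 1.0, 0.0
for _ in range(400):
    x, y = richardson_step(x, y)
# With m*n > a*b the efforts settle to a stable equilibrium instead of
# escalating (the standard stability condition of Richardson's model).
print(f"steady-state efforts: x={x:.3f}, y={y:.3f}")
```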