Proceedings Work-In-Progress Session of the 13th Real-Time and Embedded Technology and Applications Symposium
The Work-In-Progress session of the 13th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS'07) presents papers describing contributions to both the state of the art and the state of practice in the broad field of real-time and embedded systems. The 17 accepted papers were selected from 19 submissions. These proceedings are also available as Washington University in St. Louis Technical Report WUCSE-2007-17, at http://www.cse.seas.wustl.edu/Research/FileDownload.asp?733. Special thanks go to the General Chairs, Steve Goddard and Steve Liu, and the Program Chairs, Scott Brandt and Frank Mueller, for their support and guidance.
Challenges in Integrating IoT in Smart Home
Wireless devices have become a major part of the smart home industry. Almost every smart home company has its own wireless solutions and cloud services, so customers can normally monitor and control smart devices only through the applications or platforms those companies provide. This causes inconvenience and problems when many smart devices are involved. In my master's project, I implemented two smart home IoT applications, moving from a single-function IoT application to a more complicated smart home system, and many challenges and problems appeared along the way. This article focuses on the challenges of integrating IoT in a smart home.
Image-Graph-Image Translation via Auto-Encoding
This work presents the first convolutional neural network that learns an
image-to-graph translation task without needing external supervision. Obtaining
graph representations of image content, where objects are represented as nodes
and their relationships as edges, is an important task in scene understanding.
Current approaches follow a fully-supervised approach thereby requiring
meticulous annotations. To overcome this, we are the first to present a
self-supervised approach based on a fully-differentiable auto-encoder in which
the bottleneck encodes the graph's nodes and edges. This self-supervised
approach can currently encode simple line drawings into graphs and obtains
comparable results to a fully-supervised baseline in terms of F1 score on
triplet matching. Besides these promising results, we provide several
directions for future research on how our approach can be extended to cover
more complex imagery.
Semantic Foreground Inpainting from Weak Supervision
Semantic scene understanding is an essential task for self-driving vehicles
and mobile robots. In our work, we aim to estimate a semantic segmentation map,
in which the foreground objects are removed and semantically inpainted with
background classes, from a single RGB image. This semantic foreground
inpainting task is performed by a single-stage convolutional neural network
(CNN) that contains our novel max-pooling as inpainting (MPI) module, which is
trained with weak supervision, i.e., it does not require manual background
annotations for the foreground regions to be inpainted. Our approach is
inherently more efficient than the previous two-stage state-of-the-art method,
and outperforms it by a margin of 3% IoU for the inpainted foreground regions
on Cityscapes. The performance margin increases to 6% IoU, when tested on the
unseen KITTI dataset. The code and the manually annotated datasets for testing
are shared with the research community at
https://github.com/Chenyang-Lu/semantic-foreground-inpainting.
Comment: RA-L and ICRA'2
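The core idea behind max-pooling as inpainting, replacing masked-out foreground features with values propagated inward from the surrounding background, can be illustrated with a minimal NumPy sketch. This is a simplified illustration under assumptions, not the paper's actual module: the function name `mpi_fill`, the fixed 3x3 window, and the iteration count are all hypothetical.

```python
import numpy as np

def mpi_fill(features, fg_mask, iters=10):
    """Hypothetical sketch of max-pooling as inpainting.

    features: float array of shape (C, H, W)
    fg_mask:  bool array of shape (H, W), True at foreground pixels

    Foreground positions are set to -inf, then a 3x3, stride-1 max pool
    is applied repeatedly so that surrounding background values
    propagate inward until the masked region is filled.
    """
    f = np.where(fg_mask[None], -np.inf, features).astype(float)
    for _ in range(iters):
        # Pad with -inf so border pixels pool only over valid neighbors.
        padded = np.pad(f, ((0, 0), (1, 1), (1, 1)),
                        constant_values=-np.inf)
        # 3x3 max pool with stride 1, built from the 9 shifted views.
        stacked = np.stack([padded[:, dy:dy + f.shape[1], dx:dx + f.shape[2]]
                            for dy in range(3) for dx in range(3)])
        pooled = stacked.max(axis=0)
        # Only still-masked (-inf) cells are filled; background is untouched.
        f = np.where(np.isneginf(f), pooled, f)
    return f
```

Each iteration lets background values advance one pixel into the masked region, so `iters` must be at least the radius of the widest foreground object for the fill to complete.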