Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models
Advanced Driver Assistance Systems (ADAS) have made driving safer over the
last decade. They prepare vehicles for unsafe road conditions and alert drivers
if they perform a dangerous maneuver. However, many accidents are unavoidable
because by the time drivers are alerted, it is already too late. Anticipating
maneuvers beforehand can alert drivers before they perform the maneuver and
also give ADAS more time to avoid or prepare for the danger.
In this work we anticipate driving maneuvers a few seconds before they occur.
For this purpose we equip a car with cameras and a computing device to capture
the driving context from both inside and outside of the car. We propose an
Autoregressive Input-Output HMM to model the contextual information along with
the maneuvers. We evaluate our approach on a diverse data set with 1180 miles
of natural freeway and city driving and show that we can anticipate maneuvers
3.5 seconds before they occur with over 80% F1-score in real time.
Comment: ICCV 2015, http://brain4cars.co
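To make the anticipation setup concrete, below is a minimal, purely illustrative sketch of maneuver anticipation as forward filtering in a discrete input-output HMM. It is not the paper's AIO-HMM: the state and context alphabets, the randomly initialized parameter tensors, and the confidence threshold are assumptions chosen for brevity; in the paper the model parameters are learned from the annotated driving data rather than sampled at random.

# Illustrative sketch: maneuver anticipation via a discrete input-output HMM
# forward filter. NOT the paper's AIO-HMM; sizes, parameters and threshold
# below are arbitrary assumptions for demonstration only.
import numpy as np

N_STATES = 5    # latent maneuvers, e.g. left/right turn, left/right lane change, straight
N_CONTEXT = 8   # assumed number of discretized context symbols (head pose, lane config, ...)

rng = np.random.default_rng(0)
# Transitions conditioned on the observed context symbol: T[u, i, j] = P(s_t = j | s_{t-1} = i, x_t = u)
T = rng.dirichlet(np.ones(N_STATES), size=(N_CONTEXT, N_STATES))
# Emission probabilities of the inside/outside observations given the latent state
E = rng.dirichlet(np.ones(N_CONTEXT), size=N_STATES)
prior = np.full(N_STATES, 1.0 / N_STATES)

def anticipate(context_seq, obs_seq, threshold=0.8):
    """Return the most likely maneuver once its filtered probability exceeds threshold."""
    belief = prior.copy()
    for u, o in zip(context_seq, obs_seq):
        belief = (belief @ T[u]) * E[:, o]   # predict with context-dependent transitions, correct with emission
        belief /= belief.sum()
        if belief.max() > threshold:
            return int(belief.argmax()), belief
    return None, belief                      # not confident enough to fire an alert

maneuver, belief = anticipate(rng.integers(0, N_CONTEXT, 20), rng.integers(0, N_CONTEXT, 20))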
RoboBrain: Large-scale knowledge engine for robots
In this paper we introduce a knowledge engine, which learns and shares knowledge representations, for robots to carry out a variety of tasks. Building such an engine brings with it the challenge of dealing with multiple data modalities including symbols, natural language, haptic senses, robot trajectories, visual features and many others. The knowledge stored in the engine comes from multiple sources including physical interactions that robots have while performing tasks (perception, planning and control), knowledge bases from the WWW and learned representations from leading robotics research groups. We discuss various technical aspects and associated challenges such as modeling the correctness of knowledge, inferring latent information and formulating different robotic tasks as queries to the knowledge engine. We describe the system architecture and how it supports different mechanisms for users and robots to interact with the engine. Finally, we demonstrate its use in three important research areas: grounding natural language, perception, and planning, which are the key building blocks for many robotic tasks. This knowledge engine is a collaborative effort and we call it RoboBrain.
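As a rough illustration of what "formulating robotic tasks as queries to the knowledge engine" can look like, the sketch below builds a toy labeled graph of facts and answers simple relation queries. The relation names, node labels and query helper are hypothetical and are not the RoboBrain API.

# Toy multi-modal knowledge graph and task query; illustrative only,
# not the RoboBrain system or its interface.
from collections import defaultdict

edges = defaultdict(list)          # directed, labeled edges: subject -> [(relation, object), ...]

def add_fact(subj, rel, obj):
    edges[subj].append((rel, obj))

# Facts could come from perception, web knowledge bases, or trajectory demonstrations.
add_fact("cup", "has_affordance", "graspable")
add_fact("cup", "typically_found_on", "table")
add_fact("grasp_cup", "uses_trajectory", "trajectory_042")
add_fact("make_coffee", "requires", "grasp_cup")

def query(subj, rel):
    """Return all objects linked to `subj` by relation `rel`."""
    return [obj for r, obj in edges[subj] if r == rel]

# A planner might decompose a task by querying the graph:
print(query("make_coffee", "requires"))     # ['grasp_cup']
print(query("cup", "has_affordance"))       # ['graspable']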
Similarity-Based Processing of Motion Capture Data
Motion capture technologies digitize human movements by tracking 3D positions of specific skeleton joints in time. Such spatio-temporal data have an enormous application potential in many fields, ranging from computer animation through security and sports to medicine, but their computerized processing is a difficult problem. The recorded data can be imprecise and voluminous, and the same movement action can be performed by various subjects in a number of alternatives that can vary in speed, timing or position in space. This requires employing completely different data-processing paradigms compared to traditional domains such as attributes, text or images. The objective of this tutorial is to explain fundamental principles and technologies designed for similarity comparison, searching, subsequence matching, classification and action detection in motion capture data. Specifically, we emphasize the importance of similarity needed to express the degree of accordance between pairs of motion sequences and also discuss the machine-learning approaches able to automatically acquire content-descriptive movement features. We explain how the concept of similarity together with the learned features can be employed for searching for similar occurrences of actions of interest within a long motion sequence. Assuming a user-provided categorization of example motions, we discuss techniques able to recognize types of specific movement actions and detect such kinds of actions within continuous motion sequences. Selected operations will be demonstrated by online web applications.
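As one concrete example of a motion-similarity measure, the sketch below computes dynamic time warping (DTW) over per-frame joint-position vectors; DTW is a widely used baseline that tolerates the speed and timing variations mentioned above. The frame dimensionality and the Euclidean frame distance are illustrative assumptions, not the learned content-descriptive features discussed in the tutorial.

# Minimal DTW similarity between two motion sequences; illustrative baseline only.
import numpy as np

def dtw_distance(a, b):
    """DTW cost between motion sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])      # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],         # insertion
                                 cost[i, j - 1],         # deletion
                                 cost[i - 1, j - 1])     # match
    return cost[n, m]

# Example: two synthetic sequences with 31 joints x 3 coordinates per frame,
# one time-stretched; DTW aligns them despite the different speeds.
rng = np.random.default_rng(1)
seq_fast = rng.normal(size=(60, 93))
seq_slow = np.repeat(seq_fast, 2, axis=0)[:100]
print(dtw_distance(seq_fast, seq_slow))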
Optimum imaging strategies for advanced prostate cancer: ASCO guideline
PURPOSE Provide evidence- and expert-based recommendations for optimal use of imaging in advanced prostate cancer. Due to increases in research and utilization of novel imaging for advanced prostate cancer, this guideline is intended to outline the techniques available and provide recommendations on appropriate use of imaging for specified patient subgroups. METHODS An Expert Panel was convened with members from ASCO and the Society of Abdominal Radiology, American College of Radiology, Society of Nuclear Medicine and Molecular Imaging, American Urological Association, American Society for Radiation Oncology, and Society of Urologic Oncology to conduct a systematic review of the literature and develop an evidence-based guideline on the optimal use of imaging for advanced prostate cancer. Representative index cases of various prostate cancer disease states are presented, including suspected high-risk disease, newly diagnosed treatment-naïve metastatic disease, suspected recurrent disease after local treatment, and progressive disease while undergoing systemic treatment. A systematic review of the literature from 2013 to August 2018 identified fully published English-language systematic reviews with or without meta-analyses, reports of rigorously conducted phase III randomized controlled trials that compared ≥ 2 imaging modalities, and noncomparative studies that reported on the efficacy of a single imaging modality. RESULTS A total of 35 studies met inclusion criteria and form the evidence base, including 17 systematic reviews with or without meta-analysis and 18 primary research articles. RECOMMENDATIONS One or more of these imaging modalities should be used for patients with advanced prostate cancer: conventional imaging (defined as computed tomography [CT], bone scan, and/or prostate magnetic resonance imaging [MRI]) and/or next-generation imaging (NGI; positron emission tomography [PET], PET/CT, PET/MRI, or whole-body MRI), according to the clinical scenario.