Geometry-Based Next Frame Prediction from Monocular Video
We consider the problem of next frame prediction from video input. A
recurrent convolutional neural network is trained to predict depth from
monocular video input, which, along with the current video image and the camera
trajectory, can then be used to compute the next frame. Unlike prior next-frame
prediction approaches, we take advantage of the scene geometry and use the
predicted depth for generating the next frame prediction. Our approach can
produce rich next frame predictions which include depth information attached to
each pixel. Another novel aspect of our approach is that it predicts depth from
a sequence of images (e.g. in a video), rather than from a single still image.
We evaluate the proposed approach on the KITTI dataset, a standard dataset for
benchmarking tasks relevant to autonomous driving. The proposed method produces
results which are visually and numerically superior to existing methods that
directly predict the next frame. We show that the accuracy of depth prediction
improves as more prior frames are considered. Comment: To appear in the 2017 IEEE Intelligent Vehicles Symposium.
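The geometric step the abstract describes — using predicted per-pixel depth and the known camera motion to compute where each pixel lands in the next frame — can be sketched with a standard pinhole-camera reprojection. This is an illustrative sketch, not the paper's network: the intrinsics and camera motion below are invented values, and a full version would also apply the camera's rotation.

```python
def reproject_pixel(u, v, depth, fx, fy, cx, cy, t):
    """Back-project pixel (u, v) with its predicted depth to 3D, apply a
    translation-only camera motion t = (tx, ty, tz), and project the point
    into the next frame's image plane."""
    # Back-project to a 3D point in the current camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    # Apply the camera motion (translation only in this sketch).
    tx, ty, tz = t
    x, y, z = x - tx, y - ty, z - tz
    # Project back to pixel coordinates in the next frame.
    return fx * x / z + cx, fy * y / z + cy

# A pixel at the principal point, 10 m away, with the camera moving 1 m
# straight forward: the pixel stays at the principal point, while
# off-axis pixels flow outward (the familiar looming effect).
u2, v2 = reproject_pixel(320.0, 240.0, 10.0, 500.0, 500.0, 320.0, 240.0,
                         (0.0, 0.0, 1.0))
```

Warping every pixel of the current frame this way, with depth attached to each, is what lets the method produce "rich" next-frame predictions rather than directly regressing pixel values.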
Implementing DFU in Practice – Foreign-Language Subject Teaching at the German School San José (Costa Rica)
The following article deals with the concept for "Deutsch als Fachunterricht" (DFU, Content and Language Integrated Learning in German, CLILiG) successfully implemented at the German Humboldt-School in San José (Costa Rica) in 2009. It focuses on the concrete organisation of subjects taught in German with regard to content and method, on first promising approaches to classroom development and quality assurance, and on the integration of new media.
TensorFlow Estimators: Managing Simplicity vs. Flexibility in High-Level Machine Learning Frameworks
We present a framework for specifying, training, evaluating, and deploying
machine learning models. Our focus is on simplifying cutting edge machine
learning for practitioners in order to bring such technologies into production.
Recognizing the fast evolution of the field of deep learning, we make no
attempt to capture the design space of all possible model architectures in a
domain-specific language (DSL) or similar configuration language. We allow
users to write code to define their models, but provide abstractions that guide
developers to write models in ways conducive to productionization. We also
provide a unifying Estimator interface, making it possible to write downstream
infrastructure (e.g. distributed training, hyperparameter tuning) independent
of the model implementation. We balance the competing demands for flexibility
and simplicity by offering APIs at different levels of abstraction, making
common model architectures available out of the box, while providing a library
of utilities designed to speed up experimentation with model architectures. To
make out of the box models flexible and usable across a wide range of problems,
these canned Estimators are parameterized not only over traditional
hyperparameters, but also using feature columns, a declarative specification
describing how to interpret input data. We discuss our experience in using this
framework in research and production environments, and show the impact on
code health, maintainability, and development speed. Comment: 8 pages, appeared at KDD 2017, August 13-17, 2017, Halifax, NS, Canada.
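The core design point of the abstract — downstream infrastructure talks to one small, fixed interface while users supply the model as a plain function — can be shown as a toy re-implementation of the pattern. This sketch is schematic and is not the TensorFlow API; the class and the `constant_model_fn` example are invented for illustration.

```python
class Estimator:
    """Wraps a user-supplied model_fn behind a uniform interface, so that
    infrastructure (distributed training, hyperparameter tuning) can be
    written independently of the model implementation."""

    def __init__(self, model_fn, params=None):
        self.model_fn = model_fn      # user code defining the model
        self.params = params or {}    # e.g. traditional hyperparameters

    def train(self, input_fn, steps):
        # Generic driving loop: the Estimator never inspects the model.
        for _ in range(steps):
            features, labels = input_fn()
            self.model_fn("train", features, labels, self.params)
        return self

    def evaluate(self, input_fn):
        features, labels = input_fn()
        return self.model_fn("eval", features, labels, self.params)

# A trivial model_fn: "predicts" a constant and reports squared error.
def constant_model_fn(mode, features, labels, params):
    prediction = params["constant"]
    if mode == "eval":
        return {"mse": sum((y - prediction) ** 2 for y in labels) / len(labels)}

est = Estimator(constant_model_fn, params={"constant": 1.0})
est.train(lambda: ([0.0, 0.0], [1.0, 1.0]), steps=3)
metrics = est.evaluate(lambda: ([0.0, 0.0], [1.0, 1.0]))
```

In the real framework, canned Estimators additionally take feature columns — a declarative description of how to interpret raw input — in place of hand-written input parsing, which is what makes them reusable across problems.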
The implications of geopolitical, socioeconomic, and regulatory constraints on European bioenergy imports and associated greenhouse gas emissions to 2050
Modern sustainable bioenergy can contribute toward mid-century European energy decarbonization targets by replacing fossil fuels. Fulfilling this role would require access to increased volumes of bioenergy, with extra-EU imports projected to play an important part. Access to this resource on the international marketplace is not governed by Europe's economic competitiveness alone. This study investigates geopolitical, socioeconomic, and regulatory considerations that can influence Europe's bioenergy imports but that are so far underexplored. The effect of these constraints on European import volumes, sourcing regions, mitigation potential, and their implications for European and global emissions is projected to the year 2050 using a global integrated assessment model. The projections show that Europe can significantly increase imports from 1.5 EJ year^-1 in 2020 to 8.1 EJ year^-1 by 2050 whilst remaining compliant with the Renewable Energy Directive recast II (RED II) greenhouse gas (GHG) criteria. Under these conditions, bioenergy could provide annual GHG mitigation of 0.44 GtCO2eq. in 2050. However, achieving this would require a structural diversification of trading partners from the present. Furthermore, socioeconomic and logistical concerns may limit the feasibility of some of the projected major sourcing regions, including Africa and South America. Failure to overcome these challenges within supplying regions could limit European imports by 60%, reducing annual mitigation to 0.16 GtCO2eq. in 2050. From a global perspective, regions with a comparatively carbon-intense energy system offer an alternative destination for globally traded biomass that could increase the mitigative potential of bioenergy.
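The quoted figures can be sanity-checked with simple arithmetic. This is illustrative only: the reported numbers come from the integrated assessment model, not from linear scaling, which is why the scaled estimate below only approximates the 0.16 GtCO2eq. the study reports.

```python
imports_2050 = 8.1           # EJ/year, unconstrained 2050 projection
constraint_reduction = 0.60  # imports limited "by 60%" under the constraints
mitigation_2050 = 0.44       # GtCO2eq./year, unconstrained

# Imports remaining if the 60% reduction materialises: about 3.2 EJ/year.
constrained_imports = imports_2050 * (1 - constraint_reduction)

# If mitigation scaled in proportion to import volume it would fall to
# about 0.18 GtCO2eq./year, in the neighbourhood of the modelled 0.16.
scaled_mitigation = mitigation_2050 * (1 - constraint_reduction)
```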
Patient-cooperative control increases active participation of individuals with SCI during robot-aided gait training
ABSTRACT: BACKGROUND: Manual body weight supported treadmill training and robot-aided treadmill training are frequently used techniques for the gait rehabilitation of individuals after stroke and spinal cord injury. Current evidence suggests that robot-aided gait training may be improved by making robotic behavior more patient-cooperative. In this study, we have investigated the immediate effects of patient-cooperative versus non-cooperative robot-aided gait training on individuals with incomplete spinal cord injury (iSCI). METHODS: Eleven patients with iSCI participated in a single training session with the gait rehabilitation robot Lokomat. The patients were exposed to four different training modes in random order: During both non-cooperative position control and compliant impedance control, fixed timing of movements was provided. During two variants of the patient-cooperative path control approach, free timing of movements was enabled and the robot provided only spatial guidance. The two variants of the path control approach differed in the amount of additional support, which was either individually adjusted or exaggerated. Joint angles and torques of the robot as well as muscle activity and heart rate of the patients were recorded. Kinematic variability, interaction torques, heart rate and muscle activity were compared between the different conditions. RESULTS: Patients showed more spatial and temporal kinematic variability, reduced interaction torques, a higher increase of heart rate and more muscle activity in the patient-cooperative path control mode with individually adjusted support than in the non-cooperative position control mode. In the compliant impedance control mode, spatial kinematic variability was increased and interaction torques were reduced, but temporal kinematic variability, heart rate and muscle activity were not significantly higher than in the position control mode. 
CONCLUSIONS: Patient-cooperative robot-aided gait training with free timing of movements made individuals with iSCI participate more actively and with larger kinematic variability than non-cooperative, position-controlled robot-aided gait training.
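The difference between the control modes compared above can be sketched as three corrective-torque laws. This is a conceptual sketch, not the Lokomat controller: the gains and the one-dimensional "path" are invented for illustration.

```python
def position_control(q, q_ref, k=500.0):
    # Stiff tracking of a fixed, time-indexed reference trajectory: any
    # deviation meets a large corrective torque, so timing and spatial
    # freedom are both removed from the patient.
    return k * (q_ref - q)

def impedance_control(q, q_ref, k=50.0):
    # Compliant tracking: the same law with much lower stiffness, so the
    # patient can deviate spatially at the cost of a modest torque, but
    # the reference still advances on a fixed schedule.
    return k * (q_ref - q)

def path_control(q, path, k=50.0):
    # Spatial guidance only: the reference is the nearest point on the
    # spatial path, so the patient is free to move along the path at
    # their own pace ("free timing of movements").
    q_nearest = min(path, key=lambda p: abs(p - q))
    return k * (q_nearest - q)
```

The reported findings match this picture: path control leaves timing free (more temporal variability, more muscle activity), while impedance control relaxes only the spatial constraint.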
Status and Prospects of Top-Quark Physics
The top quark is the heaviest elementary particle observed to date. Its large
mass of about 173 GeV/c^2 makes the top quark act differently than other
elementary fermions, as it decays before it hadronises, passing its spin
information on to its decay products. In addition, the top quark plays an
important role in higher-order loop corrections to standard model processes,
which makes the top quark mass a crucial parameter for precision tests of the
electroweak theory. The top quark is also a powerful probe for new phenomena
beyond the standard model. During the time of discovery at the Tevatron in 1995
only a few properties of the top quark could be measured. In recent years,
since the start of Tevatron Run II, the field of top-quark physics has changed
and entered a precision era. This report summarises the latest measurements and
studies of top-quark properties and gives prospects for future measurements at
the Large Hadron Collider (LHC). Comment: 76 pages, 35 figures, submitted to Progress in Particle and Nuclear Physics.
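The claim that the top quark decays before it hadronises can be checked with an order-of-magnitude estimate: its lifetime hbar/Gamma_t is shorter than the hadronisation timescale ~hbar/Lambda_QCD. The width and Lambda_QCD values below are standard ballpark figures, not numbers taken from this report.

```python
HBAR_GEV_S = 6.582e-25   # hbar in GeV*s
GAMMA_TOP = 1.4          # approximate top-quark decay width in GeV
LAMBDA_QCD = 0.2         # QCD scale in GeV

# Top-quark lifetime: about 5e-25 s.
lifetime = HBAR_GEV_S / GAMMA_TOP

# Typical hadronisation timescale: about 3e-24 s.
hadronisation_time = HBAR_GEV_S / LAMBDA_QCD

# The top quark decays roughly an order of magnitude before it could
# hadronise, which is why it passes its spin on to its decay products.
decays_first = lifetime < hadronisation_time
```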
Measurement of the photon-jet production differential cross section in collisions at sqrt(s) = 1.96 TeV
We present measurements of the differential cross section dsigma/dpT_gamma
for the inclusive production of a photon in association with a b-quark jet for
photons with rapidities |y_gamma|< 1.0 and 30<pT_gamma <300 GeV, as well as for
photons with 1.5<|y_gamma|< 2.5 and 30< pT_gamma <200 GeV, where pT_gamma is
the photon transverse momentum. The b-quark jets are required to have pT>15 GeV
and rapidity |y_jet| < 1.5. The results are based on data corresponding to an
integrated luminosity of 8.7 fb^-1, recorded with the D0 detector at the
Fermilab Tevatron Collider at sqrt(s)=1.96 TeV. The measured cross
sections are compared with next-to-leading order perturbative QCD calculations
using different sets of parton distribution functions as well as to predictions
based on the kT-factorization QCD approach, and those from the Sherpa and
Pythia Monte Carlo event generators. Comment: 10 pages, 9 figures, submitted to Phys. Lett.
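Schematically, a bin-averaged differential cross section such as dsigma/dpT_gamma is the efficiency-corrected event yield divided by the integrated luminosity and the bin width. In this sketch the integrated luminosity of 8.7 fb^-1 is the value quoted above, but the event count, bin width, and efficiency are invented for illustration.

```python
def dsigma_dpt(n_events, luminosity_fb, bin_width_gev, efficiency):
    """Bin-averaged differential cross section in fb/GeV: the selected
    event yield, corrected for selection efficiency, divided by the
    integrated luminosity and the pT bin width."""
    return n_events / (efficiency * luminosity_fb * bin_width_gev)

# e.g. 870 selected events in a 10 GeV-wide pT_gamma bin at 50%
# efficiency, with 8.7 fb^-1 of integrated luminosity:
xsec = dsigma_dpt(870, 8.7, 10.0, 0.5)   # fb per GeV
```

A real measurement additionally unfolds detector resolution and subtracts backgrounds before the comparison to NLO QCD and generator predictions.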