Light-field displays project hundreds of micro-parallax views so that users can perceive 3D without wearing glasses. Transmitting all of these views would require enormous bandwidth, even with conventional per-view video compression. MPEG Immersive Video (MIV) follows a smarter strategy: it transmits only a few key views plus metadata, from which all the missing views are synthesized at the receiver. We developed (and will demonstrate) real-time Depth Image Based Rendering (DIBR) software that follows this approach, synthesizing all light-field micro-parallax views from a couple of RGBD input views.
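To illustrate the core idea behind DIBR view synthesis, the sketch below forward-warps a single RGBD view to a horizontally shifted virtual view using the pinhole disparity relation and a z-buffer for occlusions. This is a minimal toy example assuming a rectified, purely horizontal baseline; the function name and setup are illustrative, not the demonstrated software (a real-time implementation would run on the GPU and fill disocclusion holes):

```python
import numpy as np

def dibr_shift_view(rgb, depth, focal, baseline):
    """Forward-warp an RGBD image to a horizontally shifted virtual view.

    Each pixel moves by disparity = focal * baseline / depth; a z-buffer
    keeps the nearest contribution when pixels collide. Disoccluded
    regions are left as zeros (holes) in this toy sketch.
    """
    h, w, _ = rgb.shape
    out = np.zeros_like(rgb)
    zbuf = np.full((h, w), np.inf)
    disparity = focal * baseline / depth  # horizontal shift in pixels
    for y in range(h):
        for x in range(w):
            xt = int(round(x + disparity[y, x]))
            # Keep the pixel only if it lands in-frame and is nearer
            # than anything already warped to that target location.
            if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                zbuf[y, xt] = depth[y, x]
                out[y, xt] = rgb[y, x]
    return out
```

Sweeping the (hypothetical) `baseline` parameter over the display's view positions yields one synthesized image per micro-parallax view.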