    Estimation of depth fields suitable for video compression using 3-D structures and motion of objects

    Intensity prediction along motion trajectories removes temporal redundancy considerably in video compression algorithms. In three-dimensional (3-D) object-based video coding, both 3-D motion and depth values are required for temporal prediction. The required 3-D motion parameters for each object are found by the correspondence-based E-matrix method. The estimation of the correspondences—the two-dimensional (2-D) motion field—between the frames and the segmentation of the scene into objects are achieved simultaneously by minimizing a Gibbs energy. The depth field is estimated by jointly minimizing a defined distortion and bit-rate criterion using the 3-D motion parameters. The resulting depth field is efficient in the rate-distortion sense. Bit-rate values corresponding to the lossless encoding of the resultant depth fields are obtained using predictive coding; the prediction errors are encoded by a Lempel–Ziv algorithm. The results are satisfactory for real-life video scenes.
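    As a rough illustration of the lossless encoding step, the sketch below applies horizontal predictive coding to a depth map and compresses the prediction errors with zlib, whose DEFLATE coder belongs to the Lempel–Ziv family. This is a minimal stand-in, not the authors' implementation; the prediction scheme and all parameters are assumptions.

```python
import zlib
import numpy as np

def encode_depth_field(depth):
    """Lossless predictive coding: predict each sample from its left
    neighbor, then Lempel-Ziv-compress the prediction residuals."""
    d = depth.astype(np.int16)
    residuals = d.copy()
    residuals[:, 1:] -= d[:, :-1]           # horizontal prediction errors
    return zlib.compress(residuals.tobytes(), level=9)

def decode_depth_field(blob, shape):
    residuals = np.frombuffer(zlib.decompress(blob), dtype=np.int16).reshape(shape)
    return np.cumsum(residuals, axis=1, dtype=np.int16)   # undo the prediction

depth = (np.arange(176) // 8 * np.ones((144, 1))).astype(np.int16)  # toy depth map
blob = encode_depth_field(depth)
assert np.array_equal(decode_depth_field(blob, depth.shape), depth)
print(f"bit-rate: {8 * len(blob) / depth.size:.2f} bits/sample")
```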

    Motion estimation using optical flow field

    Over the last decade, many low-level vision algorithms have been devised for extracting depth from intensity images. Most of them assume a rigid observer whose translation and rotation are constant with respect to the space coordinates. When multiple objects move and/or objects change shape, these algorithms cannot be used. In this dissertation, we develop a new robust framework for the determination of dense 3-D position and motion fields from a stereo image sequence. The framework is based on the unified optical flow field (UOFF). In the UOFF approach, a four-frame mode is used to compute six dense 3-D position and velocity fields. Their accuracy depends on the accuracy of the optical flow field computation. The approach can estimate rigid and/or nonrigid motion as well as observer and/or object motion. Here, a novel approach to optical flow field computation is developed, named the correlation-feedback approach. It has three features that distinguish it from existing approaches: feedback, a rubber window, and a special refinement. With these three features, error is reduced, boundaries are preserved, subpixel estimation accuracy is increased, and the system is robust. Convergence of the algorithm is proved in general. Since the UOFF is computed pixel by pixel, it is sensitive to noise or uncertainty at each pixel. In order to improve its performance, we applied two Kalman filters. Our analysis indicates that different image areas need different convergence rates; for instance, areas along boundaries have a faster convergence rate than interior areas. The first Kalman filter is developed to preserve moving boundaries in optical flow determination by applying the needed nonhomogeneous iterations. The second Kalman filter is devised to compute 3-D motion and structure based on a stereo image sequence. Since multi-object motion is allowed, newly visible areas may be exposed in images. How to detect and handle these newly visible areas is addressed. The system and measurement noise covariance matrices, Q and R, in the two Kalman filters are analyzed in detail. Numerous experiments demonstrate the efficiency of our approach.
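    The dissertation's filters are tied to the UOFF formulation, but the underlying mechanism can be sketched generically. The toy update below runs one scalar Kalman step per pixel of a flow field; the spatially varying measurement noise R (an assumption of this sketch) gives boundary pixels a larger gain and hence the faster convergence the analysis calls for.

```python
import numpy as np

def kalman_flow_update(flow_est, flow_var, flow_meas, Q, R):
    """One scalar Kalman step per pixel: fuse the current flow estimate
    (variance flow_var) with a new measurement (variance R); Q is the
    process noise added between iterations."""
    pred_var = flow_var + Q                              # predict
    gain = pred_var / (pred_var + R)                     # per-pixel Kalman gain
    flow_new = flow_est + gain * (flow_meas - flow_est)  # update
    var_new = (1.0 - gain) * pred_var
    return flow_new, var_new

h, w = 64, 64
flow = np.zeros((h, w))
var = np.full((h, w), 1.0)
R = np.full((h, w), 0.5)       # measurement noise: large in interior areas...
R[:, 30:34] = 0.05             # ...small along an assumed motion boundary strip
measurement = 1.0 + 0.1 * np.random.randn(h, w)
flow, var = kalman_flow_update(flow, var, measurement, Q=0.01, R=R)
# the gain (and thus the convergence rate) is higher in the boundary strip
```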

    Model based estimation of image depth and displacement

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements in the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields provides a means of incorporating temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.
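    The ROMKF itself is beyond a short sketch, but the adaptive-parameter idea can be illustrated with a toy causal filter: the blending weight is dropped wherever the local gradient suggests a displacement discontinuity, so the step edge survives the smoothing. All names and thresholds here are assumptions, not the paper's model.

```python
import numpy as np

def adaptive_smooth(displacement, edge_thresh=1.0, alpha_flat=0.7, alpha_edge=0.05):
    """Toy edge-adaptive restoration: blend each sample with its causal
    neighbors, lowering the blending weight where the local gradient
    indicates a displacement discontinuity."""
    out = displacement.astype(float).copy()
    for i in range(1, out.shape[0]):
        for j in range(1, out.shape[1]):
            support = 0.5 * (out[i - 1, j] + out[i, j - 1])  # causal support
            grad = abs(out[i, j] - support)
            alpha = alpha_edge if grad > edge_thresh else alpha_flat
            out[i, j] = alpha * support + (1.0 - alpha) * out[i, j]
    return out

field = np.zeros((32, 32))
field[:, 16:] = 5.0                                   # displacement step edge
noisy = field + 0.2 * np.random.randn(32, 32)
restored = adaptive_smooth(noisy)                     # edge at column 16 is kept
```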

    Decomposition And Particle Motion Of The Acoustic Dipole Log In Anisotropic Formation

    For linear wave propagation in anisotropic media, the principle of superposition still holds. The decomposition of the acoustic dipole log is based on this principle. In the forward decomposition, the inline and crossline acoustic dipole logs at any azimuthal angle are obtained as projections of the measurements along the principal directions of the formation. In the inverse decomposition, the measurements along the principal directions are reconstructed from an orthogonal pair of inline and crossline acoustic dipole logs. The analytic formulas for both the forward and inverse decompositions of the dipole log are derived in this paper. The inverse decomposition formula is the solution in the least-squares sense. Numerical examples are presented for the acoustic dipole log decomposition in isotropic and anisotropic formations. The synthetic dipole log is calculated by the 3-D finite difference method. The numerical examples also show that the inverse decomposition formula works very well with noisy data. This inverse decomposition formula will be useful for processing field acoustic logging data in anisotropic formations. It can provide the direction of the formation anisotropy as well as the degree of anisotropy. Because acoustic dipole logging operates in the near field, the particle motion is complicated. The particle motion is linearly polarized only in the principal directions. The initial particle motion with a dipole source at an arbitrary azimuthal angle tends to point in the fast shear wave direction. However, it will be difficult to use this information to find a stable estimation of the fast shear wave direction. Massachusetts Institute of Technology, Borehole Acoustics and Logging Consortium; ERL/nCUBE Geophysical Center for Parallel Processing.
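    The paper derives its own analytic formulas; as a simplified stand-in, the sketch below uses an Alford-style rotation model (an assumption of this sketch) for the forward projection onto inline/crossline components and recovers the fast and slow principal waveforms from one measurement pair by least squares.

```python
import numpy as np

def forward_decompose(fp, sp, theta):
    """Project principal-direction waveforms (fast fp, slow sp) onto the
    inline/crossline dipole components at azimuth theta (simplified
    Alford-style rotation model)."""
    c, s = np.cos(theta), np.sin(theta)
    return c**2 * fp + s**2 * sp, s * c * (fp - sp)

def inverse_decompose(inline, cross, theta):
    """Least-squares recovery of fp, sp from one inline/crossline pair;
    note this degenerates when theta is a multiple of 90 degrees."""
    c, s = np.cos(theta), np.sin(theta)
    A = np.array([[c**2,  s**2],
                  [s * c, -s * c]])
    sol, *_ = np.linalg.lstsq(A, np.vstack([inline, cross]), rcond=None)
    return sol[0], sol[1]

t = np.linspace(0.0, 1.0, 256)
fp, sp = np.sin(40 * t), np.sin(40 * t - 0.4)   # split fast/slow shear waveforms
inline, cross = forward_decompose(fp, sp, theta=0.5)
noise = 0.01 * np.random.randn(2, 256)
fp_rec, sp_rec = inverse_decompose(inline + noise[0], cross + noise[1], theta=0.5)
```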

    Automatic video segmentation employing object/camera modeling techniques

    Video compression and storage techniques in practical use still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not reflected in the technical system, making it difficult to manipulate the video at the object level. The realization of object-based manipulation will introduce many new possibilities for working with videos, like composing new scenes from pre-existing video objects or enabling user-interaction with the scene. Moreover, object-based video compression, as defined in the MPEG-4 standard, can provide high compression ratios because the foreground objects can be sent independently from the background. In the case that the scene background is static, the background views can even be combined into a large panoramic sprite image, from which the current camera view is extracted. This results in a higher compression ratio since the sprite image for each scene only has to be sent once. A prerequisite for employing object-based video processing is automatic (or at least user-assisted semi-automatic) segmentation of the input video into semantic units, the video objects. This segmentation is a difficult problem because the computer does not have the vast amount of pre-knowledge that humans subconsciously use for object detection. Thus, even the simple definition of the desired output of a segmentation system is difficult. The subject of this thesis is to provide algorithms for segmentation that are applicable to common video material and that are computationally efficient. The thesis is conceptually separated into three parts. In Part I, an automatic segmentation system for general video content is described in detail. Part II introduces object models as a tool to incorporate user-defined knowledge about the objects to be extracted into the segmentation process. Part III concentrates on the modeling of camera motion in order to relate the observed camera motion to real-world camera parameters. The segmentation system that is described in Part I is based on a background-subtraction technique. The pure background image that is required for this technique is synthesized from the input video itself. Sequences that contain rotational camera motion can also be processed since the camera motion is estimated and the input images are aligned into a panoramic scene-background. This approach is fully compatible with the MPEG-4 video-encoding framework, such that the segmentation system can be easily combined with an object-based MPEG-4 video codec. After an introduction to the theory of projective geometry in Chapter 2, which is required for the derivation of camera-motion models, the estimation of camera motion is discussed in Chapters 3 and 4. It is important that the camera-motion estimation is not influenced by foreground object motion. At the same time, the estimation should provide accurate motion parameters such that all input frames can be combined seamlessly into a background image. The core motion estimation is based on a feature-based approach where the motion parameters are determined with a robust-estimation algorithm (RANSAC) in order to distinguish the camera motion from simultaneously visible object motion. Our experiments showed that the robustness of the original RANSAC algorithm in practice does not reach the theoretically predicted performance.
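    To make the role of RANSAC concrete, here is a deliberately reduced sketch: the thesis estimates projective camera-motion models, whereas this toy loop estimates only a global 2-D translation from feature matches, rejecting independently moving foreground points as outliers. The model, tolerances, and data are illustrative assumptions.

```python
import numpy as np

def ransac_translation(pts_a, pts_b, n_iter=200, tol=2.0, seed=0):
    """Toy RANSAC: hypothesize a camera translation from one random match,
    count inliers, keep the best hypothesis, then refit on its inliers."""
    rng = np.random.default_rng(seed)
    best_t, best_count = None, -1
    for _ in range(n_iter):
        i = rng.integers(len(pts_a))                 # minimal sample: 1 match
        t = pts_b[i] - pts_a[i]
        count = np.sum(np.linalg.norm(pts_a + t - pts_b, axis=1) < tol)
        if count > best_count:
            best_t, best_count = t, count
    inl = np.linalg.norm(pts_a + best_t - pts_b, axis=1) < tol
    return (pts_b[inl] - pts_a[inl]).mean(axis=0)    # refined estimate

pts_a = np.random.rand(100, 2) * 100
pts_b = pts_a + np.array([3.0, -1.5])                # global camera motion
pts_b[:20] += np.random.rand(20, 2) * 30             # foreground object outliers
print(ransac_translation(pts_a, pts_b))              # approximately [3.0, -1.5]
```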
An analysis of the problem has revealed that this is caused by numerical instabilities that can be significantly reduced by a modification that we describe in Chapter 4. The synthesis of static-background images is discussed in Chapter 5. In particular, we present a new algorithm for the removal of the foreground objects from the background image such that a pure scene background remains. The proposed algorithm is optimized to synthesize the background even for difficult scenes in which the background is only visible for short periods of time. The problem is solved by clustering the image content for each region over time, such that each cluster comprises static content. Furthermore, it is exploited that the times in which foreground objects appear in an image region are similar to the corresponding times of neighboring image areas. The reconstructed background could be used directly as the sprite image in an MPEG-4 video coder. However, we have discovered that the counterintuitive approach of splitting the background into several independent parts can reduce the overall amount of data. In the case of general camera motion, the construction of a single sprite image is even impossible. In Chapter 6, a multi-sprite partitioning algorithm is presented, which separates the video sequence into a number of segments, for which independent sprites are synthesized. The partitioning is computed in such a way that the total area of the resulting sprites is minimized, while simultaneously satisfying additional constraints. These include a limited sprite-buffer size at the decoder and the restriction that the image resolution in the sprite should never fall below the input-image resolution. The described multi-sprite approach is fully compatible with the MPEG-4 standard, but provides three advantages. First, any arbitrary rotational camera motion can be processed. Second, the coding cost for transmitting the sprite images is lower, and finally, the quality of the decoded sprite images is better than in previously proposed sprite-generation algorithms. Segmentation masks for the foreground objects are computed with a change-detection algorithm that compares the pure background image with the input images. A particular issue that arises in the change detection is image misregistration. Since the change detection compares co-located image pixels in the camera-motion compensated images, a small error in the motion estimation can introduce segmentation errors because non-corresponding pixels are compared. We approach this problem in Chapter 7 by integrating risk-maps into the segmentation algorithm that identify pixels for which misregistration would probably result in errors. For these image areas, the change-detection algorithm is modified to disregard the difference values for the pixels marked in the risk-map. This modification significantly reduces the number of false object detections in fine-textured image areas. The algorithmic building-blocks described above can be combined into a segmentation system in various ways, depending on whether camera motion has to be considered or whether real-time execution is required. These different systems and example applications are discussed in Chapter 8. Part II of the thesis extends the described segmentation system to consider object models in the analysis. Object models allow the user to specify which objects should be extracted from the video.
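    A minimal sketch of the risk-map idea follows. It is not the thesis implementation: the risk criterion here is simply a high local gradient (where a one-pixel misregistration changes intensity strongly), and the detector thresholds the background difference while ignoring flagged pixels.

```python
import numpy as np

def make_risk_map(background, grad_thresh=3.0):
    """Flag fine-textured, high-gradient pixels where a small
    misregistration would mimic a real change (assumed criterion)."""
    gy, gx = np.gradient(background.astype(float))
    return np.hypot(gx, gy) > grad_thresh

def change_detection(frame, background, risk_map, thresh=25):
    """Toy change detection against the synthesized background; difference
    values of risk-marked pixels are disregarded."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > thresh
    mask[risk_map] = False              # suppress misregistration-prone pixels
    return mask

bg = np.tile(np.arange(160, dtype=np.uint8), (120, 1))   # smooth toy background
bg[:, 80:82] = 255                                        # fine-textured stripe
frame = bg.copy()
frame[40:80, 20:60] = 200                                 # foreground object
mask = change_detection(frame, bg, make_risk_map(bg))     # object kept, stripe ignored
```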
In Chapters 9 and 10, a graph-based object model is presented in which the features of the main object regions are summarized in the graph nodes, and the spatial relations between these regions are expressed with the graph edges. The segmentation algorithm is extended by an object-detection algorithm that searches the input image for the user-defined object model. We provide two object-detection algorithms. The first one is specific for cartoon sequences and uses an efficient sub-graph matching algorithm, whereas the second processes natural video sequences. With the object-model extension, the segmentation system can be controlled to extract individual objects, even if the input sequence comprises many objects. Chapter 11 proposes an alternative approach to incorporate object models into a segmentation algorithm. The chapter describes a semi-automatic segmentation algorithm, in which the user coarsely marks the object and the computer refines this to the exact object boundary. Afterwards, the object is tracked automatically through the sequence. In this algorithm, the object model is defined as the texture along the object contour. This texture is extracted in the first frame and then used during the object tracking to localize the original object. The core of the algorithm uses a graph representation of the image and a newly developed algorithm for computing shortest circular-paths in planar graphs. The proposed algorithm is faster than the currently known algorithms for this problem, and it can also be applied to many alternative problems like shape matching. Part III of the thesis elaborates on different techniques to derive information about the physical 3-D world from the camera motion. In the segmentation system, we employ camera-motion estimation, but the obtained parameters have no direct physical meaning. Chapter 12 discusses an extension to the camera-motion estimation to factorize the motion parameters into physically meaningful parameters (rotation angles, focal length) using camera autocalibration techniques. A special feature of the algorithm is that it can process camera motion that spans several sprites by employing the above multi-sprite technique. Consequently, the algorithm can be applied to arbitrary rotational camera motion. For the analysis of video sequences, it is often required to determine and follow the position of the objects. Clearly, the object position in image coordinates provides little information if the viewing direction of the camera is not known. Chapter 13 provides a new algorithm to deduce the transformation between the image coordinates and the real-world coordinates for the special application of sport-video analysis. In sport videos, the camera view can be derived from markings on the playing field. For this reason, we employ a model of the playing field that describes the arrangement of lines. After detecting significant lines in the input image, a combinatorial search is carried out to establish correspondences between lines in the input image and lines in the model. The algorithm requires no information about the specific color of the playing field and it is very robust to occlusions or poor lighting conditions. Moreover, the algorithm is generic in the sense that it can be applied to any type of sport by simply exchanging the model of the playing field. In Chapter 14, we again consider panoramic background images and particularly focus on their visualization.
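    The specialized circular shortest-path algorithm is the thesis's own contribution; the sketch below only shows its standard building block, a Dijkstra shortest path on a 4-connected image graph whose edge costs are low along a strong contour, so the recovered path snaps to that contour. The graph construction and costs are assumptions of this sketch.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def image_graph(cost):
    """4-connected grid graph; stepping onto a pixel costs its value."""
    h, w = cost.shape
    idx = np.arange(h * w).reshape(h, w)
    c = cost.ravel()
    rows, cols, vals = [], [], []
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        rows += [a.ravel(), b.ravel()]          # edges in both directions
        cols += [b.ravel(), a.ravel()]
        vals += [c[b.ravel()], c[a.ravel()]]
    return coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w)).tocsr()

cost = np.random.rand(40, 40) + 0.5          # stand-in for an edge-cost map
cost[20, :] = 0.01                           # cheap row: a strong image contour
start, goal = 20 * 40 + 0, 20 * 40 + 39
dist, pred = dijkstra(image_graph(cost), indices=start, return_predecessors=True)
path, node = [], goal                        # backtrack to recover the contour
while node >= 0:
    path.append(node)
    node = pred[node]
```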
Apart from the planar background sprites discussed previously, a frequently used visualization technique for panoramic images is the projection onto a cylinder surface, which is unwrapped into a rectangular image. However, the disadvantage of this approach is that the viewer has no good orientation in the panoramic image because he looks in all directions at the same time. In order to provide a more intuitive presentation of wide-angle views, we have developed a visualization technique specialized for the case of indoor environments. We present an algorithm to determine the 3-D shape of the room in which the image was captured, or, more generally, to compute a complete floor plan if several panoramic images captured in each of the rooms are provided. Based on the obtained 3-D geometry, a graphical model of the rooms is constructed, where the walls are displayed with textures that are extracted from the panoramic images. This representation enables the user to conduct virtual walk-throughs in the reconstructed rooms and therefore provides a better orientation. Summarizing, we can conclude that all segmentation techniques employ some definition of foreground objects. These definitions are either explicit, using object models as in Part II of this thesis, or implicit, as in the background synthesis in Part I. The results of this thesis show that implicit descriptions, which extract their definition from the video content, work well when the sequence is long enough to extract this information reliably. However, high-level semantics are difficult to integrate into segmentation approaches that are based on implicit models. Instead, those semantics should be added as post-processing steps. On the other hand, explicit object models apply semantic pre-knowledge at early stages of the segmentation. Moreover, they can be applied to short video sequences or even still pictures since no background model has to be extracted from the video. The definition of a general object-modeling technique that is widely applicable and that also enables an accurate segmentation remains an important yet challenging problem for further research.

    A Variational Method for Scene Flow Estimation from Stereo Sequences

    This report presents a method for scene flow estimation from a calibrated stereo image sequence. The scene flow contains the 3-D displacement field of scene points, so that the 2-D optical flow can be seen as a projection of the scene flow onto the images. We propose to recover the scene flow by coupling the optical flow estimation in both cameras with dense stereo matching between the images, thus reducing the number of unknowns per image point. The use of a variational framework allows us to properly handle discontinuities in the observed surfaces and in the 3-D displacement field. Moreover, our approach handles occlusions both for the optical flow and the stereo. We obtain a system of partial differential equations coupling the optical flow and the stereo, which is numerically solved using an original multi-resolution algorithm. Whereas previous variational methods estimated the 3-D reconstruction at time t and the scene flow separately, our method jointly estimates both in a single optimization. We present numerical results on synthetic data with ground truth information, and we also compare the accuracy of the scene flow projected in one camera with a state-of-the-art single-camera optical flow computation method. Results are also presented on a real stereo sequence with large motion and stereo discontinuities.
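    To see why the coupling reduces the number of unknowns, the brightness-constancy constraints of the standard rectified stereo scene-flow formulation can be written as follows (the notation is assumed here, not taken from the report): with left-image flow (u, v) and disparities d at time t and d' at time t+1, the right-image flow is determined by the other fields, leaving four scalar unknowns per pixel instead of six.

```latex
\begin{align*}
  I_l(x, y, t) &= I_l(x + u,\; y + v,\; t + 1)       &&\text{optical flow, left camera}\\
  I_l(x, y, t) &= I_r(x + d,\; y,\; t)               &&\text{stereo matching at } t\\
  I_l(x, y, t) &= I_r(x + u + d',\; y + v,\; t + 1)  &&\text{stereo matching at } t + 1
\end{align*}
```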

    Sampling Models in Light Fields

    What is the actual information contained in the light rays filling the 3-D world? Leonardo da Vinci saw the world as an infinite number of radiant pyramids caused by the objects located in it. Nowadays, the radiant pyramid is usually described as a set of light rays with various directions passing through a given point. By recording light rays at every point in space, all the information in a scene can be fully acquired. This work focuses on the analysis of the sampling models of a light field camera, a device dedicated to recording the amount of light traveling through any point along any direction in the 3-D world. In contrast to conventional photography, which only records a 2-D projection of the scene, such a camera captures both the geometry information and the material properties of a scene by recording 2-D angular data for each point in a 2-D spatial domain. This 4-D data is referred to as the light field. The main goal of this thesis is to utilize this 4-D data from one or multiple light field cameras, based on the proposed sampling models, for recovering the given scene. We first propose a novel algorithm to recover the depth information from the light field. Based on the analysis of the sampling model, we map the high-dimensional light field data to a low-dimensional texture signal in the continuous domain modulated by the geometric structure of the scene. We formulate the depth estimation problem as a signal recovery problem with samples at unknown locations. A practical framework is proposed to recover alternately the texture signal and the depth map. We thus acquire not only the depth map with high accuracy but also a compact representation of the light field in the continuous domain. The proposed algorithm performs especially well for scenes with fine geometric structure while also achieving state-of-the-art performance on public datasets. Secondly, we consider multiple light fields to increase the amount of information captured from the 3-D world. We derive a motion model of the light field camera from the proposed sampling model. Given this motion model, we can extend the field of view to create light field panoramas and perform light-field super-resolution. This can help overcome the shortcoming of limited sensor resolution in current light field cameras. Finally, we propose a novel image-based rendering framework to represent light rays in the 3-D space: the circular light field. The circular light field is acquired by taking photos from a circular camera array facing outwards from the center of the rig. We propose a practical framework to capture, register and stitch multiple circular light fields. The information presented in multiple circular light fields allows the creation of any virtual camera view at any chosen location with a 360-degree field of view. The new representation of the light rays can be used to generate high-quality content for virtual reality and augmented reality.
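    The thesis's own depth algorithm recovers texture samples at unknown locations, which is beyond a short sketch; the snippet below instead shows the classical property such sampling analyses build on: in an epipolar-plane image (EPI) of the 4-D light field, a scene point traces a line whose slope equals its disparity between adjacent views and thus encodes depth. The synthetic EPI and the least-squares slope estimator are assumptions of this sketch.

```python
import numpy as np

def epi_slope(epi):
    """Dominant line slope in an EPI (axis 0: view index s, axis 1: x).
    Intensity is constant along x = x0 + m*s, so I_s + m*I_x = 0;
    least squares over all pixels gives m, the disparity per view."""
    gs, gx = np.gradient(epi.astype(float))
    return -np.sum(gs * gx) / np.sum(gx * gx)

views, width, disparity = 9, 64, 2.0
s = np.arange(views)[:, None]
x = np.arange(width)[None, :]
epi = np.exp(-0.5 * ((x - 10.0 - disparity * s) / 1.5) ** 2)   # smooth ridge
print(epi_slope(epi))   # close to 2.0; depth is inversely related to this slope
```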

    Kinematic description of soft tissue artifacts: quantifying rigid versus deformation components and their relation with bone motion

    This paper proposes a kinematic approach for describing soft tissue artifacts (STA) in human movement analysis. Artifacts are represented as the field of relative displacements of the markers with respect to the bone. This field has two components: a deformation component (the symmetric field) and a rigid motion component (the skew-symmetric field). Only the skew-symmetric component propagates as an error to the joint variables, whereas the deformation component is filtered out in the kinematic analysis process. Finally, a simple technique is proposed for analyzing the sources of variability to determine which part of the artifact may be modeled as an effect of the motion, and which part is due to other sources. This method has been applied to the analysis of the shank movement induced by vertical vibration in 10 subjects. The results show that the cluster deformation is very small with respect to the rigid component. Moreover, both components show a strong relationship with the movement of the tibia. These results suggest that artifacts can be modeled effectively as a systematic relative rigid movement of the marker cluster with respect to the underlying bone. This may be useful for assessing the potential effectiveness of the usual strategies for compensating for STA. © 2012 International Federation for Medical and Biological Engineering. This work has been funded by the Spanish Government and co-financed by EU FEDER funds (Grants DPI2009-13830-C02-01, DPI2009-13830-C02-02 and IMPIVA IMDEEA/2012/79 and IMDEEA/2012/80). De Rosario Martínez, H.; Page Del Pozo, AF.; Besa Gonzálvez, AJ.; Mata Amela, V.; Conejero Navarro, E. (2012). Kinematic description of soft tissue artifacts: quantifying rigid versus deformation components and their relation with bone motion. Medical & Biological Engineering & Computing 50(11):1173-1181. https://doi.org/10.1007/s11517-012-0978-5
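    The decomposition at the heart of the paper can be sketched with plain linear algebra (notation assumed here): fit the best affine displacement field to the markers' displacements relative to the bone, then split the linear part into its symmetric part (cluster deformation) and its skew-symmetric part (small rigid rotation).

```python
import numpy as np

def decompose_artifact(rest_pos, displacements):
    """Least-squares fit d_i ~ t + A (r_i - r_mean), then split A into the
    symmetric (deformation) and skew-symmetric (rigid rotation) fields."""
    r = rest_pos - rest_pos.mean(axis=0)
    t = displacements.mean(axis=0)                 # rigid translation component
    X, *_ = np.linalg.lstsq(r, displacements - t, rcond=None)   # d - t ~ r @ X
    A = X.T                                        # so that d_i - t ~ A r_i
    return t, 0.5 * (A + A.T), 0.5 * (A - A.T)     # t, deformation, rotation

markers = np.random.rand(8, 3) * 0.1               # marker cluster at rest (m)
W = np.array([[0.0, -0.02, 0.01],                  # small skew-symmetric rotation
              [0.02,  0.0, 0.0],
              [-0.01, 0.0, 0.0]])
disp = (markers - markers.mean(0)) @ W.T + 0.003   # mostly rigid artifact
t, sym, skew = decompose_artifact(markers, disp)
# np.linalg.norm(skew) >> np.linalg.norm(sym): the artifact behaves as a
# relative rigid movement of the cluster, as the paper's results indicate
```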

    Joint Blind Motion Deblurring and Depth Estimation of Light Field

    Removing camera motion blur from a single light field is a challenging task since it is a highly ill-posed inverse problem. The problem becomes even worse when the blur kernel varies spatially due to scene depth variation and high-order camera motion. In this paper, we propose a novel algorithm to estimate all blur model variables jointly, including the latent sub-aperture image, the camera motion, and the scene depth, from the blurred 4-D light field. Exploiting the multi-view nature of a light field alleviates the ill-posedness of the optimization by utilizing strong depth cues and multi-view blur observations. The proposed joint estimation achieves high-quality light field deblurring and depth estimation simultaneously under arbitrary 6-DOF camera motion and unconstrained scene depth. Intensive experiments on real and synthetic blurred light fields confirm that the proposed algorithm outperforms the state-of-the-art light field deblurring and depth estimation methods.
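    The depth dependence of the blur can be made explicit with a standard formation model (the notation is an assumption of this sketch, not the paper's exact equations): the blurred sub-aperture image integrates the latent sharp image warped along the camera trajectory over the exposure,

```latex
\begin{equation*}
  B(\mathbf{x}) \;=\; \frac{1}{T}\int_{0}^{T}
    L\!\Bigl(\pi\bigl(\mathbf{R}(t)\,\pi^{-1}(\mathbf{x},\, z(\mathbf{x}))
      + \mathbf{t}(t)\bigr)\Bigr)\, dt ,
\end{equation*}
```

    where L is the latent image, z(x) the scene depth, \pi the camera projection, and (R(t), t(t)) the 6-DOF camera pose at time t. Because the warp depends on z(x), the kernel varies spatially, which is why depth and deblurring must be estimated jointly.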