587 research outputs found

    Gapped consensus motif discovery: evaluation of a new algorithm based on local multiple alignments and a sampling strategy

    We assess the efficiency and feasibility of a novel method designed for the discovery of a priori unknown motifs described as gaps alternating with specific regions. Such motifs are searched for as consensuses of non-homologous biological sequences. The only specifications required concern the maximal gap length, the minimal frequency for specific characters, and the minimal percentage (quorum) of sequences sharing the motif. Our method is based on cooperation between a multiple alignment method, for quick detection of local similarities, and a sampling strategy that runs candidate position-specific scoring matrices to convergence. This rather original way of converging to the solution proves efficient on simulated data, on gapped instances of the so-called challenge problem, on promoter sites in dicot plants, and on transcription factor binding sites in E. coli. Our algorithm compares favorably with the MEME and STARS approaches in terms of accuracy.
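    One building block common to such sampling strategies can be sketched as follows. This is a generic, illustrative fragment (scoring candidate windows against a position-specific scoring matrix), not the paper's algorithm; all names and the toy data are hypothetical.

```python
# Illustrative PSSM scoring, a typical ingredient of motif samplers.
# Not the paper's implementation; names and data are made up.
import math

ALPHABET = "ACGT"

def build_pssm(aligned_windows, pseudocount=1.0):
    """Column-wise log-odds matrix from equal-length aligned windows."""
    width = len(aligned_windows[0])
    pssm = []
    for col in range(width):
        counts = {a: pseudocount for a in ALPHABET}
        for w in aligned_windows:
            counts[w[col]] += 1
        total = sum(counts.values())
        # Log-odds against a uniform 0.25 background.
        pssm.append({a: math.log((counts[a] / total) / 0.25) for a in ALPHABET})
    return pssm

def best_window(sequence, pssm):
    """Highest-scoring window of the PSSM's width in one sequence."""
    width = len(pssm)
    scores = [(sum(pssm[i][sequence[p + i]] for i in range(width)), p)
              for p in range(len(sequence) - width + 1)]
    return max(scores)  # (score, position)

windows = ["ACGT", "ACGA", "ACGT"]
pssm = build_pssm(windows)
score, pos = best_window("TTACGTTT", pssm)
```

    A sampler of the kind described would iterate this scoring step, resampling one sequence's window position at a time until the matrix converges.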

    Modeling a Priori Information on the Velocity Field in Reflection Tomography

    Reflection tomography consists in determining a velocity field and reflector geometries from traveltimes picked on multioffset seismic sections. The solution of the tomographic inverse problem being underdetermined, we need to integrate a priori information on the model. A regularization by means of model curvature has in general no physical justification and leads to geologically incorrect models. This paper presents a formulation that integrates, in a realistic way, a priori geological information associated with the regularity of the model and with the relation between the velocity distribution and the interface geometries. Such an integration seems particularly critical when dealing with smooth velocity models in which velocity and interfaces are defined independently from each other. We demonstrate the value of our formulation on a real-data example.
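    For contrast, the generic curvature-penalized least-squares inversion that the paper argues is insufficient on its own can be sketched as follows. This is a toy linear problem with illustrative names, not the authors' formulation.

```python
# Generic curvature-regularized linear least squares:
#   min ||G m - d||^2 + lam ||L m||^2,  L = second-difference operator.
# Toy problem only; not the paper's tomographic operators.
import numpy as np

def regularized_lsq(G, d, lam):
    """Solve the normal equations (G^T G + lam L^T L) m = G^T d."""
    n = G.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]   # discrete curvature stencil
    A = G.T @ G + lam * L.T @ L
    return np.linalg.solve(A, G.T @ d)

# Underdetermined toy problem: 3 data, 5 model parameters.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 5))
m_true = np.array([1.0, 1.2, 1.4, 1.6, 1.8])   # smooth "velocity" model
d = G @ m_true
m_est = regularized_lsq(G, d, lam=0.1)
```

    The curvature term selects one smooth model among the many that fit the 3 data; the paper's point is that this choice is mathematical, not geological.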

    Quantifying relevant uncertainties on the solution model of reflection tomography


    Flexible b-spline model parameterization designed for reflection tomography

    Reflection tomography is an efficient method to determine a subsurface velocity model that best fits the traveltime data associated with the main events picked on the seismic sections. A careful choice of the model representation has to be made: a blocky model representation based on regularly gridded b-spline functions has been proposed. This flexible parameterization allows accurate and robust inversion but can lead to a huge number of parameters. An adaptive parameterization that accounts for local complexities and inhomogeneous ray coverage is therefore considered.
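    The b-spline parameterization can be illustrated in one dimension with the Cox-de Boor recursion below; the paper's models are 2D/3D gridded b-splines, so this is only a minimal sketch with illustrative names.

```python
# Minimal 1D cubic B-spline evaluation via the Cox-de Boor recursion.
# Illustrative sketch only; the paper uses gridded 2D/3D b-splines.
def bspline_basis(i, k, t, knots):
    """Value of basis function i of degree k at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    den = knots[i + k] - knots[i]
    if den > 0:
        left = (t - knots[i]) / den * bspline_basis(i, k - 1, t, knots)
    den = knots[i + k + 1] - knots[i + 1]
    if den > 0:
        right = (knots[i + k + 1] - t) / den * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def spline_value(coeffs, t, knots, degree=3):
    """Curve value: sum of coefficients times basis functions."""
    return sum(c * bspline_basis(i, degree, t, knots)
               for i, c in enumerate(coeffs))

# 7 coefficients on a uniform knot vector; valid range is t in [3, 7).
knots = list(range(11))
coeffs = [2.0] * 7                 # a constant "velocity" profile
v = spline_value(coeffs, 5.0, knots)
```

    Because cubic B-splines form a partition of unity, constant coefficients reproduce a constant value exactly; the model parameters of the inversion are then simply the coefficient grid.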

    Quantifying uncertainties on the solution model of seismic tomography

    Reflection tomography allows the determination of a propagation velocity model that fits the traveltime data associated with reflections of seismic waves in the subsurface. A least-squares formulation is used to compare the observed traveltimes and the traveltimes computed by the forward operator, which is based on ray tracing. The solution of this inverse problem is only one among many possible models. A linearized a posteriori analysis is then crucial to quantify the range of admissible models we can obtain from these data and the a priori information. The contribution of this paper is to propose a formalism which allows us to compute uncertainties on relevant geological quantities at a reduced computational cost. Nevertheless, this approach is only valid in the vicinity of the solution model (linearized framework); complex cases may thus require a nonlinear approach. An application to a 2D real data set illustrates the linearized approach to quantifying uncertainties on the solution of seismic tomography. Finally, the limitations of this approach are discussed.
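    The linearized analysis can be sketched with the standard Gaussian formalism: a posterior covariance built from the forward Jacobian, then propagated to a linear geological quantity. This is a generic textbook sketch with toy sizes, not the paper's exact operators.

```python
# Generic linearized uncertainty analysis (Gaussian, diagonal priors):
#   C_post = (J^T C_d^{-1} J + C_m^{-1})^{-1}
# Toy Jacobian; not the paper's tomographic system.
import numpy as np

def posterior_covariance(J, sigma_d, sigma_m):
    """Posterior model covariance from Jacobian and diagonal priors."""
    Cd_inv = np.eye(J.shape[0]) / sigma_d**2   # data precision
    Cm_inv = np.eye(J.shape[1]) / sigma_m**2   # prior model precision
    return np.linalg.inv(J.T @ Cd_inv @ J + Cm_inv)

def quantity_std(C_post, a):
    """Std of a linear geological quantity q = a^T m (e.g. a depth)."""
    return float(np.sqrt(a @ C_post @ a))

rng = np.random.default_rng(1)
J = rng.standard_normal((50, 4))       # toy traveltime Jacobian
C = posterior_covariance(J, sigma_d=0.01, sigma_m=1.0)
sigma_q = quantity_std(C, np.array([1.0, 0.0, 0.0, 0.0]))
```

    Projecting the covariance onto a few quantities of interest (here the vector `a`) is what keeps the computational cost low; the full covariance matrix never needs to be interpreted parameter by parameter.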

    B44 Adapted Nonlinear Optimization Method for Production Data and 4D Seismic Inversion

    Integrated inversion of production history data and 4D seismic data for reservoir model characterization leads to a nonlinear inverse problem that is usually cumbersome to solve: the associated forward problem, based on fluid flow simulation in the reservoir for production data modeling on the one hand and on a petro-elastic model for 4D time-lapse seismic data modeling on the other, is usually computationally time consuming; the number of measurements to be inverted is large (up to 500,000); and the number of model parameters to be determined is up to 100. Moreover, the derivatives of the modeled data with respect to those parameters are usually not available. We propose an optimization method based on a Sequential Quadratic Programming algorithm which uses a gradient approximation coupled with a BFGS approximation of the Hessian. In addition, the proposed method can handle equality and inequality nonlinear constraints. Some realistic applications are presented to illustrate the efficiency of the method.
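    As an illustrative stand-in for this setup, SciPy's SLSQP (an SQP method) handles a finite-difference gradient and a nonlinear inequality constraint out of the box. The toy quadratic below replaces the reservoir simulator; none of this is the authors' code.

```python
# SQP with approximated gradients and a nonlinear inequality constraint,
# using SciPy's SLSQP. The objective is a toy stand-in for the
# production/4D-seismic misfit; all values are illustrative.
import numpy as np
from scipy.optimize import minimize

def misfit(m):
    # Placeholder for the simulator-based data misfit.
    return (m[0] - 1.0)**2 + (m[1] - 2.0)**2

# Inequality constraint m0 + m1 <= 2.5, written as fun(m) >= 0.
cons = [{"type": "ineq", "fun": lambda m: 2.5 - m[0] - m[1]}]

res = minimize(misfit, x0=np.zeros(2), method="SLSQP",
               constraints=cons, bounds=[(0.0, 5.0), (0.0, 5.0)])
```

    With no analytic derivatives supplied, SLSQP falls back on finite differences, which mirrors the situation described above where simulator derivatives are unavailable; the constrained minimum lands on the constraint boundary rather than at the unconstrained optimum (1, 2).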

    Smooth velocity models in reflection tomography for imaging complex geological structures.


    MOD 3.2 3D reflection tomography designed for complex structures

    Summary. A 3D reflection tomography that can determine correct subsurface velocity structures is of strategic importance for an effective use of 3D prestack depth migration. We have developed a robust and fast 3D reflection tomography that is designed to handle complex models. We use a B-spline representation for interface geometries and for the lateral velocity distribution within a layer, and we restrict the vertical velocity variation to a constant gradient. We solve the ray tracing problem by a bending method with a circular ray approximation within layers. For the inversion we use a regularized formulation of reflection tomography which penalizes the roughness of the model. The optimization is based on a quadratic programming formulation, and constraints on the model are treated by the augmented Lagrangian technique. We show results of ray tracing and inversion on a rather complex synthetic model.

    Introduction. Ehinger and Lailly, 1995, have shown the interest of reflection tomography for computing velocity models adequate for the seismic imaging of complex geologic structures. In 2D, reflection tomography has proved its effectiveness in this context (Jacobs et al., 1995). In 3D, Guiziou et al., 1991, have developed a ray tracing based on a straight-line ray approximation within a layer and an inversion of poststack data, but it suffers from the non-differentiability of its traveltime formula due to the Gocad interface representation. We describe a 3D tomography that handles models with the necessary differentiability and allows inversion of more complex kinematics by the use of a more accurate traveltime calculation.

    Model description. We choose a blocky model representation of the subsurface, each layer being associated with a geological macrosequence. A velocity law has to be associated with each layer (Figure 1). The form of the velocity law is v(x, y, z) = v0(x, y) + k z, where v0(x, y) is the lateral velocity distribution (described by cubic B-spline functions) and k is the vertical velocity gradient. Using blocky models can lead to difficulties associated with the possible non-definition of the forward problem (situations where there is no ray joining a source to a receiver) and, more generally, to all kinds of difficulties involved in discontinuous kinematics. The blocky model representation allows velocity discontinuities as they exist in the earth, and thus makes it straightforward to integrate a priori information on velocities (see Lailly and Sinoquet, 1996, for a general discussion on blocky versus smooth models for seismic imaging of complex geologic structures). We use a cubic B-spline representation for interface geometries.
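    A constant-gradient layer of this form has a closed-form vertical traveltime, since the integral of dz / (v0 + k z) is ln(v(z2) / v(z1)) / k. The sketch below evaluates the velocity law and this traveltime with illustrative values (the function names and numbers are not from the paper).

```python
# Numerical sketch of the layer velocity law v(x, y, z) = v0(x, y) + k*z
# and the vertical traveltime through a constant-gradient layer.
# Illustrative values only.
import math

def velocity(v0_xy, k, z):
    """v(x, y, z) = v0(x, y) + k * z within a layer."""
    return v0_xy + k * z

def vertical_traveltime(v0_xy, k, z1, z2):
    """One-way vertical time: integral of dz / (v0 + k z)
    = ln(v(z2) / v(z1)) / k for a constant gradient k."""
    return math.log(velocity(v0_xy, k, z2) / velocity(v0_xy, k, z1)) / k

v_top = velocity(2000.0, 0.5, 0.0)                    # 2000 m/s at top
t = vertical_traveltime(2000.0, 0.5, 0.0, 1000.0)     # through 1000 m
```

    Analytic expressions like this are what make ray tracing within such layers fast and differentiable, which is the property the authors need for the inversion.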

    Impact hydrochimique d'une ballastière en eau sur les eaux souterraines

    A gravel pit can exert two types of hydrochemical impact on the downstream groundwater: a natural impact, and an artificial impact caused by accidental pollution. Both types of impact were evaluated for two deep gravel pits of the Alsatian Rhine aquifer (one abandoned, the other still exploited) at the La Wantzenau experimental site, north of Strasbourg (north-eastern France). The water quality of the two pits was closely monitored, and water samples were also taken from piezometers located immediately upstream and downstream of the pits. Neither pond degrades the quality of the downstream groundwater, part of which is pumped for drinking-water supply 300 meters from the downstream banks of the pits. By contrast, the accidental pollution of an aquifer, from or through a gravel pit, remains a serious concern. A mathematical model of the hydrochemical exchanges between the aquifer and the gravel pits was therefore set up for the La Wantzenau case. The model was first calibrated on the chloride ion, which behaves as a natural tracer. The success of this first step allowed the simulation of various cases of fictitious contamination of the water pumped for drinking-water supply.
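    The transport of a conservative tracer such as chloride between a pit and the aquifer is typically modeled with an advection scheme. The fragment below is a hedged, minimal sketch of that idea on a 1D grid with made-up parameters; the study's actual model is not described here.

```python
# Minimal 1D advective transport of a tracer pulse, explicit upwind
# scheme. Illustrative parameters only; not the study's model.
def advect(conc, velocity, dx, dt, steps):
    """March a concentration profile downstream.
    Stable while the Courant number velocity*dt/dx <= 1."""
    c = list(conc)
    cfl = velocity * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c)):
            new[i] = c[i] - cfl * (c[i] - c[i - 1])
        new[0] = 0.0                    # clean inflow at the boundary
        c = new
    return c

# A chloride pulse released in cell 2 of a 20-cell column.
conc = [0.0] * 20
conc[2] = 100.0
out = advect(conc, velocity=1.0, dx=1.0, dt=1.0, steps=5)
```

    A real calibration like the one described would additionally include dispersion and exchange terms and would be fit to the observed chloride breakthrough.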

    Velocity model determination by the SMART method, Part 2: Application SP3.8

    The SMART (Sequential Migration Aided Reflection Tomography) method, as explained in the first part of this paper, starts after a first set of traveltimes has been picked in the unmigrated prestack data and an inventory of useful a priori knowledge related to these traveltimes has been made. A preparative phase is needed for this. First a global estimate of the subsurface structure is made, using the standard stacking and poststack interpretation procedures, which give insight into the degree of complexity of the subsurface. Next the traveltimes can be picked. When interpreting prestack data, important qualitative structural information in difficult target zones (e.g. fault zones or salt-structure flanks) can be obtained. Such an analysis guides the interpreter in selecting and picking the best traveltimes of primary events. Once the preparation is finished, the SMART method can be applied for a detailed determination of a structural and velocity model in a very consistent way. It is emphasized that velocity variations in complex structures can be determined accurately by prestack traveltime inversion techniques. This phase has an iterative character. In order to update the velocity model after the first iteration, additional traveltimes are needed. These are obtained by interpretation of the cube of migrated data, which can be easier than interpretation in the time domain thanks to the focusing and positioning effect of the migration process. By tracing rays on the newly interpreted events in the same velocity model as was used for migration, we obtain additional traveltimes which make the set of input data for the next iteration of tomography more complete. A new velocity model is calculated and the data are remigrated. In this paper we demonstrate the feasibility of this approach using a 2D real data set. We executed a number of iterations of the SMART method and ended up with a very satisfactory depth image of the complex structure.

    THE DATA. For this application we used a 2D dataset covering a salt structure. It consists of 300 shot records at a regular interval of 40 m. The acquisition was done in a split spread; the half-spread length is 1920 meters with 48 geophones. The data were delivered with standard preprocessing (filtering, zero-phase deconvolution and muting). Because of some clearly visible ground roll, we applied a second filter in order to remove most of this low-frequency noise. A partial stack of the data is shown in Figure 1.

    THE PREPARATIVE PHASE. Analysis of complexity. In order to get an idea of the degree of complexity of a subsurface, it is useful to construct several partial stacks with the same stacking velocity model. Because the stacking process is based on flattening the hyperbolas in CMPs through NMO- and DMO-based corrections, differences between the partial stacks demonstrate the failure of the process. In areas with complex subsurface structures these hyperbolas are not necessarily flat, due to different raypaths left and right of the midpoint. In this dataset this phenomenon can be observed in a series of CMPs covering the salt dome (see Figure 2). Another way to get an idea of the complexity is to do a poststack depth migration by a layer-stripping approach using the best partial stack. For these data the results are satisfactory for the sedimentary zones left and right of the dome, but incorrect for the deep interfaces and the base of the salt. This is partially due to events that are lost during the stacking procedure. Other causes for this failure are the uncertainty in picking the right interface that serves as the next velocity boundary, and the difficult choice of the velocities, which becomes more and more hazardous as the depth increases. The final result is unreliable, and the resulting depth of the base of the salt depends largely on the choices made by the interpreter. Clearly these data cannot be handled by standard processing techniques: left and right of the salt dome, and below it, the nature of the trace gathers is too complex. A prestack imaging method using a velocity model computed by tomography seems adequate for solving the aforementioned problems.

    Data preparation for the SMART method. The next step after the analysis of the complexity is the data preparation for the SMART method. Its goal is to prepare an initial set of traveltimes to be used in the first iteration. We split this phase into a number of consecutive sub-phases: • creating an initial set of guides for the prestack interpretation; • picking traveltimes; • quality control of the traveltimes; • selection of representative traveltimes and calculation of the associated weights.

    Creating a set of guides. Guides are indicators for the interpreter suggesting where to look in the prestack unmigrated data for a certain event. They are also warnings for complicated situations such as multiples, triplications, and situations where no reliable indication of the nature of an event is available. The geologic guides are qualitative (e.g. presence of a fault) or quantitative (e.g. the depth of horizon A is 2500 m). The geophysical guides are, for example, the presence of multiples or diffractions. They are derived from the unstacked or stacked data. For this dataset the following data were used: a set of (partial) stacks, time- and depth-migrated stacks, and the cube of preprocessed prestack data. This allowed us to determine the zones where picking traveltimes directly in the unmigrated data could lead to incorrect traveltime information for the tomography. These zones are indicated in Figure 1 (Za and Zb, a zone with triplications and a series of unexplained events).

    Picking the first set of traveltimes. Using the guides, the picking of the traveltimes can start. This is done in the cube of unmigrated data. There is no preference for picking in a specific trace gather; this depends on the available guide. When it is a geological one, the common-offset gathers are most suited; using a geophysical one, the interpretation is done in the shot gathers or the common-midpoint gathers. Whatever direction is chosen, one has to end
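    The hyperbola flattening that the stacking process relies on can be sketched with the textbook NMO traveltime formula, t(x) = sqrt(t0^2 + x^2 / v^2). The velocity below is illustrative; only the 1920 m half-spread comes from the dataset description above.

```python
# Textbook NMO traveltime on a CMP hyperbola: t(x) = sqrt(t0^2 + x^2/v^2).
# Stacking velocity here is illustrative; the 1920 m offset matches the
# half-spread length of the dataset described above.
import math

def nmo_traveltime(t0, offset, v_stack):
    """Reflection traveltime at a given offset on the hyperbola."""
    return math.sqrt(t0**2 + (offset / v_stack)**2)

def nmo_shift(t0, offset, v_stack):
    """Moveout removed when flattening the event back to t0."""
    return nmo_traveltime(t0, offset, v_stack) - t0

t = nmo_traveltime(1.0, 1920.0, 2500.0)   # far-offset arrival time (s)
dt = nmo_shift(1.0, 1920.0, 2500.0)       # correction applied there (s)
```

    In the complex zones discussed above, real moveout departs from this single hyperbola (different raypaths left and right of the midpoint), which is exactly why the partial stacks disagree and prestack tomography is needed.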