
    Geometry, evolution and scaling of fault relay zones in 3D using detailed observations from outcrops and 3D seismic data

    A new surface attribute was developed during the course of the thesis, which enables fault-related deformation – specifically, the apparent dip of mapped horizons measured in a direction perpendicular to the average strike of a fault array (here termed “fault-normal rotation”, or “FNR”) – to be quantitatively analysed around imaged faults. The new utility can be applied to any 3D surface and was used to analyse centimetre-scale to kilometre-scale fault arrays, interpreted from laser scan point clouds, digital elevation models, and 3D seismic datasets. In all studied examples, faults are surrounded by volumes of fault-related deformation that have variable widths, and which can consist of faults, fractures and continuous bed rotations (i.e. monoclines). The vertical component of displacement calculated from the areas of fault-related deformation on each horizon acts to “fill in” apparently missing displacements observed in fault throw profiles at fault overlaps. This result shows that complex 3D patterns of fault-related strain commonly develop during the geometrically coherent growth of a single fault array. However, if the component of continuous deformation were not added to the throw profile, the fault array could be misinterpreted as a series of isolated fault segments with coincidental overlaps. The FNR attribute allows the detailed, quantitative analysis of fault linkage geometries. It is shown that overlapping fault tip lines in relay zones can link simultaneously at multiple points, which results in a segmented branch line. Fault linkage in relay zones is shown to control the amount of rotation accommodated by relay ramps on individual horizons, with open relay ramps having accommodated larger rotations than breached relay ramps in the same relay zone. Displacements are therefore communicated between horizons in order to maintain strain compatibility within the relay zone. This result is used to predict fault linkage in the subsurface, along slip-aligned branch lines, from the along-strike displacement distributions at the earth’s surface. Relay zone aspect ratios (AR; overlap/separation) are documented to follow power-law scaling relationships over nine orders of magnitude, with a mean AR of 4.2. Approximately one order of magnitude of scatter in both separation and overlap exists at all scales. Up to half of this scatter can be attributed to the spread of measurements recorded from individual relay zones, which relates to the evolution of relay zone geometries as the displacements on the bounding faults increase. Mean relay AR is primarily controlled by the interactions between the stress field of a nearby fault and overlapping fault tips, rather than by the host rock lithology. At the Kilve and Lamberton study areas, mean ARs are 8.60 and 8.64 respectively, much higher than the global mean of 4.2. Scale-dependent factors, such as mechanical layering and heterogeneities at the fault tips, are present at these locations; these modify how faults interact and produce relatively large overlap lengths for a given separation distance. Despite this modification to standard fault interaction models, these high-AR relay zones are all geometrically coherent.
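    The thesis's actual implementation is not reproduced here, but the core of the FNR calculation can be sketched as a directional derivative of a gridded horizon surface. In the minimal Python sketch below, the function name, the grid-spacing parameters, and the strike convention (azimuth in degrees clockwise from north) are all illustrative assumptions:

        import numpy as np

        def fault_normal_rotation(z, dx, dy, strike_deg):
            """Apparent dip (degrees) of a gridded horizon z[y, x],
            measured perpendicular to the average fault strike."""
            # Surface gradient: np.gradient returns d/dy (axis 0), then d/dx (axis 1)
            dz_dy, dz_dx = np.gradient(z, dy, dx)
            # Map-view unit vector normal to the strike (strike azimuth + 90 deg)
            az = np.radians(strike_deg + 90.0)
            nx, ny = np.sin(az), np.cos(az)
            # Directional derivative of elevation along the fault normal
            slope = dz_dx * nx + dz_dy * ny
            return np.degrees(np.arctan(slope))

    Applied to each mapped horizon, elevated FNR values away from the fault surfaces themselves would highlight the continuous (monoclinal) component of deformation described above.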

    Selected growth and interaction characteristics of seafloor faults in the central Mississippi Canyon Offshore Continental Shelf (OCS) area, northern Gulf of Mexico

    The characteristics of some shallow faults in the Gulf of Mexico interpreted to be active are poorly understood. A better understanding of these faults will improve our understanding of formerly and presently active geologic processes in the Gulf. Specifically, the characteristics of growth, interaction, and linkage of faults are of interest. Most of the Gulf has seen continuous clastic sediment deposition since the end of continental rifting in the middle Mesozoic. The Gulf is a tectonically quiescent basin, with the only major structural processes being salt diapirism and subsidence. Numerous styles of faulting have been observed in the Gulf, with each style being related to a specific type of deformation. Numerous authors have concluded that fault growth processes generally involve tip-line propagation and linkage of faults, and evidence of these processes has been observed in seismic data sets. This investigation uses a high-resolution (HR) 3-D seismic data set to characterize growth, interaction, and linkage of a fault set in the northern Gulf of Mexico. This work shows that linked and interacting faults are present in the study area. These conclusions were reached by measuring throw on horizons offset by several faults and interpreting the throw data with a model of fault growth and interaction based on the separate processes of growth by tip-line propagation and growth by linkage of smaller faults. The ratio of these parameters for a fault population can be described by a power-law relationship, and for the fault set considered here the power law was found to be valid.
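    The abstract reports a power-law relationship without giving the fitting procedure; a common approach is a least-squares fit in log-log space. The sketch below is a generic Python illustration of that technique, and the fault length/throw values are hypothetical placeholders, not data from the study:

        import numpy as np

        def fit_power_law(x, y):
            """Least-squares fit of y = a * x**b in log-log space; returns (a, b)."""
            b, log_a = np.polyfit(np.log10(x), np.log10(y), 1)
            return 10.0 ** log_a, b

        # Hypothetical fault lengths (m) and maximum throws (m)
        length = np.array([120.0, 340.0, 800.0, 1500.0, 2600.0])
        throw = np.array([1.8, 6.1, 15.0, 31.0, 52.0])
        a, b = fit_power_law(length, throw)
        print(f"throw ~ {a:.3g} * length**{b:.2f}")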

    WeighstEd

    The purpose of this design thesis is to outline and describe the design project WeighstEd. WeighstEd is a data collection, storage, and analysis system for food waste, built to help Santa Clara University’s Sustainability Center reach a quantifiable food-waste reduction goal of 10% by 2020 by using data to make informed cafeteria changes. The report outlines the entire engineering design process from ideation to manufacture, including analysis techniques and benchmark testing. This report serves as the written documentation of three mechanical engineers’ Senior Design Project completed at Santa Clara University. WeighstEd will be implemented at on-campus events and in the university cafeteria beginning in the 2019-2020 school year.

    Development of an image converter of radical design

    A long-term investigation of thin-film sensors, monolithic photo-field-effect transistors, and epitaxially diffused phototransistors and photodiodes, undertaken to meet the requirements of an acceptable all-solid-state, electronically scanned imaging system, led to the production of an advanced engineering model camera that employs a 200,000-element phototransistor array (organized in a matrix of 400 rows by 500 columns) to secure resolution comparable to commercial television. The full investigation is described for the period July 1962 through July 1972, and covers the following broad topics in detail: (1) sensor monoliths; (2) fabrication technology; (3) functional theory; (4) system methodology; and (5) deployment profile. A summary of the work and conclusions are given, along with extensive schematic diagrams of the final solid-state imaging system product.

    Passively-coupled, low-coherence interferometric duct profiling with an astigmatism-corrected conical mirror

    Duct profiling in test samples up to 25 mm in diameter has been demonstrated using a passive, low-coherence probe head with a depth resolution of 7.8 μm, incorporating an optical-fibre-linked conical mirror addressed by a custom-built array of single-mode fibres. Zemax modelling, and experimental assessment of instrument performance, show that the degradation of focus resulting from astigmatism introduced by the conical mirror is mitigated by the introduction of a novel lens element. This enables a good beam focus to be achieved at distances of tens of millimetres from the cone axis, which is not achievable when the cone is used alone. Incorporation of the additional lens element is shown to provide a four-fold improvement in lateral imaging resolution compared with reflection from the conical mirror alone.
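    For context on the quoted 7.8 μm figure: in low-coherence interferometry, the depth (axial) resolution for a Gaussian-spectrum source is set by the coherence length, Δz = (2 ln 2/π)·λ₀²/Δλ. The abstract does not state the source parameters, so the centre wavelength and bandwidth in this Python sketch are hypothetical values chosen only to land near the reported resolution:

        import math

        def axial_resolution_m(center_wavelength_nm, bandwidth_nm):
            """Depth resolution of a low-coherence interferometer
            (Gaussian source): dz = (2*ln2/pi) * lambda0**2 / dlambda."""
            lam0 = center_wavelength_nm * 1e-9
            dlam = bandwidth_nm * 1e-9
            return (2.0 * math.log(2) / math.pi) * lam0 ** 2 / dlam

        # Hypothetical source: 1300 nm centre, 100 nm bandwidth -> ~7.5 um
        print(f"{axial_resolution_m(1300, 100) * 1e6:.1f} um")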

    Action recognition in videos based on the fusion of visual rhythm representations

    Advisors: Hélio Pedrini and David Menotti Gomes. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação.
    Advances in video acquisition and storage technologies have promoted a great demand for the automatic recognition of actions. The use of cameras for security and surveillance purposes has applications in several scenarios, such as airports, parks, banks, stations, roads, hospitals, supermarkets, industries, stadiums, and schools. An inherent difficulty of the problem is the complexity of the scene under usual recording conditions, which may contain a complex and moving background, multiple people in the scene, interactions with other actors or objects, and camera motion. The most recent databases are built primarily from recordings shared on YouTube and from movie snippets, situations in which these obstacles are not restricted. Another difficulty is the impact of the temporal dimension, since it expands the size of the data, increasing computational cost and storage space. In this work, we present a methodology of volume description using the Visual Rhythm (VR) representation. This technique reshapes the original volume of the video into an image, on which two-dimensional descriptors are computed. We investigated different strategies for constructing the representation by combining configurations in several image domains and traversal directions of the video frames. From this, we propose two feature extraction methods, Naïve Visual Rhythm (Naïve VR) and Visual Rhythm Trajectory Descriptor (VRTD). The first approach is the straightforward application of the technique to the original video volume, forming a holistic descriptor that considers action events as patterns and shapes in the visual rhythm image. The second variation focuses on the analysis of small neighborhoods obtained from the dense trajectories process, which allows the algorithm to capture details unnoticed by the global description. We tested our methods on eight public databases: one of hand gestures (SKIG), two in first person (DogCentric and JPL), and five in third person (Weizmann, KTH, MuHAVi, UCF11 and HMDB51). The results show that the developed techniques are able to extract motion elements along with shape and appearance information, achieving competitive accuracy rates compared to state-of-the-art action recognition approaches. Doctorate in Computer Science. Grant 2015/03156-7, FAPES.
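    The core VR operation is easy to state concretely: sample one line from each frame and stack the lines over time. The Python sketch below implements one common configuration, taking the central row or column of each grayscale frame; the thesis explores several image domains and traversal directions beyond this minimal version:

        import numpy as np

        def visual_rhythm(video, direction="horizontal"):
            """Collapse a video volume (T, H, W) into a 2D visual-rhythm
            image by sampling one line per frame: the middle row
            ('horizontal', giving T x W) or middle column ('vertical', T x H)."""
            video = np.asarray(video)
            t, h, w = video.shape
            if direction == "horizontal":
                return video[:, h // 2, :]
            return video[:, :, w // 2]

        # Toy example: 50 frames of 64 x 80 grayscale noise
        print(visual_rhythm(np.random.rand(50, 64, 80)).shape)  # (50, 80)

    Any 2D texture or shape descriptor can then be computed on the resulting image, which is what makes the representation cheap relative to processing full spatio-temporal volumes.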

    Radiography of a normal fault system by 64,000 high-precision earthquake locations: The 2009 L’Aquila (central Italy) case study

    We studied the anatomy of the fault system where the 2009 L’Aquila earthquake (MW 6.1) nucleated by means of ~64,000 high-precision earthquake locations spanning 1 year. Data were analyzed by combining an automatic picking procedure for P and S waves with cross-correlation and double-difference location methods, reaching a catalogue completeness magnitude of 0.7 and including 425 clusters of similar earthquakes. The fault system is composed of two major faults: the high-angle L’Aquila fault and the listric Campotosto fault, both located in the first 10 km of the upper crust. We detect an extraordinary degree of detail in the anatomy of the individual fault segments, resembling the degree of complexity observed by field geologists on fault outcrops. We observe multiple antithetic and synthetic fault segments tens of meters long in both the hanging wall and footwall, along with bends and cross-fault intersections along the main fault and fault splays. The width of the L’Aquila fault zone varies along strike from 0.3 km, where the fault exhibits the simplest geometry and experienced peaks in the slip distribution, up to 1.5 km at the fault tips, with an increase in geometrical complexity. These characteristics, similar to the damage-zone properties of natural faults, underline the key role of aftershocks in fault growth and co-seismic rupture propagation processes. Additionally, we interpret the persistent nucleation of similar events at the seismicity cutoff depth as indicating the presence of a rheological (i.e., creeping) discontinuity, explaining how normal faults detach at depth.
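    The cross-correlation step mentioned above measures differential arrival times between similar earthquakes far more precisely than independent repicking. As a generic illustration (not the study's actual pipeline), this Python sketch estimates the relative delay of two waveforms from the peak of their normalized cross-correlation; the synthetic Gaussian wavelets are placeholders for real seismograms:

        import numpy as np

        def cc_delay(trace_a, trace_b, dt):
            """Delay (s) of trace_b relative to trace_a, from the peak of
            their full cross-correlation; positive = trace_b arrives later."""
            a = (trace_a - trace_a.mean()) / trace_a.std()
            b = (trace_b - trace_b.mean()) / trace_b.std()
            cc = np.correlate(a, b, mode="full")
            lag = np.argmax(cc) - (len(b) - 1)
            return -lag * dt

        # Synthetic pair: identical wavelet, second delayed by 5 samples
        t = np.arange(256)
        a = np.exp(-0.5 * ((t - 100) / 6.0) ** 2)
        b = np.exp(-0.5 * ((t - 105) / 6.0) ** 2)
        print(cc_delay(a, b, dt=0.01))  # ~0.05 s

    Double-difference relocation then inverts many such delay measurements for relative hypocentre positions, which is what sharpens the fault-zone image.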

    Proceedings of the 1977 NASA/ISHM Microelectronics Conference

    Current and future requirements for research, development, manufacturing, and education in the field of hybrid microelectronic technology were discussed.

    Digital Image-Based Frameworks for Monitoring and Controlling of Particulate Systems

    Particulate processes are widely used across industries, and most products in the chemical industry today are manufactured as particulates. Previous research and practice show that final product quality can be influenced by particle properties, such as size and shape, which are related to operating conditions. Online characterization of these particles is an important step for maintaining desired product quality in particulate processes. Image-based characterization for the purpose of monitoring and controlling particulate processes is very promising and attractive. The development of a digital image-based framework, in the context of this research, can be envisioned in two parts. One is performing image analysis and designing advanced algorithms for segmentation and texture analysis. The other is formulating and implementing modern predictive tools to establish correlations between texture features and particle characteristics. According to the extent of touching and overlapping between particles in images, two image analysis methods were developed and tested. For slight touching, image segmentation algorithms were developed by introducing Wavelet Transform de-noising and Fuzzy C-means Clustering to detect the touching regions, exploiting the intensity and geometry characteristics of the touching areas. Since individual particles can be identified through image segmentation, particle number, particle equivalent diameter, and size distribution were used as the features. For severe touching and overlapping, texture analysis was carried out by estimating the wavelet energy signature and fractal dimension from a wavelet decomposition of the objects. Predictive models for monitoring and control of particulate processes were then formulated and implemented. Building on the feature extraction properties of the wavelet decomposition, a projection technique such as principal component analysis (PCA) was used to detect off-specification conditions in which the particle mean size deviates from the target value. Furthermore, linear and nonlinear predictive models based on partial least squares (PLS) and artificial neural networks (ANN) were formulated, implemented, and tested on an experimental facility to predict particle characteristics (mean size and standard deviation) from the image texture analysis.
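    The wavelet energy signature used for the heavy-overlap case can be sketched concisely. The following Python sketch uses the PyWavelets library; the choice of the 'db4' wavelet and a three-level decomposition are assumptions, since the report's exact settings are not given here:

        import numpy as np
        import pywt

        def wavelet_energy_signature(image, wavelet="db4", levels=3):
            """Relative energy of each detail sub-band of a 2D wavelet
            decomposition: a texture feature for overlapping particles."""
            coeffs = pywt.wavedec2(image, wavelet, level=levels)
            energies = []
            for detail in coeffs[1:]:          # skip approximation coefficients
                for band in detail:            # (horizontal, vertical, diagonal)
                    energies.append(float(np.sum(np.square(band))))
            energies = np.asarray(energies)
            return energies / energies.sum()   # normalized signature

        # Toy image standing in for a particle snapshot
        print(wavelet_energy_signature(np.random.rand(128, 128)).round(3))

    A vector like this per image is what would feed the PCA monitoring step and the PLS/ANN predictors described above.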