
    Intelligent systems in manufacturing: current developments and future prospects

    Global competition and rapidly changing customer requirements are demanding increasing changes in manufacturing environments. Enterprises are required to constantly redesign their products and continuously reconfigure their manufacturing systems. Traditional approaches to manufacturing systems do not fully satisfy this new situation. Many authors have proposed that artificial intelligence will bring the flexibility and efficiency needed by manufacturing systems. This paper is a review of artificial intelligence techniques used in manufacturing systems. The paper first defines the components of a simplified intelligent manufacturing system (IMS) and the different Artificial Intelligence (AI) techniques to be considered, and then shows how these AI techniques are used for the components of an IMS.

    Memory Footprint Reduction for Operating System Kernels

    In embedded systems, only a limited amount of memory is typically available. Considerable attention is therefore paid to producing compact programs for these systems, and a variety of techniques have been developed to automatically reduce the memory footprint of programs. Until now, these techniques have focused mainly on the application software running on the system, while the operating system has been overlooked. This dissertation describes a number of techniques that make it possible to substantially reduce the memory footprint of an operating system kernel in an automated way. The first step applies compaction transformations at link time. If the hardware and software from which the system is composed are known, further reductions can be obtained. To this end, the kernel is specialised for a particular hardware-software combination: superfluous functionality is detected and removed from the kernel, while the remaining functionality is adapted to the specific usage patterns that can be derived from the hardware and software. Finally, techniques are presented that make it possible to remove rarely or never executed code (for example, code that handles only rarely occurring error conditions) from memory. This code is then loaded only at the moment it is actually needed. For our test system, the combined techniques reduce the memory footprint of a Linux 2.4 kernel by more than 48%.

    Design of On-Chip Self-Testing Signature Register

    Over the last few years, scan test has turned out to be too expensive to implement for industry-standard designs due to increasing test data volume and test time. The test cost of a chip is mainly governed by the resource utilization of Automatic Test Equipment (ATE). It also depends directly on test time, which includes the time required to load the test program, apply test vectors, and analyze the generated test response of the chip. The issues of test time and data volume increasingly lead designers to use on-chip test data compactors, on the input side, the output side, or both. Such techniques significantly address the former issues but have little hold over the increasing number of inputs and outputs under test mode. Further, test pins on the DUT are increasing over the generations, so scan channels on the test floor are falling short in number for placement of such ICs. To address these issues, we introduce an on-chip self-testing signature register. It comprises a response compactor and a comparator: the compactor compacts a large chunk of response data into a small test signature, while the comparator compares this test signature with the desired one. The overall test result for the design is generated on a single output pin. Since no storage of the test response is required, a considerable reduction in ATE memory can be achieved. Also, with only a single pin to be monitored for the test result, the number of tester channels and compare edges on the ATE side is significantly reduced. This cuts down the maintenance and usage cost of the test floor and increases its lifetime. Furthermore, the reduction in test pins gives DFT engineers scope to increase the number of scan chains and thus further reduce test time.
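The compact-and-compare idea is easy to sketch in software. Below is a minimal Python model, assuming a single-input signature register (SISR) built from a 16-bit LFSR; the width, polynomial taps, and function names are illustrative, not the paper's actual hardware design.

```python
# Software sketch (not the paper's circuit) of an on-chip self-testing
# signature register: an LFSR-based single-input signature register (SISR)
# compacts the response bit stream into a short signature, and a comparator
# reduces the result to a single pass/fail pin value.

def sisr_compact(bits, width=16, taps=(15, 13, 12, 10)):
    """Compact a response bit stream into a `width`-bit signature.
    `taps` (illustrative) include bit width-1 so no state bit is discarded."""
    state = 0
    mask = (1 << width) - 1
    for b in bits:
        feedback = b
        for t in taps:
            feedback ^= (state >> t) & 1   # XOR the tapped state bits in
        state = ((state << 1) | feedback) & mask
    return state

def self_test(response_bits, golden_signature):
    """Comparator: the value driven on the single output pin (True = pass)."""
    return sisr_compact(response_bits) == golden_signature
```

Because the compaction is linear over GF(2), a single-bit error in the response stream can never alias to the fault-free signature, although multi-bit errors can in general.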

    Seismic characterisation based on time-frequency spectral analysis

    We present high-resolution time-frequency spectral analysis schemes to better resolve seismic images for the purpose of seismic and petroleum reservoir characterisation. Seismic characterisation is based on the physical properties of the Earth's subsurface media, and these properties are represented implicitly by seismic attributes. Because seismic traces originally presented in the time domain are non-stationary signals, whose properties vary with time, we characterise those signals by obtaining seismic attributes which also vary with time. Among the widely used attributes are spectral attributes calculated through time-frequency decomposition. Time-frequency spectral decomposition methods are employed to capture variations of a signal within the time-frequency domain. These decomposition methods generate a frequency vector at each time sample, referred to as the spectral component. The computed spectral component enables us to explore the additional frequency dimension which exists jointly with the original time dimension, enabling localisation and characterisation of patterns within the seismic section. Conventional time-frequency decomposition methods include the continuous wavelet transform and the Wigner-Ville distribution. These methods suffer from limitations that hinder accurate seismic interpretation. The continuous wavelet transform decomposes signals on a basis of elementary signals which have to be localised in time and frequency, but the method suffers from resolution limitations in the time-frequency spectrum: the spectrum is smeared and often ill-localised. The Wigner-Ville distribution distributes the energy of the signal over the two variables time and frequency and results in highly localised signal components. Yet, the method suffers from spurious cross-term interference due to its quadratic nature. 
This interference is misleading when the spectrum is used for interpretation purposes. For the specific application on seismic data the interference obscures geological features and distorts geophysical details. This thesis focuses on developing high fidelity and high-resolution time-frequency spectral decomposition methods as an extension to the existing conventional methods. These methods are then adopted as means to resolve seismic images for petroleum reservoirs. These methods are validated in terms of physics, robustness, and accurate energy localisation, using an extensive set of synthetic and real data sets including both carbonate and clastic reservoir settings. The novel contributions achieved in this thesis include developing time-frequency analysis algorithms for seismic data, allowing improved interpretation and accurate characterisation of petroleum reservoirs. The first algorithm established in this thesis is the Wigner-Ville distribution (WVD) with an additional masking filter. The standard WVD spectrum has high resolution but suffers the cross-term interference caused by multiple components in the signal. To suppress the cross-term interference, I designed a masking filter based on the spectrum of the smoothed-pseudo WVD (SP-WVD). The original SP-WVD incorporates smoothing filters in both time and frequency directions to suppress the cross-term interference, which reduces the resolution of the time-frequency spectrum. In order to overcome this side-effect, I used the SP-WVD spectrum as a reference to design a masking filter, and apply it to the standard WVD spectrum. Therefore, the mask-filtered WVD (MF-WVD) can preserve the high-resolution feature of the standard WVD while suppressing the cross-term interference as effectively as the SP-WVD. The second developed algorithm in this thesis is the synchrosqueezing wavelet transform (SWT) equipped with a directional filter. 
A transformation algorithm such as the continuous wavelet transform (CWT) might cause smearing in the time-frequency spectrum, i.e. the lack of localisation. The SWT attempts to improve the localisation of the time-frequency spectrum generated by the CWT. The real part of the complex SWT spectrum, after directional filtering, is capable of resolving the stratigraphic boundaries of thin layers within target reservoirs. In terms of seismic characterisation, I tested the high-resolution spectral results on a complex clastic reservoir interbedded with coal seams from the Ordos basin, northern China. I used the spectral results generated using the MF-WVD method to facilitate the interpretation of the sand distribution within the dataset. In another implementation I used the SWT spectral data results and the original seismic data together as the input to a deep convolutional neural network (dCNN), to track the horizons within a 3D volume. Using these application-based procedures, I have effectively extracted the spatial variation and the thickness of thinly layered sandstone in a coal-bearing reservoir. I also tested the algorithm on a carbonate reservoir from the Tarim basin, western China. I used the spectrum generated by the synchrosqueezing wavelet transform equipped with directional filtering to characterise faults, karsts, and direct hydrocarbon indicators within the reservoir. Finally, I investigated pore-pressure prediction in carbonate layers. Pore-pressure variation generates subtle changes in the P-wave velocity of carbonate rocks. This suggests that existing empirical relations capable of predicting pore-pressure in clastic rocks are unsuitable for the prediction in carbonate rocks. I implemented the prediction based on the P-wave velocity and the wavelet transform multi-resolution analysis (WT-MRA). The WT-MRA method can unfold information within the frequency domain via decomposing the P-wave velocity. 
This enables us to extract and amplify hidden information embedded in the signal. Using Biot's theory, WT-MRA decomposition results can be divided into contributions from the pore-fluid and the rock framework. Therefore, I proposed a pore-pressure prediction model which is based on the pore-fluid contribution, calculated through WT-MRA, to the P-wave velocity.
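The cross-term artefact described above can be reproduced with a textbook discrete Wigner-Ville distribution. The sketch below is a plain NumPy illustration, not the thesis's MF-WVD code; the thesis's masking filter, derived from a smoothed-pseudo WVD spectrum, would then be applied on top of `W` to suppress the cross-terms while keeping the resolution.

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a complex (analytic) signal.
    Returns an (N, N) real array: rows are time samples, columns frequency bins."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.empty((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)        # largest symmetric lag at time n
        tau = np.arange(kmax + 1)
        r = np.zeros(N, dtype=complex)
        r[tau] = x[n + tau] * np.conj(x[n - tau])   # instantaneous autocorrelation
        if kmax > 0:
            r[-tau[1:]] = np.conj(r[tau[1:]])       # negative lags, wrapped
        W[n] = np.fft.fft(r).real       # conjugate symmetry => real spectrum
    return W

# A single tone concentrates along one frequency row by row; summing two
# tones additionally produces an oscillating cross-term midway between
# them -- the quadratic artefact the mask-filtered WVD is designed to remove.
```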

    Diamond Dicing

    In OLAP, analysts often select an interesting sample of the data. For example, an analyst might focus on products bringing revenues of at least 100 000 dollars, or on shops having sales greater than 400 000 dollars. However, current systems do not allow the application of both of these thresholds simultaneously, selecting products and shops satisfying both thresholds. For such purposes, we introduce the diamond cube operator, filling a gap among existing data warehouse operations. Because of the interaction between dimensions, the computation of diamond cubes is challenging. We compare and test various algorithms on large data sets of more than 100 million facts. We find that while it is possible to implement diamonds in SQL, it is inefficient. Indeed, our custom implementation can be a hundred times faster than popular database engines (including a row-store and a column-store).
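The fixpoint nature of the operator can be illustrated with a naive sketch: keep deleting products and shops whose totals fall below their thresholds, recomputing totals, until both constraints hold simultaneously. Names and thresholds here are illustrative, not the paper's exact formulation or its efficient algorithms.

```python
# Naive diamond-dicing sketch: facts are (product, shop, amount) tuples.
# Deleting a shop can drop a product below its threshold and vice versa,
# which is exactly the dimension interaction that makes computation hard.

from collections import defaultdict

def diamond(facts, product_min, shop_min):
    """Return the facts surviving both thresholds simultaneously."""
    facts = list(facts)
    while True:
        by_product = defaultdict(float)
        by_shop = defaultdict(float)
        for p, s, amt in facts:
            by_product[p] += amt
            by_shop[s] += amt
        kept = [(p, s, a) for p, s, a in facts
                if by_product[p] >= product_min and by_shop[s] >= shop_min]
        if len(kept) == len(facts):     # fixpoint: nothing more to prune
            return kept
        facts = kept
```

For example, a shop that clears its threshold only thanks to a low-revenue product can lose that product in one round and be pruned itself in the next.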

    Fast online predictive compression of radio astronomy data

    This report investigates the fast, lossless compression of 32-bit single-precision floating-point values. High-speed compression is critical in the context of the MeerKAT radio telescope currently under construction in Southern Africa and Australia, which will produce data at rates of up to 1 petabyte every 20 seconds. The compression technique being investigated is based on predictive compression, which has proven successful at achieving high-speed compression in previous research. Several different predictive techniques (which include polynomial extrapolation), along with CPU- and GPU-based parallelization approaches, are discussed. The implementation successfully achieves throughput rates in excess of 6 GiB/s for compression and much higher rates for decompression on a 64-core AMD Opteron machine, achieving file-size reductions of, on average, 9%. Furthermore, the results of concurrent investigations into block-based parallel Huffman encoding and zero-length encoding are compared to the predictive scheme; it was found that the predictive scheme obtains approximately 4%-5% better compression ratios than the zero-length encoder and is 25 times faster than Huffman encoding on an Intel Xeon E5 processor. The scheme may be well suited to address the large network bandwidth requirements of the MeerKAT project.
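A toy version of the predictive idea, assuming a simple linear (two-point polynomial) extrapolation predictor: XOR the 32-bit pattern of each sample with the predicted pattern, so that well-predicted samples leave residuals with few significant bytes that a back-end coder can store cheaply. This illustrates the principle only; the thesis's scheme and its parallelization are far more elaborate.

```python
# Toy predictive float compressor (illustrative, not the thesis's code):
# predict each f32 by linear extrapolation from the two previous samples,
# XOR actual vs. predicted bit patterns, and record how many bytes of the
# residual are non-zero (a length prefix a real coder would exploit).

import struct

def to_bits(x):
    return struct.unpack("<I", struct.pack("<f", x))[0]

def from_bits(b):
    return struct.unpack("<f", struct.pack("<I", b))[0]

def compress(samples):
    out = []
    prev, prev2 = 0.0, 0.0
    for x in samples:
        pred = 2 * prev - prev2              # linear extrapolation
        residual = to_bits(x) ^ to_bits(pred)
        nbytes = (residual.bit_length() + 7) // 8
        out.append((nbytes, residual))       # length prefix + residual
        # predict from the f32-rounded value, matching the decoder exactly
        prev2, prev = prev, from_bits(to_bits(x))
    return out

def decompress(tokens):
    res = []
    prev, prev2 = 0.0, 0.0
    for _, residual in tokens:
        pred = 2 * prev - prev2
        x = from_bits(residual ^ to_bits(pred))
        res.append(x)
        prev2, prev = prev, x
    return res
```

Smooth signals predict well, so most residuals shrink to zero or one byte; the XOR makes the scheme exactly lossless regardless of prediction quality.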

    Early compaction history of marine siliciclastic sediments.

    Differential compaction occurs within many sedimentary settings, such as alluvial and deltaic deposition, but it is within the submarine fan environment where the process is most effective due to the very high depositional porosities of the muds found there. Additionally, the grain size of siliciclastic sediments within the submarine fan environment varies rapidly both horizontally and vertically, and hence the effect of differential compaction control on the depositional geometry and arrangement needs to be examined and modelled. It is also important to ascertain the rate at which sediments compact when buried, and whether compaction is complete at the end of deposition or whether it requires additional time to achieve this state. Sea-floor topography can be created if the latter case is true, and could influence subsequent deposition. Alternatively, if sea-floor topography is not created, the major control upon subsequent deposition may be the compactability of the underlying section. Both controls will favour deposition of successive coarse clastic units above areas of fine-grained sediments, i.e. sand above shale rather than sand above sand. The Palaeocene sediments of the Central North Sea in the Montrose - Arbroath area (Blocks 22/17 and 22/18), combined with outcrop studies in southern California and New Mexico, have been used to assess the control of differential compaction on sediment distribution in a deep-sea fan setting. Differential compaction affects the Montrose - Arbroath area on a variety of scales. Firstly, differential compaction of the entire Palaeocene section across the underlying Forties - Montrose High induces structure. At a smaller scale, differential compaction may form a considerable control upon the spatial distribution of submarine fan channels and lobes that form the reservoir section throughout the area, and therefore the areal distribution of the oilfields themselves. 
Finally, differential compaction may affect the distribution pattern of individual turbidites within such channel systems, thus forming a fine control upon the distribution of sands and shales within the reservoir. Fieldwork on submarine fan deposits in southern California has highlighted further complications to differential compaction that need to be addressed during the modelling process. Sedimentary processes such as basal loading and slumping are very common in such deposits, and both can affect the compactional process to differing degrees. Results obtained from the modelling of stratal patterns observed in New Mexico provide information on the timing of differential compaction. It is suggested that compaction of sediments, even during early burial, requires a time interval often greater than the period of deposition, resulting in post-depositional compaction and the production of near-surface overpressure.
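The differential-compaction effect the abstract builds on can be illustrated with standard compaction modelling. The sketch below (not the thesis's model) uses an Athy-type exponential porosity-depth curve with illustrative coefficients for shale and sand, and conserves grain volume to find a layer's thickness after burial; the much higher depositional porosity of mud makes the shale layer thin substantially more than the sand.

```python
# Illustrative Athy-type compaction model: porosity phi(z) = phi0 * exp(-c z).
# Coefficients are illustrative values for shale and sand, chosen only to
# demonstrate differential compaction, not calibrated to the study area.

import math

SHALE = (0.63, 0.51)   # (surface porosity, decay constant per km)
SAND = (0.49, 0.27)

def solid_thickness(phi0, c, z_top, z_base):
    """Grain (solid) thickness in a layer: integral of (1 - phi) dz."""
    return (z_base - z_top) + (phi0 / c) * (
        math.exp(-c * z_base) - math.exp(-c * z_top))

def compacted_thickness(phi0, c, t0, z_top, tol=1e-9):
    """Thickness of a layer deposited t0 km thick at the surface, after its
    top is buried to z_top km, conserving solid volume (bisection search)."""
    target = solid_thickness(phi0, c, 0.0, t0)
    lo, hi = target, t0      # solid thickness <= compacted thickness <= t0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if solid_thickness(phi0, c, z_top, z_top + mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Burying a 100 m layer to 1 km with these numbers thins the shale by roughly twice the fraction of the sand, which is the contrast that drives differential compaction above channel/overbank boundaries.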

    Integration of well data into dynamic reservoir interpretation using multiple seismic surveys

    This thesis develops and tests a new technique which integrates information from well production and 4D seismic data directly in the data domain. This method is of value when seismic data are acquired by multiple surveys over the same area of a hydrocarbon reservoir. Sequences of 4D seismic changes can then be extracted over different time intervals from multiply repeated seismic surveys, and these are cross-correlated with identical time sequences of cumulative fluid volumes produced or injected from wells. The technique is applied to frequently repeated seismic surveys from three North Sea fields, including two compartmentalised reservoirs, the Schiehallion and Norne fields, and a compacting reservoir, the Valhall field. Maps of well-to-seismic cross-correlations are shown to produce a strong, localised and stable signal in the connected neighbourhood of individual wells. The correlation signatures from the Schiehallion and Norne applications investigated in this thesis are the consequence of pressure performance due to reservoir compartmentalisation. In the Schiehallion study, the mapped results help identify the production signal related only to individual wells, thus leading to a better delineation of reservoir compartments. In the Norne study in particular, an extra reservoir volume connected to the original segment is highlighted by the technique. The reservoir simulation model is subsequently updated and a better match between the observed and simulated data can be achieved. The application to the compacting Valhall field involves using data from the Life of Field Seismic project, for which the 4D signature is dominated by compaction-assisted pressure depletion. For these data, both AI and time-shift attributes are found to have a remarkably consistent correlation with the well activity for selected groups of wells. 
Further, maps of these results possess sufficient fine-scale detail to resolve and disentangle interfering seismic responses generated by closely spaced wells and localised zones of gas breakout along long horizontal producers. These case studies demonstrate our proposed methodology of uniting well data and 4D seismic, and confirm that it provides an insightful product for dynamic interpretation of the producing reservoir.
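The core mapping step can be sketched as a per-pixel Pearson correlation between the sequence of 4D seismic changes and the well's cumulative produced or injected volumes over the same intervals. The `correlation_map` helper below is a hypothetical illustration of that idea, not the thesis's code; array shapes and names are assumptions.

```python
# Sketch of a well-to-seismic cross-correlation map (illustrative only).
import numpy as np

def correlation_map(seismic_changes, well_volumes):
    """seismic_changes: (T, ny, nx) 4D-difference attribute per time interval;
    well_volumes: (T,) cumulative volumes over the same intervals.
    Returns an (ny, nx) map of Pearson correlations with the well activity."""
    s = seismic_changes - seismic_changes.mean(axis=0)   # de-mean in time
    w = well_volumes - well_volumes.mean()
    num = np.einsum("t,tyx->yx", w, s)                   # covariance numerator
    den = np.sqrt((s ** 2).sum(axis=0)) * np.sqrt((w ** 2).sum())
    # pixels with no 4D change (zero variance) get correlation 0
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

Pixels hydraulically connected to the well track its cumulative volumes and map to correlations near +-1, while unconnected pixels decorrelate, which is what localises the signal around individual wells and delineates compartments.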