
    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur, Belgium, from Wednesday, August 27th to Friday, August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both the hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1

    Intelligent numerical software for MIMD computers

    For most scientific and engineering problems simulated on computers, solving problems of computational mathematics with approximately specified initial data constitutes an intermediate or final stage. Basic problems of computational mathematics include the investigation and solution of systems of linear algebraic equations, the evaluation of eigenvalues and eigenvectors of matrices, the solution of systems of non-linear equations, and the numerical integration of initial-value problems for systems of ordinary differential equations.
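    The problem classes listed in this abstract can be illustrated with a short NumPy/SciPy sketch; the matrices, equations, and library choices below are illustrative and are not taken from the software described here.

        import numpy as np
        from scipy.optimize import fsolve
        from scipy.integrate import solve_ivp

        # Linear algebraic system A x = b
        A = np.array([[4.0, 1.0], [2.0, 3.0]])
        b = np.array([1.0, 2.0])
        x = np.linalg.solve(A, b)

        # Eigenvalues and eigenvectors of the matrix A
        eigvals, eigvecs = np.linalg.eig(A)

        # System of non-linear equations: x0^2 + x1^2 = 1 and x0 - x1 = 0
        root = fsolve(lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]], [1.0, 0.5])

        # Initial-value problem y' = -2 y, y(0) = 1, integrated over t in [0, 5]
        sol = solve_ivp(lambda t, y: -2.0 * y, (0.0, 5.0), [1.0])

        print(x, eigvals, root, sol.y[0, -1])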

    High-resolution imaging methods in array signal processing


    Near-field blast vibration monitoring and analysis for prediction of blast damage in sublevel open stoping

    The work presented in this thesis investigates near-field blast vibration monitoring, analysis, interpretation and blast damage prediction in sublevel open stoping geometries. As part of the investigation, seven stopes at two Australian sublevel open stoping mines were used as case studies. The seven stopes represented significant ranges in stope shapes, sizes, geotechnical concerns, extraction sequences, stress conditions, blasting geometries and rock mass properties.
    The blast damage investigations at the two mine sites had three main components. The first component was rock mass characterisation, which was performed using static intact rock testing results, discontinuity mapping, mining-induced static stress modelling and geophysical wave propagation approaches. The rock mass characterisation techniques identified localised and large-scale variations in rock mass properties and wave propagation behaviours in relation to specified monitoring orientations and mining areas. The other components of the blast damage investigations were blast vibration monitoring and analysis of production blasting in the seven stopes, and stope performance assessments.
    The mine-based data collection period for the case studies lasted from January 2006 to February 2008. A key element of the data collection program was near-field blast vibration monitoring of production blasts within the seven study stopes. The instrumentation program consisted of 41 tri-axial accelerometers and geophone sondes, installed at distances of 4 m to 16 m from the stope perimeters. A total of 59 production firings were monitored over the course of the blast vibration monitoring program. The monitoring program resulted in a data set of over 5000 single-hole blast vibration waveforms, representing two different blasthole diameters (89 mm and 102 mm), six different explosive formulations and a wide range in charge weights, source-to-sensor distances, blasthole orientations and blasting geometries.
    The data collected in the blast vibration monitoring program were used to compare various near-field charge weight scaling relationships, such as the Scaled Distance and Holmberg-Persson prediction models. The results of these analyses identified that no single charge weight scaling model could dependably predict the measured near-field peak amplitudes for complex blasting geometries. Therefore, the general form of the charge weight scaling relationship was adopted in conjunction with non-linear multivariable estimation techniques to analyse the data collected in the study stopes and to perform forward vibration predictions for the case studies.
    Observed variations in the recorded near-field waveforms identified that instantaneous peak amplitudes such as peak particle velocity (PPV) did not accurately describe the characteristics of a large portion of the data. This was due to significant variations in frequency spectra, variable distributions of energy throughout the wave durations and coupling of wave types (e.g. P- and S-wave coupling). The wave properties that have been proposed to more accurately characterise complex near-field vibrations are the total wave energy density (ED_(W-tot)), the stored strain energy density (ED_(W-SS)) and the wave-induced mean normal dynamic strain (ε_(W-MN)). These wave properties consider the activity of the blast-induced wave at a point in the rock mass over the entire wave duration instead of the instantaneous amplitude.
    A new analytical approach has been proposed to predict blast-induced rock mass damage using rock mass characterisation data, blast vibration monitoring results and rock fracture criteria. The two-component approach separately predicts the extent of blast-induced damage through fresh fracturing of intact rock and the extent from discontinuity extension. Two separate damage criteria are proposed for the intact rock portion of the rock mass, based on tensile and compressive fracture strain energy densities and on compressive and tensile fracture strains. The single criterion for extension of existing discontinuities is based on the fracture energy density required to activate all macro-fractures in a unit volume of the rock mass.
    The proposed energy-based criteria for intact rock fracture and extension of discontinuities integrate strain rate effects in relation to material strength. The strain-based criterion for intact rock fracture integrates the existing mining-induced static strain magnitudes. These factors have not been explicitly considered in existing empirical or analytical blast damage prediction models. The proposed blast damage prediction approach has been applied to two stopes during the two mine site case studies.
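    For context, a common general form of the charge weight scaling relationship mentioned in this abstract is the square-root scaled-distance relation PPV = K (R / sqrt(W))^(-beta), fitted to measured single-hole peaks by non-linear least squares. The sketch below is a minimal illustration of that fit; the site constants, sample values, and prediction point are assumptions for illustration and are not the fitted models from this thesis.

        import numpy as np
        from scipy.optimize import curve_fit

        def scaled_distance_ppv(X, K, beta):
            """Square-root charge weight scaling: PPV = K * (R / sqrt(W))**(-beta)."""
            R, W = X  # source-to-sensor distance (m) and charge weight per delay (kg)
            return K * (R / np.sqrt(W)) ** (-beta)

        # Illustrative single-hole records: distance R (m), charge weight W (kg), measured PPV (mm/s)
        R = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 16.0])
        W = np.array([25.0, 25.0, 40.0, 40.0, 60.0, 60.0])
        ppv = np.array([900.0, 480.0, 420.0, 300.0, 260.0, 170.0])

        # Estimate the site constants K and beta by non-linear least squares
        (K, beta), _ = curve_fit(scaled_distance_ppv, (R, W), ppv, p0=(1000.0, 1.5))

        # Forward prediction for a hypothetical blasthole at 9 m with a 50 kg charge
        ppv_pred = scaled_distance_ppv((np.array([9.0]), np.array([50.0])), K, beta)
        print(f"K = {K:.1f}, beta = {beta:.2f}, predicted PPV = {ppv_pred[0]:.0f} mm/s")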

    Advancements in Measuring and Modeling the Mechanical and Hydrological Properties of Snow and Firn: Multi-sensor Analysis, Integration, and Algorithm Development

    Estimating snow mechanical properties – such as elastic modulus, stiffness, and strength – is important for understanding how effectively a vehicle can travel over snow-covered terrain. Vehicle instrumentation data and observations of the snowpack are valuable for improving estimates of winter vehicle performance. Combining in-situ and remotely sensed snow observations, driver input, and vehicle performance sensors requires several techniques of data integration. I explored correlations between measurements spanning millimeter to meter scales, beginning with the SnowMicroPenetrometer (SMP) and instruments applied to snow that were originally designed for measuring the load-bearing capacity and the compressive and shear strengths of roads and soils. The spatial distribution of snow's mechanical properties is still largely unknown. From this initial work, I determined that snow density remains a useful proxy for snowpack strength. To measure snow density, I applied multi-sensor electromagnetic methods. Using the spatially distributed snowpack, terrain, and vegetation information developed in the subsequent chapters, I developed an over-snow vehicle performance model. To measure vehicle performance, I combined driver and vehicle data into a metric coined the Normalized Difference Mobility Index (NDMI). Then, I applied regression methods to distribute NDMI from spatial snow, terrain, and vegetation properties. Mobility prediction is useful for the strategic advancement of warfighting in cold regions.
    The security of water resources is climatologically inequitable, and water stress causes international conflict. Water resources derived from snow are essential for modern societies in climates where snow is the predominant source of precipitation, such as the western United States. Snow water equivalent (SWE) is a critical parameter for yearly water supply forecasting and can be calculated by multiplying the snow depth by the snow density. In this work, I combined high-spatial-resolution light detection and ranging (LiDAR) measurements of snow depth with ground-penetrating radar (GPR) measurements of two-way travel time (TWT) to solve for snow density. Then, using LiDAR-derived terrain and vegetation features as predictors in a multiple linear regression, the density observations were distributed across the SnowEx 2020 study area at Grand Mesa, Colorado. The modeled density resolved detailed patterns that agree with the known interactions of snow with wind, terrain, and vegetation. The integration of radar and LiDAR sensors shows promise as a technique for estimating SWE across entire river basins and for evaluating observational or physics-based snow-density models. Accurate estimation of SWE is a means of water security.
    In our changing climate, snow and ice mass are being permanently lost from the cryosphere. Mass balance is an indicator of the (in)stability of glaciers and ice sheets. Surface mass balance (SMB) may be estimated by multiplying the thickness of an annual snowpack layer by its density. However, unlike applications in seasonal snowpack, the ages of annual firn layers are unknown. To estimate SMB, I modeled firn depth, density, and age using empirical and numerical approaches. The annual SMB history shows cyclical patterns representing the combination of atmospheric, oceanic, and anthropogenic climate forcing, which may serve as evaluation or assimilation data in climate model retrievals of SMB.
    The advancements made using the SMP, multi-channel GPR arrays, and airborne LiDAR and radar within this dissertation have made it possible to spatially estimate snow depth, density, and water equivalent in seasonal snow, glaciers, and ice sheets. Open access, process automation, repeatability, and accuracy were key design parameters of the analyses and algorithms developed within this work. The many different campaigns, objectives, and outcomes composing this research documented the successes and limitations of multi-sensor estimation techniques for a broad range of cryosphere applications.
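    The depth-plus-travel-time density retrieval described in this abstract can be sketched as follows, assuming dry snow and a generic empirical permittivity-density relation of the form eps = (1 + 0.845 * rho)^2 with rho in g/cm^3; the chosen relation, coefficients, and sample values are illustrative assumptions rather than the calibrated models used in the dissertation.

        import numpy as np

        C = 0.299792458  # speed of light in vacuum, m/ns

        def snow_density_from_depth_and_twt(depth_m, twt_ns):
            """Invert co-located LiDAR snow depth and GPR two-way travel time for dry-snow density (kg/m^3)."""
            velocity = 2.0 * depth_m / twt_ns         # radar-wave speed in the snowpack, m/ns
            eps = (C / velocity) ** 2                 # relative permittivity of the snow
            rho_g_cm3 = (np.sqrt(eps) - 1.0) / 0.845  # assumed dry-snow permittivity-density relation
            return rho_g_cm3 * 1000.0

        # Illustrative co-located observations
        depth = np.array([1.20, 1.55, 0.90])  # LiDAR snow depth, m
        twt = np.array([10.5, 13.2, 7.6])     # GPR two-way travel time, ns

        density = snow_density_from_depth_and_twt(depth, twt)  # kg/m^3
        swe_mm = depth * density  # SWE in mm w.e.: depth [m] * density [kg/m^3] / 1000 [kg/m^3] * 1000 [mm/m]
        print(density, swe_mm)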

    Time-domain Compressive Beamforming for Medical Ultrasound Imaging

    Over the past 10 years, Compressive Sensing has gained a lot of visibility in the medical imaging research community. Its most compelling feature is its ability to perform perfect reconstructions of under-sampled signals using l1-minimization. Of course, that counter-intuitive feature has a cost: the missing information is compensated for by a priori knowledge of the signal, under certain mathematical conditions. This technology is currently used in some commercial MRI scanners to increase the acquisition rate, thereby decreasing patient discomfort while increasing patient turnover. For echography, the applications could range from fast 3D echocardiography to simplified, cheaper echography systems.
    Real-time ultrasound imaging scanners have been available for nearly 50 years. During these 50 years of existence, much has changed in their architecture, electronics, and technologies. However, one component remains: the beamformer. From analog beamformers to software beamformers, the technology has evolved and brought much diversity to the world of beam formation. Currently, most commercial scanners use several focused ultrasonic pulses to probe the tissue. The time between two consecutive focused pulses is not compressible, which limits the frame rate: one must wait for a pulse to propagate back and forth between the probe and the deepest point imaged before firing a new pulse.
    In this work, we outline the development of a novel software beamforming technique that uses Compressive Sensing. Time-domain Compressive Beamforming (t-CBF) uses computational models and regularization to reconstruct de-cluttered ultrasound images. One of the main features of t-CBF is its use of only one transmit wave to insonify the tissue. Single-wave imaging brings high frame rates to the modality, for example allowing a physician to see precisely the movements of the heart walls or valves during a heart cycle. t-CBF takes into account the geometry of the probe as well as its physical parameters to improve resolution and attenuate artifacts commonly seen in single-wave imaging, such as side lobes.
    In this thesis, we define a mathematical framework for the beamforming of ultrasonic data compatible with Compressive Sensing. Then, we investigate its capabilities in terms of resolution and super-resolution on simple simulations. Finally, we adapt t-CBF to real-life ultrasonic data. In particular, we reconstruct 2D cardiac images at a frame rate 100-fold higher than typical values.
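    As background, the l1-minimization recovery that Compressive Sensing relies on can be sketched with a generic iterative soft-thresholding (ISTA) solver applied to a random Gaussian sensing matrix. The dimensions, regularization weight, and sensing model below are illustrative and are unrelated to the t-CBF wave-propagation models developed in the thesis.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=500):
            """Recover a sparse x from under-sampled y = A @ x via l1-regularized least squares (ISTA)."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the data-fidelity gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x - step * (A.T @ (A @ x - y))                        # gradient step on ||A x - y||^2 / 2
                x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft-thresholding (l1 proximal step)
            return x

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 5                          # signal length, number of measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
        y = A @ x_true                                # under-sampled measurements

        x_hat = ista(A, y)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))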