Study and simulation of low rate video coding schemes
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color-mapped images, a robust coding scheme for packet video, recursively indexed differential pulse code modulation, an image compression technique for use on token ring networks, and joint source/channel coder design.
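Differential pulse code modulation, one of the topics listed above, can be illustrated with a minimal sketch. The uniform quantizer and function names here are illustrative assumptions, not the report's actual scheme.

```python
# Minimal DPCM sketch: encode a sequence as quantized differences from a
# running prediction; the decoder accumulates the same differences.
# The uniform quantizer step is an illustrative assumption.

def dpcm_encode(samples, step=1):
    """Encode samples as quantized prediction residuals."""
    pred = 0
    codes = []
    for s in samples:
        diff = s - pred            # prediction residual
        q = round(diff / step)     # uniform quantization of the residual
        codes.append(q)
        pred += q * step           # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=1):
    """Invert the encoder by accumulating dequantized differences."""
    pred = 0
    out = []
    for q in codes:
        pred += q * step
        out.append(pred)
    return out

# With integer samples and step=1 the round trip is lossless:
restored = dpcm_decode(dpcm_encode([3, 5, 4, 4, 7]))  # [3, 5, 4, 4, 7]
```

Because only residuals are transmitted, slowly varying signals compress to small codes; the "recursively indexed" variant in the report refines how those residuals are indexed.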
Measuring blood flow and pro-inflammatory changes in the rabbit aorta
Atherosclerosis is a chronic inflammatory disease that develops as a consequence of progressive entrapment of low-density lipoprotein, fibrous proteins and inflammatory cells in the arterial intima. Once triggered, a myriad of inflammatory and atherogenic factors mediate disease progression. However, the role of pro-inflammatory activity in the initiation of atherogenesis, and its relation to altered mechanical stresses acting on the arterial wall, is unclear. Estimation of wall shear stress (WSS) and the inflammatory mediator NF-κB is consequently useful. In this thesis novel ultrasound tools for accurate measurement of spatiotemporally varying 2D and 3D blood flow, with and without the use of contrast agents, have been developed. This allowed for the first time accurate, broad-view quantification of WSS around branches of the rabbit abdominal aorta. A thorough review of the evidence for a relationship between flow, NF-κB and disease was performed, which highlighted discrepancies in the current literature and was used to guide the study design. Subsequently, methods for the measurement and colocalization of the spatial distribution of NF-κB, arterial permeability and nuclear morphology in the aorta of New Zealand White rabbits were developed. It was demonstrated that endothelial pro-inflammatory changes are spatially correlated with patterns of WSS, nuclear morphology and arterial permeability in vivo in the rabbit descending and abdominal aorta. The data are consistent with a causal chain between WSS, macromolecule uptake, inflammation and disease, and with the hypothesis that lipids are deposited first, through flow-mediated naturally occurring transmigration that, in excessive amounts, leads to subsequent inflammation and disease.
Ray Tracing Gems
This book is a must-have for anyone serious about rendering in real time. With the announcement of new ray tracing APIs and hardware to support them, developers can easily create real-time applications with ray tracing as a core component. As ray tracing on the GPU becomes faster, it will play a more central role in real-time rendering. Ray Tracing Gems provides key building blocks for developers of games, architectural applications, visualizations, and more. Experts in rendering share their knowledge by explaining everything from nitty-gritty techniques that will improve any ray tracer to mastery of the new capabilities of current and future hardware. What you'll learn: The latest ray tracing techniques for developing real-time applications in multiple domains Guidance, advice, and best practices for rendering applications with Microsoft DirectX Raytracing (DXR) How to implement high-performance graphics for interactive visualizations, games, simulations, and more Who this book is for: Developers who are looking to leverage the latest APIs and GPU technology for real-time rendering and ray tracing Students looking to learn about best practices in these areas Enthusiasts who want to understand and experiment with their new GPU
Evaluation of Stereoscopic Camera Systems for Non-Intrusive Spatial Free Surface Measurement in Coastal Research Laboratories
Physical modelling is instrumental to the progression of coastal engineering research and our understanding of the offshore and nearshore environments. Scaled models are designed and built to be tested in coastal research laboratories, where a wave basin or flume generates the desired wave conditions for experimentation. The surrounding hydrodynamics of the research specimen are measured by deployed wave gauges that collect high-quality water surface elevation data. These instruments are constrained to measuring a single, stationary location and often require arrays of multiple gauges when it is necessary to collect surface elevation data at a high spatial resolution. Hydrodynamic disruption by the support framing and gauges can be significant, and the complete spatial variability of the wave field remains unknown regardless of the array size. When several reflective contours are introduced to a wave system, such as in port basins or with multiple specimens, spatial variability can be amplified and characterizing wave height with a single value is inadequate. The need therefore arises for a non-intrusive spatial free surface measurement system that is capable of capturing this variability. This thesis evaluates two stereoscopic video measurement systems through multiple experiments in the Directional Wave Basin at the O.H. Hinsdale Wave Research Laboratory at Oregon State University. Both systems are tested in a variety of wave conditions and validated using traditional point measurement instruments such as resistive and ultrasonic wave gauges. Their ability to measure a 6 m by 6 m area in the basin is confirmed and further results are discussed.
Stereoscopic measurement requires the matching of pixels from multiple calibrated and synchronized video frames through image processing techniques. The positional difference between objects in each frame can then be used to calculate depth and obtain a three-dimensional point cloud of the cameras’ overlapping views. Stereo matching algorithms can have difficulty recognizing smooth, translucent water surfaces; therefore texturization or the addition of seeding material is often required for wave measurement within a laboratory. Seeding can disrupt instrumentation and wave propagation and may not always be feasible, so a system capable of non-intrusive free surface measurement was prioritized. Two existing stereo measurement systems were selected for this study: the Wave Acquisition Stereo System (WASS), and the Intel® RealSense™ D455 Depth Camera. WASS is an open-source video post-processing pipeline which utilizes stereo matching algorithms developed for use in the open ocean. Multiple GoPro® camera pairs were calibrated and used for video collection for WASS. The Intel® RealSense™ is a consumer-grade active infrared stereo camera that internally processes and displays sensor data in real time. It is a stand-alone system that provides hardware and software to the user, so no additional equipment was needed. Methodologies were developed for both systems’ calibration, data synchronization, and post-processing. The camera positioning, lighting, and wave conditions were optimized in a series of preliminary experiments before each system was thoroughly tested in a variety of regular and irregular waves.
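The disparity-to-depth relation underlying both systems can be sketched briefly. This is the standard geometry for a rectified, calibrated stereo pair; the focal length and baseline values below are hypothetical, not those of the WASS or RealSense™ rigs.

```python
# Standard depth-from-disparity relation for a rectified stereo pair:
# Z = f * B / d, with focal length f in pixels, baseline B in metres,
# and disparity d in pixels. Parameter values are illustrative only.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Return depth in metres for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

# A point matched 40 px apart, with an 800 px focal length and 0.10 m baseline:
z = depth_from_disparity(40, focal_px=800, baseline_m=0.10)  # 2.0 m
```

Repeating this over every matched pixel yields the three-dimensional point cloud described above, which is why smooth, textureless water defeats the matcher: without texture there is no reliable disparity to invert.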
It was found that WASS relied heavily on wave breaking for water surface recognition, and much of the data associated with unbroken waves was unusable. The Intel® system was capable of reliably capturing the water surface regardless of wave breaking. Neither system could easily differentiate between objects and the water surface, so instrumentation in view greatly affected data quality. Both datasets contained high frequency noise, but WASS produced lower quality data than the Intel® system. Therefore, the Intel® system’s dataset was selected for a more extensive spatial time and frequency domain analysis. Wave gauges in frame of the camera were used for measurement validation, and the average error across all wave cases was 6.81% and 5.21% for mean and significant wave heights, respectively. More importantly, the camera successfully captured a high spatial resolution of wave height variability that the array of point gauges was not able to measure. A BDM directionality analysis was completed on both the gauge and Intel® camera measurements to output directional wave spectra, which showed good agreement.
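The significant wave height used in the validation above is conventionally estimated from the variance of the surface elevation series. A hedged sketch of that spectral estimate, Hm0 = 4·std(η), follows; the sample series is synthetic, not data from the experiments.

```python
import math

# Spectral estimate of significant wave height from a surface elevation
# time series: Hm0 = 4 * standard deviation of eta. The sinusoidal test
# series below is an illustrative assumption, not measured data.

def significant_wave_height(eta):
    """Hm0 = 4 * population standard deviation of the elevation series."""
    n = len(eta)
    mean = sum(eta) / n
    var = sum((e - mean) ** 2 for e in eta) / n
    return 4.0 * math.sqrt(var)

# A sinusoid of amplitude a has std a/sqrt(2), so Hm0 = 2*sqrt(2)*a.
eta = [0.5 * math.sin(2 * math.pi * t / 100) for t in range(1000)]
hm0 = significant_wave_height(eta)  # ≈ 1.414 m for a 0.5 m amplitude
```

A point gauge yields one such series at one location; a calibrated stereo camera yields one per pixel, which is what makes the spatial wave height maps discussed above possible.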
Both systems were able to spatially reconstruct the water surface of the basin without the need for seeding material. The Intel® system was easily deployed and more versatile in wave conditions without breaking, but its measurement distance is constrained by the geometry of the sensor housing. The WASS system was more difficult to use, but its customizability allowed for hardware selection and the potential to measure a larger water surface area.
Low Latency Rendering with Dataflow Architectures
The research presented in this thesis concerns latency in VR and synthetic environments. Latency is the end-to-end delay experienced by the user of an interactive computer system, between their physical actions and the perceived response to these actions. Latency is a product of the various processing, transport and buffering delays present in any current computer system. For many computer-mediated applications, latency can be distracting, but it is not critical to the utility of the application. Synthetic environments, on the other hand, attempt to facilitate direct interaction with a digitised world. Direct interaction here implies the formation of a sensorimotor loop between the user and the digitised world - that is, the user makes predictions about how their actions affect the world, and sees these predictions realised. By facilitating the formation of this loop, the synthetic environment allows users to directly sense the digitised world, rather than the interface, and induces perceptions, such as that of the digital world existing as a distinct physical place. This has many applications for knowledge transfer and efficient interaction through the use of enhanced communication cues. The complication is that the formation of the sensorimotor loop that underpins this is highly dependent on the fidelity of the virtual stimuli, including latency. The main research questions we ask are how the characteristics of dataflow computing can be leveraged to improve the temporal fidelity of the visual stimuli, and what implications this has on other aspects of the fidelity. Secondarily, we ask what effects latency itself has on user interaction. We test the effects of latency on physical interaction at levels previously hypothesized but unexplored. We also test for a previously unconsidered effect of latency on higher-level cognitive functions.
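The description of latency as a sum of processing, transport and buffering delays can be made concrete with a small accounting sketch. The stage names and millisecond figures here are hypothetical, not measurements from the thesis apparatus.

```python
# Illustrative motion-to-photon latency budget: end-to-end delay as the
# sum of per-stage processing, transport and buffering delays. All stage
# names and figures are hypothetical examples.

pipeline_delays_ms = {
    "input sampling":     4.0,   # tracker/controller polling interval
    "simulation update":  8.0,   # application logic per frame
    "rendering":         16.7,   # one frame at 60 Hz
    "display buffering": 16.7,   # scan-out of a double-buffered frame
}

end_to_end_ms = sum(pipeline_delays_ms.values())  # 45.4 ms in this budget
```

The budget makes the thesis's motivation visible: rendering and display buffering dominate, so a low-latency architecture must attack those stages rather than shaving the smaller ones.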
To do this, we create prototype image generators for interactive systems and virtual reality, using dataflow computing platforms. We integrate these into real interactive systems to gain practical experience of the real, perceptible benefits of alternative rendering approaches, but also of the implications when they are subject to the constraints of real systems. We quantify the differences of our systems compared with traditional systems using latency and objective image fidelity measures. We use our novel systems to perform user studies into the effects of latency. Our high-performance apparatuses allow experimentation at latencies lower than previously tested in comparable studies. The low-latency apparatuses are designed to minimise what is currently the largest delay in traditional rendering pipelines, and we find that the approach is successful in this respect. Our 3D low-latency apparatus achieves lower latencies and higher fidelities than traditional systems. The conditions under which it can do this are highly constrained, however. We do not foresee dataflow computing shouldering the bulk of the rendering workload in the future, but rather facilitating the augmentation of the traditional pipeline with a very high speed local loop. This may be an image distortion stage or otherwise. Our latency experiments revealed that many predictions about the effects of low latency should be re-evaluated, and that experimenting in this range requires great care.
Single-Frequency Network Terrestrial Broadcasting with 5GNR Numerology
The abstract is provided in the attachment.
A Unified Cognitive Model of Visual Filling-In Based on an Emergic Network Architecture
The Emergic Cognitive Model (ECM) is a unified computational model of visual filling-in based on the Emergic Network architecture. The Emergic Network was designed to help realize systems undergoing continuous change. In this thesis, eight different filling-in phenomena are demonstrated under a regime of continuous eye movement (and under static eye conditions as well).
ECM indirectly demonstrates the power of unification inherent in Emergic Networks when cognition is decomposed according to finer-grained functions supporting change. These can interact to give rise to additional emergent behaviours via cognitive re-use; hence the Emergic prefix throughout. Nevertheless, the model is robust and parameter-free. Differential re-use occurs in the nature of the model's interaction with a particular testing paradigm.
ECM has a novel decomposition due to the requirements of handling motion and of supporting unified modelling via finer functional grains. The breadth of phenomenal behaviour covered is largely to lend credence to our novel decomposition.
The Emergic Network architecture is a hybrid between classical connectionism and classical computationalism that facilitates the construction of unified cognitive models. It helps cut functionalism into finer grains distributed over space (by harnessing massive recurrence) and over time (by harnessing continuous change), yet simplifies matters by using standard computer code to focus on the interaction of information flows. Thus, while the structure of the network looks neurocentric, the dynamics are best understood in flowcentric terms. Surprisingly, dynamic system analysis (as usually understood) is not involved. An Emergic Network is engineered much like a straightforward software or hardware system that deals with continuously varying inputs. Ultimately, this thesis addresses the problem of reduction and induction over complex systems, and the Emergic Network architecture is merely a tool to assist in this epistemic endeavour.
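The flow-centric update described above can be suggested with a toy loop: nodes hold values, links carry weighted flows forward each tick, and a recurrent link feeds a downstream value back upstream. This is purely an illustrative assumption about the style of computation, not the actual Emergic Network code.

```python
# Toy flow-centric network: one synchronous tick moves values along
# weighted links, with a recurrent out->mid link. Node names, weights
# and the update rule are illustrative assumptions only.

def tick(state, links, inputs):
    """One synchronous update: each node sums its incoming flows."""
    new_state = dict(inputs)  # external inputs seed this tick's state
    for src, dst, weight in links:
        new_state[dst] = new_state.get(dst, 0.0) + weight * state.get(src, 0.0)
    return new_state

# A three-node chain with recurrence from "out" back to "mid".
links = [("in", "mid", 1.0), ("mid", "out", 1.0), ("out", "mid", 0.5)]
state = {}
for t in range(3):                       # continuously re-applied input
    state = tick(state, links, {"in": 1.0})
# After three ticks the input has propagated through to "out".
```

The point of the sketch is only that such a network is ordinary, debuggable code over continuously varying inputs, consistent with the engineering stance described above, rather than a dynamical system requiring special analysis.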
ECM is strictly a sensory model apart from perception, yet it is informed by phenomenology. It addresses the attribution problem of how much of a phenomenon is best explained at a sensory level of analysis, rather than at a perceptual one. As the causal information flows are stable under eye movement, we hypothesize that they are the locus of consciousness, howsoever it is ultimately realized.