
    An empirically derived system for high-speed shadow rendering

    Shadows have captivated humanity since the dawn of time, and the current age is no exception: shadows are core to realism and ambience, be it to invoke a classic Baroque interplay of light, dark and colour, as in Rembrandt van Rijn’s Militia Company of Captain Frans Banning Cocq, or to create a sense of mystery, as in film noir and expressionist cinematography. Shadows, in this traditional sense, are regions of blocked light: the combined effect of placing an object between a light source and a surface. This dissertation focuses on real-time shadow generation as a subset of 3D computer graphics. Its main contributions are a critical analysis of numerous real-time shadow rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shadows. The critical analysis allows us to assess the relationship between shadow rendering quality and performance, and to isolate key algorithmic weaknesses and possible bottleneck areas. Focusing on these bottleneck areas, we investigate several ways of improving the performance and quality of shadow rendering, at both the hardware and software level. The primary performance benefits come from effective culling and clipping, the use of hardware extensions, and management of the polygonal complexity and silhouette detection of shadow-casting meshes. Additional performance gains are achieved by combining the depth-fail stencil shadow volume algorithm with dynamic spatial subdivision. Using the performance data gathered during the analysis of the various shadow rendering algorithms, we define a fuzzy logic-based expert system to control the real-time selection of shadow rendering algorithms based on environmental conditions.
This system ensures that nearby shadows are always of high quality, that distant shadows are, under certain conditions, rendered at a lower quality, and that the frames-per-second rendering performance is always maximised.
Dissertation (MSc)--University of Pretoria, 2009. Computer Science.
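The selection step of such an expert system can be sketched compactly. In the sketch below, the membership-function shapes, the cut-off values, and the algorithm names chosen for the fallback cases are illustrative assumptions, not the dissertation's actual rule base:

```python
# Illustrative sketch only: the membership shapes, cut-offs and fallback
# algorithm names are assumptions, not the dissertation's rule base.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def select_shadow_algorithm(distance, fps):
    """Fuzzy-style selection: the degree of 'near caster' and 'low frame
    rate' drives the choice of shadow rendering algorithm."""
    near = triangular(distance, -1.0, 0.0, 30.0)  # caster close to the camera
    slow = triangular(fps, -1.0, 0.0, 40.0)       # frame rate under pressure
    if near > 0.5:                                # nearby shadows stay high-quality
        return "depth-fail stencil shadow volumes"
    if slow > 0.5:                                # distant shadows degrade under load
        return "low-resolution shadow map"
    return "standard shadow map"
```

A nearby caster always gets the expensive shadow volumes; a distant caster is demoted to a cheaper technique only when the frame rate drops, which mirrors the quality/performance guarantee stated above.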

    Advances in navigation and intraoperative imaging for intraoperative electron radiotherapy

    Doctorate with International Mention. This thesis is framed within the field of radiotherapy, specifically intraoperative electron radiotherapy (IOERT). This technique combines surgical resection of a tumour with therapeutic radiation applied directly to the post-resection tumour bed or to an unresected tumour. The high-energy electron beam is collimated and conducted by a specific applicator docked to a linear accelerator (LINAC). Dosimetry planning for IOERT is challenging owing to the geometrical and anatomical modifications produced by the retraction of structures and the removal of cancerous tissue during surgery. No data on the actual IOERT 3D scenario are available (for example, the applicator pose in relation to the patient’s anatomy, or the irregularities in the irradiated surface), and consequently only a rough approximation of the actual IOERT treatment administered to the patient can be estimated. Intraoperative computed tomography (CT) images of the actual scenario during the treatment would be useful not only for intraoperative planning but also for registering and evaluating the treatment administered to the patient. This information is essential for prospective trials. In this thesis, the feasibility of using a multi-camera optical tracking system to obtain the applicator pose in IOERT scenarios was first assessed.
Results showed that the accuracy of the applicator pose was below 2 mm in position (mean error of the bevel centre) and 2° in orientation (mean error of the bevel axis and the longitudinal axis). These values are within the acceptable range proposed in the recommendations of Task Group 147 (commissioned by the Therapy Committee and the Quality Assurance and Outcomes Improvement Subcommittee of the American Association of Physicists in Medicine [AAPM] to study localization accuracy with non-radiographic methods, such as infrared systems, in external beam radiation therapy). An important limitation of this solution is that the actual pose of the applicator is superimposed on the patient’s preoperative image. An intraoperative image would provide updated anatomical information and would allow the 3D dose distribution to be estimated. The second specific study of this thesis evaluated the feasibility of acquiring intraoperative CT images with a CT simulator in real IOERT scenarios. There were no complications in the whole procedure, either in the transport step using the subtable and its stretcher or in the acquisition of intraoperative CT images in the CT simulator room. The acquired intraoperative studies were used to evaluate the improvement achieved in the dose distribution estimation compared with that obtained from preoperative CT images, identifying the dominant factor in those estimations (the air gap and surface irregularities, not tissue heterogeneities). Finally, the last specific study focused on assessing several kilovoltage (kV) CT technologies other than CT simulators for acquiring intraoperative images with which to estimate the IOERT dose distribution. These devices would be necessary when a mobile electron LINAC is available in the operating room, since transferring the patient to the CT simulator room might not be approved.
Our results with an abdominal phantom revealed that a portable CT (BodyTom) and even a LINAC with on-board kV cone-beam CT (TrueBeam) would be suitable for this purpose.
Programa Oficial de Doctorado en Multimedia y Comunicaciones. Presidente: Joaquín López Herráiz. Secretario: María Arrate Muñoz Barrutia. Vocal: Óscar Acosta Tamay

    Real Time Extraction of Human Gait Features for Recognition

    Human motion analysis has received great attention from researchers in the last decade due to its potential use in applications such as automated visual surveillance. This field of research focuses on human activities, including the identification of people. Human gait is a relatively new biometric indicator in visual surveillance systems: it can recognize individuals by the way they walk. During walking, the human body shows regular periodic variation in the upper and lower limbs, knee and thigh points, stride parameters (stride length, cadence, gait cycle), height, and so on, which reflects the individual’s unique movement pattern. In gait recognition, detecting moving people in a video is an important prerequisite for feature extraction. Height is one of the important gait features because it is not influenced by camera performance, distance, or the clothing style of the subject. Detecting people in video streams is the first relevant step, and background subtraction is a very popular approach to foreground segmentation. In this thesis, different background subtraction methods have been simulated to overcome the problems of illumination variation, repetitive motion from background clutter, shadows, long-term scene changes, and camouflage. Background subtraction alone, however, lacks the capability to remove shadows, so different shadow detection methods using RGB, YCbCr, and HSV colour components have been tried out to suppress them. These methods have been simulated and their performance evaluated quantitatively on different indoor video sequences. The research on the shadow model has then been extended to optimize the threshold values of the HSV colour space for shadow suppression with respect to the average intensity of the local shadow region, and a mathematical model is developed relating the average intensity to the threshold values. Further, a new method is proposed here to calculate the variation of height during walking.
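The HSV shadow criterion mentioned above is commonly stated per pixel: a shadow darkens a surface (the value channel shrinks by a bounded ratio) without changing its hue or saturation much. A minimal sketch, with illustrative thresholds rather than the optimised ones derived in the thesis:

```python
import colorsys

def is_shadow(fg_rgb, bg_rgb, alpha=0.4, beta=0.95, tau_s=0.15, tau_h=0.1):
    """Classify a foreground pixel as cast shadow against the background
    model: brightness (V) attenuated within [alpha, beta] while hue and
    saturation stay nearly unchanged.  Thresholds here are illustrative,
    not the optimised values derived in the thesis."""
    fh, fs, fv = colorsys.rgb_to_hsv(*fg_rgb)   # RGB components in [0, 1]
    bh, bs, bv = colorsys.rgb_to_hsv(*bg_rgb)
    if bv == 0:
        return False
    hue_diff = min(abs(fh - bh), 1.0 - abs(fh - bh))  # hue is circular
    return (alpha <= fv / bv <= beta
            and abs(fs - bs) <= tau_s
            and hue_diff <= tau_h)
```

A grey background pixel darkened uniformly, `is_shadow((0.5, 0.5, 0.5), (0.8, 0.8, 0.8))`, is accepted as shadow, whereas a saturated red object pixel over the same background is rejected.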
The measurement of a person’s height is affected neither by clothing style nor by distance from the camera: the height can be measured at any distance, but camera calibration is essential. The Direct Linear Transformation (DLT) method is used to find the height of a moving person in each frame using both intrinsic and extrinsic parameters. Another parameter, the stride, a function of height, is extracted using a bounding-box technique. As the human walking style is periodic, the accumulated height and stride parameters form periodic signals, and human identification is performed using these parameters. The height-variation and stride-variation signals are sampled and analysed using DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform), and DHT (Discrete Hartley Transform) techniques. N harmonics are selected from the transformation coefficients; these coefficients form the feature vectors stored in the database. The Euclidean distance and MSE are calculated on these feature vectors. When feature vectors of the same subject are compared, the maximum value of the MSE is selected as the Self-Recognition Threshold (SRT); its value differs between transformation techniques, and it is used to identify individuals. A model-based method to detect the thigh angle is also discussed, but the thigh angle of one leg cannot be detected over a full walking period because one leg is occluded by the other, so the stride parameter is used to estimate the thigh angle.
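The feature-vector pipeline described above (transform the periodic signal, keep N harmonics, compare by MSE against a per-subject threshold) can be sketched as follows; the signal shapes and the tiny SRT value are synthetic placeholders, not the thesis's data:

```python
import math

def dct_features(signal, n_harmonics):
    """First n_harmonics DCT-II coefficients of a sampled, periodic gait
    signal (e.g. the height-variation or stride-variation sequence)."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n_harmonics)]

def mse(a, b):
    """Mean squared error between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def same_subject(probe, gallery, srt):
    """Accept the identity claim when the MSE is at or below the
    Self-Recognition Threshold (SRT) learned for that subject."""
    return mse(probe, gallery) <= srt

# Two synthetic periodic "height variation" signals, two gait cycles each.
walk = [math.sin(2 * math.pi * i / 16) for i in range(32)]
other = [1.5 * math.sin(2 * math.pi * i / 16) for i in range(32)]
f_walk, f_other = dct_features(walk, 5), dct_features(other, 5)
```

Replacing `dct_features` with a DFT or DHT changes only the transform step; the SRT comparison is unchanged, which is why the thesis reports a different SRT per transform.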

    Vision-Based 2D and 3D Human Activity Recognition


    A practical vision system for the detection of moving objects

    The main goal of this thesis is to review and offer robust and efficient algorithms for the detection (or segmentation) of foreground objects in indoor and outdoor scenes, using colour image sequences captured by a stationary camera. For this purpose, the block diagram of a simple vision system is offered in Chapter 2. First, this block diagram establishes a precise order of blocks and the tasks each should perform to detect moving foreground objects. Second, a check mark (✓) on the top right corner of a block indicates that this thesis contains a review of the most recent algorithms and/or some relevant research about it. In many computer vision applications, segmenting and extracting moving objects in video sequences is an essential task, and background subtraction has been widely used as the first step. In this work, a review of the efficiency of a number of important background subtraction and modelling algorithms, along with their major features, is presented. In addition, two background approaches are offered: the first is a pixel-based technique, whereas the second works at object level. For each approach, three algorithms are presented, called Selective Update Using Non-Foreground Pixels of the Input Image, Selective Update Using Temporal Averaging, and Selective Update Using Temporal Median, respectively. The first approach has deficiencies that make it incapable of producing a correct dynamic background. The three methods of the second approach use an invariant colour filter and a suitable motion tracking technique, which selectively exclude foreground objects (or blobs) from the background frames. The difference between the three algorithms of the second approach lies in the updating process of the background pixels. It is shown that the Selective Update Using Temporal Median method produces the correct background image for each input frame.
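The Selective Update Using Temporal Median idea can be sketched on a flat list of greyscale pixels; the data layout and window size are illustrative simplifications of the thesis's frame-level method:

```python
from statistics import median

def update_background(history, frame, fg_mask, window=5):
    """Selective Update Using Temporal Median, sketched on a flat list of
    greyscale pixels: only pixels not covered by a foreground blob feed
    the per-pixel history; the background is the temporal median."""
    background = []
    for i, pixel in enumerate(frame):
        if not fg_mask[i]:                # selective update: skip foreground
            history[i].append(pixel)
            del history[i][:-window]      # keep a bounded temporal window
        background.append(median(history[i]))
    return background
```

Because masked pixels never enter the history, a slow-moving object cannot be absorbed into the background model, which is the failure mode of the first (pixel-based) approach.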
Representing foreground regions by their boundaries is also an important task, and an appropriate RLE (run-length encoding) contour tracing algorithm has been implemented for this purpose. After the thresholding process, however, the boundaries of foreground regions often have jagged appearances, so foreground regions may not be recognised reliably due to their corrupted boundaries. A very efficient boundary smoothing method based on the RLE data is proposed in Chapter 7. It smooths only the external and internal boundaries of foreground objects and does not distort their silhouettes; as a result, it is very fast and does not blur the image. Finally, the goal of this thesis has been to present simple, practical, and efficient algorithms with few constraints that can run in real time.
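The RLE representation the contour tracer and smoother operate on encodes each mask row as (start, length) runs. The sketch below shows that encoding plus a deliberately crude smoothing step (dropping one-pixel runs); the actual Chapter 7 algorithm is more sophisticated, so treat `drop_spurs` as an illustration only:

```python
def rle_encode_row(row):
    """Run-length encode one row of a binary mask as (start, length)
    foreground runs -- the representation an RLE contour tracer and an
    RLE-based boundary smoother can operate on."""
    runs, start = [], None
    for i, v in enumerate(row):
        if v and start is None:
            start = i                            # a run opens
        elif not v and start is not None:
            runs.append((start, i - start))      # a run closes
            start = None
    if start is not None:
        runs.append((start, len(row) - start))   # run reaches the row end
    return runs

def drop_spurs(runs, min_len=2):
    """Crude illustrative smoothing (not the thesis algorithm): remove
    one-pixel runs that show up as jagged boundary spurs."""
    return [r for r in runs if r[1] >= min_len]
```

Working on runs rather than pixels is what makes boundary operations cheap: a whole row of the mask collapses to a handful of tuples.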

    Utilizing Fluorescent Nanoscale Particles to Create a Map of the Electric Double Layer

    The interactions between charged particles in solution and an applied electric field follow several models, most notably the Gouy-Chapman-Stern model for the establishment of an electric double layer along the electrode, but these models make several assumptions about ionic concentrations and an infinite bulk solution. As more scientific progress is made on finite and single-molecule reactions inside microfluidic cells, the limitations of these models become more severe. Thus, creating an accurate map of the precise response of charged nanoparticles in an electric field becomes increasingly vital. Another compounding factor is Brownian motion’s inverse relationship with size: large, easily observable particles have relatively small Brownian movements, while nanoscale particles are simultaneously more difficult to observe directly and have much larger Brownian movements. The research presented here tackles both cases simultaneously using fluorescently tagged, negatively charged, 20 nm diameter polystyrene nanoparticles. By utilizing parallel plate electrodes within a specially constructed microfluidic device that limits the z-direction, the nanoparticle movements become restricted to two dimensions. By using one axis to measure purely Brownian motion, while the other axis carries both Brownian motion and ballistic movement from the applied electric field, the ballistic component can be disentangled and isolated. Using this terminal velocity to calculate the direct effect of the field on a single nanoparticle, as opposed to the reaction of the bulk solution, several curious phenomena were observed: the trajectory of the nanoparticle suggests that the charging time of the electrode is several orders of magnitude larger than the theoretical value, lasting for over a minute instead of tens of milliseconds.
Additionally, the effective electric field does not reduce to below the Brownian limit, but instead exerts a continued influence for far longer than the model suggests. Finally, when the electrode was toggled off, a repeatable response was observed in which the nanoparticle would immediately alter course in the direction opposite to the previously established field, rebounding with a high degree of force for several seconds after the potential had been cut, before settling into neutral, stochastic Brownian motion. While some initial hypotheses are presented in this dissertation as possible explanations, these findings indicate the need for additional experiments to find the root cause of these unexpected results and observations.
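The two-axis disentangling described above rests on a simple statistical fact: Brownian steps average to zero, so the mean step along the field axis isolates the ballistic drift. A minimal sketch on synthetic data (the step size, noise level, and drift value are invented for illustration, not the dissertation's measurements):

```python
import random
import statistics

def drift_velocity(x_steps, y_steps, dt):
    """Separate the ballistic component from Brownian noise: the y axis
    carries pure Brownian motion (zero-mean steps), so averaging the x
    steps isolates the field-driven drift.  Illustrative analysis only."""
    vx = statistics.mean(x_steps) / dt   # drift survives the averaging
    vy = statistics.mean(y_steps) / dt   # should stay near zero
    return vx, vy

random.seed(0)
dt, v_true, sigma = 0.01, 3.0, 0.02      # synthetic drift and noise scale
# Both axes share Brownian noise; only x carries the ballistic drift term.
x_steps = [v_true * dt + random.gauss(0.0, sigma) for _ in range(5000)]
y_steps = [random.gauss(0.0, sigma) for _ in range(5000)]
vx, vy = drift_velocity(x_steps, y_steps, dt)
```

The recovered `vx` converges on the injected drift while `vy` stays near zero, which is exactly the cross-check the orthogonal Brownian axis provides in the experiment.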

    The delta radiance field

    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly, though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore give the impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme to the other, it is legitimate to ask why such advances have not fully carried over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties; among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production.
The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result is not only a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects which have not been demonstrated by contemporary publications.
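The Differential Rendering baseline this work argues against can be stated compactly: render the modelled scene once with and once without the virtual object, then add the radiance difference to the camera image outside the object's own pixels. A minimal per-pixel sketch on flat greyscale lists (values assumed in [0, 1]):

```python
def differential_composite(camera, with_virtual, without_virtual, obj_mask):
    """Classic Differential Rendering composite: inside the virtual
    object's mask show the rendered object; elsewhere transfer only the
    simulated change in radiance (e.g. cast shadows) onto the real
    camera image."""
    return [w if m else c + (w - wo)   # (w - wo) carries shadows/bounce light
            for c, w, wo, m in zip(camera, with_virtual,
                                   without_virtual, obj_mask)]
```

The `(w - wo)` delta is what darkens a real surface under a virtual shadow, and the two full global illumination renders it requires per frame are precisely the computational cost that motivates replacing Differential Rendering with a delta radiance field formulation.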