    Integration of Groundwater Flow Modeling and GIS


    Development of an efficient solver for detailed kinetics in reactive flows

    The use of chemical kinetic mechanisms in CAE tools for reactive flow simulations is of high importance for studying and predicting pollutant formation. However, the use of complex reaction schemes carries a high computational cost in both 1-D and 3-D CFD frameworks. The combustion research community has addressed this challenge via two main approaches: 1) tailor-made mechanism reduction strategies; 2) pre-tabulation of the chemistry and look-up at run-time. The present work covers both topics, although much of the methodology development and validation effort focused on tabulation. In the first phase of the PhD work, an isomer lumping strategy based on thermodynamic data was developed and applied to a detailed three-component reaction mechanism for n-decane, alpha-methylnaphthalene and methyl decanoate comprising 807 species and 7807 reactions. A total of 74 isomer groups were identified within the oxidation of n-decane and methyl decanoate via assessment of the Gibbs free energy of the isomers. The lumping procedure led to a mechanism of 463 species and 7600 reactions, which was compared against the detailed version over several reactor conditions and a broad range of temperature, pressure and equivalence ratio. In all cases, excellent agreement between the predictions of the lumped and detailed mechanisms was observed, with an overall absolute error below 12%. In the second phase of the PhD work, a tabulated chemistry approach was developed, implemented and validated against an on-the-fly chemistry solver across different simulation frameworks. As a first attempt, a flamelet-based tabulation method for soot source terms was coupled to the stochastic reactor model (SRM) and tested against a well-stirred-reactor-based approach under Diesel engine conditions. The main purpose was to assess and quantify the benefits of tabulation within the 0-D SRM framework with respect to soot formation only.
Subsequently, a latent enthalpy (h298) based approach was developed and implemented within the SRM model to predict both combustion and emission formation. This approach was extensively validated against the detailed on-the-fly solver under 0-D reactor conditions as well as Diesel engine conditions for a wide range of operating points. Good agreement was found between the two solvers, and a remarkable speed-up was obtained in terms of simulation cost. As a last step, the same tabulated chemistry solver was coupled to a commercial CFD code via user-defined functions, and its performance was assessed against the built-in on-the-fly chemistry solver in Diesel engine sector simulations. The tabulated chemistry solver proved to be within an acceptable level of accuracy for engineering studies and showed a consistent speed-up over the online chemistry solver. Across all the investigated frameworks, the developed tabulated chemistry solver was found to be a valid way to speed up simulations without compromising the accuracy of combustion and emissions predictions for engine applications. In fact, the much-reduced CPU times allowed the SRM to be included in broader engine development campaigns where multi-objective optimization methods were efficiently used to explore new engine designs.
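The Gibbs-free-energy-based isomer lumping described above can be sketched in a few lines. This is a hypothetical illustration, not the thesis' actual procedure: the species names, energies, and grouping tolerance are invented, and the real mechanism format and criteria differ.

```python
# Sketch of isomer lumping by Gibbs free energy (hypothetical data and
# tolerance; the thesis' actual mechanism handling is more involved).
from collections import defaultdict

def lump_isomers(species, tol_kj_mol=5.0):
    """Group species that share a molecular formula and whose standard
    Gibbs free energies lie within `tol_kj_mol` of the group seed."""
    by_formula = defaultdict(list)
    for name, formula, g in species:
        by_formula[formula].append((name, g))
    groups = []
    for formula, members in by_formula.items():
        members.sort(key=lambda m: m[1])  # ascending Gibbs energy
        current = [members[0]]
        for m in members[1:]:
            if m[1] - current[0][1] <= tol_kj_mol:
                current.append(m)       # close enough: same lump
            else:
                groups.append((formula, current))
                current = [m]           # start a new lump
        groups.append((formula, current))
    return groups

# Toy example: three C10H21 radical isomers, two close in Gibbs energy.
species = [
    ("C10H21-1", "C10H21", -35.2),
    ("C10H21-2", "C10H21", -33.9),
    ("C10H21-5", "C10H21", -21.0),
]
groups = lump_isomers(species)
for formula, members in groups:
    print(formula, [name for name, _ in members])
```

With the toy numbers above, the first two radicals fall into one lump and the third stays alone; each lump would then be replaced by a single representative species in the reduced mechanism.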

    Image processing in medicine advances for phenotype characterization, computer-assisted diagnosis and surgical planning

    In this Thesis we present our contributions to the state of the art in medical image processing, articulating our exposition around the three main roles of medical imaging: disease prevention, diagnosis and treatment. Disease prevention can sometimes be achieved by proper characterization of disease phenotypes. Such characterization is often attained from the standpoint of imaging. We present our work on characterization of emphysema from high-resolution computed-tomography images via quantification of local texture. We propose to fill the gap between current clinical practice and sophisticated texture approaches by using local intensity distributions as an adequate descriptor of the degree of tissue destruction in the emphysematous lung. Interesting results are presented from the analysis of several hundred datasets of lung CT for varying disease severity, suggesting both the correctness of our hypotheses and the pertinence of fine emphysema quantification for the understanding of chronic obstructive pulmonary disease. Medical image processing can also assist in the diagnosis and detection of disease. We introduce our contributions to this field, consisting of segmentation and quantification techniques applied to dermatoscopy images of skin lesions. Segmentation is achieved via a novel active contour algorithm that fully exploits the color content of the images through cross-bin histogram dissimilarity maximization. Texture quantification in the context of melanocytic lesions is performed by modeling the pigmentation patterns with Markov random fields, in an effort to embrace the emerging trend in dermatology: malignancy assessment based on texture irregularity analysis. Experimental results for both the segmentation and the quantification techniques are validated on a significant set of dermatoscopy images, suggesting interesting pathways towards automatic detection and diagnosis of malignant melanoma. Once disease has occurred, image processing can assist in therapeutic planning and image-guided intervention. Therapeutic planning, exemplified by virtual reality surgical planning, is tackled by our work on segmentation of bone/fat/muscle in CT images for plastic surgery planning.
Using an interactive, incremental approach, our system is able to provide accurate segmentations from a couple of mouse clicks for a wide variety of imaging conditions and abnormal anatomies. We present our methodology, provide extensive experimental validation based on manual segmentations and subjective assessment, and refer the reader to related work reporting on the clinical benefits obtained using the virtual reality platform hosting our algorithm. As a conclusion we present a final dissertation on the significance of our results and the probable lines of future work towards fully benefitting healthcare using medical image processing.
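The two descriptor ideas in this abstract (local intensity distributions as texture descriptors, and cross-bin histogram comparison) can be sketched minimally. This is not the thesis' implementation: the patch values, bin count, and the choice of 1-D earth mover's distance as the cross-bin measure are illustrative assumptions.

```python
# Minimal sketch (not the thesis' code): local intensity histograms as
# texture descriptors, compared with a cross-bin distance.
def local_histogram(patch, n_bins=8, lo=0, hi=256):
    """Normalized intensity histogram of a flat list of pixel values."""
    hist = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in patch:
        idx = min(int((v - lo) / width), n_bins - 1)
        hist[idx] += 1
    total = float(len(patch))
    return [h / total for h in hist]

def cross_bin_distance(h1, h2):
    """1-D earth mover's distance between normalized histograms: the L1
    distance between their cumulative distributions, so mass moved to a
    nearby bin costs less than mass moved to a distant one."""
    dist, c1, c2 = 0.0, 0.0, 0.0
    for a, b in zip(h1, h2):
        c1 += a
        c2 += b
        dist += abs(c1 - c2)
    return dist

# Low-attenuation (emphysema-like) patch vs. a brighter patch:
dark = local_histogram([10, 12, 20, 25, 30, 33, 40, 41])
bright = local_histogram([120, 130, 150, 160, 170, 180, 200, 210])
print(cross_bin_distance(dark, bright) > cross_bin_distance(dark, dark))
```

A bin-by-bin measure such as chi-squared would score two shifted but similar distributions as maximally different; a cross-bin measure like the one above degrades gracefully with the shift, which is why the active contour maximizes a cross-bin dissimilarity.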

    Aeronautical Engineering: A special bibliography with indexes, supplement 67, February 1976

    This bibliography lists 341 reports, articles, and other documents introduced into the NASA scientific and technical information system in January 1976

    An Integrated Engineering-Computation Framework for Collaborative Engineering: An Application in Project Management

    Today's engineering applications suffer from a severe integration problem. Engineering, as an entire process, consists of a myriad of individual, often complex, tasks. Most computer tools support particular tasks in engineering, but the output of one tool differs from the others', so users must re-enter the relevant information in the format required by the next tool. Moreover, the development of a new product or process usually involves several teams of engineers with different backgrounds and responsibilities, for example mechanical engineers, cost estimators, manufacturing engineers, quality engineers, and project managers. Engineers need tools to share technical and managerial information and to instantly access the latest changes made by any team member, so as to determine right away the impact of those changes on all disciplines (cost, time, resources, etc.). In other words, engineers need a truly collaborative environment for achieving their common objective: completing the product/process design project in a timely, cost-effective, and optimal manner. In this thesis, a new framework is presented that integrates the capabilities of four commercial software packages, Microsoft Excel™ (spreadsheet), Microsoft Project™ (project management), What's Best! (an optimization add-in), and Visual Basic™ (programming language), with a state-of-the-art object-oriented database (knowledge medium), InnerCircle2000™, and is applied to the Cost-Time Trade-Off problem in project networks. The result is a solution vastly superior to the conventional one in terms of data handling, completeness of the solution space, and suitability for a collaborative engineering-computation environment.
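The Cost-Time Trade-Off ("crashing") problem can be illustrated with a tiny sketch. The thesis solves it with What's Best! as an optimization add-in; the greedy heuristic below, and its activity network, durations, crash limits, and unit costs, are all invented for illustration and do not reflect the thesis' formulation.

```python
# Greedy sketch of project crashing: shorten the cheapest still-crashable
# activity on the critical path, one time unit at a time, until the
# deadline is met. (Hypothetical toy data, not the thesis' model.)
def project_duration(durations, preds):
    """Finish times via longest path through an activity-on-node network
    (insertion order of `durations` must be a topological order)."""
    finish = {}
    for act, dur in durations.items():
        start = max((finish[p] for p in preds[act]), default=0)
        finish[act] = start + dur
    return max(finish.values()), finish

def critical_path(durations, preds):
    """Walk back from the last-finishing activity along predecessors
    whose finish time equals the successor's start time."""
    _, finish = project_duration(durations, preds)
    act = max(finish, key=finish.get)
    path = [act]
    while preds[act]:
        start = finish[act] - durations[act]
        act = next(p for p in preds[act] if finish[p] == start)
        path.append(act)
    return path[::-1]

def crash_to_deadline(durations, preds, max_crash, unit_cost, deadline):
    durations, max_crash = dict(durations), dict(max_crash)
    cost = 0.0
    while project_duration(durations, preds)[0] > deadline:
        candidates = [a for a in critical_path(durations, preds)
                      if max_crash[a] > 0]
        if not candidates:
            break  # deadline unreachable with the given crash limits
        a = min(candidates, key=unit_cost.get)
        durations[a] -= 1
        max_crash[a] -= 1
        cost += unit_cost[a]
    return durations, cost

# A and B run in parallel, C follows both; deadline forces two crashes.
preds = {"A": [], "B": [], "C": ["A", "B"]}
durations = {"A": 4, "B": 3, "C": 5}
crashed, cost = crash_to_deadline(durations, preds,
                                  max_crash={"A": 1, "B": 0, "C": 2},
                                  unit_cost={"A": 10.0, "B": 0.0, "C": 4.0},
                                  deadline=7)
print(crashed, cost)  # crashes C twice: {'A': 4, 'B': 3, 'C': 3} 8.0
```

Note that greedy unit-by-unit crashing is only a heuristic: with several parallel critical paths it can miss the optimum, which is why the problem is normally posed as a linear program for an optimization solver, as in the framework described above.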

    Multi-Modality Human Action Recognition

    Human action recognition is useful in many applications across various areas, e.g. video surveillance, human-computer interaction (HCI), video retrieval, gaming and security, and has recently become an active research topic in computer vision and pattern recognition. A number of action recognition approaches have been proposed. However, most are designed for RGB image sequences, where the action data is collected by an RGB/intensity camera, so recognition performance is sensitive to the occlusion, background, and lighting conditions of the image sequences. If more information can be provided along with the image sequences, i.e. data sources other than RGB video, human actions can be better represented and recognized by the computer vision system. In this dissertation, multi-modality human action recognition is studied. On one hand, we introduce the study of multi-spectral action recognition, which involves information from spectra beyond the visible, e.g. infrared and near-infrared. Action recognition in individual spectra is explored and new methods are proposed; cross-spectral action recognition is then also investigated, and novel approaches are proposed in our work. On the other hand, since depth imaging technology has made significant progress recently and depth information can be captured simultaneously with RGB video, depth-based human action recognition is also investigated. I first propose a method combining different types of depth data to recognize human actions. Then a thorough evaluation is conducted of spatiotemporal interest point (STIP) based features for depth-based action recognition. Finally, I advocate fusing different features for depth-based action analysis. Moreover, human depression recognition is studied by combining a facial appearance model with a facial dynamics model.
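One common way to combine modalities, score-level (late) fusion, can be sketched as follows. This is a generic illustration, not the dissertation's method: the action labels, per-modality scores, and the min-max normalization plus weighted averaging are all invented assumptions.

```python
# Hypothetical sketch of score-level (late) fusion: each modality scores
# the action classes independently; the fused decision averages the
# normalized scores. (Toy numbers, not the dissertation's data.)
def normalize(scores):
    """Min-max normalize a {class: score} dict to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {c: (s - lo) / span for c, s in scores.items()}

def fuse(score_dicts, weights=None):
    """Weighted average of per-modality class scores; returns the
    winning label and the fused score dict."""
    weights = weights or [1.0] * len(score_dicts)
    norm = [normalize(d) for d in score_dicts]
    total_w = sum(weights)
    fused = {c: sum(w * n[c] for w, n in zip(weights, norm)) / total_w
             for c in norm[0]}
    return max(fused, key=fused.get), fused

rgb_scores = {"wave": 0.3, "clap": 0.6, "kick": 0.1}    # ambiguous in RGB
depth_scores = {"wave": 0.9, "clap": 0.3, "kick": 0.2}  # depth disambiguates
label, fused = fuse([rgb_scores, depth_scores])
print(label)  # "wave": the depth modality overturns the RGB-only decision
```

Feature-level (early) fusion, concatenating descriptors before a single classifier, is the main alternative; late fusion has the practical advantage that each modality can keep its own classifier and feature pipeline.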

    Technology for the Future: In-Space Technology Experiments Program, part 2

    The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part two of two and contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.