13 research outputs found

    Influence of photoinitiator system and reducing agent on cure efficiency and color stability of experimental resin-based composites using different LED wavelengths

    Advisors: Mario Alexandre Coelho Sinhoreti, Lourenço Correr Sobrinho. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Odontologia de Piracicaba.

Abstract: The aim of this study was to evaluate the effect of photoinitiators and reducing agents on the cure efficiency and color stability of experimental resin-based composites light-cured with LEDs of different wavelengths. The study was performed in two parts. In the first, total substitution of the conventional camphorquinone (CQ)-based photoinitiator system was evaluated using the alternative photoinitiators phenyl propanedione (PPD), diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide (TPO) and phenylbis(2,4,6-trimethylbenzoyl)phosphine oxide (BAPO). For the conventional CQ-based system, different reducing agents were tested: 2-(dimethylamino)ethyl methacrylate (DMAEMA), ethyl 4-(dimethylamino)benzoate (EDAB) and 4-(N,N-dimethylamino)phenethyl alcohol (DMPOH). In the second part, partial substitutions were tested using 75/25, 50/50 and 25/75 mol% combinations of CQ and TPO. LEDs emitting blue (420-495 nm) or blue and violet (380-495 nm) wavelengths were used for photoactivation. Cure efficiency was evaluated using Fourier transform infrared spectroscopy (FTIR), and color and color stability were evaluated using spectrophotometry. Color stability was assessed before and after photoactivation and after artificial accelerated aging. Additionally, light transmittance was evaluated to explain the cure efficiency results in depth. Data were analyzed using analysis of variance and Tukey's test for multiple comparisons (α = 0.05), with a power of 0.8; regression tests were also performed.

In the first part of the study, CQ-based composites showed the greatest color change after photoactivation. This color change occurred mainly along the blue-yellow axis; thus, after photoactivation, the degree of yellow was reduced. It was also observed that, despite the lower light transmittance through the composites containing CQ compared with those containing the alternative photoinitiators, a larger change in light transmittance occurred during photoactivation. Consequently, the composites containing CQ showed the highest light transmittance after photoactivation. A strong correlation was found between the color change of the composite during curing and the change in light transmittance. Due to unfavorable results with PPD, this alternative photoinitiator was excluded from the later tests of the study. Regarding color stability, composites containing phosphine oxides showed greater color change after aging than the different CQ-based formulations; CQ/DMPOH was the formulation with the least color change after aging. However, CQ-based composites became more yellow, while BAPO- and TPO-based composites became whiter and less yellow after aging. In the second part of the study, for composites containing CQ and TPO, higher TPO fractions reduced the yellowing and color change of the composite, both after photoactivation and over time. However, CQ substitution of 75 mol% or more reduced the degree of conversion of the resin-based composites in depth. Thus, total substitution of CQ by phosphine oxides does not seem to be the most adequate solution for composites, as it can impair the cure efficiency in depth, whereas partial substitution of CQ by phosphine oxides can improve color stability without affecting the depth of cure.

Doctorate in Dental Materials. FAPESP grants 2013/04241-2 and 2014/03028-6.
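
As context for the FTIR-based cure efficiency measurements, the degree of conversion of methacrylate composites is commonly computed from the ratio of an aliphatic C=C absorption band to an aromatic reference band, measured before and after curing. The expression below is that standard two-band formulation, included only for orientation; the specific band positions (about 1638 cm⁻¹ and 1608 cm⁻¹) and the baseline handling are assumptions, not details taken from this thesis.

```latex
% Degree of conversion (DC) from FTIR band ratios -- common methacrylate
% formulation; the band choices are illustrative assumptions.
\[
\mathrm{DC}\ (\%) \;=\; \left( 1 \;-\;
  \frac{\bigl(A_{1638}/A_{1608}\bigr)_{\text{cured}}}
       {\bigl(A_{1638}/A_{1608}\bigr)_{\text{uncured}}}
\right) \times 100
\]
```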

    Design and Optimization of Graph Transform for Image and Video Compression

    The main contribution of this thesis is the introduction of new methods for designing adaptive transforms for image and video compression. Exploiting graph signal processing techniques, we develop new graph construction methods targeted at image and video compression applications. In this way, we obtain a graph that is, at the same time, a good representation of the image and easy to transmit to the decoder. To do so, we investigate different research directions. First, we propose a new method for graph construction that employs innovative edge metrics, quantization and edge prediction techniques. Then, we propose to use a graph learning approach and introduce a new graph learning algorithm targeted at image compression, which defines the connectivity between pixels by taking into account the coding of both the image signal and the graph topology in rate-distortion terms. Moreover, we also present a new superpixel-driven graph transform that uses clusters of superpixels as coding blocks and then computes the graph transform inside each region.

In the second part of this work, we exploit graphs to design directional transforms. In fact, an efficient representation of the image's directional information is extremely important for high-performance image and video coding. In this thesis, we present a new directional transform, called the Steerable Discrete Cosine Transform (SDCT). This transform is obtained by steering the 2D-DCT basis in any chosen direction; moreover, more complex steering patterns than a single pure rotation can also be used. To show the advantages of the SDCT, we present a few image and video compression methods based on this new directional transform. The results show that the SDCT can be applied efficiently to image and video compression and that it outperforms the classical DCT and other directional transforms. Along the same lines, we also present a new generalization of the DFT, called the Steerable DFT (SDFT). Unlike the SDCT, the SDFT can be defined in one or two dimensions: the 1D-SDFT represents a rotation in the complex plane, while the 2D-SDFT performs a rotation in 2D Euclidean space.
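
To make the steering idea concrete, the sketch below rotates paired 2D-DCT coefficients of a block by a single angle, which corresponds to steering the paired basis vectors that share a grid-graph Laplacian eigenvalue. It is a simplified illustration under stated assumptions (one global angle, a particular sign convention, square blocks), not the SDCT design from the thesis, which also supports per-pair angles and more complex steering patterns.

```python
# Minimal coefficient-domain view of a steerable DCT: 2D-DCT basis vectors
# come in pairs (i, j)/(j, i) that share a Laplacian eigenvalue, and each pair
# can be rotated by an angle theta. A single global angle is assumed here.
import numpy as np
from scipy.fft import dctn, idctn

def sdct_like(block: np.ndarray, theta: float) -> np.ndarray:
    """Apply one steering angle to all (i, j)/(j, i) coefficient pairs."""
    c = dctn(block, norm="ortho")          # ordinary separable 2D-DCT
    n = c.shape[0]
    cs, sn = np.cos(theta), np.sin(theta)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = c[i, j], c[j, i]
            c[i, j] = cs * a - sn * b       # rotate the paired coefficients
            c[j, i] = sn * a + cs * b
    return c

def inverse_sdct_like(coeffs: np.ndarray, theta: float) -> np.ndarray:
    """Undo the steering, then invert the 2D-DCT."""
    c = coeffs.copy()
    n = c.shape[0]
    cs, sn = np.cos(-theta), np.sin(-theta)
    for i in range(n):
        for j in range(i + 1, n):
            a, b = c[i, j], c[j, i]
            c[i, j] = cs * a - sn * b
            c[j, i] = sn * a + cs * b
    return idctn(c, norm="ortho")

block = np.random.default_rng(0).standard_normal((8, 8))
rec = inverse_sdct_like(sdct_like(block, np.pi / 6), np.pi / 6)
assert np.allclose(block, rec)             # the steered transform stays orthonormal
```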

    Image fidelity assessment and its applications


    Studies of practical daylight simulators for industrial colour quality control

    EThOS - Electronic Theses Online Service, United Kingdom.

    Understanding perceived quality through visual representations

    The formatting of images can be considered an optimization problem whose cost function is a quality assessment algorithm. There is a trade-off between bit budget per pixel and quality. To maximize quality and minimize the bit budget, we need to measure perceived quality. In this thesis, we focus on understanding perceived quality through visual representations that are based on visual system characteristics and color perception mechanisms. Specifically, we use the contrast sensitivity mechanisms in retinal ganglion cells and the suppression mechanisms in cortical neurons. We utilize color difference equations and color name distances to mimic pixel-wise color perception, and a bio-inspired model to formulate center-surround effects. Based on these formulations, we introduce two novel image quality estimators, PerSIM and CSV, and a new image quality-assistance method, BLeSS. We combine our findings on the visual system and color perception with data-driven methods to generate visual representations and measure their quality. The majority of existing data-driven methods require subjective scores or degraded images; in contrast, we follow an unsupervised approach that utilizes only generic images. We introduce a novel unsupervised image quality estimator, UNIQUE, and extend it with multiple models and layers to obtain MS-UNIQUE and DMS-UNIQUE. In addition to introducing quality estimators, we analyze the role of spatial pooling and boosting in image quality assessment. Ph.D. thesis.
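
The pixel-wise color perception terms above build on standard color difference equations. As a concrete illustration, the sketch below computes a per-pixel CIE76 ΔE*ab map in CIELAB and pools it spatially; CIE76 and the scikit-image conversion are stand-in assumptions, not necessarily the equations used by PerSIM, CSV or BLeSS.

```python
# Illustrative pixel-wise color difference map using the CIE76 Delta-E*ab
# formula in CIELAB space; a generic stand-in, not the thesis's exact metric.
import numpy as np
from skimage.color import rgb2lab

def delta_e76_map(rgb_ref: np.ndarray, rgb_test: np.ndarray) -> np.ndarray:
    """Per-pixel CIE76 color difference between two RGB images in [0, 1]."""
    lab_ref, lab_test = rgb2lab(rgb_ref), rgb2lab(rgb_test)
    return np.sqrt(((lab_ref - lab_test) ** 2).sum(axis=-1))

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 3))
test = np.clip(ref + 0.02 * rng.standard_normal(ref.shape), 0, 1)
de = delta_e76_map(ref, test)
print(de.mean())   # a simple spatial pooling of the difference map
```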

    Caracterización y modelización del recubrimiento de tabletas farmacéuticas, para el desarrollo de soluciones de reconocimiento óptico visible

    Graduation thesis (Academic Doctorate in Engineering), Instituto Tecnológico de Costa Rica, Área Académica de Doctorado en Ingeniería; Universidad de Costa Rica, Facultad de Ingeniería, 2020.

The research work, developed under the title "Characterization and modeling of pharmaceutical tablets, for the development of optical-visible recognition solutions", aims to contribute to the theoretical and experimental bases that solve one of the many relevant engineering challenges of the millennium. These challenges were raised by the FDA for the pharmaceutical industry, which seeks, during the production process, to drastically reduce vulnerability, direct intervention and subjective dependence in septic environments, and to eliminate stoppages and destructive tests under a comprehensive quality approach based on design and risk management.

The project focuses on the coating of pharmaceutical tablets. The complexity of this process, in addition to its low level of automation, the large number of variables, the dependence on expert criteria, the multiplicity of substances involved, the different morphologies, dimensions and purposes of the products, the very delicate dependence on regulatory aspects and, above all, its high impact on people's health, requires the development of solutions to control the application throughout the coating process, in confined environments, by random spraying, until the uniform thickness required for the purposes of the medication is reached. In the project, the experimental foundations were laid in accordance with the aforementioned challenges; the first step was the characterization of pharmaceutical tablets and the identification of the behavior of the key variables during the coating process that mainly contribute to the development of optical-visible recognition solutions, which is the route chosen to solve this important challenge. As part of the results, and based on the characterization findings, a simulation model of the evolution of the main variables identified was proposed. The results obtained in the research made it possible to establish a promising model whose far-reaching effects not only offer a glimpse of the fulfillment of the FDA's objectives but also lay the foundations for innovating in coating processes within the framework of fourth-generation industry. The pharmaceutical industry in Costa Rica will benefit greatly from this, as it has been precisely one of the major competitive limitations voiced by the sector compared with other industries.

    Dynamic data structures and saliency-influenced rendering

    With the increasing heterogeneity of modern hardware, different requirements for 3D applications arise. Although real-time rendering of photo-realistic images is possible with today's graphics cards, it still requires a large computational effort, and smartphones or computers with older, less powerful graphics cards may not be able to reproduce these results. To retain interactive rendering, the detail of a scene is usually reduced so that less data needs to be processed. This removal of data, however, may introduce errors, so-called artifacts, which can distract a human spectator gazing at the display and thus reduce the visual quality of the presented scene. This is counteracted by identifying features of an object that can be removed without introducing artifacts. Most methods use geometrical properties, such as distance or shape, to rate the quality of the performed reduction. This information is used to generate so-called Levels of Detail (LODs), which are made available to the rendering system, so that the detail of an object can be reduced using the precalculated LODs, e.g. when it moves into the back of the scene. The appropriate LOD is selected using a metric and replaces the currently displayed version. This exchange must be made smoothly, requiring both LOD versions to be drawn simultaneously during a transition; otherwise it introduces discontinuities that are easily noticed by a human spectator. After the transition completes, only the newly introduced LOD version is drawn and the previous overhead is removed. These LOD methods usually operate with discrete levels and exploit limitations of both the display and the spectator: the human.

Human vision is limited, ranging from the inability to distinguish colors under varying illumination to the ability to focus on only one location at a time. Researchers have developed many applications that exploit these limitations to increase the quality of compression; popular vision-based compression methods include MPEG and JPEG. JPEG compression, for example, exploits the reduced sensitivity of humans to color and therefore encodes color at a lower resolution. Other fields, such as auditory perception, also allow human limitations to be exploited: MP3 compression, for example, reduces the quality of stored frequencies when other frequencies mask them.

Various computer models exist to represent perception. In our rendering scenario, a model that cannot be influenced by the human spectator is advantageous, such as visual salience, or saliency. Saliency is a notion from psychophysics that describes how strongly an object "pops out" of its surroundings. These outstanding objects (or features) are important for human vision and are evaluated directly by the Human Visual System (HVS). Saliency combines multiple parts of the HVS and allows identification of the regions where humans are likely to look. In applications, saliency-based methods have been used to control recursive or progressive rendering. Especially expensive display methods, such as path tracing or global illumination calculations, benefit from a perceptual representation, as recursions or calculations can be aborted when only small or imperceptible errors are expected. Yet saliency is commonly applied to 2D images, and its extension to 3D objects has only partially been presented.
Some issues need to be addressed to accomplish a complete transfer. In this work, we present a smart rendering system that not only uses a 3D visual salience model but also applies the reduction in detail directly during rendering. As opposed to conventional LOD methods, this detail reduction is not limited to a predefined set of levels; instead, a dynamic and continuous LOD is created. Furthermore, to apply this reduction in a human-oriented way, a universal function to compute the saliency of a 3D object is presented. The definition of this function allows object-related visual salience information to be precalculated and stored. The stored data is then applicable in any illumination scenario and allows regions of interest on the surface of a 3D object to be identified. Unlike preprocessed methods, which generate a view-independent LOD, this identification also includes information about the scene. Thus, we are able to define a perception-based, view-specific LOD. Performance measurements of a prototypical implementation on computers with modern graphics cards achieved interactive frame rates, and several tests have confirmed the validity of the reduction.

The adaptation of an object is performed with a dynamic data structure, the TreeCut. It is designed to operate on hierarchical representations that define a multi-resolution object. In such a hierarchy, the leaf nodes contain the highest detail, while inner nodes are approximations of their respective subtrees. As opposed to classical hierarchical rendering methods, a cut is stored explicitly and re-traversal of the tree during rendering is avoided. Due to this explicit cut representation, the TreeCut can be altered using only two core operations: refine and coarse. The refine operation increases detail by replacing a node of the tree with its children, while the coarse operation removes a node along with its siblings and replaces them with their parent node. These operations do not rely on external information and can be performed locally, requiring only direct successor or predecessor information. Different strategies to evolve the TreeCut are presented, which adapt the representation using only information given by the current cut. They evaluate the cut by assigning either a priority or a target level (or bucket) to each cut node. The former is modelled as an optimization problem that increases the average priority of a cut while being constrained in some way, e.g. in size; the latter evolves the cut to match a certain distribution and is applied in cases where a prioritization of nodes is not applicable. Both evaluation strategies operate with linear time complexity with respect to the size of the current TreeCut. The data layout separates rendering data from the hierarchy to enable multi-threaded evaluation and display: the object is adapted over multiple frames while rendering is not interrupted by the evaluation strategy. Due to this design, the overhead imposed by the TreeCut data structure does not influence rendering performance, and a linear time complexity for rendering is retained.
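
As an illustration of the cut operations and the priority-driven, budget-constrained evaluation described above, the following sketch implements a minimal tree cut. The class and function names, the greedy refinement loop and the node-count budget are assumptions made for illustration, not the thesis's actual data layout or evaluation strategies.

```python
# Minimal sketch of a cut through a multi-resolution hierarchy with the two
# core operations (refine, coarse) and a greedy, budget-constrained evaluation.
# Names and the greedy policy are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    priority: float                      # e.g. view- and saliency-dependent importance
    children: list = field(default_factory=list)
    parent: "Node | None" = None

def make_node(priority, children=()):
    n = Node(priority, list(children))
    for c in n.children:
        c.parent = n
    return n

class TreeCut:
    def __init__(self, root):
        self.cut = [root]                # the cut starts at the coarsest level

    def refine(self, node):
        """Replace a cut node by its children (increases detail)."""
        if node.children and node in self.cut:
            i = self.cut.index(node)
            self.cut[i:i + 1] = node.children

    def coarse(self, node):
        """Replace a node and its siblings by their parent (decreases detail)."""
        p = node.parent
        if p is not None and all(s in self.cut for s in p.children):
            i = self.cut.index(p.children[0])
            self.cut = [n for n in self.cut if n.parent is not p]
            self.cut.insert(i, p)

    def evaluate(self, budget):
        """Greedily refine the highest-priority cut nodes under a size budget."""
        while True:
            candidates = [n for n in self.cut if n.children
                          and len(self.cut) + len(n.children) - 1 <= budget]
            if not candidates:
                break
            self.refine(max(candidates, key=lambda n: n.priority))

# Toy hierarchy: a root with one salient and one unsalient subtree.
leaves = [make_node(p) for p in (0.9, 0.8, 0.1, 0.2)]
root = make_node(1.0, [make_node(0.85, leaves[:2]), make_node(0.15, leaves[2:])])
tc = TreeCut(root)
tc.evaluate(budget=3)
print([n.priority for n in tc.cut])      # the salient subtree is refined first
```
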
The TreeCut is not limited to altering the geometrical detail of an object. It has successfully been applied to create a non-photo-realistic stippling display, which draws the object with equally sized points of varying density. In this case the bucket-based evaluation strategy is used, which determines the distribution of the cut based on local illumination information. As an alternative, an attention-drawing mechanism is proposed, which applies the TreeCut evaluation strategies to define the display style of a notification icon; a combination of external priorities is used to derive the appropriate icon version. An application for this mechanism is a messaging system that accounts for the current user situation.

When optimizing an object or scene, perceptual methods make it possible to account for or exploit human limitations. Visual salience approaches therefore derive a saliency map, which encodes regions of interest in a 2D map. Rendering algorithms extract importance from such a map and adapt the rendering accordingly, e.g. aborting a recursion when the current location is unsalient. Visual salience depends on multiple factors, including the view and the illumination of the scene. We extend the existing definition of 2D saliency and propose a universal function for 3D visual salience: the Bidirectional Saliency Weight Distribution Function (BSWDF). Instead of extracting saliency from a 2D image and approximate 3D information, we compute this information directly from the 3D data. We derive a list of equivalent features for the 3D scenario and add them to the BSWDF. As the BSWDF is universal, 2D images are covered as well, and the important regions within images can be calculated.

To extract the individual features that contribute to visual salience, the capabilities of modern graphics cards are used in combination with an accumulation method for rendering. Inspired by point-based rendering methods, local features are summed up in a single surface element (surfel) and compared with their surround to determine whether they "pop out". These operations are performed with a shader program that is executed on the Graphics Processing Unit (GPU) and has direct access to the 3D data, which increases processing speed because no transfer of the data is required. After computation, these object-specific features can be combined to derive a saliency map for the object. Surface-specific information, e.g. color or curvature, can be preprocessed and stored on disk. We define a sampling scheme to determine the views that need to be evaluated for each object; with these schemes, the features can be interpolated for any view that occurs during rendering, and the corresponding surface data is reconstructed. The sampling schemes compose a set of images in the form of a lookup table, similar to existing rendering techniques that extract illumination information from a lookup. The size of the lookup table grows only with the number of samples or the image size used for its creation, as the images are of equal size; thus, the quality of the saliency data is independent of the object's geometrical complexity. The computation of a BSWDF can be performed either on a Central Processing Unit (CPU) or a GPU, and an implementation requires only a few instructions when using a shader program. If the surface features have been stored during a preprocess, a reprojection of the data is performed and combined with the current information of the object. Once the data is available, the saliency values are computed using a specialized illumination model, and a priority for each primitive is extracted.
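
The centre-surround comparison described above, in which a surfel's features are compared with their surround to decide whether they pop out, can be sketched on the CPU as follows. The scalar feature, the neighbourhood radius and the absolute-difference contrast are illustrative assumptions and stand in for the shader-based BSWDF feature extraction.

```python
# Minimal CPU sketch of a center-surround "pop out" measure over surfels.
# Each surfel carries a position and a scalar feature (e.g. mean luminance or
# curvature); its saliency is the absolute difference between its own feature
# and the average feature of its spatial neighbourhood. Radius, feature choice
# and distance metric are illustrative assumptions.
import numpy as np

def center_surround_saliency(positions: np.ndarray,
                             features: np.ndarray,
                             radius: float) -> np.ndarray:
    """positions: (N, 3), features: (N,) -> per-surfel saliency (N,)."""
    diff = positions[:, None, :] - positions[None, :, :]
    neighbours = np.linalg.norm(diff, axis=-1) < radius
    np.fill_diagonal(neighbours, False)          # exclude the surfel itself
    counts = neighbours.sum(axis=1)
    surround = np.where(counts > 0,
                        neighbours @ features / np.maximum(counts, 1),
                        features)                # isolated surfels: no contrast
    return np.abs(features - surround)

rng = np.random.default_rng(1)
pos = rng.random((200, 3))
feat = np.zeros(200)
feat[:5] = 1.0                                   # a few outlier surfels
sal = center_surround_saliency(pos, feat, radius=0.2)
print(sal[:5].mean(), sal[5:].mean())            # outliers receive higher saliency
```
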
If the GPU is used, the calculated data has to be transferred back from the graphics card. We therefore use the "transform feedback" capabilities, which allow high transfer rates and preserve the order of the processed primitives. In this way, regions of interest are identified based on the primitives currently in use. The TreeCut evaluation strategies are then able to optimize the representation in a perception-based manner. As the adaptation uses information from the current scene, each change to an object can result in new visual salience information; a self-optimizing system is thus defined: the Feedback System. The output generated by this system converges towards a perception-optimized solution. To demonstrate that the saliency information is useful, user tests were performed with the results generated by the proposed Feedback System. We compared a saliency-enhanced object compression with a purely geometrical approach, as is common for LOD generation. One result of the tests is that saliency information allows compression to be increased even further than is possible with purely geometrical methods. Participants were not able to distinguish between objects even when the saliency-based compression had only 60% of the size of the geometrically reduced object. At larger size ratios, the saliency-based compression was rated higher on average, and these results are highly significant according to statistical tests.

The Feedback System extends a 3D object with the capability of self-optimization. Not only geometrical detail but also other properties can be limited and optimized using the TreeCut in combination with a BSWDF. We present a dynamic animation that uses a Software Development Kit (SDK) for physical simulations. This was chosen, on the one hand, to show the universal applicability of the proposed system and, on the other hand, to focus on the connection between the TreeCut and the SDK. We adapt the existing framework and include the SDK within our design. In this case, the TreeCut operations alter not only geometrical but also simulation detail, which increases calculation performance because both the rendering and the SDK operate on less data after the reduction has been completed. The selected simulation type is a soft-body simulation. Soft bodies are deformable to a certain degree but retain their internal connectivity; an example is a piece of cloth that smoothly fits the underlying surface without tearing apart. Other types are rigid bodies, i.e. idealized objects that cannot be deformed, and fluids or gaseous materials, which are well suited for point-based simulations. Any of these simulations scales with the number of simulation nodes used, and a reduction of detail increases performance significantly. We define a specialized BSWDF to evaluate simulation-specific features, such as motion. The Feedback System then increases detail in highly salient regions, e.g. those with large motion, and saves computation time by reducing detail in static parts of the simulation. Thus, the detail of the simulation is preserved while fewer nodes are simulated.

The incorporation of perception into real-time rendering is an important part of recent research. Today, the HVS is well understood, and valid computer models have been derived; these models are frequently used in commercial and free software, e.g. JPEG compression. Within this thesis, the TreeCut is presented to change the LOD of an object in a dynamic and continuous manner. No definition of the individual levels is required in advance, and the transitions are performed locally.
Furthermore, in combination with the identification of important regions by the BSWDF, a perceptual evaluation of a 3D object is achieved. As opposed to existing methods, which approximate data from 2D images, the perceptual information is acquired directly from the 3D data. Some of this data can be preprocessed if necessary to avoid additional computations during rendering. The Feedback System, created by the TreeCut and the BSWDF, optimizes the representation and is not limited to visual data alone. We have shown with our prototype that interactive frame rates can be achieved with modern hardware, and we have demonstrated the validity of the reductions through several user tests. However, the presented system focuses only on specific aspects, and more research is required to capture even more of the capabilities that a perception-based rendering system can provide.