
    Optimization of four-primary white LEDs based on protective effect and color quality – a solution for museum illumination

    A solution was proposed for obtaining white light-emitting diodes (LEDs) suitable for illuminating traditional Chinese paintings painted with inorganic pigments (iop-TCPs), based on the requirements of protective illumination and color quality. The damage laws and degrees of the 450 nm, 510 nm, 583 nm, and 650 nm monochromatic lights that can construct four-primary white LEDs were obtained for the iop-TCPs through long-term illumination experiments and data analysis, by converting color coordinates into CIE DE2000 color-difference values. From these, we obtained a damage formula for the constructed white LEDs, which can be used to evaluate the degree of damage. Spectral power distributions (SPDs) of the white LEDs, iterated with a brute-force algorithm, were simulated with the Gaussian formula, and the constructed SPDs were evaluated with the damage formula and the color-quality formulas. Among the white LEDs meeting the color-quality requirements, those with higher correlated color temperatures (CCTs) cause less damage to iop-TCPs, and the lowest-damage SPDs satisfying the color-quality requirements were obtained for CCTs from 2700 K to 4000 K. These results provide a theoretical and practical basis for manufacturing white LEDs suitable for illuminating iop-TCPs, and the method can be further applied to preparing white LEDs for other cultural relics.
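
    As a rough illustration of the optimization pipeline described above, the sketch below builds a four-primary SPD as a sum of Gaussians at the stated peak wavelengths and brute-forces the relative weights. The FWHM values and the damage_index and color_quality_ok placeholders are assumptions for illustration only; the paper's fitted damage formula and colour-quality formulas are not reproduced here.

```python
import numpy as np
from itertools import product

# Wavelength grid (nm) over the visible range.
wl = np.arange(380, 781, 1.0)

def gaussian_band(peak_nm, fwhm_nm):
    """Single-primary SPD modelled as a Gaussian, as in the abstract."""
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - peak_nm) / sigma) ** 2)

# The four primaries named in the abstract; the FWHM values are assumptions.
primaries = [gaussian_band(p, f) for p, f in
             [(450, 20), (510, 30), (583, 30), (650, 20)]]

def damage_index(spd):
    # Placeholder: the paper's damage formula (fitted from the long-term
    # DE2000 illumination experiments) would go here.
    return float(np.trapz(spd * (wl < 500), wl))

def color_quality_ok(spd):
    # Placeholder: the paper's CCT and colour-quality checks would go here.
    return True

# Brute-force iteration over relative weights of the four primaries.
best = None
weights = np.linspace(0.0, 1.0, 11)
for w in product(weights, repeat=4):
    if sum(w) == 0:
        continue
    spd = sum(wi * p for wi, p in zip(w, primaries))
    spd /= np.trapz(spd, wl)          # normalise to unit radiant power
    if color_quality_ok(spd):
        d = damage_index(spd)
        if best is None or d < best[0]:
            best = (d, w)

print("lowest-damage weights:", best[1], "damage index:", best[0])
```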

    Adaptive kernel estimation for enhanced filtering and pattern classification of magnetic resonance imaging: novel techniques for evaluating the biomechanics and pathologic conditions of the lumbar spine

    This dissertation investigates the contribution of the lumbar spine musculature to the etiological and pathogenic characteristics of low back pain and lumbar spondylosis. This endeavor required a two-step process: 1) designing an accurate post-processing method for extracting relevant information from magnetic resonance images and 2) determining pathological trends by elucidating high-dimensional datasets through multivariate pattern classification. The lumbar musculature was initially evaluated by post-processing and segmentation of magnetic resonance (MR) images of the lumbar spine, which characteristically suffer from nonlinear corruption of the signal intensity. This so-called intensity inhomogeneity degrades the efficacy of traditional intensity-based segmentation algorithms. Proposed in this dissertation is a solution for filtering individual MR images by extracting a map of the underlying intensity inhomogeneity and using it to adaptively generate local estimates of the kernel’s optimal bandwidth. The adaptive kernel is implemented and tested within the structure of the non-local means filter, but is also generalized and extended to the Gaussian and anisotropic diffusion filters. A variety of performance metrics were used to measure either fine-feature preservation or the accuracy of post-processed segmentation; based on these metrics, the adaptive filters proposed in this dissertation significantly outperformed their non-adaptive counterparts. Using the proposed filter, the MR data were semi-automatically segmented to delineate between adipose and lean muscle tissues. Two important findings were reached using these data. First, a clear distinction between the musculature of males and females was established, allowing gender to be predicted with 100% accuracy. Second, degenerative lumbar spines were predicted with up to 92% accuracy. These results solidify prior assumptions regarding sexually dimorphic anatomy and the pathogenic nature of degenerative spine disease.
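
    The following is a minimal sketch of the idea of a spatially adaptive kernel bandwidth inside a non-local means filter, using a crude local-mean surrogate for the intensity-inhomogeneity map; the dissertation's actual bandwidth estimation and filter generalisations are more sophisticated, and the parameter values shown are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_nlm(img, patch=3, search=7, h0=0.1):
    """Non-local means filter with a per-pixel bandwidth h(x).

    The bandwidth is scaled by a crude local-mean estimate of the intensity
    inhomogeneity; the dissertation derives this map more carefully from the
    MR bias field.
    """
    img = img.astype(np.float64)
    bias = uniform_filter(img, size=15)            # rough inhomogeneity map
    h_map = h0 * (bias / (bias.mean() + 1e-12))    # spatially adaptive bandwidth
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode='reflect')
    out = np.zeros_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + pr + sr, j + pr + sr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            num, den = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    nb = pad[ci + di - pr:ci + di + pr + 1,
                             cj + dj - pr:cj + dj + pr + 1]
                    d2 = np.mean((ref - nb) ** 2)
                    w = np.exp(-d2 / (h_map[i, j] ** 2 + 1e-12))
                    num += w * pad[ci + di, cj + dj]
                    den += w
            out[i, j] = num / den
    return out
```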

    Field D* pathfinding in weighted simplicial complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing each region. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high, and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes – specifically, triangulations in 2D and tetrahedral meshes in 3D.
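
    A minimal sketch of the sampling approach mentioned above: extra points are introduced on the edge shared by two weighted regions, connected to the start and goal by edges weighted with (region weight) x (Euclidean length), and Dijkstra's algorithm is run over the resulting graph. The geometry, weights and sample spacing are hypothetical.

```python
import heapq, math

# Two weighted regions sharing the vertical edge x = 0:
#   left region (x < 0) has traversal weight w_left,
#   right region (x > 0) has traversal weight w_right.
w_left, w_right = 1.0, 3.0
start, goal = (-4.0, 0.0), (4.0, 2.0)

# Sampling approach: introduce extra points on the shared region edge.
samples = [(0.0, i * 0.25) for i in range(-20, 21)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Build the graph: start -> edge samples -> goal, weighted by region cost.
graph = {}
for s in samples:
    graph.setdefault(start, []).append((s, w_left * dist(start, s)))
    graph.setdefault(s, []).append((goal, w_right * dist(s, goal)))

def dijkstra(graph, src, dst):
    pq, seen = [(0.0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            return d
        for v, w in graph.get(u, []):
            if v not in seen:
                heapq.heappush(pq, (d + w, v))
    return float('inf')

print("approximate weighted shortest-path cost:",
      round(dijkstra(graph, start, goal), 3))
```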

    Field D* Pathfinding in Weighted Simplicial Complexes

    The development of algorithms to efficiently determine an optimal path through a complex environment is a continuing area of research within Computer Science. When such environments can be represented as a graph, established graph search algorithms, such as Dijkstra’s shortest path and A*, can be used. However, many environments are constructed from a set of regions that do not conform to a discrete graph. The Weighted Region Problem was proposed to address the problem of finding the shortest path through a set of such regions, weighted with values representing the cost of traversing each region. Robust solutions to this problem are computationally expensive, since finding shortest paths across a region requires expensive minimisation. Sampling approaches construct graphs by introducing extra points on region edges and connecting them with edges criss-crossing the region; Dijkstra or A* are then applied to compute shortest paths. The connectivity of these graphs is high, and such techniques are thus not particularly well suited to environments where the weights and representation frequently change. The Field D* algorithm, by contrast, computes the shortest path across a grid of weighted square cells and has replanning capabilities that cater for environmental changes. However, representing an environment as a weighted grid (an image) is not space-efficient, since high resolution is required to produce accurate paths through areas containing features sensitive to noise. In this work, we extend Field D* to weighted simplicial complexes – specifically, triangulations in 2D and tetrahedral meshes in 3D. Such representations offer benefits in terms of space over a weighted grid, since fewer triangles can represent polygonal objects with greater accuracy than a large number of grid cells. By exploiting these savings, we show that Triangulated Field D* can produce an equivalent path cost to grid-based Multi-resolution Field D*, using up to an order of magnitude fewer triangles than grid cells and visiting an order of magnitude fewer nodes. Finally, as a practical demonstration of the utility of our formulation, we show how Field D* can be used to approximate a distance field on the nodes of a simplicial complex, and how this distance field can be used to weight the simplicial complex to produce contour-following behaviour by shortest paths computed with Field D*.
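
    The sketch below illustrates the core interpolation step that distinguishes Field D* from plain graph search, applied to one triangle of a weighted triangulation: the path cost along the opposite edge is linearly interpolated between its endpoints and the cheapest entry point is chosen. Field D* solves this minimisation in closed form; here it is sampled numerically, and the geometry and costs are illustrative assumptions.

```python
import numpy as np

def interpolated_edge_cost(s, s1, s2, g1, g2, weight, samples=1001):
    """Cost of reaching node s across the opposite edge (s1, s2) of a
    weighted triangle.

    g1, g2 are the current path costs at s1 and s2; the path cost along the
    edge is linearly interpolated, and traversal of the triangle interior is
    charged at 'weight' per unit length.  Field D* performs this minimisation
    in closed form; here t is sampled numerically.
    """
    s, s1, s2 = map(np.asarray, (s, s1, s2))
    t = np.linspace(0.0, 1.0, samples)
    entry = s1[None, :] + t[:, None] * (s2 - s1)[None, :]   # points on the edge
    g_entry = g1 + t * (g2 - g1)                             # interpolated cost
    total = g_entry + weight * np.linalg.norm(entry - s[None, :], axis=1)
    k = int(np.argmin(total))
    return float(total[k]), float(t[k])

# Example: a unit right triangle where cutting across the interior pays off.
cost, t_star = interpolated_edge_cost(s=(0.0, 0.0), s1=(1.0, 0.0),
                                      s2=(1.0, 1.0), g1=2.0, g2=1.0,
                                      weight=1.5)
print(f"best cost {cost:.3f}, entering the edge at t = {t_star:.2f}")
```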

    Visual analytics of multidimensional time-dependent trails:with applications in shape tracking

    Much of the data collected for both scientific and non-scientific purposes shares similar characteristics: it changes over time and has many different properties. For example, consider the trajectory of an airplane travelling from one location to another. Not only does the airplane itself move over time, but its heading, height and speed change at the same time. During this research, we investigated different ways to collect and visualize data with these characteristics. One practical application is an automated milking device, which needs to be able to determine the position of a cow's teats. By visualizing all the data generated during the tracking process, we can gain insight into the workings of the tracking system and identify possibilities for improvement, which should lead to better recognition of the teats by the machine. Another important result of the research is a method for efficiently processing a large amount of trajectory data and visualizing it in a simplified manner. This has led to a system which can show the movement of all airplanes around the world over a period of multiple weeks.
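
    The abstract does not name the simplification method used; purely as an illustration of reducing a dense trajectory before visualization, the sketch below applies the standard Ramer-Douglas-Peucker algorithm to a synthetic track.

```python
import numpy as np

def rdp(points, eps):
    """Ramer-Douglas-Peucker simplification of a polyline (one standard way
    to thin a dense trajectory before rendering)."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    if seg_len == 0.0:
        d = np.linalg.norm(pts - start, axis=1)
    else:
        # Perpendicular distance of every point to the chord start -> end.
        d = np.abs(seg[0] * (pts[:, 1] - start[1]) -
                   seg[1] * (pts[:, 0] - start[0])) / seg_len
    k = int(np.argmax(d))
    if d[k] > eps:
        left = rdp(pts[:k + 1], eps)     # recurse on the two halves
        right = rdp(pts[k:], eps)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

# Synthetic trajectory: a slowly varying curve sampled densely.
track = [(t, float(np.sin(t / 10.0))) for t in range(200)]
print(len(rdp(track, eps=0.05)), "points kept of", len(track))
```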

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games provide a unique opportunity to offer a tailored environment for each user that better suits their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in UnrealTournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature suggesting that physiological measurements are needed: we show that it is possible to estimate user emotion with a software-only method.
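
    A toy sketch of software-only, rule-based emotion estimation in the spirit described above; the membership functions, game-event variables and rules are invented for illustration and are not the FLAME-based model implemented in the paper.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def estimate_satisfaction(deaths_per_min, hits_per_min):
    """Toy Mamdani-style inference: two game-event inputs, one output.

    The real system uses the (non-adaptive) emotional component of FLAME
    driven by UnrealTournament 2004 game events; the variables, memberships
    and rules here are illustrative only.
    """
    frustration = tri(deaths_per_min, 0.5, 2.0, 4.0)
    success = tri(hits_per_min, 5.0, 15.0, 30.0)
    # Rules: success raises satisfaction, frustration lowers it.
    fire_high = min(success, 1.0 - frustration)
    fire_low = max(frustration, 1.0 - success)
    # Defuzzify with a weighted average of the rule consequents.
    return (fire_high * 0.9 + fire_low * 0.2) / (fire_high + fire_low + 1e-9)

print(round(estimate_satisfaction(deaths_per_min=1.0, hits_per_min=20.0), 2))
```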

    Arquitectura, técnicas y modelos para posibilitar la Ciencia de Datos en el Archivo de la Misión Gaia (Architecture, Techniques and Models for Enabling Data Science in the Gaia Mission Archive)

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 26/05/2017. The massive amounts of data that the world produces every day pose new challenges to modern societies in terms of how to leverage their inherent value. Social networks, instant messaging, video, smart devices and scientific missions are just a few examples of the vast number of sources generating data every second. As the world becomes more and more digitalized, new needs arise for organizing, archiving, sharing, analyzing, visualizing and protecting ever-increasing data sets, so that we can truly develop into a data-driven economy that reduces inefficiencies and increases sustainability, creating new business opportunities along the way. Traditional approaches to harnessing data are no longer suitable, as they lack the means to scale to larger volumes in a timely and cost-efficient manner. This has changed somewhat with the advent of Internet companies like Google and Facebook, which have devised new ways of tackling this issue. However, the variety and complexity of the value chains in the private sector, as well as the increasing demands and constraints under which the public sector operates, require ongoing research that can yield new strategies for dealing with data, facilitate the integration of providers and consumers of information, and guarantee a smooth and prompt transition when adopting these cutting-edge technological advances. This thesis aims to provide novel architectures and techniques that will help perform this transition towards Big Data in massive scientific archives. It highlights the common pitfalls that must be faced when embracing it and how to overcome them, especially when the data sets, their transformation pipelines and the tools used for the analysis are already present in the organizations. Furthermore, a new perspective for facilitating a smoother transition is laid out. It involves the usage of higher-level, use-case-specific frameworks and models, which naturally bridge the gap between the technological and scientific domains. This alternative will effectively widen the possibilities of scientific archives and therefore contribute to reducing the time to science. The research is applied to the European Space Agency cornerstone mission Gaia, whose final data archive will represent a tremendous discovery potential. It will create the largest and most precise three-dimensional chart of our galaxy (the Milky Way), providing unprecedented position, parallax and proper-motion measurements for about one billion stars. The successful exploitation of this data archive will depend to a large degree on the ability to offer the proper architecture, i.e. the infrastructure and middleware upon which scientists will be able to explore and model this huge data set. In consequence, the approach taken needs to enable data fusion with other scientific archives, as this will produce the synergies leading to an increase in scientific output, both in volume and in quality. The set of novel techniques and frameworks presented in this work addresses these issues by contextualizing them with the data products that will be generated in the Gaia mission. All these considerations have led to the foundations of the architecture that will be leveraged by the Science Enabling Applications Work Package.
Last but not least, the effectiveness of the proposed solution will be demonstrated through the implementation of some ambitious statistical problems that require significant computational capabilities and use Gaia-like simulated data (the first Gaia data release took place on September 14th, 2016). These problems will be referred to as the Grand Challenge, a somewhat grandiloquent name for inferring, from a probabilistic point of view, a set of parameters of the Initial Mass Function (IMF) and Star Formation Rate (SFR) of a given set of stars (with a huge sample size) from noisy estimates of their masses and ages respectively. This will be achieved using Hierarchical Bayesian Modeling (HBM). In principle, the HBM can incorporate stellar evolution models to infer the IMF and SFR directly, but in this first step presented in the thesis we start with a somewhat less ambitious goal: inferring the Present-Day Mass Function (PDMF) and Present-Day Age Distribution (PDAD). Moreover, the performance and scalability analyses carried out also prove the suitability of the models for the large amounts of data that will be available in the Gaia data archive.
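
    As a small illustration of the hierarchical idea behind the Grand Challenge, the sketch below infers the slope of a power-law present-day mass function from noisy mass estimates by marginalising each star's latent true mass on a grid. The data are synthetic, the single-parameter model is far simpler than the thesis's HBM, and it says nothing about the architectural or scalability aspects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "catalogue": true masses drawn from a power-law PDMF with slope
# alpha_true, observed with Gaussian noise (a stand-in for Gaia-like noisy
# mass estimates; not real mission data).
alpha_true, m_lo, m_hi, sigma = 2.35, 0.5, 10.0, 0.3
u = rng.random(2000)
m_true = (m_lo**(1 - alpha_true) +
          u * (m_hi**(1 - alpha_true) - m_lo**(1 - alpha_true)))**(1 / (1 - alpha_true))
m_obs = m_true + rng.normal(0.0, sigma, size=m_true.size)

# Hierarchical marginal likelihood:
#   p(m_obs | alpha) = integral over m of N(m_obs; m, sigma) * powerlaw(m; alpha) dm,
# evaluated on a grid of latent masses for every star.
m_grid = np.linspace(m_lo, m_hi, 400)

def log_like(alpha):
    norm = (m_hi**(1 - alpha) - m_lo**(1 - alpha)) / (1 - alpha)
    prior = m_grid**(-alpha) / norm                      # p(m | alpha)
    like = np.exp(-0.5 * ((m_obs[:, None] - m_grid[None, :]) / sigma) ** 2)
    like /= np.sqrt(2 * np.pi) * sigma
    marg = np.trapz(like * prior[None, :], m_grid, axis=1)
    return float(np.sum(np.log(marg + 1e-300)))

# Posterior over the slope on a grid (flat prior on alpha).
alphas = np.linspace(1.5, 3.5, 81)
post = np.array([log_like(a) for a in alphas])
print("posterior-mode slope:", alphas[int(np.argmax(post))])
```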

    Visualisation of multi-dimensional medical images with application to brain electrical impedance tomography

    Medical imaging plays an important role in modern medicine. With the increasing complexity of, and information presented by, medical images, visualisation is vital for medical research and clinical applications to interpret the information these images contain. The aim of this research is to investigate improvements to medical image visualisation, particularly for multi-dimensional medical image datasets. A recently developed medical imaging technique known as Electrical Impedance Tomography (EIT) is used as a demonstration. To fulfil this aim, three main efforts are included in this work. First, a novel scheme for the processing of brain EIT data with SPM (Statistical Parametric Mapping) to detect ROIs (Regions of Interest) in the data is proposed, based on a theoretical analysis. To evaluate the feasibility of this scheme, two types of experiments were carried out: one with simulated EIT data, and the other with human brain EIT data under visual stimulation. The experimental results demonstrate that SPM is able to localise the expected ROIs in EIT data correctly, and that it is reasonable to use the balloon hemodynamic change model to simulate the impedance change during brain functional activity. Secondly, to deal with the absence of human morphology information in EIT visualisation, an innovative landmark-based registration scheme is developed to register brain EIT images with a standard anatomical brain atlas. Finally, a new task typology model is derived for task exploration in medical image visualisation, and a task-based system development methodology is proposed for the visualisation of multi-dimensional medical images. As a case study, a prototype visualisation system, named EIT5DVis, has been developed following this methodology to visualise five-dimensional brain EIT data. The EIT5DVis system is able to accept visualisation tasks through a graphical user interface; apply appropriate methods to analyse tasks, including the ROI detection approach and registration scheme mentioned above; and produce various visualisations.
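
    As an illustration of the landmark-based idea (not the thesis's specific EIT-to-atlas scheme), the sketch below computes the least-squares rigid transform that aligns paired landmarks using the standard Kabsch/Procrustes solution; the landmark coordinates are hypothetical.

```python
import numpy as np

def rigid_landmark_fit(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    paired landmarks src -> dst (Kabsch/Procrustes solution).

    Generic illustration of landmark-based registration; the thesis's scheme
    registers EIT reconstructions to a standard anatomical brain atlas.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Hypothetical anatomical landmarks (e.g. nasion, inion, preauricular points).
src = [[0, 9, 2], [0, -9, 2], [7, 0, 1], [-7, 0, 1]]
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = (R_true @ np.asarray(src, float).T).T + np.array([1.0, 2.0, 0.5])
R, t = rigid_landmark_fit(src, dst)
print("recovered translation:", np.round(t, 3))   # expect about [1. 2. 0.5]
```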

    Q(sqrt(-3))-Integral Points on a Mordell Curve

    We use an extension of quadratic Chabauty to number fields, recently developed by the author with Balakrishnan, Besser and Müller, combined with a sieving technique, to determine the integral points over Q(√−3) on the Mordell curve y^2 = x^3 − 4.
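
    For orientation only, a naive box search over rational integers recovers the small Z-points on this curve. This is not the quadratic Chabauty plus sieving method of the paper, and it says nothing about points with coordinates in Q(√−3) outside Z.

```python
from math import isqrt

# Naive search for rational-integer points on y^2 = x^3 - 4 within a small
# box.  This is only a sanity check over Z; it is not the quadratic Chabauty
# method of the paper and does not address Q(sqrt(-3))-integral points.
points = []
for x in range(-10, 10_001):
    rhs = x**3 - 4
    if rhs < 0:
        continue
    y = isqrt(rhs)
    if y * y == rhs:
        points.extend([(x, y), (x, -y)] if y else [(x, 0)])

print(points)   # expected to include (2, +-2) and (5, +-11)
```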