17 research outputs found

    Interacting Components

    Get PDF
    SystemCSP is a graphical modeling language based on CSP and on concepts from component-based software development. Its component framework enables the specification of both interaction scenarios and the relative execution ordering among components. Interaction among participating components is specified and implemented through the notion of an interaction contract. This approach enables incremental design of execution diagrams: restrictions are added across different interaction diagrams throughout system design, relating all diagrams into a single, formally verifiable system. The concept of reusable, formally verifiable interaction contracts is illustrated by designing a set of design patterns for typical fault-tolerance interaction scenarios.

    CSP and Real-Time: Reality or Illusion?

    Get PDF

    Heliostat field aiming strategies for solar central receivers

    Get PDF
    International Mention in the doctoral degree. This thesis deals with the development of optical models for solar power tower technology. Specifically, this work focuses on modeling flux maps and aiming strategies for central receiver systems (CRS). The resulting codes are applicable to CRS design and operation. This dissertation presents four computational models. The first model, on which the rest of the models are built, computes the flux density distribution that a single heliostat produces on any kind of central receiver. The procedure relies on the oblique projection of the receiver mesh onto the image plane, where an accurate analytic function, e.g. UNIZAR, is evaluated. Oblique projection is accomplished by a transformation of coordinate systems. The 4-step projection method accurately reproduces the distorted spot found at large incidence angles on the heliostat and the receiver. This basic model was validated against flux measurements on a flat receiver and against Monte Carlo ray-tracing simulations on a cylindrical receiver. Compared to SolTrace, the model requires 50 times less computation time while delivering a higher level of resolution. The second model determines canting errors in the facets of real heliostats. Based on a deterministic optimization algorithm, a procedure was set up to minimize the difference between computed flux maps and images captured on a Lambertian target. Experimental images from the THEMIS plant were used to find the canting errors of three selected CETHEL heliostats.
    Based on the model's results, one of the heliostats was successfully readjusted, significantly improving its optical quality and validating the proposed methodology. The third model extends the basic model to superpose the flux maps of individual heliostats over a whole field. Shading and blocking losses are computed by parallel projection of neighboring heliostats. An aiming strategy, symmetric about the receiver equator, was developed on the basis of a single parameter: the aiming factor k. Nearly single equatorial aiming is achieved with k = 3, while k = 0 points the heliostats at the upper and lower receiver edges. For the Gemasolar case study, an aiming factor of 2 yielded the most uniform flux maps, i.e. a flat profile in the central region, with a negligible increase in spillage losses compared to equatorial aiming. The fourth model implements an optimal aiming strategy for molten-salt receivers. An algorithm was developed to maximize receiver thermal output while meeting corrosion and thermal stress limits, which are translated into allowable flux densities (AFD). Compared to the usually unreliable single aiming, the optimized aiming strategy ensures receiver integrity while spillage losses increase by only 4 percentage points. Optimal aim points were found to be, on average, slightly shifted towards the salt entrance side of each panel. Despite the conflicting demands between adjacent panels in multi-panel receivers with a serpentine flow pattern, the fitting algorithm achieves a close match to the instantaneous AFD profile. The resulting code takes around 2 minutes on a standard PC to compute the optimal aim points for a field of 2650 heliostats. Programa Oficial de Doctorado en Ingeniería Mecánica y de Organización Industrial. Committee: President: Manuel Jesús Blanco Muriel; Secretary: Manuel Romero Álvarez; Member: Francisco Javier Collado Giménez.
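The aiming-factor idea can be sketched as follows. The parametrization below is a plausible illustration, not the thesis's exact formulas: each heliostat's aim point is offset from the receiver equator towards an edge by at most H/2, backed off by k times the beam radius of its reflected image, with alternating heliostats aiming above and below the equator so the combined flux map stays symmetric.

```python
def aim_offsets(sigmas, H, k):
    """Vertical aim-point offsets from the receiver equator.
    Illustrative parametrization (an assumption, not the thesis's
    formula): each heliostat aims H/2 - k*sigma away from the
    equator, clipped at 0, alternating above/below for symmetry.
    sigmas: per-heliostat beam radius on the receiver [m]
    H: receiver height [m]
    k: aiming factor (k = 0 -> edge aiming, large k -> equatorial)."""
    offsets = []
    for i, sigma in enumerate(sigmas):
        d = max(0.0, H / 2 - k * sigma)
        offsets.append(d if i % 2 == 0 else -d)
    return offsets

# 4 heliostats with growing beam radii, 10 m tall receiver:
print(aim_offsets([1.0, 1.5, 2.0, 2.5], H=10.0, k=0))  # all aim at the edges (+/- H/2)
print(aim_offsets([1.0, 1.5, 2.0, 2.5], H=10.0, k=3))  # large images pulled to the equator
```

Larger k protects the edges (less spillage risk per heliostat) at the cost of flux concentration near the equator, which is why an intermediate value such as k = 2 can give the most uniform profile.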

    ProcessJ: A process-oriented programming language

    Full text link
    Java is a widely adopted general-purpose object-oriented programming language. Because of its high adoption rate and its lineage as a C-style language, its syntax is familiar to many programmers. The downside is that Java is not natively concurrent. Volumes have been written about concurrent programming in Java; however, concurrency is difficult to reason about within an object-oriented paradigm and so is difficult to get right. occam-π is a general-purpose process-oriented programming language in which concurrency is part of the theoretical underpinnings. Concurrency is simple to reason about in an occam-π application because there is never any shared state; moreover, occam-π is based on a process calculus, with algebraic laws for composing processes and well-defined semantics for how processes interact. The downside is that its syntax is foreign, even archaic, to programmers used to Java. This thesis presents ProcessJ, a new general-purpose, process-oriented programming language meant to bridge the gap between Java and occam-π. ProcessJ combines the familiar syntax of Java with the process semantics of occam-π, yielding a familiar-looking language in which concurrent programs are easy to reason about. This thesis describes the ProcessJ language as well as the implementation of a compiler that translates ProcessJ source code to Java with Java Communicating Sequential Processes (JCSP), a library that provides CSP-style communication primitives.
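The CSP-style communication that ProcessJ compiles down to (via JCSP in Java) can be sketched in Python. The channel class below is a hypothetical stand-in for a one-to-one CSP channel, not JCSP's API: it implements the synchronous rendezvous in which both sender and receiver block until the communication completes, so the two processes share no mutable state.

```python
import threading

class Channel:
    """Minimal synchronous (rendezvous) channel: write() blocks until
    a matching read() takes the value, and vice versa, mimicking
    CSP-style point-to-point communication."""
    def __init__(self):
        self._cond = threading.Condition()
        self._item = None
        self._full = False
        self._taken = False

    def write(self, value):
        with self._cond:
            while self._full:            # wait for any previous rendezvous to finish
                self._cond.wait()
            self._item, self._full, self._taken = value, True, False
            self._cond.notify_all()
            while not self._taken:       # block until a reader takes the value
                self._cond.wait()

    def read(self):
        with self._cond:
            while not self._full:        # block until a writer offers a value
                self._cond.wait()
            value = self._item
            self._full, self._taken = False, True
            self._cond.notify_all()
            return value

# Two "processes" communicating only over the channel.
chan = Channel()
results = []

def producer():
    for i in range(3):
        chan.write(i * i)

def consumer():
    for _ in range(3):
        results.append(chan.read())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 1, 4]
```

Because every communication is a rendezvous, the producer can never run ahead of the consumer; this is the property that makes occam-π-style programs easier to reason about than lock-based shared-state code.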

    Deep learning techniques to bridge the gap between 2D and 3D ultrasound imaging

    Get PDF
    Three-dimensional (3D) ultrasound imaging has contributed to our understanding of fetal developmental processes in the womb by providing rich contextual information about the inherently 3D anatomies. However, its clinical use is limited by high purchasing costs and limited diagnostic practicality. Freehand two-dimensional (2D) ultrasound imaging, in contrast, is routinely used in standard obstetric exams. The low cost and portability of 2D ultrasound make it uniquely suitable for low- and middle-income settings. However, it demands a high level of expertise and inherently lacks a 3D representation of the anatomies, which limits its potential for more accessible and advanced assessment. Capitalizing on the flexibility of freehand 2D ultrasound acquisition, this thesis presents a deep learning-based framework for optimizing the utilization and diagnostic power of 2D freehand ultrasound in fetal brain imaging. First, a localization model is presented that predicts the location of 2D ultrasound fetal brain scans in a 3D brain atlas. It is trained by sampling 2D slices from aligned 3D fetal brain volumes, so that heavy annotation of each 2D scan is not required. This can be used for scanning guidance and standard plane localization. An unsupervised methodology is further proposed to adapt a trained localization model to freehand 2D ultrasound images acquired from arbitrary domains, for example different sonographers, manufacturers, and acquisition protocols. This enables the model to be used at the bedside in practice, where it can be fine-tuned with just the images acquired in any arbitrary domain before inference. Building on the ability to localize 2D scans in the 3D brain atlas, a framework is presented to reconstruct 3D volumes from non-sensor-tracked 2D ultrasound images using implicit representations.
    With this slice-to-volume reconstruction framework, additional 3D information can be extracted from 2D freehand scans. Finally, a semi-automatic model, trained only on raw 3D volumes without any manual annotation, is presented to segment arbitrary structures of interest in 3D medical volumes, requiring manual annotation of only a single slice at inference. The model is tested on a wide variety of medical imaging datasets and anatomical structures, verifying its generalizability. The framework follows three fundamental principles, namely minimal human annotation, generalizability, and sensorless operation, to support seamless integration into the clinical workflow. This may modernize routine freehand scanning and enhance its accessibility, while maximizing the clinical information gained from routine scans acquired as part of the continuum of pregnancy care.
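The annotation-free training scheme described above, sampling 2D slices from an aligned 3D volume so that each slice carries its location label for free, can be sketched with a hypothetical sampler. The thesis predicts full 3D plane poses; the axis-aligned version below is a simplification for illustration only.

```python
import numpy as np

def sample_slice(volume, rng):
    """Sample a random axial slice from an atlas-aligned 3D volume.
    The normalized slice depth serves as the training label, so no
    manual annotation of individual 2D scans is needed.
    Simplifying assumption: real freehand slices have full 6-DoF
    plane poses, not just a depth index."""
    depth = volume.shape[0]
    z = rng.integers(0, depth)
    image = volume[z]                # 2D slice at depth z
    label = z / (depth - 1)          # normalized location in the atlas
    return image, label

rng = np.random.default_rng(0)
volume = rng.random((64, 128, 128))  # stand-in for an aligned fetal brain volume
image, label = sample_slice(volume, rng)
print(image.shape)  # (128, 128)
```

Each call yields a (slice, location) training pair, and the self-supervised labels come directly from the volume's alignment to the atlas.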

    Engineering Agile Big-Data Systems

    Get PDF
    To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, organisational policies, and the structure and interpretation of the data they hold. Manual customisation is expensive, time-consuming, and error-prone. In large complex systems, the value of the data can be such that exhaustive testing is necessary before any new feature can be added to the existing design. In most cases, the precise details of requirements, policies, and data will change during the lifetime of the system, forcing a choice between expensive modification and continued operation with an inefficient design. Engineering Agile Big-Data Systems outlines an approach to dealing with these problems in software and data engineering, describing a methodology for aligning these processes throughout product lifecycles. It discusses tools which can be used to achieve these goals and, in a number of case studies, shows how the tools and methodology have been used to improve a variety of academic and business systems.

    Study of Adaptation Methods Towards Advanced Brain-computer Interfaces

    Get PDF
    Ph.D. (Doctor of Philosophy)
