29 research outputs found

    Behavioural model debugging in Linda

    This thesis investigates event-based behavioural model debugging in Linda. A study is presented of the Linda parallel programming paradigm, its amenability to debugging, and a model for debugging Linda programs using Milner's CCS. In support of the construction of expected behaviour models, a Linda program specification language is proposed. A behaviour recognition engine based on such specifications is also discussed. It is shown that Linda's distinctive characteristics make it amenable to debugging without the usual problems associated with parallel debuggers. Furthermore, it is shown that a behavioural model debugger, based on the proposed specification language, effectively exploits the debugging opportunity. The ideas developed in the thesis are demonstrated in an experimental Modula-2 Linda system.
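
    The Linda paradigm coordinates processes through a shared tuple space accessed by a small set of operations (out, in, rd), and it is the stream of such operations that an event-based debugger observes. As a rough illustration only, the following Python sketch models a tuple space that reports each operation to an event callback; the class and callback names are assumptions for illustration and do not reflect the thesis's Modula-2 Linda system or its specification language.

        import threading

        class TupleSpace:
            """A toy Linda-style tuple space that reports out/in/rd events."""
            def __init__(self, on_event=lambda op, tup: None):
                self._tuples = []
                self._cond = threading.Condition()
                self._on_event = on_event

            def _match(self, template, tup):
                # None in a template field acts as a wildcard.
                return len(template) == len(tup) and all(
                    t is None or t == v for t, v in zip(template, tup))

            def out(self, tup):
                """Deposit a tuple and emit an 'out' event."""
                with self._cond:
                    self._tuples.append(tup)
                    self._on_event("out", tup)
                    self._cond.notify_all()

            def in_(self, template):
                """Withdraw a matching tuple (blocking) and emit an 'in' event."""
                with self._cond:
                    while True:
                        for tup in self._tuples:
                            if self._match(template, tup):
                                self._tuples.remove(tup)
                                self._on_event("in", tup)
                                return tup
                        self._cond.wait()

            def rd(self, template):
                """Read a matching tuple without removing it; emit a 'rd' event."""
                with self._cond:
                    while True:
                        for tup in self._tuples:
                            if self._match(template, tup):
                                self._on_event("rd", tup)
                                return tup
                        self._cond.wait()

        # The recorded event trace is what a behaviour recognition engine would
        # compare against an expected behaviour model.
        trace = []
        ts = TupleSpace(on_event=lambda op, tup: trace.append((op, tup)))
        ts.out(("task", 1))
        assert ts.in_(("task", None)) == ("task", 1)
        print(trace)   # [('out', ('task', 1)), ('in', ('task', 1))]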

    Un framework pour l'exécution efficace d'applications sur GPU et CPU+GPU

    Technological limitations faced by semiconductor manufacturers in the early 2000s halted the steady growth in performance of sequential computation units. The current trend is to increase the number of processor cores per socket and to use GPUs progressively for highly parallel computations. The complexity of recent architectures makes it difficult to predict the performance of a program statically. We describe a reliable and accurate method for predicting the execution time of parallel loop nests on GPUs, based on three stages: static code generation, offline profiling, and online prediction. In addition, we present two techniques to fully exploit the computing resources available on a system. The first technique consists of using the CPU and GPU jointly to execute a code; to achieve higher performance, load balance must be taken into account, in particular by predicting execution times. The runtime uses the profiling results, and the scheduler computes execution times and adjusts the load distributed to the processors. The second technique puts the CPU and GPU in competition: instances of the considered code are executed simultaneously on the CPU and GPU, and the winner of the competition notifies the other instance of its completion, causing the latter to terminate.
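
    The second technique amounts to a race between two implementations of the same work. As a purely illustrative sketch (the thesis targets actual CPU and GPU code; the function names and the use of Python threads here are assumptions), the pattern can be expressed as follows: each competitor periodically checks a shared stop flag, and the first to finish sets it, causing the other instance to abandon its work.

        import threading, queue

        def run_competition(variants, n):
            """Run all variants concurrently; the first to finish wins and stops the rest."""
            stop = threading.Event()
            results = queue.Queue()

            def runner(fn):
                out = fn(stop, n)
                if out is not None and not stop.is_set():
                    stop.set()               # notify completion to the competing instance
                    results.put((fn.__name__, out))

            threads = [threading.Thread(target=runner, args=(fn,)) for fn in variants]
            for t in threads:
                t.start()
            winner = results.get()           # first result to arrive wins the competition
            for t in threads:
                t.join()
            return winner

        def cpu_version(stop, n):
            acc = 0
            for i in range(n):
                if stop.is_set():
                    return None              # the other instance won; terminate early
                acc += i
            return acc

        def gpu_version(stop, n):
            # Stand-in for a GPU kernel launch; here simply a closed-form shortcut.
            return n * (n - 1) // 2

        print(run_competition([cpu_version, gpu_version], 5_000_000))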

    Mining a Small Medical Data Set by Integrating the Decision Tree and t-test

    Although several researchers have used statistical methods to prove that aspiration followed by injection of 95% ethanol left in situ (retention) is an effective treatment for ovarian endometriomas, very few discuss the different conditions that could lead to different recovery rates for patients. This study therefore adopts statistical methods and decision tree techniques together to analyze the postoperative status of ovarian endometriosis patients under different conditions. Since our collected data set is small, containing only 212 records, we use all of these data as training data. Instead of using the resulting tree to generate rules directly, we use the value of each node as a cut point to generate all possible rules from the tree first. Then, using the t-test, we verify these rules to discover useful descriptive rules. Experimental results show that our approach can find new and interesting knowledge about recurrent ovarian endometriomas under different conditions.
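
    The core procedure can be summarized as: fit a decision tree on the full data set, treat each internal node's split value as a cut point, enumerate the candidate rules those cut points induce, and keep only the rules whose two patient groups differ significantly under a t-test. The following Python sketch illustrates that flow on synthetic data; the feature matrix, outcome variable, and significance level are assumptions for illustration, not the paper's clinical data or its exact rule-generation procedure.

        import numpy as np
        from scipy import stats
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic stand-in for the 212-record clinical data set (4 numeric features).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(212, 4))
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=212) > 0).astype(int)

        # Fit the tree on all records (no held-out test set, as in the small-data setting).
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        t = tree.tree_

        rules = []
        for node in range(t.node_count):
            if t.children_left[node] != t.children_right[node]:      # internal node
                feat, thr = t.feature[node], t.threshold[node]
                left, right = y[X[:, feat] <= thr], y[X[:, feat] > thr]
                # Keep the candidate rule only if the two groups differ significantly.
                tstat, p = stats.ttest_ind(left, right, equal_var=False)
                if p < 0.05:
                    rules.append((feat, thr, p))

        for feat, thr, p in rules:
            print(f"feature {feat} <= {thr:.2f} splits outcomes (p = {p:.3g})")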

    Technology Made Legible: A Cultural Study of Software as a Form of Writing in the Theories and Practices of Software Engineering

    My dissertation proposes an analytical framework for the cultural understanding of the group of technologies commonly referred to as 'new' or 'digital'. I aim to dispel what the philosopher Bernard Stiegler calls the 'deep opacity' that still surrounds new technologies, and that constitutes one of the main obstacles in their conceptualization today. I argue that such a critical intervention is essential if we are to take new technologies seriously, and if we are to engage with them on both the cultural and the political level. I understand new technologies as technologies based on software. I therefore suggest that a complex understanding of technologies, and of their role in contemporary culture and society, requires, as a preliminary step, an investigation of how software works. This involves going beyond studying the intertwined processes of its production, reception and consumption, processes that typically constitute the focus of media and cultural studies. Instead, I propose a way of accessing the ever-present but allegedly invisible codes and languages that constitute software. I thus reformulate the problem of understanding software-based technologies as a problem of making software legible. I build my analysis on the concept of software advanced by Software Engineering, a technical discipline born in the late 1960s that defines software development as an advanced writing technique and software as a text. This conception of software enables me to analyse it through a number of reading strategies. I draw on the philosophical framework of deconstruction as formulated by Jacques Derrida in order to identify the conceptual structures underlying software and hence 'demystify' the opacity of new technologies. Ultimately, I argue that a deconstructive reading of software enables us to recognize the constitutive, if unacknowledged, role of technology in the formation both of the human and of academic knowledge. This reading leads to a self-reflexive interrogation of the media and cultural studies' approach to technology and enhances our capacity to engage with new technologies without separating our cultural understanding from our political practices.

    Application of novel technologies for the development of next generation MR compatible PET inserts

    Multimodal imaging integrating Positron Emission Tomography and Magnetic Resonance Imaging (PET/MRI) has recognized advantages over other available combinations, allowing both functional and structural information to be acquired with very high precision and repeatability. However, it has yet to be adopted as the standard for experimental and clinical applications, for a variety of reasons mainly related to system cost and flexibility. A promising existing approach, silicon photodetector-based MR-compatible PET inserts consisting of very thin PET devices that can be placed in the MRI bore, has been pioneered, but it has not disrupted the market as expected. Technological solutions that can make this type of insert lighter, more cost-effective and more adaptable to the application still need to be researched. In this context, we expand the study of sub-surface laser engraving (SSLE) of scintillators used for PET. Having acquired, characterized and calibrated an SSLE setup, we study the effect of different engraving configurations on the detection characteristics of the scintillation light by the photosensors. We demonstrate that, apart from cost-effectiveness and ease of application, SSLE-treated scintillators have similar spatial resolution and superior sensitivity and packing fraction compared to standard pixelated arrays, allowing shorter crystals to be used. Flexibility of design is benchmarked, and a honeycomb architecture is proposed for its geometrical advantages. Furthermore, a variety of depth-of-interaction (DoI) designs are engraved and studied, greatly enhancing applicability in small field-of-view tomographs such as the intended inserts. To support this, a novel approach for multi-layer DoI characterization has been developed and is demonstrated. Beyond crystal treatment, signal transmission and processing are also addressed. A double time-over-threshold (ToT) method is proposed that uses the statistics of the noise to enhance precision; the method is tested, and linearity results demonstrate its applicability to multiplexed readout designs. A study of analog optical wireless communication (aOWC) techniques is also performed and proof-of-concept results are presented. Finally, a ToT readout firmware architecture intended for low-cost FPGAs has been developed and is described. By addressing the development, applicability and merits of a range of transdisciplinary solutions, we demonstrate that these techniques make it possible to construct lighter, smaller, lower-consumption, cost-effective MRI-compatible PET inserts. Such designs can help PET/MRI multimodality become the dominant clinical and experimental imaging approach, enhancing researcher and physician insight into the mysteries of life.
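
    In a time-over-threshold readout, the energy of a pulse is estimated from how long the signal stays above a threshold rather than from its sampled amplitude; a double-ToT scheme measures this width at two thresholds. The Python sketch below illustrates only the basic ToT measurement on a toy pulse; the pulse model, threshold values and sampling are assumptions for illustration and do not reproduce the thesis's double-ToT method or its noise-statistics correction.

        import numpy as np

        def time_over_threshold(t, v, threshold):
            """Return the total time the sampled signal v(t) spends above threshold."""
            above = v > threshold
            dt = np.diff(t, prepend=t[0])          # per-sample time step (first step is 0)
            return float(np.sum(dt[above]))

        # Toy scintillation-like pulse: fast rise followed by an exponential decay.
        t = np.linspace(0.0, 500.0, 5001)          # time axis in ns
        v = (1 - np.exp(-t / 5.0)) * np.exp(-t / 40.0)

        tot_low = time_over_threshold(t, v, 0.05)  # width above a low threshold
        tot_high = time_over_threshold(t, v, 0.20) # width above a high threshold
        print(f"ToT(low) = {tot_low:.1f} ns, ToT(high) = {tot_high:.1f} ns")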

    Will Work For Free: Examining the Biopolitics of Unwaged Immaterial Labour

    According to Maurizio Lazzarato, Michael Hardt, and Antonio Negri, immaterial labour is biopolitical in that it purchases, commands, and comes to progressively control the communicative and affective capacities of immaterial workers. Drawing inspiration from Michel Foucault, the above authors argue that waged immaterial labour reshapes the subjectivities of workers by reorienting their communicative and affective capacities towards the prerogatives and desires of those persons who purchased the right to control them. In this way, it is biopolitical. Extending the concept of immaterial labour into the Web 2.0 era, Tiziana Terranova and Christian Fuchs, for instance, argue that all of the time and effort devoted to generating digital content on the Internet should also be considered a form of immaterial work. Taking into account the valuations of 'free' social networks, these authors emphasize the exploitative dimensions of unwaged immaterial work and, by doing so, broaden the concept of immaterial labour to include both its waged and unwaged variants. Neither, however, has attempted to understand the biopolitical dimensions of unwaged immaterial labour with any specificity. Thus, while Hardt and Negri examine the biopolitics of waged immaterial labour and Terranova and Fuchs examine the exploitative dimensions of unwaged immaterial labour, this thesis makes an original contribution to this body of theory by extending both lines of thinking and bridging the chasm between them. Taking Flickr as its primary exemplar, this thesis provides an empirical examination of the ways in which its members regard all of the time and effort they devote to their 'labours of love.' Flickr is a massively popular Web 2.0 photo-sharing social network that depends on the unwaged immaterial labour of its 'users' to generate all of the content that populates the network. Via reference to open-ended and semi-structured interviews conducted with members of Flickr, the biopolitics that guide and regulate the exploited work of this unwaged labour force are disclosed. The primary research question this thesis answers, then, is: if waged immaterial labour is biopolitical, as numerous scholars have argued, then what are the biopolitics of the unwaged immaterial labour characteristic of Flickr, and what kinds of subjectivities are being produced by them?

    Debugging parallelized code using code liberation techniques

    No full text

    Algorithmic Sovereignty

    This thesis describes a practice-based research journey across various projects dealing with the design of algorithms, to highlight the governance implications of the design choices made in them. The research provides answers, and documents methodologies, to address the urgent need for more awareness of the decisions that algorithms make about the social and economic context in which we live. Algorithms constitute a foundational basis across different fields of study: policy making, governance, art and technology. The ability to understand what is inscribed in such algorithms, what the consequences of their execution are, and what agency is left to the living world is crucial. Yet there is a lack of interdisciplinary and practice-based literature, while specialised treatises are too narrow to relate to the broader context in which algorithms are enacted. This thesis advances the awareness of algorithms and related aspects of sovereignty through a series of projects documented as participatory action research. One of the projects described, Devuan, led to the realisation of a new, world-renowned operating system. Another project, "sup", consists of a minimalist approach to mission-critical software and literate programming to enhance the security and reliability of applications. Another project, D-CENT, was a three-year programme of cutting-edge research, funded by the EU Commission, on the emerging dynamics of participatory democracy connected to the technologies adopted by citizen organisations. My original contribution to knowledge lies in how the research underpinning these projects improves our understanding of the sociopolitical aspects connected to the design and management of algorithms. It suggests that we can improve the design and regulation of future public, private and common spaces, which are increasingly governed by algorithms, by understanding not only their economic and legal implications but also the connections between design choices and the sociopolitical context of their development and execution.