
    An automated OpenCL FPGA compilation framework targeting a configurable, VLIW chip multiprocessor

    Modern system-on-chips augment their baseline CPU with coprocessors and accelerators to increase overall computational capacity and power efficiency, and have thus evolved into heterogeneous systems. Several languages have been developed to enable this paradigm shift, including CUDA and OpenCL. This thesis discusses a unified compilation environment to enable heterogeneous system design through the use of OpenCL and a customised VLIW chip multiprocessor (CMP) architecture, known as the LE1. An LLVM compilation framework was researched and a prototype developed to enable the execution of OpenCL applications on the LE1 CPU. The framework fully automates the compilation flow and supports work-item coalescing to better utilise the CPU cores and alleviate the effects of thread divergence. This thesis discusses in detail both the software stack and the target hardware architecture, and evaluates the scalability of the proposed framework on a cycle-accurate simulator. This is achieved through the execution of 12 benchmarks across 240 different machine configurations, as well as further results utilising an incomplete development branch of the compiler. It is shown that the problems generally scale well on the LE1 architecture up to eight cores, at which point the memory system becomes a serious bottleneck. Results demonstrate superlinear performance on certain benchmarks (x9 for the bitonic sort benchmark with 8 dual-issue cores), with further improvements from compiler optimisations (x14 for bitonic with the same configuration).
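
    To make the work-item coalescing mentioned above concrete, the following is a minimal sketch, assuming a kernel body exposed as a plain callable; the names (coalesced_kernel, kernel_body) are illustrative and not the thesis framework's actual API. Instead of launching one thread per work-item, the compiler wraps the kernel body in a loop over the work-group's local iteration space, so a single core executes an entire work-group.

```python
# Hedged sketch of work-item coalescing (illustrative names, not the
# thesis framework's API): one core runs a whole work-group as a loop.

def coalesced_kernel(kernel_body, global_size, local_size, group_id):
    """Execute every work-item of one work-group sequentially."""
    for local_id in range(local_size):
        global_id = group_id * local_size + local_id
        if global_id < global_size:       # guard the final, partial group
            kernel_body(global_id, local_id)

# Usage: a vector-add "kernel" executed as two coalesced work-groups.
a, b = [1, 2, 3, 4], [10, 20, 30, 40]
c = [0] * 4
add = lambda gid, lid: c.__setitem__(gid, a[gid] + b[gid])
for group in range(2):
    coalesced_kernel(add, global_size=4, local_size=2, group_id=group)
assert c == [11, 22, 33, 44]
```

    In full implementations the transformation is complicated by barriers, which force the enclosing loop to be split (loop fission) at every synchronisation point.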

    Increasing the Performance and Predictability of the Code Execution on an Embedded Java Platform

    This thesis explores the execution of object-oriented code on an embedded Java platform. It presents established approaches and derives new ones for the implementation of high-level object-oriented functionality and commonly expected system services. The goal of the developed techniques is to provide the architectural basis for efficient and predictable code execution. The research vehicle of this thesis is the Java-programmed SHAP platform. It consists of the platform tool chain and the highly customizable SHAP bytecode processor. SHAP offers a fully operational embedded CLDC environment, in which the proposed techniques have been implemented, verified, and evaluated. Two strands are followed to achieve the goal of this thesis. First, the sequential execution of bytecode is optimized through a joint effort of an optimizing offline linker and an on-chip application loader. Additionally, SHAP pioneers a reference coloring mechanism, which enables a constant-time interface method dispatch that need not be backed by a large, sparse dispatch table. Secondly, this thesis explores the implementation of essential system services within designated concurrent hardware modules. This effort is necessary to decouple the computational progress of the user application from the interference induced by time-sharing software implementations of these services. The concrete contributions comprise a spill-free on-chip stack, a predictable method cache, and concurrent garbage collection. Each proposed technique is described and evaluated after the relevant state of the art has been reviewed. This review is not limited to preceding small embedded approaches but also includes techniques that have proven successful on larger-scale platforms. Conversely, the chances that these platforms may benefit from the techniques developed for SHAP are discussed.
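
    The constant-time interface dispatch claimed above can be illustrated with a hedged reconstruction of colour-based dispatch; this shows the general idea only, not SHAP's actual encoding. Interfaces are assigned colours such that no class implements two interfaces of the same colour, so a lookup becomes a single indexed load rather than a search through a large, sparse table.

```python
# Hedged sketch of colour-based interface dispatch (general idea only,
# not SHAP's data layout). Colours are assigned so that interfaces
# implemented by the same class never share a colour; dispatch is then
# an O(1) indexed load.

class IfaceTable:
    def __init__(self, num_colours):
        self.slots = [None] * num_colours     # one slot per colour

    def install(self, colour, methods):
        self.slots[colour] = methods          # methods: name -> callable

    def dispatch(self, colour, method, obj, *args):
        return self.slots[colour][method](obj, *args)

COMPARABLE = 0                                # hypothetical colour choice

class Point:
    def __init__(self, x):
        self.x = x

point_itable = IfaceTable(num_colours=2)
point_itable.install(COMPARABLE, {"compareTo": lambda s, o: s.x - o.x})
print(point_itable.dispatch(COMPARABLE, "compareTo", Point(3), Point(1)))  # 2
```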

    Towards Language-Oriented Modeling

    In this habilitation à diriger des recherches (HDR), I review a decade of research work in the fields of Model-Driven Engineering (MDE) and Software Language Engineering (SLE). I propose contributions to support language-oriented modeling, with a particular focus on enabling early validation & verification (V&V) of software-intensive systems. I first present foundational concepts and engineering facilities which help to capture the core domain knowledge into the various heterogeneous concerns of DSMLs (aka. metamodeling in the small), with a particular focus on executable DSMLs to automate the development of dynamic V&V tools. Then, I propose structural and behavioral DSML interfaces, and associated composition operators to reuse and integrate multiple DSMLs (aka. metamodeling in the large). In these research activities I explore various breakthroughs in terms of modularity and reusability of DSMLs. I also propose an original approach which bridges the gap between concurrency theory and algorithm theory, to integrate a formal concurrency model into the execution semantics of DSMLs. All the contributions have been implemented in software platforms, namely the language workbench Melange and the GEMOC Studio, and applied in real-world case studies to assess their validity. In this context, I also founded the GEMOC initiative, an attempt to federate the community around the grand challenge of the globalization of modeling languages.

    Security Analysis of Software-Intensive Critical Embedded Systems by Abstract Interpretation

    This thesis is dedicated to the analysis of low-level software, such as operating systems, by abstract interpretation. Analyzing OSes is a crucial issue to guarantee the safety of software systems, since they are the layer immediately above the hardware and all applicative tasks rely on them. For critical applications, we want to prove that the OS does not crash and that it ensures the isolation of programs, so that an untrusted program cannot disrupt a trusted one. The analysis of this kind of program raises specific issues. This is because OSes must control hardware using instructions that are meaningless in ordinary programs. In addition, because hardware features are outside the scope of C, source code includes assembly blocks mixed with C code. These are the two main axes of this thesis: handling mixed C and assembly, and precisely abstracting instructions that are specific to low-level software. This work is motivated by the analysis of a case study emanating from an industrial partner, which required the implementation of the proposed methods in the static analyzer Astrée. The first part is about the formalization of a language mixing simplified models of C and assembly, from syntax to semantics. This specification is crucial to define what is legal and what is a bug, while taking into account the intricacy of the interactions of C and assembly, in terms of both data flow and control flow. The second part is a short introduction to abstract interpretation, focusing on what is useful thereafter. The third part proposes an abstraction of the semantics of mixed C and assembly. This is actually a series of parametric abstractions handling each aspect of the semantics. The fourth part is interested in the abstraction of instructions specific to low-level software. Properties of interest can easily be proven using ghost variables, but for technical reasons it is difficult to design a reduced product of abstract domains that allows a satisfactory handling of ghost variables. This part builds such a general framework, with domains that allow us to solve our problem and many others. The final part details properties that must be proven in order to guarantee the isolation of programs; these have not been treated, since they raise many complicated questions. We also give some suggestions to improve the product of domains with ghost variables introduced in the previous part, in terms of both features and performance.
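
    The role of the ghost variables mentioned above can be made concrete with a minimal sketch, assuming a toy interval state; nothing here reflects Astrée's actual domains or the thesis's reduced product. A ghost variable is analysis-only state: here it snapshots the stack pointer at kernel entry so the analysis can prove it is restored on exit.

```python
# Minimal sketch of a ghost variable in an interval-style analysis
# (illustrative only; not Astrée's implementation). `ghost_saved_sp`
# exists only in the analysis, never in the executed program.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __eq__(self, other):
        return (self.lo, self.hi) == (other.lo, other.hi)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

state = {"sp": Interval(0x8000, 0x8000)}

# Kernel entry: the ghost variable snapshots the stack pointer.
state["ghost_saved_sp"] = state["sp"]

# Kernel body: assembly pushes two words, then C code pops them again.
state["sp"] = Interval(state["sp"].lo - 8, state["sp"].hi - 8)
state["sp"] = Interval(state["sp"].lo + 8, state["sp"].hi + 8)

# Kernel exit: "the stack pointer is restored" is now checkable.
assert state["sp"] == state["ghost_saved_sp"], "sp not restored"
```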

    Contextually-Dependent Lexical Semantics

    This thesis is an investigation of phenomena at the interface between syntax, semantics, and pragmatics, with the aim of arguing for a view of semantic interpretation as lexically driven yet contextually dependent. I examine regular, generative processes which operate over the lexicon to induce verbal sense shifts, and discuss the interaction of these processes with the linguistic or discourse context. I concentrate on phenomena where only an interaction between all three linguistic knowledge sources can explain the constraints on verb use: conventionalised lexical semantic knowledge constrains productive syntactic processes, while pragmatic reasoning is both constrained by and constrains the potential interpretations given to certain verbs. The phenomena which are closely examined are the behaviour of PP sentential modifiers (specifically dative and directional PPs) with respect to the lexical semantic representation of the verb phrases they modify, resultative constructions, and logical metonymy. The analysis is couched in terms of a lexical semantic representation drawing on Davis (1995), Jackendoff (1983, 1990), and Pustejovsky (1991, 1995), which aims to capture “linguistically relevant” components of meaning. The representation is shown to have utility for modelling the interaction between the syntactic form of an utterance and its meaning. I introduce a formalisation of the representation within the framework of Head Driven Phrase Structure Grammar (Pollard and Sag 1994), and rely on the model of discourse coherence proposed by Lascarides and Asher (1992), Discourse in Commonsense Entailment. I furthermore discuss the implications of the contextual dependency of semantic interpretation for lexicon design and computational processing in Natural Language Understanding systems.

    Machine Learning Techniques and Optical Systems for Iris Recognition from Distant Viewpoints

    Previous studies have shown that it is in principle possible to use iris recognition as a biometric means of identifying drivers. The present work is based on the results of [35], which also served as a starting point and were partly reused. The goal of this dissertation was to establish iris recognition in an automotive environment. The unique pattern of the iris, which does not change over time, is the reason why iris recognition is one of the most robust biometric identification methods. To create a data basis for evaluating the performance of the developed solution, an automotive camera complemented with suitable NIR LEDs was used, because iris recognition works best in the near-infrared (NIR) range. Since it is not always possible to process the captured images directly, several preprocessing techniques are discussed first. These aim both to improve the quality of the images and to ensure that only images of acceptable quality are processed. Three different algorithms were implemented to segment the iris, including a newly developed method for segmentation in the polar representation. In addition, the three techniques can be supported by a snake algorithm, an active contour method. Four approaches are presented for removing eyelids and eyelashes from the segmented region. To ensure that no segmentation errors remain undetected, two options for a segmentation quality check are given. After normalisation by means of the rubber sheet model, the features of the iris are extracted. For this purpose, the results of two Gabor filters are compared. The key to successful iris recognition is a test of statistical independence, in which the Hamming distance serves as a measure of the dissimilarity between the phase information of two patterns. The best results for the database used are achieved by first subjecting the images to a sharpness check, then localising the iris with the newly introduced segmentation in the polar representation, and extracting the features with a 2D Gabor filter. The second biometric method considered in this work uses the features of the region surrounding the eye (periocular) for identification. To this end, several techniques for feature extraction and classification were compared. The recognition performance of iris recognition and periocular recognition, as well as of the fusion of the two methods, is measured by cross-comparisons within the recorded database and clearly surpasses the baseline values from [35]. Since it is always necessary to protect biometric systems against manipulation, a technique is finally presented that makes it possible to detect spoofing attempts based on a printout. The results of the present work show that it will be possible in the future to use biometric features instead of car keys. Also because of this great success, the results were presented at the Consumer Electronics Show (CES) 2018 in Las Vegas.
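
    The test of statistical independence described above compares the phase codes of two irises through their masked fractional Hamming distance; the following is a sketch of the classic Daugman-style formulation, with the caveat that the thesis's exact parameters, code length, and thresholds may differ.

```python
# Hedged sketch of the test of statistical independence via the masked
# fractional Hamming distance (Daugman-style; the thesis's parameters
# may differ). Mask bits exclude regions occluded by eyelids/lashes.

import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits among bits valid in both codes."""
    valid = mask_a & mask_b
    disagree = (code_a ^ code_b) & valid
    return disagree.sum() / valid.sum()

rng = np.random.default_rng(0)
code1 = rng.integers(0, 2, 2048, dtype=np.uint8)    # 2048-bit iris code
code2 = code1.copy()
code2[:100] ^= 1                                    # same eye, some noise
impostor = rng.integers(0, 2, 2048, dtype=np.uint8) # unrelated eye
mask = np.ones(2048, dtype=np.uint8)

print(hamming_distance(code1, code2, mask, mask))    # ~0.05: same iris
print(hamming_distance(code1, impostor, mask, mask)) # ~0.5: independent
```

    Two statistically independent iris codes disagree on roughly half of their bits, while genuine comparisons fall well below that, which is why a single distance threshold separates the two cases.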