
    A friendly notebook on Data Structures and Algorithms

    The purpose of this document is to provide study material for independent study by students of the subject 'Data Structures and Algorithms'. We have tried to write it in a student-friendly way that encourages students both to learn and to enjoy the subject. The document reviews the main concepts of the subject, providing clear examples to help students. Each chapter also proposes a set of exercises to reinforce students' knowledge.

    SecureQEMU: Emulation-based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    This research presents an original emulation-based software protection scheme that guards against reverse code engineering (RCE) and software exploitation using encrypted code execution and page-granularity code signing, respectively. Protection mechanisms execute in trusted emulators while remaining out-of-band of the untrusted systems being emulated. This protection scheme, called SecureQEMU, is based on a modified version of the Quick Emulator (QEMU) [5]. RCE is a process that uncovers the internal workings of a program; it is used during vulnerability and intellectual property (IP) discovery. To protect against RCE, program code may incorporate anti-disassembly, anti-debugging, and obfuscation techniques. These techniques slow the process of RCE; however, once they are defeated, the protected code is still comprehensible. Encryption provides static code protection, but encrypted code must normally be decrypted before execution. SecureQEMU's scheme overcomes this limitation by keeping code encrypted during execution. Software exploitation is a process that leverages design and implementation errors to cause unintended behavior, which may result in security policy violations. Traditional exploitation protection mechanisms take a blacklist approach to software protection, and specially crafted exploit payloads bypass them. SecureQEMU takes a whitelist approach by executing signed code exclusively. Unsigned malicious code (exploits, backdoors, rootkits, etc.) remains unexecuted, thereby protecting the system. SecureQEMU's cache mechanisms increase performance by 0.9% to 1.8% relative to QEMU. Emulation overhead for SecureQEMU varies from 1400% to 2100% with respect to native performance, so SecureQEMU's performance increase is negligible relative to the emulation overhead. Depending on the risk management strategy, SecureQEMU's protection benefits may outweigh the emulation overhead.
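    The whitelist idea behind page-granularity code signing can be sketched as follows. This is a minimal illustration with hypothetical helper names, not SecureQEMU's actual mechanism (which operates inside the emulator's translation layer): each fixed-size code page carries a keyed signature, and only pages whose signature verifies are allowed to execute.

```python
import hmac
import hashlib

PAGE_SIZE = 4096  # a typical x86 page size

def sign_pages(code: bytes, key: bytes) -> list:
    """Produce one HMAC-SHA256 signature per fixed-size code page."""
    return [
        hmac.new(key, code[off:off + PAGE_SIZE], hashlib.sha256).digest()
        for off in range(0, len(code), PAGE_SIZE)
    ]

def page_is_executable(code: bytes, page_index: int, key: bytes,
                       signatures: list) -> bool:
    """Whitelist check: only a page whose signature verifies may run."""
    off = page_index * PAGE_SIZE
    expected = hmac.new(key, code[off:off + PAGE_SIZE],
                        hashlib.sha256).digest()
    # Constant-time comparison avoids leaking signature bytes via timing.
    return hmac.compare_digest(expected, signatures[page_index])
```

    Tampering with even one byte of a page invalidates that page's signature while leaving the other pages executable, which is what makes the page granularity useful: injected shellcode is refused without blocking the rest of the program.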

    Generic Data Acquisition and Instrument control System (GDAIS)

    GMV Prize in Space Technology for the best Final Degree Project in Telecommunication Engineering (academic year 2010-2011). Remote sensing instrument development usually includes a software interface to control the instrument and acquire data. Although these software interfaces are very similar from one instrument to the next, they are hardly ever reused, since they are not designed with reusability in mind. The goal of this project is to develop a multi-platform software system to control and acquire data from one or more instruments in a generic and adaptable way, so that for future instruments it can be used directly or with minor modifications. The main feature of this system, named Generic Data Acquisition and Instrument control System (GDAIS), is its ability to adapt to a wide variety of instruments with a simple configuration text file for each one. Furthermore, controlling multiple instruments in parallel and co-registering their acquired data, providing remote access to the data, and being able to monitor the system status are key points of the design. To satisfy these requirements, a modular architecture has been developed: the system is divided into small parts, each responsible for a specific functionality. The main module, named GDAIS-core, communicates independently with each connected instrument and saves the received data. Acquired data is saved in the Hierarchical Data Format (HDF5) binary format, designed especially for remote sensing scientific data, which is directly compatible with the commonly used network Common Data Form v4 (netCDF-4) format. The other main module, GDAIS-control, controls and monitors GDAIS-core; to make it accessible from anywhere, its user interface is implemented as a web page. Apart from these two main modules, two desktop applications are provided to help with the configuration of the system.
    The first one is used to create an instrument text descriptor, which defines the instrument's interaction, connection, and parser. The second one is used to define a text descriptor for the set of instruments that the system will be controlling. Thanks to its modular design, the system is very flexible: the implementation of a subsystem can be changed significantly without requiring any modification to the other parts. It can be used in a wide range of applications, from controlling a single instrument to acquiring data from a network of several complex instruments and saving it all together. Furthermore, it can operate as a file data converter, reading a raw capture or text file and parsing it to store the same information in the more optimized and well-organized HDF5 format.
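    The "simple configuration text file per instrument" idea can be sketched as follows. The descriptor format shown here (key = value lines with comments) is hypothetical; the real GDAIS descriptor also defines the instrument's commands and packet parser.

```python
def parse_instrument_descriptor(text: str) -> dict:
    """Parse a simple key/value instrument descriptor into a dict.

    Hypothetical format for illustration: one 'key = value' per line,
    blank lines and '#' comments ignored.
    """
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

# Example descriptor for a hypothetical serial instrument.
EXAMPLE = """
# optical radiometer
name = RadioMeter-1
connection = serial:/dev/ttyUSB0
baudrate = 115200
"""
```

    A generic core like GDAIS-core would read one such descriptor per instrument and instantiate the matching connection and parser, so adding a new instrument means writing a new text file rather than new acquisition code.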

    Personalization platform for multimodal ubiquitous computing applications

    Dissertation submitted for the degree of Master in Computer Engineering. We currently live surrounded by a myriad of computing devices running multiple applications. In general, the user experience in each of those scenarios is not adapted to each user's specific needs, with no personalization or integration across scenarios. Moreover, developers usually do not have the right tools to handle this in a standard and generic way. A personalization platform can provide those tools. Such a platform should be readily available to any developer, and must therefore be accessible over the Internet. With advances in IT infrastructure, it is now possible to develop reliable and scalable services running on abstract and virtualized platforms. These are some of the advantages of cloud computing, which offers a model of utility computing where customers dynamically allocate the resources they need and are charged accordingly. This work focuses on the creation of a cloud-based personalization platform built on a previously developed generic user modeling framework. It provides user profiling and context-awareness tools to third-party developers. A public display-based application was also developed: it provides useful information to students, teachers, and others on a university campus as they are detected by Bluetooth scanning. It uses the personalization platform to select the most relevant information in each situation, while a companion mobile application serves as an input mechanism. A user study was conducted to assess the usefulness of the application and to validate some design choices; the results were mostly positive.
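    The "select the most relevant information for the detected user" step can be illustrated with a minimal sketch. The tag-overlap scoring and the item/profile shapes here are assumptions for illustration; the platform's real ranking goes through its user modeling and context-awareness services.

```python
def most_relevant(items, profile_tags):
    """Pick the candidate display item whose tags best overlap the
    detected user's profile tags.

    Illustrative only: real relevance would also weigh context
    (time, location, past interactions), not just tag overlap.
    """
    def score(item):
        return len(set(item["tags"]) & set(profile_tags))
    return max(items, key=score)
```

    A public display would run such a selection each time Bluetooth scanning detects a new user nearby, swapping the shown content to match that user's profile.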

    In the Face of Anticipation: Decision Making under Visible Uncertainty as Present in the Safest-with-Sight Problem

    Pathfinding, as a process of selecting a fixed route, has long been studied in computer science and mathematics. Decision making, as a similar but intrinsically different process of determining a control policy, is much less studied. Here, I propose a problem that appears to be of the first class, which would suggest that it is easily solvable on a modern machine; it turns out not to be so easy. By allowing a pathfinder to anticipate and respond to information, without setting restrictions on the structure of this anticipation, selecting the "best" step appears to be an intractable problem. After introducing the necessary foundations and stepping through the strangeness of "safest-with-sight", I attempt to develop a method of approximating the success rate associated with each potential decision. The results suggest something fundamental about decision making itself: information that is collected at a moment when it is not immediately "consumable", i.e. non-incident information, is not as necessary to anticipate as incident information. This is significant because (i) it speaks to when information should be anticipated, a moment in decision making long before the information is actually collected, and (ii) whenever the model is restricted to only incident anticipation, the problem again becomes tractable. When we only anticipate what is most important, solutions become easy to compute; attempting to anticipate any more than that, and solutions may become impossible to find on any realistic machine.
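    For contrast with the intractable anticipation variant, the fixed-route "safest path" baseline is a simple dynamic program. This sketch assumes a DAG with a known success probability per edge and maximizes the product of probabilities along the route; the graph shape and function names are illustrative, and the "with-sight" variant discussed above, where the walker reacts to information revealed en route, does not reduce to this computation.

```python
from functools import lru_cache

def safest_path_prob(succ, probs, start, goal):
    """Maximum product of per-edge success probabilities from start
    to goal on a DAG, by memoized dynamic programming.

    succ:  dict mapping node -> list of successor nodes
    probs: dict mapping (u, v) edge -> success probability
    Returns 0.0 if the goal is unreachable.
    """
    @lru_cache(maxsize=None)
    def best(v):
        if v == goal:
            return 1.0  # already safe: empty route succeeds
        return max((probs[(v, w)] * best(w) for w in succ.get(v, [])),
                   default=0.0)
    return best(start)
```

    The memoized recursion visits each node once, so the fixed-route problem stays polynomial; it is the unrestricted anticipation of future information that pushes the problem beyond this kind of dynamic program.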

    Distributed adaptive e-assessment in a higher education environment

    The rapid growth of Information Communication Technology (ICT) has promoted the development of paperless assessment. Most of the e-Assessment systems available nowadays, whether as independent systems or as built-in modules of a Virtual Learning Environment (VLE), are fixed-form e-Assessment systems based on Classical Test Theory (CTT). In the meantime, the development of psychometrics has also shown the potential for e-Assessment systems to benefit from adaptive assessment theories. This research focuses on the applicability of adaptive e-Assessment in daily teaching and attempts to create an extensible web-based framework to accommodate different adaptive assessment strategies for future research. Real-data simulation and Monte Carlo simulation were adopted in the study to examine the performance of adaptive e-Assessment in a real environment and an ideal environment respectively. The proposed framework employs a management service as the core module, which manages the connections from distributed test services to coordinate the assessment. The results of this study indicate that adaptive e-Assessment can reduce test length compared to fixed-form e-Assessment, while maintaining the consistency of the psychometric properties of the test. However, for a precise ability measurement, even a simple adaptive assessment model demands a sizable question bank, ideally with over 200 questions on a single latent trait. The requirements of the two categories of stakeholders (pedagogical researchers and educational application developers), as well as the variety and complexity of adaptive models, call for a framework with good accessibility for users, considerable extensibility and flexibility for implementing different assessment models, and the ability to deliver substantial computational power in extreme cases.
    The designed framework employs a distributed architecture with cross-language support based on the Apache Thrift framework to allow flexible collaboration among users with different programming language skills. The framework also allows different functional components to be deployed in a distributed fashion and to collaborate over a network.
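    The core step of an adaptive assessment can be sketched with a minimal example. This is one common strategy (maximum Fisher information under a Rasch/1PL model), shown as an assumed illustration of what a pluggable strategy in such a framework might compute; the helper names are hypothetical.

```python
import math

def rasch_information(theta, b):
    """Fisher information of a Rasch (1PL) item with difficulty b at
    ability theta: I(theta) = p * (1 - p), where p is the probability
    of a correct response."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, bank, asked):
    """Adaptive step: among unasked items, pick the one that is most
    informative at the current ability estimate theta.

    bank:  list of item difficulties
    asked: set of indices already administered
    """
    candidates = [i for i in range(len(bank)) if i not in asked]
    return max(candidates, key=lambda i: rasch_information(theta, bank[i]))
```

    Under the 1PL model, information peaks when item difficulty matches the examinee's ability, which is why the adaptive test converges faster than a fixed form: each question is chosen near the current ability estimate, and the estimate is then updated from the response.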

    Case Studies on Optimizing Algorithms for GPU Architectures

    Modern GPUs are complex, massively multi-threaded, and high-performance. Programmers naturally gravitate towards taking advantage of this high performance to achieve faster results. However, to do so successfully, programmers must first understand and then master a new set of skills: writing parallel code, using different types of parallelism, adapting to GPU architectural features, and understanding the issues that limit performance. To ease this learning process and help GPU programmers become productive more quickly, this dissertation introduces three data access skeletons (DASks) -- Block, Column, and Row -- and two block access skeletons (BASks) -- Block-by-Block and Warp-by-Warp. Each "skeleton" provides a high-performance implementation framework that partitions data arrays into data blocks and then iterates over those blocks. The programmer must still write "body" methods on individual data blocks to solve their specific problem. These skeletons provide efficient, machine-dependent data access patterns for use on GPUs. DASks group n data elements into m fixed-size data blocks. These m data blocks are then partitioned across p thread blocks using a 1D or 2D layout pattern. The fixed-size data blocks are parameterized using three C++ template parameters -- nWork, WarpSize, and nWarps. Generic programming techniques use these three parameters to enable performance experiments on three different types of parallelism: instruction-level parallelism (ILP), data-level parallelism (DLP), and thread-level parallelism (TLP). These DASks and BASks are introduced using a simple memory I/O (Copy) case study. A nearest neighbor search case study motivated the development of DASks and BASks but does not use the skeletons itself. Three additional case studies -- Reduce/Scan, Histogram, and Radix Sort -- demonstrate DASks and BASks in action on parallel primitives and provide further valuable performance lessons.
    Doctor of Philosophy
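    The partitioning arithmetic behind the skeletons can be sketched outside of CUDA. This Python stand-in shows how n elements are grouped into fixed-size data blocks determined by the three parameters the abstract names (nWork, WarpSize, nWarps); the function and its defaults are illustrative assumptions, not the dissertation's C++ templates.

```python
def partition_blocks(n, warp_size=32, n_warps=4, n_work=2):
    """DASk-style partitioning sketch: group n data elements into
    fixed-size data blocks of nWork * WarpSize * nWarps elements each,
    returned as half-open (start, end) index ranges. The final block
    may be partial when n is not a multiple of the block size.
    """
    block = n_work * warp_size * n_warps  # elements per data block
    return [(off, min(off + block, n)) for off in range(0, n, block)]
```

    On a real GPU these m block ranges would then be laid out across p thread blocks (1D or 2D), with each thread block iterating over its assigned data blocks and the programmer-supplied "body" processing each block; varying nWork, WarpSize, and nWarps shifts the balance between ILP, DLP, and TLP.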