11 research outputs found

    Project-Team RMoD 2013 Activity Report

    Activity Report 2013, Project-Team RMoD: Analyses and Language Constructs for Object-Oriented Application Evolution

    ONJAG, network overlays supporting distributed graph processing

    The term "Big Data" refers to the exponential growth affecting the production of structured and unstructured data. Due to its size, deep analyses are usually required to extract its intrinsic value, and such data must be processed in a distributed manner, since a single machine cannot carry out the computation; several computational models and techniques have been studied and employed for this purpose. Today, a significant part of such data is modelled as a graph, and recent graph processing frameworks orchestrate the execution as a network simulation where vertices and edges correspond to nodes and links, respectively. In this context the thesis exploits the Peer-to-Peer (P2P) approach. The overlay concept is introduced and ONJAG ("Overlays Not Just A Graph"), a distributed framework, is developed. ONJAG runs over Spark, a distributed Bulk Synchronous Parallel-like data processing framework. Moreover, a well-known problem in graph theory is studied: the balanced minimum k-way partitioning problem, also called the minimum k-way cut. Finally, a novel algorithm to solve the balanced minimum k-way cut is proposed; it exploits the P2P approach and overlays to improve a pre-existing solution.
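To make the partitioning objective concrete, here is a small sketch (hypothetical names, not part of ONJAG) that evaluates a candidate k-way partition by its two competing criteria: cut size and balance.

```python
# Illustrative sketch of the balanced minimum k-way cut objective: minimize the
# number of cut edges while keeping partition sizes close to |V|/k.
from collections import Counter

def kway_cut_cost(edges, assignment):
    """Number of edges whose endpoints fall in different partitions (the k-way cut)."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

def balance(assignment, k):
    """Ratio of the largest partition to the ideal size |V|/k; 1.0 is perfectly balanced."""
    sizes = Counter(assignment.values())
    ideal = len(assignment) / k
    return max(sizes.values()) / ideal

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
assignment = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(kway_cut_cost(edges, assignment))  # 1: only the bridge (2, 3) is cut
print(balance(assignment, k=2))          # 1.0
```

A distributed solver such as the one the thesis proposes would search over assignments like this one, trading cut cost against balance.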

    An Investigation of Cognitive Implications in the Design of Computer Games

    Computer games have been touted for their ability to engage players in cognitive activities (e.g., decision making, learning, planning, problem solving). By ‘computer game’ we mean any game that uses computational technology as its platform, regardless of the actual hardware or software; games on personal computers, tablets, game consoles, cellphones, or specialized equipment can all be called computer games. However, there remains much uncertainty regarding how to design computer games so that they support, facilitate, and promote the reflective, effortful, and conscious performance of cognitive activities. The goal of this dissertation is to relieve some of this uncertainty, so that the design of such computer games can become more systematic and less ad hoc. By understanding how different components of a computer game influence the resulting cognitive system, we can more consciously and systematically design computer games for the desired cognitive support. This dissertation synthesizes concepts from cognitive science, information science, learning science, human-computer interaction, and game design to create a conceptual design framework. The framework particularly focuses on the design of: gameplay, the player-game joint cognitive system, the interaction that mediates gameplay and the cognitive system, and the components of this interaction. Furthermore, this dissertation also includes a process by which researchers can explore the relationship between components of a computer game and the resulting cognitive system in a consistent, controlled, and precise manner. Using this process, three separate studies were conducted to provide empirical support for different aspects of the framework; these studies investigated how the design of rules, visual interface, and the core mechanic influence the resulting cognitive system. 
Overall, the conceptual framework and three empirical studies presented in this dissertation provide designers with a greater understanding of how to systematically design computer games to provide the desired support for any cognitive activity.

    Analysis of storage performance, in memory and on input/output devices, based on an execution trace

    Data storage is an essential resource for the computer industry. Storage devices must be fast and reliable to meet the growing demands of the data-driven economy. Storage technologies can be classified into two main categories: mass storage and main memory storage. Mass storage can store large amounts of data persistently. Data is saved locally on input/output devices, such as Hard Disk Drives (HDD) and Solid-State Drives (SSD), or remotely on distributed storage systems. Main memory storage temporarily holds the data needed by running programs. Main memory is characterized by its high access speed, essential to quickly provide data to the Central Processing Unit (CPU). Operating systems use several mechanisms to manage storage devices, such as disk schedulers and memory allocators. The processing time of a storage request is affected by the interaction between several subsystems, which complicates the debugging task. Existing tools, such as benchmarking tools, provide a general idea of the overall system performance, but do not accurately identify the causes of poor performance. Dynamic analysis through execution tracing is a solution for the detailed runtime analysis of storage systems. Tracing collects precise data about the internal behavior of the system, which helps in detecting performance problems that are difficult to identify. The goal of this thesis is to provide a tool to analyze storage performance, in memory and on input/output devices, based on low-level trace events.
The main challenges addressed by this tool are: collecting the required data using kernel and userspace tracing, limiting the overhead of tracing and the size of the generated traces, synchronizing the traces collected from different sources, providing multi-level analyses covering several aspects of storage performance, and lastly proposing abstractions that allow users to easily understand the traces. We carefully designed and inserted the instrumentation needed for the analyses. The tracepoints provide full visibility into the system and track the lifecycle of storage requests, from creation to processing. The Linux Trace Toolkit Next Generation (LTTng), a free and low-overhead tracer, is used for data collection. This tracer is characterized by its stability and its efficiency with highly parallel applications, thanks to the lock-free synchronization mechanisms used to update the content of the trace buffers. We also contributed a patch that allows LTTng to capture the call stacks of userspace events.
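The request-lifecycle analysis described above amounts to pairing each request's creation event with its completion event. A minimal sketch of that idea follows; the event format here is invented for illustration and is not LTTng's actual schema.

```python
# Hedged sketch of one core analysis: pairing "issue" and "complete" trace
# events by request id to compute per-request storage latency.
def request_latencies(events):
    """events: iterable of (timestamp_ns, kind, request_id), kind in {'issue', 'complete'}."""
    pending = {}     # request id -> issue timestamp
    latencies = {}   # request id -> completion time minus issue time
    for ts, kind, rid in sorted(events):
        if kind == "issue":
            pending[rid] = ts
        elif kind == "complete" and rid in pending:
            latencies[rid] = ts - pending.pop(rid)
    return latencies

trace = [
    (100, "issue", "req-1"),
    (150, "issue", "req-2"),
    (400, "complete", "req-1"),
    (900, "complete", "req-2"),
]
print(request_latencies(trace))  # {'req-1': 300, 'req-2': 750}
```

A real tool must additionally synchronize clocks across trace sources before timestamps from different traces can be compared, which is one of the challenges the thesis lists.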

    Design Space Exploration for MPSoC Architectures

    Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability, and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to applications that are computation-intensive but not data-intensive is often wasteful in practical implementations. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. To meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing; for implementation, these techniques should be customized to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels are selected, namely the Segmented bus (SegBus), Network-on-Chip (NoC), and Three-Dimensional NoC (3D-NoC). Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on a component makes the connected fault-free components inoperative.
The resource sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of the MPSoC architecture that can meet the performance requirements within the design constraints.
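One of the evaluation parameters named above, average packet latency, can be illustrated with a toy model for a 2D-mesh NoC under deterministic XY routing, counting one cycle per hop. This is a simplification for illustration only; real models add router pipeline delays and contention.

```python
# Illustrative sketch: average packet latency in a 2D-mesh NoC with XY routing.
def xy_hops(src, dst):
    """XY routing moves fully along X, then along Y: hop count is the Manhattan distance."""
    return abs(dst[0] - src[0]) + abs(dst[1] - src[1])

def average_latency(packets, cycles_per_hop=1):
    """Mean zero-load latency over (source, destination) router-coordinate pairs."""
    total = sum(xy_hops(s, d) * cycles_per_hop for s, d in packets)
    return total / len(packets)

# Three packets on a 4x4 mesh; coordinates are (x, y) router positions.
packets = [((0, 0), (3, 3)), ((1, 2), (1, 0)), ((2, 1), (0, 1))]
print(average_latency(packets))  # (6 + 2 + 2) / 3
```

Comparing such a metric across SegBus, NoC, and 3D-NoC instances is the kind of trade-off the design space exploration navigates.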

    Individual variability in value-based decision making: behavior, cognition, and functional brain topography

    Decisions often require weighing the costs and benefits of available prospects. Value-based decision making depends on the coordination of multiple cognitive faculties, making it potentially susceptible to at least two forms of variability. First, there is heterogeneity in brain organization across individuals in areas of association cortex that exhibit decision-related activity. Second, a person’s preferences can fluctuate even for repetitive decision scenarios. Using functional magnetic resonance imaging (fMRI) and behavioral experiments in humans, this project explored how these distinct sources of variability impact choice evaluation, localization of valuation in the brain, and the links between valuation and other cognitive phenomena. Group-level findings suggest that valuation processes share a neural representation with the “default network” (DN) in medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC). Study 1 examined brain network variability in an open dataset of resting-state fMRI (n=100) by quantitatively testing the hypothesis that the spatial layout of the DN is unique to each person. Functional network topography was well-aligned across individuals in PCC, but highly idiosyncratic in mPFC. These results highlighted that the apparent overlap of cognitive functions in these areas should be evaluated within individuals. Study 2 examined variability in the integration of rewards with subjective costs of time and effort. Two computerized behavioral experiments (total n=132) tested how accept-or-reject foraging decisions were influenced by demands for physical effort, cognitive effort, and unfilled delay. The results showed that people’s willingness to incur the three types of costs differed when they experienced a single type of demand, but gradually converged when all three were interleaved. The results could be accounted for by a computational model in which contextual factors altered the perceived cost of temporal delay. 
Finally, Study 3 asked whether the apparent cortical overlap between valuation effects and the DN persisted after accounting for individual variability in brain topography and behavior. Using fMRI scans designed to evoke valuation and DN-like effects (n=18), we reproduced the idiosyncratic network topography from Study 1 and observed valuation-related effects in individually identified DN regions. Collectively, these findings advance our taxonomic understanding of higher-order cognitive processes, suggesting that seemingly dissimilar valuation and DN-related functions engage overlapping cortical mechanisms.
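The context-dependent cost of delay found in Study 2 can be caricatured as a toy accept-or-reject rule: accept a prospect when its reward rate, computed against a perceived (context-weighted) delay, beats the environment's background reward rate. This is a hypothetical illustration, not the dissertation's fitted model, and every parameter name here is invented.

```python
# Minimal sketch of a foraging-style accept-or-reject decision with a
# context-dependent perceived cost of delay (illustrative only).
def accept(reward, delay_s, env_rate, delay_cost_weight=1.0):
    """Accept if reward per perceived second exceeds the background reward rate."""
    perceived_delay = delay_s * delay_cost_weight
    return reward / perceived_delay > env_rate

print(accept(reward=10, delay_s=2, env_rate=4.0))   # True: rate 5.0 > 4.0
print(accept(reward=10, delay_s=2, env_rate=4.0,
             delay_cost_weight=2.0))                # False: rate 2.5 < 4.0
```

In this caricature, the contextual factors the study describes would shift `delay_cost_weight`, flipping decisions without any change to the objective reward or delay.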

    Security in Distributed, Grid, Mobile, and Pervasive Computing

    This book addresses the increasing demand to guarantee privacy, integrity, and availability of resources in networks and distributed systems. It first reviews security issues and challenges in content distribution networks, describes key agreement protocols based on the Diffie-Hellman key exchange and key management protocols for complex distributed systems like the Internet, and discusses secure design patterns for distributed systems. The next section focuses on security in mobile computing and wireless networks. After a section on grid computing security, the book presents an overview of security solutions for pervasive healthcare systems and surveys wireless sensor network security.
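The key agreement protocols the book covers build on the Diffie-Hellman exchange, which can be shown in a few lines with toy parameters. Real deployments use vetted groups of 2048+ bits (e.g. the RFC 3526 groups) and authenticate the exchange; the numbers below are illustration-sized only.

```python
# Diffie-Hellman key exchange with toy parameters (never use such small numbers).
p, g = 23, 5                       # public prime modulus and generator
a, b = 6, 15                       # Alice's and Bob's private keys

A = pow(g, a, p)                   # Alice sends g^a mod p
B = pow(g, b, p)                   # Bob sends g^b mod p

shared_alice = pow(B, a, p)        # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)          # Bob computes (g^a)^b mod p
assert shared_alice == shared_bob  # both sides derive the same shared secret
print(shared_alice)                # 2
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving the discrete logarithm problem, which is infeasible at real parameter sizes.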

    Architecture and implementation of online communities

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references. By Philip Greenspun.